Federal technology leaders are using proprietary data to deliver critical services in ways that advance both their quality and equity of access.
Speaking at the GovCIO Media & Research Women Tech Leaders event, representatives from both government and industry discussed how public sector organizations are leveraging their modernized IT enterprises to increase the overall impact of both their core missions and newfound programs.
This has been a particular emphasis across health-focused agencies, whose experience providing health care informs research programs that in turn improve care delivery and patient outcomes.
Organizations like the Defense Health Agency maintain large repositories of data and tissue samples with considerable potential for health care research. These agencies are beginning to invest in the data processing and secure information-sharing capacity needed to put those assets to work.
“We have this picture of our archives that really looks like the last scene in Raiders of the Lost Ark, just these giant boxes piled up in a basement. To the point about democratizing data — it's not doing anyone any good like that. And we can learn a lot by digitizing that and using AI and machine learning. The next step is seeing what business models that we can be sharing that with — research centers, universities — and making it publicly accessible to people around the world who can look at the data and do something with it,” said Defense Department Deputy Chief Digital and Artificial Intelligence Officer Katie Savage.
Similar initiatives are underway at the Department of Veterans Affairs, which uses the agency's health care information and in-house expertise to support research programs designed to advance both diagnostics and care for a range of health conditions. This has been a special focus of the Million Veteran Program, which analyzes large quantities of genomic data and makes the results and anonymized statistical overlays available both within VA and to partner research organizations.
“The Million Veteran Program is such a large resource of data, and we want to make this data work for veterans. That's also why we want more people outside of the VA to be able to use it," said Dr. Jennifer Moser, associate director of scientific programs for the VA's Million Veteran Program. "A couple of ways we're doing this is we make all of our summary statistics and our summary data from MVP projects available on NIH [database of genotypes and phenotypes]. It's not identifiable or individual-level data, it's summary data, but it's able to be mined by almost anyone."
This focus on better leveraging proprietary information has led federal agencies to refine the data-sharing programs themselves, particularly their security and efficacy. These initiatives gained further support and attention during the COVID-19 pandemic, when remote access became an essential component of continuity of operations.
“We're really trying to make sure we're giving access to the right data to the right people at the right time. And then also, we have to work with our coalition partners and our service partners and things like that. So we have to share data across platforms. So that's really challenging, but COVID has actually kind of opened the window for that and made us more aware of kind of how we can move to new platforms and new data solutions that allow us to do that more effectively,” said Charneta Samms, CTO at Army DEVCOM.
These data-sharing and processing efforts, particularly those where larger institutions make their data sets available to outside partners, have broadly democratized the research field, allowing smaller organizations with particular expertise but without heavy enterprise investment to make their own contributions.
“Within research there's traditionally been sort of a story of the haves and have-nots when you talk about computational biology and what's possible," said Rebecca Boyles, director at RTI International's Center for Data Modernization Solutions. "But by making this kind of data understandable — which is no small thing — and accessible via the cloud, it doesn't matter what kind of university you're coming from or what supercomputing resources you have at your institution."