The House IT subcommittee yesterday kicked off its three-part hearing series to help legislators understand artificial intelligence and how to use it. This attention to AI will be particularly important, as the technology will figure into how agency chief information officers’ future IT plans are measured.
In the Feb. 14 hearing, the committee welcomed members of industry to demystify AI and discuss concerns. According to Chairman Will Hurd, R-Texas, the feedback will be used to introduce AI on the sixth round of the Federal Information Technology Acquisition Reform Act scorecard, which will start asking chief information officers what they’re doing to bring AI into their operations.
Hurd is interested in the government’s use of AI to do more with less, so the U.S. can ultimately “remain the world leader when it comes to artificial intelligence,” he said at the hearing. The committee was concerned about the accuracy of AI and its potential for bias, and industry members testifying had various recommendations for how government should start adopting AI. But they all agreed on one thing: It is crucial the government invest more in AI research and start further exploring the technology.
Government’s Role in the Future of AI
Ian Buck, vice president and general manager of accelerated computing at NVIDIA, said to solve a big problem with AI, you need “massive amounts of data, massive amounts of computing and talented data scientists.” Opening access to data can drive agency adoption of AI, and advance the development of AI solutions.
Amir Khosrowshahi, VP and chief technology officer for the AI Products Group at Intel, agreed.
“Data is fuel for AI,” he said in his testimony, and to grasp AI benefits, researchers need to have access to large data sets.
And increasing government funding was a common recommendation. Buck pointed out the government funded the first neural network in the 1950s, and advancements in autonomous vehicle tech can be attributed to the Defense Advanced Research Projects Agency funding a self-driving car competition more than a decade ago.
Khosrowshahi also stressed the importance of preparing the AI workforce through STEM education support and federally funded scientific research that flows through agencies, such as the National Science Foundation, to universities that train graduate-level scientists.
Buck added boosting research funding through science agencies such as NSF and the National Institutes of Health is important, because while industry will tackle a lot of the neural network design, it’s largely focused on consumer use cases.
Science applications of AI, such as climate and weather modeling or drug discovery in health care, are still under-researched and attract less commercial attention. But government funding and investment in those areas can make a big difference.
Oren Etzioni, CEO of the Allen Institute for AI, added research into natural language processing and common sense knowledge of AI can also help turn tools into more efficient systems.
“All of these are still very challenging fundamental problems,” he said.
But How Much Money, Exactly?
How much should the federal government invest per year in the science agencies, universities and foundations researching AI?
“Let me suggest that: much more than China,” Etzioni said. “We have a substantially larger economy; we should be investing a lot more there.” Though an exact number wasn’t presented, Etzioni did add that China invests “in the billions, according to their recently released blueprint.”
Khosrowshahi pointed out there are billions of dollars in government funding for NSF, and this money is highly leveraged.
“Funding graduate students in studying AI in universities is a really good way to spend the money to accelerate innovation in AI,” he said, and Intel has seen a lot of success through its own similar programs.
So, if the choice is between $3 billion and $6 billion, Khosrowshahi said, the extra $3 billion will be hugely effective in spurring innovation in AI.
But regardless of cost, “I don’t know that we can afford not to invest in AI,” Buck told GovernmentCIO Media in an interview. “I think the opportunities that AI offers the economy are so great, that by investing now, we can achieve that potential sooner.”
The same goes for opening data. The more open data federal agencies can provide, the better. If researchers can access the data and apply it to their systems, they can develop AI tools and solutions to real nationwide problems.
Where Should Agencies Adopt AI?
AI could be most helpful to government for cyber defense, health care, transportation, waste, fraud and abuse, and defense platform sustainment costs, according to Buck.
“I think AI at this point has the opportunity to revolutionize individual fields,” he said, “certainly health care.” Applying genomics data to new AI tools can advance personalized medicine, discover treatments and solve disease, for example.
AI can also help detect threatening patterns to reduce cyberattacks. It can be used to augment human detectors, flag potential threats and identify malicious or suspicious network patterns. For infrastructure and transportation, traffic data can be used to detect, predict and pretreat city potholes.
In terms of fraud, the credit and insurance industries already use AI to identify suspicious transactions. PayPal uses AI to detect credit fraud and has cut fraud rates in half, saving billions of dollars.
“There’s plenty of applications for that technology or that example across the federal government,” Buck told GovernmentCIO Media.
And where he sees the most successful deployment of AI in government so far is with simple pilot projects, like the Defense Department’s Project Maven. Introducing AI to do mundane tasks so humans can do the decision-making is an established AI application, and easier to adopt.
In fact, pilot programs are a great place to start. Identify certain areas where there is an obvious fit for AI, where it has already been done, and apply it to that particular problem, Buck advised.
“Understand how it can save money and how it can improve a service, and start building on that AI muscle so then you can learn how to apply it to the rest of the problems,” he said.
How to Handle Biased AI
Etzioni said it’s important not to rush into legislation when it comes to concerns about database systems generating unfairness. Rather than regulating AI as a field, he said to focus on AI applications.
Self-driving cars should be regulated because disjointed state and municipal laws will otherwise emerge, and AI toys with chips in them should be regulated differently for how they share data. But AI is a tool that will not take over or make its own decisions; it will work side by side with humans.
“Listen to what it says, but make our own decision,” Etzioni said. “Think of AI as augmented intelligence.”
Buck added that AI unfairness depends on data integrity; AI itself is not inherently biased. The government needs to organize its data in a structured, meaningful way by cleaning, labeling and identifying it. This could mean establishing an AI policy with a sister data policy to make sure data is consumable, and will require investing in skilled data scientists.
In the second hearing in March, the committee will hear from government agencies about how they are adopting AI into their operations and using the technology to be more efficient and secure. Hurd said he plans to invite representatives from the General Services Administration, NSF and the departments of Defense and Homeland Security.
The third hearing will be in April, where the committee will discuss the roles of the private and public sectors as AI matures.