Recent advances in artificial intelligence and deep learning are being applied to expand the reach and power of face recognition systems, allowing law enforcement agencies, for instance, to more readily identify known criminals or terrorists. But some researchers warn that AI-enabled face recognition systems also raise privacy concerns.
The use of advanced neural networks in deep learning systems and other AI technologies is taking face recognition to new levels. A train station in Beijing is using face recognition to confirm passengers’ tickets. The United Kingdom likewise is testing face recognition as a way to replace tickets on public transportation, and a Chinese firm is using it to verify the users of financial apps. Several companies are also looking to use small, low-power systems to add AI-powered face recognition to wearable police cameras, with the potential for performing on-the-spot background checks. When it comes to consumer devices, Samsung and Apple are among the companies supporting face recognition for unlocking their latest smartphones.
How Far Can It Go?
In September, a pair of Stanford University researchers caused a stir when they reported that an AI system could determine a person’s sexual orientation with a high degree of accuracy, an approach that could potentially be applied to determining, say, a person’s political leanings, IQ or terroristic inclinations. While some initial headlines played up the sensational (“AI can tell if you’re straight or gay,” etc.), subsequent reports contended the results were more modest. Nevertheless, the researchers’ tests did show how AI systems have employed advanced deep-learning algorithms in the key area of pattern recognition, which can be applied in a host of areas.
A separate Stanford study, for instance, found machine-learning algorithms examining images from Petri dishes were better than doctors at detecting signs of lung cancer. Another line of research explores the possibilities of using deep-learning algorithms to identify people trying to conceal their faces with hats, sunglasses, scarves or even fake beards.
Despite recent advances, AI and deep learning still have limits in face recognition.
Just this month, within a week of Apple’s release of the iPhone X, a group of hackers reported using a 3-D-printed mask to fool the phone’s face recognition and unlock it. And recent research by Georgetown University’s law school has found flaws in face recognition systems that could result in innocent people being flagged by law enforcement.
Government organizations already use face recognition for such things as building access and combing through photo databases of criminals — as well as of many other Americans, whose driver’s license and other photos have been collected. U.S. Customs and Border Protection also has been testing face recognition at airports.
And although issues of privacy and transparency need to be addressed — and a certain amount of public backlash is inevitable — the ongoing breakthroughs in AI and deep learning also hold promise for agencies that rely on face and pattern recognition applications, whether in medical, transportation, security or other areas.