Love Your Robot? You're Not Alone.

The interaction between humans and robots has been a common theme in science fiction and popular culture, but robotic machines don’t have to be humanoid or even have much personality for people to develop a relationship with them.

Never mind Data in “Star Trek” or BB-8 in “Star Wars”: a Georgia Tech study released back in 2010 found people had become emotionally attached to their Roomba robotic vacuum cleaners, giving them names and assigning them genders. A 2013 study at the University of Washington found that members of the Army’s Explosive Ordnance Disposal units had developed bonds with their bomb disposal robots; although those feelings hadn’t affected performance, the study raised the question of whether they could eventually compromise decision-making.

When robots of one kind or another develop personalities, through machine learning, natural language processing and other artificial intelligence technologies, human operators can get to “know” them, even like them. But the bottom line for government organizations looking to make increasing use of human-machine teaming is whether humans can trust them.

Partnering with Machines

For government organizations, human-machine teaming can take a variety of forms.

NASA, for instance, is working with San Jose State University on human-autonomy teaming (HAT), an alternate term for the same idea, in aviation. Human-machine teaming has a central role in the Defense Department’s Third Offset Strategy, in areas ranging from cybersecurity and manned/unmanned aircraft teams to new-tech weapons such as directed energy lasers, electromagnetic rail guns and hypersonic drones. Teaming also is used in medical fields such as diagnosis, treatment and mental health, as well as in other work such as fraud detection.

Artificial intelligence, machine learning, deep learning and cognitive systems bring a lot to the table, including speed and the ability to conduct analysis and detect patterns in massive volumes of data beyond the capacity of a human analyst. But it’s also been established that AI systems work best when paired with human operators, hence the focus on the ability of both parties to understand and trust each other.
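
To make that pairing concrete, one common pattern is to let the machine act on high-confidence cases and route uncertain ones to a human analyst. The Python sketch below is purely illustrative and not drawn from any program mentioned here; the classify stub, the threshold and the record fields are all invented.

```python
# Minimal human-in-the-loop triage sketch (illustrative only).
# The classifier stub, threshold and record fields are hypothetical.

def classify(record):
    """Stand-in for a trained model: returns (label, confidence)."""
    score = record.get("anomaly_score", 0.0)
    label = "flag" if score > 0.5 else "clear"
    confidence = abs(score - 0.5) * 2  # crude proxy for model certainty
    return label, confidence

def triage(records, confidence_threshold=0.8):
    auto_decisions, human_queue = [], []
    for record in records:
        label, confidence = classify(record)
        if confidence >= confidence_threshold:
            auto_decisions.append((record["id"], label))           # machine decides
        else:
            human_queue.append((record["id"], label, confidence))  # defer to analyst
    return auto_decisions, human_queue

records = [{"id": 1, "anomaly_score": 0.95}, {"id": 2, "anomaly_score": 0.55}]
decided, deferred = triage(records)
print("machine handled:", decided)
print("sent to analyst:", deferred)
```

The point of the pattern is that the threshold sets the division of labor between machine and operator; where it belongs is a policy choice the sketch cannot answer.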

The Pentagon’s research arm is among the organizations exploring the fairly new field of Explainable AI, looking for ways to get advanced computing systems to do what they can’t do now: explain in human terms why and how they reached a certain conclusion based on the available evidence. Researchers at the Defense Advanced Research Projects Agency hope a clear conversation between human and machine will also help operators understand how a machine might behave in the future.
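
For a rough sense of what such an explanation could look like, here is a toy Python sketch, not DARPA’s approach, in which a simple linear scorer reports which factors pushed it toward its conclusion. The weights, feature names and threshold are all made up.

```python
# Toy "explainable" scorer: a linear rule that reports per-feature
# contributions alongside its decision. Weights, features and the
# threshold are invented for illustration.

WEIGHTS = {"signal_strength": 1.5, "route_deviation": 2.0, "time_of_day": -0.5}
THRESHOLD = 2.0

def explainable_decision(features):
    contributions = {name: weight * features.get(name, 0.0)
                     for name, weight in WEIGHTS.items()}
    score = sum(contributions.values())
    decision = "alert" if score > THRESHOLD else "no alert"
    # Rank factors by how strongly they pushed the score.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    explanation = [f"{name}: {value:+.2f}" for name, value in ranked]
    return decision, score, explanation

decision, score, explanation = explainable_decision(
    {"signal_strength": 0.9, "route_deviation": 1.2, "time_of_day": 0.3})
print(decision, round(score, 2))
print("because:", "; ".join(explanation))
```

Real explainability research targets far more opaque models than a linear rule, but the goal is the same: a conclusion accompanied by the evidence that drove it.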

Trust Is a Two-Way Street

The need for trust also works both ways. A research presentation by NASA’s Ames Research Center notes that while transparency on the part of the machine is necessary, the machine also needs to understand the human operator, because if it “does not know what the human is trying to do, it is difficult for the system to know to engage in ways that are useful.”

The Air Force, for one, is studying the dynamics of the trust-collaboration process with regard to pilots; intelligence, surveillance and reconnaissance operators and analysts; maintenance domains; and advanced human-robot teaming concepts. The service last year gave SRA International a $7.5 million contract for the work, which involves researching the “socio-emotional elements of interpersonal team/trust dynamics” to better understand human-robot teams, according to the Air Force.

Part of the equation is recognizing that machines are, so to speak, human too: they can make mistakes.

“Lack of trust is often cited as one of the critical barriers to more widespread use of AI and autonomous systems,” Julie Shah, an assistant professor at MIT and a co-leader of a research project with MIT and the Singapore University of Technology and Design, told MIT News. “These studies make it clear that increasing a user’s trust in the system is not the right goal. Instead we need new approaches that help people to appropriately calibrate their trust in the system, especially considering these systems will always be imperfect.”
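
One illustrative way to support that kind of calibration is to show operators how the system’s stated confidence compares with its actual track record. The Python sketch below is a hypothetical example with fabricated data, not a method from the MIT study.

```python
# Illustrative confidence-calibration check: bucket past predictions by
# the system's stated confidence and compare with observed accuracy.
# The history data are fabricated for the example.

from collections import defaultdict

# (stated confidence, was the prediction correct?)
history = [(0.95, True), (0.9, True), (0.9, False), (0.7, True),
           (0.65, False), (0.6, False), (0.55, True), (0.5, False)]

buckets = defaultdict(list)
for confidence, correct in history:
    buckets[round(confidence, 1)].append(correct)

for level in sorted(buckets):
    outcomes = buckets[level]
    accuracy = sum(outcomes) / len(outcomes)
    gap = level - accuracy  # positive gap suggests overconfidence
    print(f"stated {level:.1f} -> observed accuracy {accuracy:.2f} (gap {gap:+.2f})")
```

In a deployed system that comparison would come from operational logs, but the idea is the same: surface where the system’s self-assessment can, and cannot, be taken at face value.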