I had just landed at Balad, Iraq, in 2008 when I watched an MQ-9 "Reaper" drone taxi by. The pilotless plane had missiles hanging from its wings and was headed out to target America's enemies.
The only thing that made the scene less like the Terminator movies was that I knew an Air Force officer in Nevada was controlling that particular killer robot.
But as more and more robots are enlisted to help America's military, controlling all of them has grown intensely complex. That's why the Defense Advanced Research Projects Agency is looking for ways to make life easier for the humans who will control future robots.
"A major benefit of increasingly advanced automation and artificial intelligence technology is decreased workload and greater safety for humans — whether it’s driving a vehicle, piloting an airplane, or patrolling a dangerous street in a deployed location with the aid of autonomous ground and airborne squad mates," the agency said in a news release. "But when there’s a technology glitch and machines don’t function as designed, human partners in human-machine teams may quickly become overwhelmed trying to understand their environment at a critical moment — especially when they’ve become accustomed to and reliant on the machine’s capabilities."
The Navy is testing drone ships and the Army is playing with drone tanks these days. The Space Force has satellites, which are really the first drones, and the Air Force is experimenting with pilotless fighters and bombers.
Each of those machines means fewer American troops in harm's way. It is how war will be fought in this century.
But none of those machines is good at the one thing the military must always plan for: things going horribly wrong.
"Current (artificial intelligence) systems tend to be brittle — they don’t handle unexpected situations well — and warfare is defined by the unexpected," said Bart Russell, from DARPA’s Defense Sciences Office, in a statement.
That's where humans come in – we're still better than computers at dealing with the unexpected.
But with so many robots in use, it will be difficult for humans to get up to speed when they need to fix wayward machines. DARPA scientists say we're already seeing problems in the human relationship with robots, citing examples including crashes of Boeing airliners that have been blamed on computer glitches.
"This reality played out in crashes of modern jetliners in recent years killing hundreds, because advanced automated systems failed in flight and pilots weren’t able to assess the situation and respond appropriately in time," DARPA explained.
DARPA is now looking for proposals on how to get a better handoff between machines and the humans who are supposed to control them.
And if researchers can come up with a fix, the Pentagon's work could eventually spread to the civilian world, helping make everything from self-driving cars to fully automated household helpers a reality.