On Sept. 11, 2001, the U.S. military possessed just a handful of robot aircraft. Today, the Air Force alone operates more than 50 drone “orbits,” each composed of four Predator or Reaper aircraft plus their ground-based control systems and human operators. Smaller Navy, Marine and Army drones number in the thousands.
Since they do not need to carry the oxygen, avionics and other equipment a pilot requires, drones can fly longer, for less money, than most manned planes. And that’s not all. “Unmanned systems are machines that operate under a ‘sense,’ ‘think,’ ‘act’ paradigm,” wrote Peter Singer, a Brookings Institution analyst and author of Wired for War. In other words, drones can, in theory, layer their own intelligence and initiative onto that of their human masters.
Unmanned Aerial Systems are arguably the defining weapon of the post-9/11 era of warfare — and have enjoyed investment commensurate with their status: no less than $25 billion annually, year after year, with no end in sight. The coming decade could see even more profound drone development as technology and acceptance reach critical mass. “Automation is key to increasing effects, while potentially reducing cost, forward footprint and risk,” then-Col. Eric Mathewson wrote in the Air Force’s 2009 UAS Flight Plan.
But there’s an artificial limit on this potential. It’s not technology or even funding that really constrains robotic warplane development. Rather, it’s the willingness of human beings to surrender increasing degrees of control to mobile, lethal, thinking machines whose autonomy may mean they fall outside existing law. Trust and accountability are holding back our robots.
Autonomous UAS and human warriors alike make mistakes. Missions fail. Innocents get hurt or die. When a person screws up, he’s tried and punished if guilty. Justice is served. You can’t, however, take a robot to court.