What the helicopter was to the Vietnam war, the drone is becoming to the Afghan conflict: both a crucial weapon in the American armory and a symbol of technological might pitted against stubborn resistance. Pilotless aircraft can hit targets without placing a pilot in harm's way.
They have proved particularly useful for assassinations. On Feb. 17, for example, Sheikh Mansoor, an al-Qaida leader in the Pakistani district of North Waziristan, was killed by a drone-borne Hellfire missile. The United States wants to increase drone operations.
Assassinating "high-value targets," such as Mansoor, often involves a moral quandary. A certain amount of collateral damage has always been accepted in the rough-and-tumble of the battlefield, but direct attacks on civilian sites, even if they have been commandeered for military use, cause queasiness in thoughtful soldiers.
Errors are not only tragic, but also counterproductive. Sympathetic local politicians will be embarrassed, and previously neutral noncombatants may take the enemy's side. Moreover, the operators of drones, often on the other side of the world, are far removed from the sight, sound and smell of the battlefield. They may make decisions to attack that a commander on the ground might not, treating warfare as a video game.
Ronald Arkin of the Georgia Institute of Technology's School of Interactive Computing has a suggestion that might ease some of these concerns. He proposes involving the drone itself — or, rather, the software that is used to operate it — in the decision to attack. In effect, he plans to give the machine a conscience.
The software conscience that Arkin and his colleagues have developed is called the Ethical Architecture. Its judgment may be better than a human's because it operates so fast and knows so much. And — like a human but unlike most machines — it can learn.
The drone would initially be programmed to understand the effects of the blast of the weapon it is armed with. It would be linked to both the Global Positioning System and the Pentagon's Global Information Grid, a vast database that contains, among many other things, the locations of buildings in military theaters and what is known about their current use.
After each strike the drone would be updated with information about the actual destruction caused. It would note any damage to nearby buildings and would subsequently receive information from other sources, such as soldiers in the area, fixed cameras on the ground and other aircraft. Using this information, it could compare the level of destruction it expected with what actually happened. If it did more damage than expected — for example, if a nearby cemetery or mosque was harmed by an attack on a suspected terrorist safe house — then it could use this information to restrict its choice of weapon in the future. It could also pass the information to other drones.
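The feedback loop described above — compare the damage a weapon was predicted to cause with what was actually observed, then restrict future weapon choices accordingly — can be illustrated with a short sketch. This is purely an illustration of the idea, not Arkin's actual software; every name here (`Strike`, `EthicalGovernor` and their fields) is invented for the example.

```python
# Illustrative sketch of the damage-feedback loop described above.
# All names are invented for this example; they are not taken from
# Arkin's Ethical Architecture.

from dataclasses import dataclass

@dataclass
class Strike:
    weapon: str
    predicted_radius_m: float   # blast radius the drone expected
    observed_radius_m: float    # radius reported after the strike
    protected_site_hit: bool    # e.g. a nearby mosque or cemetery was damaged

class EthicalGovernor:
    """Tracks predicted vs. actual damage and restricts future weapon choices."""

    def __init__(self):
        self.restricted_weapons: set[str] = set()
        self.history: list[Strike] = []

    def record(self, strike: Strike) -> None:
        self.history.append(strike)
        # If the weapon did more damage than predicted, or harmed a
        # protected site, bar it from similar attacks in the future.
        overshoot = strike.observed_radius_m > strike.predicted_radius_m
        if overshoot or strike.protected_site_hit:
            self.restricted_weapons.add(strike.weapon)

    def permitted(self, weapon: str) -> bool:
        return weapon not in self.restricted_weapons

gov = EthicalGovernor()
gov.record(Strike("hellfire", predicted_radius_m=20.0,
                  observed_radius_m=35.0, protected_site_hit=True))
print(gov.permitted("hellfire"))  # False: restricted after the overshoot
```

In the article's scheme the restriction list would also be shared with other drones, which in this sketch would amount to broadcasting `restricted_weapons` to the rest of the fleet.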
No commander is going to give a machine a veto, of course, so the Ethical Architecture's decisions could be overridden. That, however, would take two humans — both the drone's operator and his commanding officer. That might not save a target from destruction, but it would at least allow a pause for reflection before the "fire" button is pressed.
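The two-person override rule is simple enough to state as code: the machine's refusal stands unless both humans concur. The sketch below is an assumption about how such a rule might be modeled, not a description of the actual system.

```python
# Illustrative sketch of the two-person override described above: the
# Ethical Architecture's refusal can be overridden only when BOTH the
# drone's operator and the commanding officer approve. Names invented.

def override_allowed(operator_approves: bool, commander_approves: bool) -> bool:
    """One human's approval is never enough to override the machine."""
    return operator_approves and commander_approves

print(override_allowed(True, False))  # False: one signature is not enough
print(override_allowed(True, True))   # True: both humans concur
```

The point of requiring conjunction rather than either signal alone is exactly the "pause for reflection" the article describes: two independent decisions must be made before the strike proceeds.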