Using artificial intelligence in military systems is highly risky, argues journalist
The use of artificial intelligence in military applications is increasing. Kelsey Atherton, military technology journalist at Wars of Future Past, says this carries immense risk because autonomous systems introduce the possibility of entirely new kinds of errors in war.
- Atherton said AI-driven weapons systems interpret input through their training data, and so may act on information in ways humans would not expect.
- He cited an incident during the 2003 Iraq War in which American troops, acting on the recommendation of their automated Patriot missile system, accidentally shot down a British fighter jet.
- A challenge for the Defense Department is sourcing accurate data to train these systems, Atherton said.
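The failure mode Atherton describes can be sketched in miniature. The toy classifier below (entirely hypothetical, not based on any real weapons system) is "trained" on two clusters of points and then shown an input unlike anything it has seen; because it can only map inputs onto the categories in its training data, it confidently forces the unfamiliar input into one of them, with no way to say "I don't know."

```python
# Toy illustration of a trained model acting on out-of-distribution input.
# All data and labels here are hypothetical, invented for this sketch.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(x, centroids):
    """Assign x to the label of the nearest centroid (squared Euclidean distance)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical training data: two well-separated clusters.
training = {
    "friendly": [(0.0, 0.0), (1.0, 0.5), (0.5, 1.0)],
    "hostile":  [(10.0, 10.0), (11.0, 9.5), (10.5, 11.0)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

# An input nothing like either cluster is still forced into a known label --
# the model has no concept of "this is unlike anything I was trained on."
print(classify((100.0, -3.0), centroids))  # labeled "hostile" anyway
```

The point of the sketch is that the unfamiliar input is not rejected; the system returns its least-wrong label with the same confidence as any other answer, which is the kind of unexpected behavior Atherton warns about when such outputs feed weapons decisions.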
PITTSBURGH, Pa. (Aug. 3, 2018) Rear Adm. David Hahn, chief of naval research, tours the National Robotics Engineering Center (NREC) during a visit to Carnegie Mellon University (CMU) in Pittsburgh, Pa. Hahn was at CMU to attend the Artificial Intelligence (AI) & Autonomy for Humanitarian Assistance and Disaster Relief (HADR) workshop, co-hosted by the Office of Naval Research and CMU. (U.S. Navy photo by John F. Williams/Released)
Senior leaders from the Marine Corps Recruiting Command and Manpower and Reserve Affairs visited Johns Hopkins APL, September 22, 2021. The Marines received presentations from APL staff members on how machine learning and artificial intelligence can support the Marine Corps' force design efforts for recruiting and retaining talented Marines. (Photo from Department of Defense)