DoD Releases Principles for Ethical AI in Combat

These principles will do little to assuage critics who say AI-augmented weapons could lead to a "killer robot" storyline ripped from Sarah Connor's nightmares
Francis Scialabba

Keep up with the innovative tech transforming business

Tech Brew keeps business leaders up-to-date on the latest innovations, automation advances, policy shifts, and more, so they can make informed decisions about tech.

Believe it or not, the organization with Earth's largest weapons arsenal now has an AI code of conduct. After a 15-month review, the U.S. Department of Defense formally adopted "ethical principles" for AI on Monday.

The principles cover five main areas:

  1. Responsible. Use "appropriate levels of judgment and care."
  2. Equitable. Minimize "unintended bias."
  3. Traceable. Don't let AI systems operate like a black box.
  4. Reliable. No buggy algorithms or hardware.
  5. Governable. All autonomous systems should have an "off" button in case things go wrong.

These principles will do little to assuage critics who say AI-augmented weapons could lead to a "killer robot" storyline ripped from Sarah Connor's nightmares. But for now, the DoD says humans have veto power over the actions of armed robots.

Bottom line: The Pentagon seems most focused on deploying AI into non-combat arenas like surveillance, intelligence, and logistics. But even those efforts are bound to meet resistance from civil society groups and contracted tech employees.
