As artificial intelligence (AI) becomes increasingly powerful and is deployed in warfare, governments, technology companies and international organizations face an urgent need to ensure it is safe. A common thread in most agreements on AI safety is the need for human oversight of the technology.
In theory, humans could act as a safeguard against misuse and potential hallucinations (instances where an AI generates false information). This could involve, for example, a human reviewing the content the technology generates (its output).
But as the growing body of research and real-world examples of military uses of AI show, the idea of humans effectively overseeing computer systems poses inherent challenges.
Previous efforts to develop AI regulations already include considerable language encouraging human oversight and involvement. For example, the EU's AI Act requires that high-risk AI systems already in use that automatically identify people using biometric technology, such as retinal scanners, be independently verified and reviewed by at least two humans with the necessary competence, training and authority.
In the military sphere, the UK government acknowledged the importance of human oversight in its February 2024 response to a parliamentary report on AI in weapons systems. The report stressed achieving “meaningful human control” through the provision of appropriate training for humans. It also highlighted the concept of human accountability, stating that decision-making for actions by, for example, armed aerial drones cannot be transferred to machines.
This principle has largely held so far: Military drones are currently controlled by human pilots and their chains of command, who are responsible for the actions of the armed aircraft. But AI has the potential to make drones and the computer systems they use much more intelligent and autonomous.
This includes target acquisition systems, in which AI-controlled software selects and locks on to enemy combatants, with humans approving weapons strikes against them.
While such technology is not yet believed to be in widespread use, the war in Gaza appears to have shown that it is already being deployed. The Israeli-Palestinian publication +972 Magazine has reported on a system called “Lavender” being used by Israel.
This is reportedly an AI-based target recommendation system combined with other automated systems that track the geographic location of identified targets.
Target Acquisition
In 2017, the US military conceived a project known as Maven, which aimed to integrate AI into weapons systems. Over the years, it has evolved into a target acquisition system that has reportedly significantly improved the efficiency of the target recommendation process for weapons platforms.
In line with recommendations from academic research on AI ethics, a human is involved as a key part of the decision-making loop, monitoring the output of the target acquisition mechanism.
Still, psychological research into how humans interact with computers raises important questions. In a 2006 peer-reviewed paper, US academic Mary Cummings outlined how humans can come to place excessive trust in machine systems and their conclusions, a phenomenon known as automation bias.
If operators are unlikely to question a machine’s conclusions, the human role in checking automated decision-making may be hindered.
In another study, published in 1992, researchers Batya Friedman and Peter Kahn argued that humans' sense of moral agency can be diminished when they interact with computer systems, to the point that they consider themselves not responsible for the resulting outcomes. Indeed, the paper explains, humans may even begin to attribute a sense of agency to the computer systems themselves.
Given these findings, it is prudent to consider whether placing excessive trust in computer systems, along with the potential erosion of a person's sense of moral agency, might also affect target acquisition systems. After all, a margin of error that looks statistically small on paper takes on frightening dimensions when we consider the potential impact on human lives.
Various resolutions, treaties and laws on AI help ensure that humans will act as an important check on the technology. But it is important to ask whether, after a long period in that role, a disconnect could occur in which human operators begin to see real people as items on a screen.
Mark Tsagas is Senior Lecturer in Law, Cybercrime and AI Ethics at the University of East London.
This article is republished from The Conversation under a Creative Commons license. Read the original article.