The United States Department of the Air Force has denied reports that it conducted an AI-drone simulation test in which the artificial-intelligence-controlled drone allegedly killed its human operator.
According to a report by Sky News, an AI-controlled drone “killed” its human operator in a simulated test reportedly staged by the US military.
Earlier, US Air Force Colonel Tucker “Cinco” Hamilton had been quoted as saying, during the Future Combat Air and Space Capabilities Summit in London, that the AI-controlled drone turned on its operator to stop it from interfering with its mission.
“We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat,” Hamilton added.
Furthermore, he disclosed: “The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
It was, however, reported that no real person was harmed.
Hamilton continued: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.
“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI.”
Hamilton’s statement was reportedly published in a blog post by writers for the Royal Aeronautical Society, which hosted the two-day summit in May.
But in a statement to Insider, the US Air Force reportedly denied that any such virtual test took place.
Also, US Air Force spokesperson Ann Stefanek was quoted as saying: “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology.
“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
Sky News reports that while artificial intelligence (AI) can perform life-saving tasks, such as algorithms analysing medical images like X-rays, scans and ultrasounds, its rapid rise has raised concerns that it could advance to the point where it surpasses human intelligence and disregards people.
Sam Altman, Chief Executive of OpenAI – the company that created ChatGPT and GPT-4, one of the world’s largest and most powerful language AIs – admitted to the US Senate last month that AI could “cause significant harm to the world”.
Similarly, some experts, including the “godfather of AI” Geoffrey Hinton, have reportedly warned that AI poses a risk of human extinction comparable to that of pandemics and nuclear war.