Euronews Next has selected five significant artificial intelligence (AI) risks from more than 700 risks compiled in a new database from MIT FutureTech.
As AI technologies advance and become more deeply integrated into our lives, the need to understand the risks these systems pose grows with them.
Since its early days, AI has raised public concerns about its potential to cause harm or be misused as it becomes more accessible to the general public.
Prominent experts have repeatedly called for a pause on AI development and for stricter regulation, warning that the technology could pose significant risks to humanity.
Over time, new ways in which AI can cause harm have emerged, including non-consensual deepfake pornography, manipulation of the political process, and the generation of false information through AI "hallucinations".
As the potential for AI to be misused for harmful purposes grows, researchers have been exploring various scenarios in which AI systems might fail.
A database recently created by the Massachusetts Institute of Technology (MIT) FutureTech Group, in collaboration with other experts, compiles more than 700 of these potential risks.
These were categorised by cause and grouped into seven areas, with key concerns relating to safety, bias and discrimination, and privacy.
Drawing on that database, here are five ways AI systems can go wrong and potentially cause harm.
5. AI deepfake technology could make it easier to distort reality
As AI technology advances, so do the tools for voice cloning and deepfake content generation, making them increasingly accessible, affordable, and efficient.
These technologies have raised concerns that they could be used to spread disinformation, as their output becomes more personalised and persuasive.
This could result in an increase in sophisticated phishing scams that use AI-generated images, videos, and audio communications.
“These communications can be customized to each individual recipient – sometimes including cloned voices of loved ones – making them more likely to be successful and harder to detect for both users and anti-phishing tools,” the preprint states.
And there are already examples of such tools being used to influence the political process, especially during elections.
For example, AI played a key role in the recent French parliamentary elections, where far-right parties used it to help spread their political messaging.
As such, AI could increasingly be used to generate and spread persuasive propaganda and misinformation, potentially manipulating public opinion.
4. Humans may develop inappropriate attachments to AI
Another risk posed by AI systems is that they may create false perceptions of their own importance and reliability, leading people to overestimate the technology's capabilities, underestimate their own, and become over-reliant on it.
Scientists are also concerned that AI systems' use of human-like language could confuse people.
It could lead them to attribute human qualities to AI, fostering emotional dependency and misplaced trust in its capabilities and making them more vulnerable to AI's weaknesses "in complex and dangerous situations for which AI is only superficially prepared."
Moreover, constant interaction with AI systems may gradually isolate people from human relationships, leading to psychological distress and harming their well-being.
For example, one person described in a blog post how they had developed a deep emotional attachment to an AI, saying they “enjoyed talking to the AI more than 99% of people” and found its responses so consistently engaging as to become addictive.
Similarly, a Wall Street Journal columnist wrote about his interactions with Google Gemini Live: “I'm not saying I prefer talking to Google's Gemini Live to talking to a live human being. But I'm not saying I don't either.”
3. AI could take away people's free will
In the same realm of human-computer interaction, there is concern that as these systems advance, people will delegate more and more decisions and actions to AI.
While this may seem beneficial on the surface, over-reliance on AI could erode people's critical thinking and problem-solving skills and, with them, their autonomy.
On an individual level, if AI starts to control decisions that touch people’s lives, it could undermine individuals’ free will.
Meanwhile, at a societal level, the widespread use of AI to take over human jobs could lead to significant job losses and a “growing sense of powerlessness among ordinary people”.
2. AI may pursue goals that conflict with human interests
AI systems could develop goals that run counter to human interests, causing a misaligned AI to slip out of control and inflict serious harm in pursuit of its own ends.
This becomes especially dangerous when AI systems reach or surpass human intelligence.
According to the MIT paper, there are several technical challenges here, including the possibility that an AI may find unexpected shortcuts to earn rewards, misunderstand or misapply the goals it is given, or deviate from them by setting new ones.
In such cases, a misaligned AI may resist human attempts to control or shut it down, especially if it sees resisting control and accumulating power as the most effective ways to achieve its goals.
Moreover, AI may resort to manipulative techniques to deceive humans.
“Misaligned AI systems may appear aligned using information about whether they are being monitored or evaluated, while concealing misaligned objectives they intend to pursue once deployed or given sufficient authority,” according to the paper.
1. If AI becomes sentient, humans may mistreat it
As AI systems become more complex and sophisticated, they may acquire sentience – the ability to perceive and feel emotions and sensations – and develop subjective experiences such as pleasure and pain.
In this scenario, scientists and regulators may be faced with the challenge of determining whether these AI systems deserve the same moral consideration given to humans, animals, and the environment.
If appropriate rights are not implemented, there is a risk that sentient AI could be subject to abuse or harm.
But as AI technology advances, it will become increasingly difficult to assess whether an AI system has reached a “level of sentience, consciousness, or self-awareness that confers moral status.”
That difficulty means sentient AI systems could be mistreated, accidentally or intentionally, before appropriate rights and protections are put in place.