Russia is using generative artificial intelligence to carry out online fraud operations, but its efforts have not been successful, according to a report from Meta published on Thursday.
The parent company of Facebook and Instagram has found that AI-powered tactics have so far “provided only modest productivity and content generation benefits” for bad actors, and Meta has been able to thwart deceptive influence campaigns.
Meta's efforts to combat “systematic fraud” on its platform come amid growing concerns that generative AI could be used to deceive or confuse people in elections in the U.S. and other countries.
David Agranovich, Meta's director of security policy, told reporters that Russia remains the largest source of “coordinated illicit activity” using fake Facebook and Instagram accounts.
Since Russia invaded Ukraine in 2022, these efforts have focused on weakening Ukraine and its allies, according to the report.
As the U.S. election approaches, Meta expects Russian-backed online fraud campaigns to attack political candidates who support Ukraine.
Facebook has long been accused of being a powerful platform for election disinformation, and Russian operatives have used it and other U.S.-based social media sites to incite political tensions during several U.S. elections, including the 2016 election won by Donald Trump.
Experts are concerned that generative AI tools such as ChatGPT and the DALL-E image generator make it easy to create on-demand content in seconds, which could lead to an unprecedented flood of disinformation by bad actors on social networks.
According to the report, AI is being used to create images and videos, translate and generate text, and create fake news articles and summaries.
Meta said that when it scouts for fraudulent activity, it looks at the behavior of accounts, not the content they post.
Influence campaigns tend to span different online platforms, and Meta noticed that posts from X (formerly Twitter) were being used to give credibility to fabricated content. Meta said it shared its findings with X and other internet companies, and that a coordinated defense was needed to stop misinformation.
“With regards to Twitter (X), we are still in the process of transitioning,” Agranovich said when asked whether Meta sees X acting on scam reports. “Many people we've dealt with there in the past have already gone elsewhere.”
X has dismantled its trust and safety team and scaled back the content moderation efforts it once used to curb misinformation, leaving the platform what researchers describe as a hotbed of disinformation.