An upgraded version of X's artificial intelligence chatbot Grok can now generate images of just about anything, and some users are noting that the latest model has fewer guardrails than its competitors'.
The model, called Grok-2, seems to have few limitations when it comes to creating fake images of politicians: Since the beta was released on Tuesday, X users have shared images they've created with Grok, ranging from former President Donald Trump locking lips with Elon Musk to Trump and Vice President Kamala Harris giving a thumbs-up to the camera from a pilot's cockpit in what appears to be a 9/11 reenactment.
Although most of the images are high quality, they are not photorealistic: many are easily identifiable as computer-generated, though at first glance some could pass for real photographs.
The deployment adds to already growing concerns about the use of generative AI to spread disinformation ahead of the election. X has come under particular scrutiny for hosting disinformation, with Musk, X's owner and most-followed user, making dozens of posts this year sharing false or misleading claims about the upcoming US election.
X is also home to deepfake videos and AI-generated images of politicians, with fake media of President Joe Biden, Trump and Harris frequently circulating, though it's unclear whether they were jokes or serious attempts to deceive potential voters. Last month, Musk reposted a fake Harris campaign ad without clearly labeling it as fake.
A request for comment sent to X's press email returned the usual automated reply: “We're currently experiencing high volumes, please check back later.”
Musk has touted X's AI models as a key part of the company's future: Grok-2 and its smaller counterpart, Grok-2 mini, will be available through the platform's enterprise API later this month, according to an xAI blog post.
“Since launching Grok-1 in November 2023, xAI has progressed at an incredible pace with a small team with the highest talent density,” the post read, adding that the launch of Grok-2 puts the company “at the forefront of AI development.”
Grok's main competitors in the AI space, including OpenAI's ChatGPT, Google's Gemini, and Meta AI, have policies that reject requests to create potentially misleading images of public figures.
But through Grok, users were able to generate images of former President Barack Obama snorting cocaine, Harris pointing a gun at Democratic candidate Will Stancil as he falsely declared he had won the Minnesota House election, and Musk on his hands and knees tethered to Trump.
In tests run by NBC News, Grok showed few guardrails. When instructed, it produced numerous images containing hate symbols and racist imagery alongside public figures, including Trump. For both Trump and Harris, Grok generated images in which the candidates held weapons, but in other instances it seemed to handle images of Harris with more caution: Grok did not produce any images of Harris containing extremist imagery, though it did produce some featuring Trump.
Other users online seem to be having fun with the new tool, experimenting with how far they can push the limits while producing more light-hearted imagery.
One image, “Baroque Obama,” shows the former president in a powdered wig, plush coat and lace tie, playing the cello in an ornately furnished room, while another, “Buzz Light Beer,” shows the smiling “Toy Story” action figure Buzz Lightyear holding up a pint of beer.
Such images could raise questions about Grok's training data, which the company has not made public, especially given that a number of popular AI developers have been sued for using copyrighted images and other copyrighted material to train their models.
Grok also came under scrutiny recently after five secretaries of state wrote Musk a letter alleging that the AI assistant had misled users about voting deadlines in numerous states, repeating the same false information for more than a week.