Flo Crivello was monitoring the output of the AI assistant his company Lindy makes when he noticed something odd: a new customer had asked their Lindy AI assistant for a video tutorial to better understand how to use the platform, and Lindy had responded with one. That's when Crivello realized something was wrong: there was no video tutorial.
“We saw this and we were like, 'Okay, what kind of video did they send us?' And we were like, 'Oh no, this is a problem,'” Crivello told TechCrunch.
The video the AI sent to the customer was the music video for Rick Astley's 1987 dance-pop hit “Never Gonna Give You Up.” In other words, the customer had been rickrolled. By an AI.
A customer contacted us asking for a video tutorial.
It turned out a Lindy was handling the request, and I was happy to see that she had sent them a video.
But then I remembered there were no video tutorials and realized Lindy was literally rickrolling our customers. pic.twitter.com/zsvGp4NsGz
— Flo Crivello (@Altimor) August 19, 2024
The rickroll is a bait-and-switch meme that has been around for more than 15 years. In the incident that popularized it, Rockstar Games released the trailer for the highly hyped Grand Theft Auto IV on its website in 2007, but the site crashed under the sheer volume of traffic. Some people shared working links to the trailer, while others downloaded the video and reposted it on other sites, such as YouTube. One 4chan user, however, played a prank by sharing a link that actually led to Rick Astley's “Never Gonna Give You Up.” Seventeen years later, people still prank their friends by sharing Astley's song at inopportune times, and the music video currently has over 1.5 billion views on YouTube.
This internet prank has become so widespread that, inevitably, large language models like ChatGPT, which powers Lindy, picked it up.
“These models try to predict the most likely next text sequence,” Crivello said. “So it starts out like, 'Oh, I'm sending you a video!' So what's the most likely thing after that? YouTube.com. And what's the most likely thing after that?”
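To make that failure mode concrete, here is a minimal sketch of next-token prediction, using the small open GPT-2 model purely as a stand-in (an assumption on our part; Lindy's actual model and settings aren't public). Given a prompt that has already promised a video link, you can inspect which continuations the model ranks as most probable:

```python
# A minimal sketch of next-token prediction, with GPT-2 standing in for
# whatever model actually powers Lindy (an assumption, for illustration).
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Sure! Here's a video tutorial: https://www.youtube.com/watch?v="
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the very next token

# Print the five continuations the model considers most likely.
probs = logits.softmax(dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p:.3f}")
```

This is the dynamic Crivello describes: once the model has committed to a YouTube URL, the likeliest completions are whatever video IDs appeared most often in its training data, and the rickroll link is plausibly among the most-shared URLs on the web.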
Crivello told TechCrunch that out of millions of responses, Lindy only rickrolled customers twice, but the error still had to be fixed.
“What's really cool about this new AI era is that to patch this, all we had to do was add one line to the system prompt, which is a prompt that's included with every Lindy, and that line essentially says: don't rickroll people,” he said.
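For illustration, here is a hypothetical sketch of what a one-line system-prompt guardrail like that could look like with the OpenAI chat API; the prompt wording, model name, and structure are assumptions, not Lindy's actual code:

```python
# A hypothetical sketch of a system-prompt guardrail like the one Crivello
# describes. The prompt wording and model choice are assumptions.
# Requires: pip install openai (with OPENAI_API_KEY set in the environment)
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are Lindy, a helpful AI assistant. "
    "Only share links you are certain are real. "
    "Never rickroll anyone."  # the "patch": one extra line of instruction
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Is there a video tutorial for the platform?"},
    ],
)
print(response.choices[0].message.content)
```

Because the system prompt rides along with every conversation, a single added sentence is enough to steer the model away from the behavior everywhere at once.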
Lindy's fiasco raises the question of just how much internet culture feeds into AI models, since these models are often trained on broad swaths of the web. Lindy's accidental rickroll was particularly notable in that the AI organically recreated this very specific internet behavior, producing the hallucination. But traces of internet humor seep into AI in other ways, as Google learned the hard way when it trained its AI on data from Reddit. Reddit is a hub for user-generated content, much of it satirical, and Google's AI ended up telling users that adding glue would help cheese stick better to pizza crust.
“In the Google case, it wasn't exactly a hallucination,” Crivello said. “It was based on content. It was just bad content.”
As LLMs rapidly improve, Crivello believes we won't see as many blunders like this in the future. Plus, he said, it's easier than ever to fix them. In Lindy's early days, if one of its AI assistants couldn't complete a task a user had asked of it, the AI would say it was working on it but never provide any deliverables (which, oddly enough, sounds pretty human).
“This was a really hard problem to fix,” Crivello said, “but when GPT-4 was released, we added a prompt that said, 'If the user asks you to do something you can't do, tell them you can't,' and that solved the problem.”
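In system-prompt terms, that fix is again a single added instruction; in the hypothetical sketch above, it might look like this (paraphrased, since the exact wording isn't public):

```python
# Appending the capability guardrail Crivello describes (paraphrased)
# to the hypothetical SYSTEM_PROMPT from the earlier sketch.
SYSTEM_PROMPT += (
    " If the user asks you to do something you can't do, "
    "tell them you can't, rather than claiming to be working on it."
)
```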
For now, the good news is that customers who get rickrolled might not even realize it.
“We don't even know if the customer saw it,” he said. “We followed up immediately with, 'Oh, here's the correct link to the video,' but the customer never said anything about the first link.”