“Hopefully I won't have to cover Elon Musk for a while,” I thought to myself after sending out TechScape to our readers last week. Then I got a message from my news editor: “Could you please keep an eye on Elon Musk's Twitter feed this week?”
I ended up perusing the writings of the world’s most powerful posting addict, and my brain turned to liquid and dripped out of my ears.
His briefest overnight break came on Saturday night, when he logged off after retweeting a meme likening the Metropolitan police to the SS, only to return four and a half hours later to retweet a cryptocurrency influencer complaining about Britons being jailed for taking part in protests.
But somehow I was still surprised by what I found. I'd covered Musk for years, so I knew the broad contours of his online presence: a trifecta of promotion for his day jobs, Tesla and SpaceX, enthusiastic reposting of cheesy nerd humor, and increasingly right-wing political demagoguery.
But following Musk in real time showed me that every part of that trifecta is now skewed by the same rightward slant. His Tesla pitches increasingly carry culture-war overtones, notably a Cybertruck pitch whose language suggests that buying one will help the Democrats win the US presidential election in November. His cheesy nerd humor is tinged with anger that he isn't seen as the coolest guy in the world. And his right-wing political tirades are increasingly fringe.
Musk's involvement in the UK unrest seems to have pushed him deeper into the far right's pocket than ever before. This month he retweeted Lauren Southern, a far-right Canadian internet personality best known in the UK for being barred from entry by Theresa May's government over her Islamophobia. He doesn't just retweet her; he also supports her financially, paying her around £5 a month through Twitter's subscription feature. He amplified the co-leader of Britain First, too. Any one of those might have led you to conclude that Musk simply didn't know what pond he was swimming in, but two weeks on, the pattern is clear: these are his people.
Well, that's fine.
Today, a good example of the difference between a scientific press release and a scientific paper, courtesy of the world of AI. First, the press release, from the University of Bath:
AI does not pose an existential threat to humanity, new study finds.
LLMs have a superficial ability to follow instructions and are verbally adept, but they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe.
The paper by Lu et al.:
It has been claimed that large language models, consisting of billions of parameters and pre-trained on extensive web-scale corpora, can acquire certain capabilities without special training … We present a novel theory that explains emergent abilities, taking into account potential confounding factors, and rigorously validate it through over 1,000 experiments. Our findings suggest that purported emergent abilities are not truly emergent, but result from a combination of in-context learning, model memory, and linguistic knowledge.
Our work is a foundational step in explaining the performance of language models, provides a template for using them efficiently, and reveals the paradox that they excel in some cases and fall short in others. Thus, we show that we should not overestimate the power of language models.
The press-release version of this story has done the rounds, for predictable reasons: everyone loves to see a Silicon Valley bigwig taken down a peg, and the existential risk of AI has been a divisive topic in recent years.
But the paper is quite far removed from what the university's PR office wants to claim, which is a shame, because what it actually shows is interesting and important in its own right. It focuses on the so-called "emergent" capabilities of state-of-the-art models: tasks or abilities that an AI system demonstrates in practice despite their not being present in its training data.
These new capabilities are worrying to those concerned about existential risks because they suggest that ensuring AI safety is harder than we'd like. If an AI can do things it wasn't trained to do, there's no easy way to ensure that future AI systems will be safe. You can leave something out of the training data, but the AI is likely to figure out how to do it anyway.
The paper shows that, at least in some circumstances, those emergent capabilities are no such thing. Rather, they are the result of what happens when you mold an LLM like GPT into a chatbot and ask it to solve problems in the form of a question-and-answer conversation. This process, the paper suggests, means you can never truly give a chatbot a "zero-shot" question, one it has had no preparation for: the very technique used to prompt something like ChatGPT inevitably teaches it a little about what form the answers should take.
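To make that concrete, here is a minimal sketch of the distinction at stake (the prompts and the `build_prompt` helper are hypothetical illustrations, not code from the paper). Even a "zero-shot" prompt carries a Q/A scaffold that tells the model what shape its answer should take, which is the kind of in-context learning the researchers point to:

```python
def build_prompt(question, examples=None):
    """Assemble a chat-style prompt; any worked examples supply in-context learning."""
    parts = []
    for q, a in (examples or []):
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# "Zero-shot": no worked examples, but the Q:/A: scaffolding itself
# already teaches the model the expected form of the answer.
zero_shot = build_prompt("Is 'delighted' positive or negative?")

# Few-shot: an explicit demonstration makes the in-context learning visible.
few_shot = build_prompt(
    "Is 'delighted' positive or negative?",
    examples=[("Is 'awful' positive or negative?", "negative")],
)

print(zero_shot)
print(few_shot)
```

The paper's point, loosely, is that much of what looks like a model spontaneously acquiring a skill is it exploiting exactly this kind of scaffolding.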
It's an interesting finding. It doesn't prove that an AI apocalypse is impossible, but if you're looking for good news, it does suggest that it's unlikely to happen tomorrow.
The pain of training
Nvidia is accused of "unjust enrichment." Photo: Dado Ruvić/Reuters
Nvidia scraped YouTube to train its AI systems, and now it's coming back to bite it:
Every week Alex Hern delves into how technology is changing our lives.
The federal lawsuit alleges that Nvidia, which is focused on designing chips for AI, stole videos from YouTube creator David Millette to train its own AI. The lawsuit accuses Nvidia of “unjust enrichment and unfair competition” and seeks class action status to include other YouTube content creators who make similar claims.
Nvidia was illegally “scraping” YouTube videos to train its Cosmos AI software, according to a complaint filed Wednesday in the U.S. District Court for the Northern District of California. Citing an Aug. 5 404 Media report, the complaint said Nvidia used software on a commercial server to evade YouTube detection and download “approximately 80 years' worth of video content per day.”
This lawsuit is unusual in the AI world, if only because Nvidia has been somewhat tight-lipped about the source of its training data. Most AI companies that have faced lawsuits have been proudly open about their disregard for copyright restrictions. Take Stable Diffusion, for example, which sourced its training data from the open-source LAION dataset.
Judge Orrick found that the artists had a reasonable claim that the companies were infringing their rights by unlawfully storing their works, and that the AI image-generation tool in question, Stable Diffusion, was built “to a substantial extent on copyrighted works” and “intentionally designed to facilitate that infringement.”
Of course, not all AI companies are on an equal footing. Google has a unique advantage: everyone gives Google permission to train their AI with their materials. Why? Because if they didn't, they'd be shut out of search altogether.
Many site owners say they can’t afford to block Google’s AI-powered summarization of their content.
Publishers say that's because the Google tools that sift through web content to produce AI answers are the same ones that crawl web pages for search results. Blocking Alphabet Inc.'s Google the way they have blocked some of its AI competitors would make their sites less discoverable online.
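The bind is visible in a site's robots.txt file. A sketch of what a publisher might write (the user-agent tokens are real ones published by OpenAI, Common Crawl and Google; the comments describe the dilemma, and this is an illustration rather than a recommended configuration):

```
# Turn away crawlers that exist solely to gather AI training data.
User-agent: GPTBot          # OpenAI
Disallow: /

User-agent: CCBot           # Common Crawl
Disallow: /

# Google-Extended opts a site out of Gemini model training...
User-agent: Google-Extended
Disallow: /

# ...but Google's AI answers draw on the ordinary search index built by
# Googlebot, and blocking Googlebot removes the site from Search entirely.
User-agent: Googlebot
Allow: /
```

There is no token that says "index me for search, but keep me out of your AI answers," which is precisely the publishers' complaint.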
Ask Me Anything
"What was I thinking?" Feel free to ask me that, or any other tech-related question.
One more ego-indulgent bit: after 11 years, I'm leaving the Guardian at the end of this month, and September 2 will be my last TechScape. To close, I'll be answering readers' questions, big and small. From tech recommendations to industry gossip, if there's anything you'd like me to answer, hit reply and email me.
The Wider TechScape
TikTok is boring you. Photo: Jag Images/Getty Images/Image Source