During an earnings call in July, Meta CEO Mark Zuckerberg laid out his vision for the company's value-for-money ad service, further enhanced by artificial intelligence.
“In the next few years, AI will also be able to generate creative for advertisers and personalize it for people to see,” he said.
But Meta's use of AI may already be landing the company in hot water as trillion-dollar tech firms race to revolutionize advertising technology.
On Thursday, a bipartisan pair of lawmakers, Republican Rep. Tim Walberg of Michigan and Democratic Rep. Kathy Castor of Florida, sent a letter to Zuckerberg demanding that he answer questions about Meta's advertising service.
The letter follows a Wall Street Journal report in March that revealed federal prosecutors were investigating the company for its involvement in illegal drug sales on its platform.
“Meta appears to be shirking its social responsibilities and continuing to ignore its community guidelines,” the letter said. “Protecting online users, especially children and teens, is one of our top priorities. We remain concerned that Meta has failed to meet that mandate, and this dereliction of duty must be addressed.”
Zuckerberg has already been grilled by senators about the safety of children on Meta's platforms, and during one Senate hearing he stood and apologized to families who feel their children have been harmed by social media use.
The nonprofit watchdog Tech Transparency Project reported in July that Meta continues to earn revenue from hundreds of ads promoting the sale of illegal and recreational drugs, including cocaine and opioids, which are banned by its advertising policies.
“Many of the ads make no secret of their intent, showing pictures of prescription bottles, piles of pills or powder, or chunks of cocaine and urging users to place an order,” the watchdog wrote.
“Our systems are designed to proactively detect and police violating content, and we have rejected hundreds of thousands of ads that violate our drug policies,” a Meta spokesperson told Business Insider, reiterating a statement provided to The Wall Street Journal: “We will continue to devote resources to further policing this type of content. Our hearts go out to those who are suffering the tragic consequences of this epidemic. It will take all of us working together to stop it.”
The spokesperson declined to discuss how Meta uses AI to manage ads.
Ads poke holes in Meta’s AI system
The exact process by which Meta approves and moderates ads is not publicly available.
What is known is that the company uses artificial intelligence in part to moderate content, The Wall Street Journal reported, noting that ads that depict drugs in photos rather than text could slip through Meta's moderation system.
Here's what Meta revealed about its “ad review system.”
“Our ad review system relies primarily on automated technology to apply our ad standards to millions of ads that run on our Meta technology. However, we also use human reviewers to improve and train our automated systems and, in some cases, may manually review ads.”
The company also said it continues to work on further automating the review process to reduce reliance on humans.
But the drug ads found on Meta's platform show that policy-violating content can still slip through the company's automated systems, despite Zuckerberg's promises of improved targeting and his portrayal of a sophisticated advertising service that uses generative AI to create content for advertisers.
Difficulties in deploying Meta’s AI
Meta has experienced difficulties rolling out its AI-powered services outside of advertising technology.
Less than a year after Meta introduced its celebrity AI assistants, the company discontinued the products to focus on enabling users to create their own AI bots.
Meta is also still ironing out glitches with its chatbot and AI assistant, Meta AI, which has sometimes hallucinated answers and, in the case of BI's Rob Price, impersonated a user by giving out his phone number to strangers.
Meta is not alone: the technical and ethical issues prevalent in AI products are a concern for many major US companies.
A survey by Arize AI, a firm that researches AI technology, found that 56% of Fortune 500 companies now cite AI as a “risk factor,” the Financial Times reported.
Broken down by industry, 86% of technology companies, including Salesforce, said they believe AI poses a business risk, according to the report.
But these concerns come at a time when tech companies are pushing AI into every corner of their products, even as the path to monetization remains unclear.
“The development and deployment of AI involves significant risks,” Meta said in its 2023 annual report. “There can be no assurance that the use of AI will improve our products or services or be beneficial to our business, including its efficiency or profitability.”