State lawmakers began considering the complex world of artificial intelligence Tuesday, offering an early glimpse into how Texas might try to regulate the burgeoning technology.
During a nearly four-hour hearing, the Texas Senate Commerce Committee heard a wide range of concerns about the potential risks of AI, including the spread of misinformation, biased decision-making, and violations of consumer privacy. By the end of the hearing, at least some of the 11 committee members seemed convinced that the state should enact laws regulating how and when private companies use artificial intelligence.
“When you really think about it, this is a dystopian world we could live in,” Republican Sen. Lois Kolkhorst of Brenham said at the hearing. “I think our challenge is how do we go in there and put in those safeguards?”
Artificial intelligence is a broad term that covers a variety of technologies, including chatbots that use language processing to answer user questions, generative AI that creates unique content, and tools that automate decisions like how much a home insurance claim should be or whether a job candidate should attend an interview. Artificial intelligence can even be used to create digital replicas of artists' works.
Amanda Crawford, chief information officer for the Texas Department of Information Resources, told lawmakers on Tuesday that more than 100 of the state's 145 agencies already use AI in some form. Crawford is a member of a new AI council created this year by Gov. Greg Abbott, Lt. Gov. Dan Patrick and House Speaker Dade Phelan. The council is tasked with examining how state agencies use AI and assessing whether the state needs an AI ethics code. The council is expected to release a report by the end of the year.
Leaders of several state agencies testified that artificial intelligence has saved them significant time and money. For example, Texas Workforce Commission Executive Director Edward Serna said a chatbot the commission created in 2020 helped answer 23 million questions. Tina McLeod, public information officer for the Attorney General's office, said employees have saved at least an hour a week with an AI tool that helps them review long-running child support cases.
But in other cases, officials testified, AI technology could be used to harm Texans.
Country singer Josh Abbott said he was concerned that AI could be used to imitate his voice and generate new songs for Spotify.
“AI-enabled fraud doesn't matter if you're famous or not,” Abbott said. “AI-enabled fraud and deepfakes affect everyone.”
Grace Gedye, a policy analyst at Consumer Reports, said private companies are already using biased AI models to make important housing and employment decisions that disadvantage consumers. She said lawmakers could require companies that rely on AI to make decisions to audit their technology and disclose to consumers how they are evaluated.
Gedye pointed to New York City, which enacted a law requiring employers who use automated hiring tools to audit those tools, but said few employers actually did so.
Renzo Soto, executive director of TechNet, which represents tech company CEOs, said states need to tread carefully: laws meant to crack down on harms can inadvertently ban beneficial uses of artificial intelligence.
“You have to look at it industry by industry,” Soto said.
Texas passed a law in 2019 making it a crime to create fake videos with the intent of influencing an election, and last year lawmakers passed another law banning the use of deepfake videos for pornography.
Ben Sheffner, an attorney for the Motion Picture Association, said that as lawmakers consider future bills to curb artificial intelligence, they need to be careful not to infringe on First Amendment free speech protections.
Throughout the hearing, lawmakers repeatedly asked whether they could look to other states or countries for guidance on how to craft AI policy, given that a patchwork of state and federal regulations has so far restricted the technology's use with only limited success.
California lawmakers have introduced a bill that would require AI developers and adopters to mitigate the risks of “catastrophic harm” from the technology; tech companies are fighting to defeat the bill. Colorado passed a law restricting the use of AI in certain “high-risk” scenarios, including education, employment and healthcare. Colorado's governor has already said the law needs to be revised before it goes into effect in 2026.