AI needs governance
There is endless talk about adopting artificial intelligence (or any other new technology) before your competitors do, with dire warnings for laggards: slow adopters, the story goes, will end up like the dinosaurs when the asteroid hit. But business leaders don't seem to buy into this idea. Governance, security, and company culture, they say, need to be taken into account before AI becomes a critical part of operations and decision-making.
According to a survey of 205 executives published in MIT Technology Review and funded by Boomi, nearly all business leaders (98%) said they would be “willing to give up being first to use AI” if it could be delivered safely and reliably.
Ambitions for AI are high, the survey found: 95% of the companies surveyed are already using AI, and 99% plan to use it in the future. But few organizations have moved beyond pilot projects. The vast majority, 76%, have deployed AI in only one to three use cases.
Welcome to the cutting edge of AI. What will organizations need to cross the finish line, where AI is fully operational and delivering on its promised benefits?
Industry leaders agree that it's time to move away from the AI hype and expectations and focus on hard outcomes. “It's natural to be skeptical of headlines that claim every new technological innovation is going to change everything,” said Raj Sharma, global managing partner for growth and innovation at EY.
Despite his caution, Sharma remains optimistic about what AI will ultimately bring: “Generative AI and large language models seem poised to deliver on this promise. Last year was considered the year of AI hype, but 2024 is shaping up to be the year AI becomes reality. Enterprises are looking for large-scale transformation, and regulators are focused on introducing new AI codes and regulations.”
That’s why 45% of respondents to the MIT-Boomi survey cited governance, security, and privacy as the biggest obstacles to faster AI adoption.
“When it comes to AI, the camouflage approach is no longer a viable strategy,” said Mrinal Manohar, CEO of Casper Labs. “The main difficulty is determining how to adopt an appropriate governance framework for high-risk applications while driving responsible AI innovation and adoption.”
“People drive faster when they fasten their seat belts,” Manohar added. Governance and risk management, in other words, are what will help AI cross the finish line. But the technology has yet to reach its full potential. “That's mainly due to a lack of governance and uniform standards. Strong AI governance fosters faster innovation and more reliable real-world deployments, positioning AI for success.”
Corporate culture is a key factor in AI governance and risk management. “While many organizations have adopted internal policies regarding the use of AI tools, such policies are only useful if they are actually implemented and followed throughout the organization, rather than remaining locked away in a drawer or on a rarely visited intranet site,” said Anna Westfelt, partner and head of the data privacy practice at law firm Gunderson Dettmer.
“Build a culture of accountability and compliance when it comes to privacy and security, and train employees to exercise caution when using AI tools,” Westfelt urged. That effort should include “ongoing monitoring of AI by the organization's IT team and regular training for employees to ensure they are aware of the limitations of using AI tools.”
Currently, “many organizations have access to more secure, restricted, commercial versions of popular, publicly available GenAI tools,” Westfelt continued. “Employees should be advised not to use tools that have not been validated by their organization.” Organizations should also keep an inventory of the tools in use and the training provided, she said.
Ultimately, the goal of AI governance and risk management is to ensure the responsible and ethical use of the technology developed within an organization. “Moving forward, we need to integrate proactive risk management into every stage of our AI transformation,” said Sharma. “This will help us build trust, increase agility, and navigate disruption effectively.”