Eighteen months after restricting employees' use of generative artificial intelligence tools like ChatGPT, JPMorgan Chase CEO Jamie Dimon has introduced the bank's homegrown AI assistant. In a near full-circle turn, the tool is built on technology from OpenAI, the creator of ChatGPT. The assistant, called LLM Suite, is already being used by 60,000 employees at the banking giant for tasks such as writing reports and emails.
The shift from restricting employees' use of gen AI to building in-house solutions with guardrails is becoming a common theme among enterprises large and small looking to harness the technology. According to Cisco's latest Data Privacy Benchmark Report, more than a quarter of organizations (27%) have at least temporarily banned public gen AI applications, and a majority are restricting which tools employees can use and how they can use them.
Meanwhile, many of those same companies said their employees use the restricted applications anyway, according to a recent survey by cybersecurity firm ExtraHop, creating a clear need to offer alternative solutions with sufficient safeguards.
According to a recent report from enterprise data management company Veritas, the main concerns about employees' use of gen AI are the leakage of confidential information, hallucinations, and violations of industry compliance rules.
The outputs that AI platforms generate don't come out of thin air. On the concern about exposing sensitive information: some large language models store user inputs (the prompts users type into the chat) and use them to train or improve the underlying model. That could put sensitive information about a company or its customers at risk, so many organizations choose to ban or restrict such tools until they work out how to govern the technology themselves.
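One common safeguard is to scrub sensitive substrings from a prompt before it ever leaves the company network. Below is a minimal sketch of that idea; the patterns and the redact function are illustrative assumptions, not any company's actual pipeline, and a real deployment would use a proper data-loss-prevention service rather than ad-hoc regexes.

```python
import re

# Hypothetical patterns for the sketch; real DLP tooling goes far beyond this.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tags
    before the prompt is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = "Email jane.doe@example.com about card 4111 1111 1111 1111."
print(redact(raw))
# -> Email [EMAIL REDACTED] about card [CARD REDACTED].
```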
Walmart's approach to employee AI
JPMorgan isn't the only household-name company to bring gen AI in-house after initially limiting it: Last year, Walmart released My Assistant to 50,000 employees and has since expanded its availability to an additional 25,000 employees across 11 countries.
“We've been proactively defining the principles that will guide our use of AI,” says David Glick, Walmart's senior vice president of enterprise business services. Those principles include listening to how employees want help, such as summarizing large amounts of information, helping the company's workforce better navigate its enterprise resource planning (ERP) system, and automating certain tasks.
While My Assistant is a broad, general-purpose tool, Glick is also focused on smaller, more targeted projects. For example, Walmart is using gen AI to help its benefits helpdesk team guide employees through the company's 300-page benefits guide. Rather than replacing team members with limited, error-prone chatbots, gen AI augments their search and support capabilities.
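Walmart hasn't published how the helpdesk tool works under the hood, but “enhancing search” over a long document typically means retrieving the most relevant passages and feeding them to the model as context. The following sketch shows that retrieval step under simple assumptions (word-overlap scoring, fixed-size chunks); production systems generally use embeddings instead.

```python
# A minimal retrieval sketch, assuming the guide has been exported to text.

def top_chunks(document: str, question: str, k: int = 3, size: int = 500) -> list[str]:
    """Return the k chunks of the document that share the most words
    with the question; these become the model's context."""
    chunks = [document[i:i + size] for i in range(0, len(document), size)]
    terms = set(question.lower().split())
    return sorted(chunks, key=lambda c: -len(terms & set(c.lower().split())))[:k]

guide = "Dental enrollment opens in October. " * 20 + "Vision plans vary by state."
print(top_chunks(guide, "When does dental enrollment open?", k=1, size=80))
```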
“Gen AI will probably be a series of small initiatives that we undertake from the bottom up to improve the lives of our employees and make them more efficient every day,” Glick said.
Preventing data from leaking “outside”
Ensono, an IT managed services provider, is another company that has rethought employee access to AI tools. The company's chief technology officer, Tim Beerman, initially restricted employees' use of large language models such as ChatGPT to protect sensitive data. But Beerman still wanted to “leverage the vast amount of unstructured data about our company to provide value to our employees without leaking it to the outside world.”
Ensono rolled out its in-house AI assistant to all 3,500 employees this summer. The solution is built on GPT-4o but remains flexible enough to swap in different language models depending on the type of data involved. “We give our employees a single interface to these tools,” Beerman said.
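Ensono hasn't described its implementation, but a “single interface” over interchangeable models usually means a thin abstraction that routes each request to an appropriate backend. The class and method names in this sketch are assumptions for illustration, not Ensono's API.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Any backend that can turn a prompt into a reply."""
    def complete(self, prompt: str) -> str: ...

class OpenAIBackend:
    """Routes prompts to a hosted model such as GPT-4o."""
    def __init__(self, model: str = "gpt-4o") -> None:
        self.model = model

    def complete(self, prompt: str) -> str:
        # The vendor API call would go here; stubbed for the sketch.
        return f"[{self.model}] response to: {prompt}"

class LocalBackend:
    """Keeps sensitive prompts on an internally hosted model."""
    def complete(self, prompt: str) -> str:
        return f"[internal model] response to: {prompt}"

def route(prompt: str, contains_sensitive_data: bool) -> str:
    """Employees see one interface; routing picks the backend."""
    backend: ChatModel = LocalBackend() if contains_sensitive_data else OpenAIBackend()
    return backend.complete(prompt)

print(route("Summarize this public press release.", False))
print(route("Summarize this client contract.", True))
```

Keeping the interface separate from the backends is also what makes it cheap to swap models as better ones appear, which is the flexibility Beerman emphasizes below.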
Meanwhile, Ensono is developing smaller language models for individual departments to address more specific use cases, such as root-cause analysis and responding to requests for proposals (RFPs). In all of this, model flexibility is key. “What's right today won't be right 12 months from now,” Beerman says.
Jason Hishmeh, chief technology officer at startup software developer Varyence, previously banned private, confidential, or restricted data from being entered into any gen AI solution.
Today, Varyence and the startups it partners with use an internal system that automatically prompts for a data classification whenever someone creates an email, Word document, or other file. The company's internal gen AI platform has built-in guardrails to keep private data safe, and it gives employees a way to help improve the platform.
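A classification-based guardrail of this kind typically boils down to a policy check before a prompt is forwarded to the model. The labels and threshold in this sketch are assumptions for illustration, not Varyence's actual scheme.

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Policy assumption for the sketch: only PUBLIC and INTERNAL data
# may be sent to the gen AI platform.
MAX_ALLOWED = Classification.INTERNAL

def guardrail(text: str, label: Classification) -> str:
    """Block prompts whose classification exceeds policy before
    they are forwarded to the gen AI platform."""
    if label.value > MAX_ALLOWED.value:
        raise PermissionError(f"{label.name} data may not be sent to the AI assistant.")
    return text  # would be forwarded to the model here

guardrail("Q3 all-hands agenda", Classification.INTERNAL)  # allowed
try:
    guardrail("Customer SSN list", Classification.RESTRICTED)
except PermissionError as err:
    print(err)  # -> RESTRICTED data may not be sent to the AI assistant.
```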
“Everyone wants to use it and see how it can help their department be more efficient,” Hishmeh said of gen AI, explaining why bans are only a Band-Aid and why companies of all sizes must ultimately provide solutions that allow their employees to work safely.
As more companies move from banning gen AI to adopting it internally, their views on how the technology will benefit employees are likely to evolve. But with the age of AI advancing in dog years amid rapid innovation, companies will need to stay flexible in how they implement their gen AI policies, even as safety and security remain at the forefront of the discussion.