Sean McManus
Technology journalist
Many staff members use AI tools that have not been approved at work
“It’s easier to get forgiveness than permission,” says John, a software engineer at a financial services technology company. “Just get on with it. And if you get into trouble later, you can sort it out.”
He is one of many people who use their own AI tools at work without the approval of their IT department (which is why we are not using John’s full name).
According to a Software AG survey, half of all knowledge workers use personal AI tools.
The research defines knowledge workers as “those who primarily work at a desk or computer”.
For some, it’s because their IT team doesn’t offer AI tools, while others said they wanted their own choice of tools.
John’s company provides GitHub Copilot for AI-supported software development, but he prefers Cursor.
“It’s largely a glorified autocomplete, but it is very good,” he says. “It completes 15 lines at a time, and then you look through it and say, ‘yes, that’s what I would have typed’. It frees you up. You feel more fluent.”
His unauthorised use doesn’t violate a policy, it’s just easier than risking a lengthy approval process, he says. “I’m too lazy and too well paid to chase up the expenses,” he adds.
John recommends that companies stay flexible in their choice of AI tools. “I tell people at work not to renew team licences for a whole year at a time, because in three months the entire landscape changes,” he says. “Everybody’s going to want to do something different and will feel trapped by the sunk cost.”
The recent release of DeepSeek, a freely available AI model from China, is only likely to expand the AI options further.
Peter (not his real name) is a product manager at a data storage company, which offers its staff the Google Gemini AI chatbot.
External AI tools are banned, but Peter uses ChatGPT through the Kagi search tool. He finds the biggest benefit of AI comes from having his thinking challenged, when he asks the chatbot to respond to his plans from different customer perspectives.
“The AI is not so much giving you answers as giving you a sparring partner,” he says. “As a product manager, you have a lot of responsibility and don’t have many good outlets to discuss strategy openly. These tools allow that in an unfettered and unlimited capacity.”
The version of ChatGPT he uses (4o) can analyse video. “You can get summaries of competitors’ videos and have a whole conversation (with the AI tool) about the points in the videos and how they overlap with your own products.”
In a 10-minute ChatGPT conversation he can review material that would otherwise take two or three hours of watching videos.
He estimates that his boosted productivity is equivalent to the company getting a third of an additional person working for free.
He doesn’t know why the company has banned external AI. “I think it’s a control thing,” he says. “Companies want to have a say in what tools their employees use. It’s a new frontier and they just want to be conservative.”
The use of unauthorised AI applications is sometimes called “shadow AI”. It’s a more specific version of “shadow IT”, which is when someone uses software or services that the IT department hasn’t approved.
Harmonic Security helps firms to identify shadow AI and to prevent corporate data from being entered into AI tools inappropriately.
It is tracking more than 10,000 AI apps and has seen more than 5,000 of them in use.
These include custom versions of ChatGPT and business software that has added AI features, such as the communications tool Slack.
However popular it is, shadow AI comes with risks.
Modern AI tools are built by digesting enormous amounts of information, in a process called training.
Around 30% of the applications Harmonic Security has seen in use train on information entered by the user.
That means the user’s information becomes part of the AI tool, and could be output to other users in the future.
Companies may be concerned about their trade secrets being exposed by the AI tool’s answers, but Alastair Paterson, CEO and co-founder of Harmonic Security, thinks that’s unlikely. “It’s pretty hard to get the data straight out of these (AI) tools,” he says.
However, firms will be concerned about their data being stored in AI services they have no control over, no awareness of, and which may be vulnerable to data breaches.
AI can give young workers a head start, says Simon Haighton-Williams
It will be hard for companies to fight the use of AI tools, because they can be extremely useful, particularly for younger workers.
“(AI) allows you to cram five years’ worth of experience into 30 seconds of prompt engineering,” says Simon Haighton-Williams, CEO of The Adaptavist Group, a UK-based software services group.
“It doesn’t wholly replace (experience), but it’s a good leg up, in the same way that having a good encyclopaedia or a calculator lets you do things that you couldn’t have done without those tools.”
What would he say to companies that discover they have shadow AI use?
“Welcome to the club. I think everybody is doing it. Be patient and understand what people are using and why, and figure out how you can embrace and manage it rather than demand it’s shut off. You don’t want to be left behind as the organisation that hasn’t (adopted AI).”
Karoliina Torttila says employees must exercise good judgement over AI
Trimble provides software and hardware to manage data about the built environment. To help its employees use AI safely, the company created Trimble Assistant, an internal AI tool based on the same AI models that are used in ChatGPT.
Employees can consult Trimble Assistant for a wide range of applications, including product development, customer support and market research. For software developers, the company provides GitHub Copilot.
Karoliina Torttila is director of AI at Trimble. “I encourage everybody to go and explore all kinds of tools in their personal life, but I recognise that their professional life is a different space and that there are some safeguards and considerations there,” she says.
The company encourages employees to explore new models and online AI applications.
“This brings us to a skill we’re all forced to develop: we must be able to understand what sensitive data is,” she says.
“There are places where you would not put your medical information, and you have to be able to exercise that kind of judgement (for work data, too).”
Employees’ experience of using AI at home and for personal projects can help to shape company policy as AI tools evolve, she believes.
There must be “constant dialogue about which tools serve us best”, she says.