Alphabet, the parent company of technology giant Google, no longer promises that it will never use artificial intelligence (AI) for purposes such as developing weapons and surveillance tools.
The company has rewritten the principles guiding its use of AI, dropping a section that ruled out uses which were “likely to cause harm”.
In a blog post, Google senior vice-president James Manyika and Demis Hassabis, who leads the AI lab Google DeepMind, defended the move.
They argue that businesses and democratic governments need to work together on AI that “supports national security”.
There is debate among AI experts and professionals over how the powerful new technology should be governed in broad terms, how far commercial gains should be allowed to determine its direction, and how best to guard against risks to humanity in general.
There is also controversy around the use of AI on the battlefield and in surveillance technologies.
The blog said the company’s original AI principles, published in 2018, needed to be updated as the technology had evolved.
“Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications.
“It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself,” the blog post said.
As a result, baseline AI principles had also emerged, which could guide common strategies, it said.
However, Mr Hassabis and Mr Manyika said the geopolitical landscape was becoming increasingly complex.
“We believe that democracies should lead in the development of AI, guided by fundamental values such as freedom, equality and respect for human rights,” said the blog post.
“And we believe that companies, governments and organizations sharing these values should work together to create an AI that protects people, promotes global growth and supports national security.”
The blog post was published just ahead of Alphabet’s end-of-year financial report, which showed results weaker than market expectations and knocked back its share price.
That was despite a 10% rise in revenue from digital advertising, its biggest earner, boosted by US election spending.
In its results report, the company said it would spend $75bn (£60bn) on AI projects this year, 29% more than Wall Street analysts had expected.
The company is investing in the infrastructure to run AI, in AI research, and in applications such as AI-powered search.
Google’s AI platform Gemini now appears at the top of Google search results, offering AI-written summaries, and features on Google Pixel phones.
Originally, long before the current surge of interest in AI ethics, Google’s founders, Sergey Brin and Larry Page, said their motto for the company was “Don’t be evil”. When the business was restructured under the name Alphabet Inc in 2015, the parent company switched to “Do the right thing”.
Since then, Google staff have sometimes pushed back against the approach taken by their executives. In 2018, the company did not renew a contract for AI work with the US Pentagon following resignations and a petition signed by thousands of employees.
They feared “Project Maven” was the first step towards using artificial intelligence for lethal purposes.