Manyika and Hassabis defend the change to Google's AI principles, arguing that democratic governments and businesses need to use AI to support national security. The pair wrote that Google is investing more than ever in both AI research and in products that help people and society. At the same time, they said, Google needs to keep improving AI safety while identifying and addressing potential risks.
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.” – Google
Google parent Alphabet reported its Q4 financial results the other day. The results fell short of expectations, sending Alphabet shares lower. In the report, Google surprised Wall Street by forecasting that it will spend $75 billion on AI projects, 29% more than analysts were expecting. That spending plan rattled some analysts and Alphabet shareholders, adding to the stock's decline.
The Gemini app for iOS is not a weapon, even though Google no longer says it won’t create an AI weapon. | Image credit: PhoneArena
Google no longer promises not to develop weapons using AI, nor does it promise not to use the technology for surveillance. That doesn’t mean the Alphabet unit is going into the business of building AI weapons. It means that as Google creates AI applications meant to help people and keep the world safe, it may no longer feel the need to explicitly rule out those uses.
Gemini can be found in the Google Search app, where it creates AI Overviews to help answer some queries. At the top of the app is a shortcut that opens the Gemini app.