Google is a prominent player in artificial intelligence, largely through its DeepMind division, whose research underpins many of Google's main products and services. Google DeepMind recently published a safety paper on the implications of entering the age of Artificial General Intelligence (AGI), and the paper has drawn criticism from experts who dispute its conclusions. AGI broadly refers to AI systems that can perform any intellectual task a human can.
Predictions about when AGI will arrive vary widely among specialists, ranging from optimism to deep skepticism. According to Google DeepMind, true AGI could appear as soon as 2030. The paper states, "We anticipate the development of an Exceptional AGI before the end of the current decade," defining this as a system whose capabilities match at least the top 1% of skilled adults on a wide range of non-physical tasks, including metacognitive ones. Despite this potential, the document is far from uniformly optimistic about the future.
It emphasizes the risks AGI could bring, including "severe harm" and even "existential risks" that could threaten humanity's survival. DeepMind also details its risk mitigation strategies, contrasting its approach with those of companies like Anthropic and OpenAI. The researchers are additionally skeptical that AI "superintelligence" is feasible without significant architectural innovation. They do acknowledge the possibility of "recursive AI improvement," in which an AI creates more advanced versions of itself, but caution that such a feedback loop could be dangerous.
Not all experts accept DeepMind's framing. Heidy Khlaaf of the AI Now Institute argues that the AGI concept is too premature to be evaluated scientifically, while Matthew Guzdial of the University of Alberta considers the notion of recursive AI improvement unrealistic. Sandra Wachter of Oxford raises a different concern: that future models will be trained on inaccurate synthetic data, with generative AI outputs already risking misleading users in search and information-seeking tasks.