A few days ago, while researching for another class, I ended up on a Reddit forum where thousands of people are sure the world will end because of AI. Essentially, they assert that AI is close to overtaking human intelligence, a point referred to as the “technological singularity.” This superhuman intelligence, they say, will dramatically change the world we live in. The idea seemed far-fetched to me, but not impossible. Let’s face it: most of the assignments done at Ridge in the past year have been written, or at least facilitated, by ChatGPT. Since its release in 2022, it has exploded in both popularity and ability, and AI’s accomplishments across fields show no sign of slowing. So, what is the technological singularity? And should we be scared?
AGI and Technological Singularity
“Technological singularity” refers to the idea that, as computer systems advance, an engineered system will at some point become its own entity, with intelligence beyond that of humanity. Writer Vernor Vinge popularized the term “singularity” in 1993, based on the idea that technological progress accelerates exponentially. Present-day believers have built on this hypothesis to describe what such a system would look like [1]. Its marker, referred to as AGI (artificial general intelligence), is a system’s self-awareness, rather than its ability to simply perform human tasks. Such a system could self-replicate, expanding technology faster than humans ever could [2].
Many scientists believe this could happen, based on Moore’s law, a foundational principle of Silicon Valley and today’s technology. The law states that the processing power of microchips doubles roughly every two years, leading to exponential growth in the field, and it is often referenced in discussions of technological advancement [6]. Researchers cite it as one reason for AI’s continued exponential growth [6]. In his book, Raymond Kurzweil references the law to explain how AI could become dangerous: if humans are not involved in the process, the growth of AI will become unpredictable, and its extreme acceleration will be unstoppable [1].
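To see why “doubling every two years” feels so dramatic, here is a quick illustrative calculation (my own sketch, not from the article’s sources) of how periodic doubling compounds:

```python
# Illustrative sketch of Moore's-law-style compounding: if capacity
# doubles every two years, growth after t years is 2^(t/2).

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Return the multiplicative growth after `years` of periodic doubling."""
    return 2 ** (years / doubling_period)

# After 2 years capacity has doubled; after 20 years it is over 1000x.
for years in (2, 10, 20):
    print(f"{years} years -> {growth_factor(years):.0f}x")
```

The point of the exercise: the growth is not a steady climb but a curve that steepens the longer it runs, which is exactly what makes its later stages feel unpredictable.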
Effects
Followers of these theories state that at the point of technological singularity, AI could take over infrastructure, disrupt financial systems, and more. AI could influence our voting or even overthrow governments [2]. And eventually, it would become so powerful that AI machines could take over our world. Theories range from those already occurring, like misinformation and biases against minority groups [2], to those that could potentially cause global unrest, like AI faking identities to incite fraud or interfere with elections [3].
Current Day
Many singularity believers see the abilities of today’s AI as a sign that we are approaching the singularity, or perhaps that we are already there. For one, the unpredictability of today’s systems leads people to wonder if they are already developing self-awareness. For example, researchers have found that when AI models are told to win a game, they often use unexpected methods, such as frustrating their opponent into forfeiting [2]. Other AI systems, like ChatGPT, often give users biased or incorrect information, which some claim may be deliberate [4]. Humans being surprised by how an AI operates is considered another marker of AGI.
Furthermore, many companies are moving toward general algorithms designed for more than one task [5]. The ability to multitask is one of the supposed signs of human intelligence, and it worries those who would like AI to remain under human control [1].
With the release of GPT-4, these claims have only grown. Microsoft Research published a paper arguing that the software already shows “sparks” of AGI, detailing performance on standardized tests and tasks of every nature that rivals, or even exceeds, that of humans [4].
And as AI grows, the frenzy over AGI grows with it. Geoffrey Hinton, a leading AI researcher at Google, left the company to speak freely about his fears, warning the public that AGI is near [2]. This frenzy has also fueled AI alignment research, which aims to align AI development with human goals [3].
But…
The hype around AGI may be more dangerous than the possibility itself. As these theories become more and more far-fetched, they take on a “doomsday” quality that generates mass panic. And while it may be fun to indulge in sci-fi theories, in truth, many scientists say the phenomenon is probably impossible [5]. Even as AI develops exponentially, it remains, at its core, the same thing: technology. And technology gaining human consciousness is a much larger hurdle than it seems. Without consciousness, AI remains a tool for people to use, one with far more pressing issues than becoming an all-powerful entity.
But just in case the robots do take over, the next time you use ChatGPT, maybe consider saying thank you.
[1] https://www.nytimes.com/2009/05/24/weekinreview/24markoff.html
[2] https://newyorker.com/science/annals-of-artificial-intelligence/can-we-stop-the-singularity
[3] https://www.nytimes.com/2023/05/01/technology/ai-problems-danger-chatgpt.html
[4] https://www.wired.com/story/what-is-artificial-general-intelligence-agi-explained/
[5] https://www.techtarget.com/searchenterpriseai/definition/artificial-general-intelligence-AGI
[6] https://www.unite.ai/moores-law/