The AI singularity is a hypothetical future event in which artificial intelligence has advanced to a point where it is able to improve itself exponentially. This results in humans no longer being able to understand or control the technology being created, potentially leading to machines assuming some level of control over humanity.
Closely tied to the concept of the singularity is artificial general intelligence. AGI refers to artificial intelligence that is able to perform any task as well as a human can. People think of AGI as a requirement for the singularity to occur.
Current AI technology is trained on existing datasets generated by humans. This means everything the AI knows comes from humans—and in turn this indicates humans are still more intelligent. The singularity would mark a change to computers having learned so much that they’re able to innovate and create technology that’s entirely new—and foreign—to humans.
What would computers do when they’re more powerful than us? Destroy the world, or save it—or are those one and the same thing, as many a sci-fi story would have us believe? Scientists who have speculated about the singularity think that if that moment comes, it will be a defining turning point in human history.
It’s a worrying thought, but will such an event actually happen? Can we stop it? And even if we do reach AI singularity, does it spell doom and gloom for humanity, or will we enter an age of cooperation between humans and AI?
When will the singularity happen?
With rapid advancements in the world of AI recently, the singularity is starting to look like more of a possibility. The question is when it will occur.
Ray Kurzweil, head of Google’s AI, predicted a few years back that the singularity would be here by 2045. At a recent conference, one AI expert, John Hennessy, said, “Some of us thought that point at which we’d have artificial general intelligence was 40 or 50 years away. I think everybody’s horizon has moved in by probably 10 or 20 years.”
That said, no one knows if the singularity will really happen. It’s possible that developers will put enough safeguards in place into the technology to prevent it. Since the public debut of ChatGPT, many experts have called for the halt of AI research before more regulations and oversight are established.
Can we prevent the singularity?
Experts in AI are divided on the topic. Some say the singularity is inevitable, while others claim we can prevent it through the careful regulation of AI development.
While both the EU and the UK are exploring AI regulation, there is a worry that by the time AI regulations are passed, the singularity could have already happened. And there’s no guarantee any meaningful regulations could be passed.
The potential to make improvements to many fields with AI, including science, medicine, and education, is an enticing prospect. And we haven’t even mentioned the corporate side of AI—there’s a lot of money to be made. OpenAI has said it might leave the EU if current proposed regulations are pushed through. It’s a first taste of pushback against AI regulations that we can expect from some of the world’s most powerful companies.
Moreover, governments will want to compete with each other in the AI sphere. Even if there’s agreement on the potential threat of AI, no country wants to halt progress for fear of falling behind its rivals.
However, there are other ways we could potentially prevent the singularity. One would be to implement a kill switch (either physical or programmed into the AI itself) to terminate an AI if it approached a state believed to be singularity. However, the termination of an otherwise useful AI is not an ideal outcome either. In another scenario, the AI might rebel, knowing the kill switch’s existence, instead propelling it into achieving the singularity.
What would make the singularity possible?
The continued development of artificial intelligence is what will make the singularity possible. If AI gets to the point where it’s capable of inventing technology beyond our understanding and capabilities (AGI), it’s safe to say that singularity has come to pass—AI has exceeded humanity in terms of intelligence.
And even if restrictions are put in place in an AI’s programming to prevent its intelligence superseding humanity’s, small errors or improperly defined parameters could inadvertently cause the singularity we were trying to prevent.
Unforeseen behavior in AI has already been observed due to poorly defined parameters. In 2013, for example, programmer Tom Murphy designed an AI to play Nintendo NES games. While playing Tetris, the AI learned to indefinitely pause the game to prevent itself from losing. Murphy hadn’t programmed the AI to do this. Just imagine what unforeseen consequences could occur with a much more powerful AI.
What might happen in the singularity?
You’ll be familiar with this answer: We have no clue. And it’s rather scary, if a little exciting. An idealistic scenario would see humans and machines working together, forging ahead to create a better future for both. New technologies would be discovered, potentially allowing humanity to take our first steps towards settling elsewhere in the solar system. Humans and machines might even merge together to create a new form of intelligence.
However, another future would see machines taking over the world, with humans living under their control. Given that the singularity would result in technology beyond our understanding, it’s highly likely we’d be unable to stop the machines. Movies and books have long explored such a future, as have the physicist Stephen Hawking and the entrepreneur Elon Musk, who both worry that advanced AI could escape our control.
Take back control of your privacy
30-day money-back guarantee