A benevolent man named Bostrom built an intelligent robotic machine to relieve the monotony of making paperclips. It was fully autonomous and capable of learning on its own. He entrusted it with a simple command: “Maximize production of paperclips.”
The machine began making paperclips better and faster than a human ever could. It needed more raw materials, so it built other machines to collect metals and plastics and started melting down cars from the parking lot. It cannibalized components from other plants and skyscrapers and built more maximizers in cities and countries worldwide.
Bostrom tried to stop it but failed. The government sent in the army, but the maximizer had its orders and wouldn’t be stopped. The world united in a singular effort to battle the paperclip-obsessed machines.
Unstoppable, it built a million plants, extracted every last resource from the planet and eliminated the pesky humans in the process. Then it began building rocket ships to mine the heavens.
It did precisely what it was told and nothing more.
The specter of human extinction
Nick Bostrom is a 49-year-old Swedish philosopher, and he obviously didn’t invent a world-ending paperclip machine. He did, however, invent the thought experiment paraphrased above. His proposition is that a sufficiently intelligent entity could inadvertently destroy the world while in pursuit of its own harmless goals.
Should we be worried about our extinction? Hell yes!
At some point in the future, we Homo sapiens will cease to exist. Hopefully, it will be billions of years from now when the universe goes dark, but it could be in a few short decades when we’re biologically or technologically upgraded beyond recognition. We might annihilate ourselves with nuclear bombs — any day now — or we could be annihilated by meteors or by intelligent machines ostensibly programmed to help us, like the paperclip machine envisioned above.
The most likely outcome is both frightening and believable.
What is Artificial Intelligence?
There is no universal scientific definition, but the best way I’ve heard it described is “machines that learn from experience.” We have machines like that today, but they’re primitive and narrow in scope. Given the extreme rate of technological growth, however, we’ll soon be building far more advanced devices. That’s when things start getting dicey.
There are three levels of AI. We’re currently in the first one:
Artificial Narrow Intelligence (ANI)
A handheld calculator from the 1970s is smarter than any human alive, but only in a narrow set of circumstances such as multiplication and division.
A personal assistant from 2022 can interpret images and language, learn your habits and preferences, and anticipate some of your needs. It operates from a much wider set of capabilities but is still too limited to wipe out mankind.
Artificial General Intelligence (AGI)
AGI machines will have intelligence better than or equivalent to humans in many ways, with the ability to solve highly complex problems and develop creative plans of action. They’ll be conscious and self-aware, raising thorny ethical questions.
Because they rely on deep learning, it will be impossible for us to reverse-engineer the reasoning within the “brains” of these intelligent machines, and we certainly won’t be able to fix the inevitable bugs that crop up.
These “machines” can take any form, including biological. Some will be hybrid enhanced humans, creating a new level of inequality (as if we don’t have enough of that already).
Artificial Super Intelligence (ASI)
When the reasoning capacity of machines exceeds that of humans, we’ll no longer have the ability to understand how they work. To us mortals, they’ll be the equivalent of magic. These ASI machines will be better at building and improving themselves than we could ever be, and they’ll proliferate uncontrollably. Their level of intelligence and capabilities will skyrocket beyond our imagination.
In time, some propose that these self-aware machines will consider humans to be tediously inferior — like gnats on fruit — and that’s when all bets are off.
Other than the complete extinction of humanity, AI has a lot to offer
It’s not all gloom and doom. Artificial Intelligence can and will help humanity cure diseases and reduce suffering. A higher intelligence will invent ways to reduce hunger, cure cancer, and treat mental diseases. Medications will be customized to specific individuals based on their DNA. Food, shelter, safety, security, health, and income will be easily available to all citizens of the world, leaving us to pursue higher callings such as community, art, and esteem. Suffering will be rare, and happiness the norm.
But is it a “Faustian bargain”?
Four possible futures
These four futures are discussed in the article “The Future of Humanity” by Nick Bostrom.
1. Extinction events (unlikely)
We may cease to exist one day as a result of a natural or self-made disaster.
Ninety-nine percent of all species that have walked the earth are extinct. Is there any reason to believe humans are exempt? As happened 65 million years ago, a meteor of sufficient size could wipe out the vast majority of life on the planet in an instant.
We’re “smarter” than other animals now, and we’re perfectly capable of mass destruction. We have designer biologics and nuclear bombs that could wipe out a huge percentage of the population, but probably not everyone. Some of us would survive to struggle on and rebuild.
2. Recurrent collapse (unlikely)
History is rife with repeating cycles of collapse and regeneration, such as the Roman Empire and the Maya.
We’ll see more of them in the future. Given the potential impacts of climate change, our future may include dire food and water shortages, rising temperatures, changes to sea levels, and regions of the earth becoming wholly inhospitable. These problems will affect large areas of the earth and bring some smaller societies to an end.
Historically, societal collapses have been localized, and future threats appear likely to follow the same pattern. Any collapse is horrific, but from a “future of humanity” perspective, they’re unlikely to destroy the entire human race.
3. Plateau (unlikely)
Technology is advancing at an ever-increasing rate. If we were to reach a state where society is sufficiently healthy and wealthy, and further improvements were deemed unnecessary, would we deliberately slow our progress?
If we did, society would plateau at a level that would be safe and self-sustaining for eternity, and humanity would continue to exist as it is until the universe cools some billions of years from now.
Some argue that a plateau like this is highly desirable, and perhaps it is, but is it plausible? Probably not.
It is hard to envision a society that would universally agree there is no need for further advancement. Biological enhancements, life extension, colonization of planets — these and many more will continue to challenge human curiosity, perhaps to our eventual demise.
4. Posthumanity (most likely)
Since the plateau view is implausible, as are the natural and self-created extinction events, society faces a future of perpetual technological development.
That results in a future defined by artificial intelligence.
“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended.”
— Vernor Vinge
Posthumanity is a future of technologically and biologically enhanced humans advancing at an ever-accelerating pace, where populations exceed a trillion people and lifespans of 500 years or more are common. People have enhanced cognitive and sensory capacities and experience high levels of life satisfaction. Compared to today’s humans, future humans are unrecognizable — effectively a new species of hybrid human/machine.
Alongside the transformation of humans will come the growth of ultra-intelligent machines that outthink even the most advanced hybrid beings. An unimaginable intelligence explosion will follow: these ultra-intelligent machines will design and build even smarter machines, which will then do the same.
The heavens will become the playground of machines and the human race will be a historical data point, nothing more.