The Only Realistic Prediction About AI and AGI
Elon Musk, Sam Altman, and Eliezer Yudkowsky are wrong. And so am I.
The future of artificial intelligence (AI) and artificial general intelligence (AGI) is unknowable. For you, for Sam Altman, for Eliezer Yudkowsky, for Elon Musk, and for everybody else. So it's pointless to discuss how both technologies' evolution will affect us. No one knows.
We can't know because we have never before had artificial intelligence capable of doing what top AIs like ChatGPT do. And we haven't had the first AGI either. As a result, we don't know what benefits or consequences they could bring to our lives; we haven't experienced them yet.
We also can't use science to predict them. Scientific theories are, as the name implies, just that: theories. They are conjectures about how the world works. A theory is "good" when it explains a phenomenon consistently. Sometimes, a theory is so good that it becomes a window into possibility: it teaches us how to benefit from things we once deemed useless. But you can't explain a world or a phenomenon you don't know exists. Today, we can predict the downsides of some medical, chemical, and nuclear experiments because we have reliable conjectures about how these fields work. But we couldn't do that years ago, before these fields existed.
You can't think of solutions to a problem that doesn't exist or that you don't know exists.
In March 2023, The Future of Life Institute released an open letter calling for a pause on advanced AI development. The organization, whose advisors include Saul Perlmutter (UC Berkeley), Alan Guth (MIT), and Elon Musk, said:
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.
This pause is largely worthless.
Experts might audit GPT-5's code and training data and find that OpenAI built it in a way that makes it racist. In that case, reviewing the system for a known cause of racist behavior would lead to a positive intervention.
But problems will still arise. David Deutsch, a pioneer of quantum computing, argues that pausing AI development won't tell us whether we can avoid AI problems. "And meanwhile," he continues, "we'd have forgone 6 months of benefits. Including, down the line, forgoing lives saved. That's if the effect is as profound as many expect. That, too, is unknowable."
AI researcher Eliezer Yudkowsky thinks differently:
"Many researchers steeped, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in "maybe possibly some remote chance," but as in "that is the obvious thing that would happen."
His fear stems from two beliefs: (1) AI and AGI might not care about us, and (2) we can't challenge a technology smarter than us. These are valid fears. It's the same fear we all feel when facing someone "superior" to us. An aggressive person might hit someone who bumps into them at a club, but they'll beg for mercy if that someone turns out to be Floyd Mayweather. They wouldn't stand a chance.
Yudkowsky's solution is to make AI and AGI care about us. But it is insufficient. For one, and he points this out himself, we don't know how to make these technologies care about us. More fundamentally, you can't make an AGI care about anything. By definition, an AGI is like a human: unpredictable, creative, and autonomous. It can choose whether or not to do what you tell it. You can try to control it, as parents do with their children, but like rebellious kids, it can ignore you. This can't change in six, twelve, or a hundred months. If it could, it would mean we aren't building an AGI but an AI, an entity that acts within preset parameters. And scientists want to create an AGI.
Since the future is unknown, whatever any expert thinks they can do to avoid problems won't work. You can't think of a solution to a problem you can't see. Around 66 million years ago, a giant asteroid hit Earth and led to the extinction of 75% of all plant and animal species. If that same asteroid headed toward Earth today, we wouldn't die, because we now know that asteroids exist, that they can collide with us, and that they can kill us. We track the ones that come close so we can deflect them in time.
But most of us wouldn't make it if some cosmic threat we don't know about or understand posed the same danger.
So the answer to what we should do is: we don't and can't know. Only time will reveal the consequences of AI and AGI. Once they show up, we can build theories around how to avoid them. But even then, we won't be able to predict everything these two technologies will cause, or how the technologies derived from them will affect us.
This is an unsatisfactory answer, especially for policymakers and those held accountable for whatever AI and AGI do.
David Hume argued that we can't derive ethical conclusions from facts alone. Ethics is personal, and there are infinite facts. Whatever you think is "right" depends on the theory, values, or goals you use to measure the rightness of an action. Which ones are you choosing?
For example, say you have to invest $1M in fighting either extreme poverty or cancer. Either choice leads to both benefits and suffering. How will you determine which decision is "right"? The answer is you can't. Yet the public expects policymakers and people like Musk to make these decisions anyway. So they pick a set of facts and decide what's right. In the case of AI, they've decided that what's "right" is to shut down anything related to AI and AGI.
Even though I disagree with Yudkowsky's solution, I agree with his conclusion. "We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die."
Now, I don't think we will all die or that it's rational to believe it. It's a possibility—it always is. Only time can show us an unknowable future. What we can predict, though, is that regardless of whether AI or AGI leads to our extinction, glory, or nothing, someone will always say, "I told you so."
This draws me back to technologies that aren't AI-powered -- the human- and drone-powered ones we already use unethically -- and to this tweet: https://twitter.com/HKaaman/status/1484543766878904330?s=20
At the human level, we haven't aligned on our values or ethics worldwide...