Every time I read someone expressing their fear that humanity will be annihilated by a too-rapidly developing AI in a not-so-distant future, I get a creepy feeling. A few days ago I asked myself why. I found a few sensible reasons to dislike the belief in an AI apocalypse:
The AI apocalypse risk idea is ethnocentric. Since when do the most developed sectors of the most developed countries hold a monopoly on world destruction?
End-of-the-world-ism is always annoying, because it implies that we who are alive now are something very special: the very last specimens of a 300,000-year-old species. When people alive now claim that it is unique to be alive right now, I become a bit suspicious.
Belief against belief
Still, the main reason I worry is a different one: everybody seems to believe the same thing on an issue where it is technically possible to hold different beliefs.
Like everything else of importance, AI is not risk-free. Almost certainly, there will be serious accidents caused by faulty AI systems. Still, the prospect that the world will be destroyed by an AI gone wild rests on a few assumptions that far from everyone agrees with:
1. AI will go from being too stupid to be useful, as is the case for most applications today, to being too smart to be safe to use.
2. Humans will voluntarily relinquish control over the physical world to AI systems they don't understand. Not only will they relinquish control over the physical world; they will do so in such a way that humans cannot pull the plug or even drop a bomb on a misbehaving machine.
None of this is impossible. But it is also far from 100 percent certain that the above scenario will happen. It depends on what one believes about computers and, above all, what one believes about humans. It is also possible to believe that:
3. Human nature is much more complex than any AI, so AI will never be superior to humans in every respect.
4. Humans are wary of relinquishing control over everything to opaque systems. Humans also enjoy supervising things and being in control. Not every human, but enough humans to keep errant machines in check.
If most people find 1 and 2 more credible than 3 and 4, then fair enough. But if 100 percent of people believe in 1 and 2 and zero percent believe in 3 and 4, then I suspect there is no longer a rational exchange of ideas so much as a semi-religious adherence to dogma.
Ingroup in formation
Beliefs are sprawling by nature. Whenever it is possible to believe different things, people will believe different things. The big exception is when people explicitly gather around shared beliefs, in religious or ideological groups.
Actually, only groups built on shared beliefs in that way are groups in the narrow sense of the word. If a collection of people only says things that follow from logic and common sense, it never really becomes a group; it remains more of a gathering of people. Since anyone might happen to agree with anything people in the gathering say, it never acquires the clear outer boundaries of a genuine ingroup.
In order to become an ingroup with boundaries to the outside, a group needs an idea that is weird enough not to appeal to most people. Then people can make a choice: start believing as the group does and become part of it, or don't believe and be part of the outgroup instead.
Leaders can do a great deal to establish such group-forming ideas. Since group members have confidence in their leaders, they will judge the leader's ideas more favorably than non-members will. Within the movement, challenging the leader's beliefs becomes a kind of bad manners. If someone replies "this sounds stupid, I don't believe in it", that person has indirectly called the leader stupid. Ingroup members think twice before doing so, because they know the leader is not, in fact, stupid. Meanwhile, outgroup people have no qualms about blurting out "this sounds stupid, I don't believe in it". In that way, a division between us and them is formed.
Thinking or believing?
I don't think it is wrong that rationalists believe in things. Believing things is part of human nature. I hold many beliefs myself.
What worries me is that close to one hundred percent of self-proclaimed rationalists seem to believe the same thing. If no self-identified rationalist blogger ever writes a post saying "no, I don't believe AI risk is one of the foremost threats against humanity", I think rationalists are getting closer and closer to becoming a community of believers. Believing in existential AI risk is not worse than believing something else. But it is still a belief among beliefs.
Rationalism is the best subculture I have ever seen. I like Astral Codex Ten. I go to every ACX meet-up in my neighborhood. I have never met more interesting and intelligent people gathered in one place, anytime or anywhere else. Most of all, I hope it continues. I hope that rationalism continues gathering people who share a certain way of thinking rather than people who share certain beliefs.
Wow, I didn't realize you were as involved with AstralCodexTen as that! I do read it regularly and bring some of the ideas into work. But at the same time, my own attitude towards the rationalist movement is far more muted - it's very obviously a clique, they very obviously aren't clear thinkers, and I never comment there because I don't accept any of the core passions. My sense is that they really, really care about certain issues (like AI risk) and aren't interested in hearing ideas or counterarguments that originate from outside of their shared frame of reference. For a while I tried posting at r/TheMotte, but pretty quickly learned that they don't understand the world except through the lens of the rationalist movement. Oh well - they're young!
When the rationalist movement is examined as a group, it's very easy to see the familiar human craziness popping up there. You're right, people tend to focus on their own world and overinflate its importance. And you're right, people have tended to believe the End is Nigh for thousands of years. Rationalists moved on from the Bible with the Book of Revelation; now they have Nick Bostrom and his bestselling Superintelligence. And yes, questioning the dogma of the group does have the flavor of an attack on leaders and a threat to the group.
But for all that, I do think there is a strong case to be made about the danger of AI dragging the world into a horrible singularity: "Sure, the chance of out-of-control AI gaining power over humanity is far out. It might not happen at all. But if it does, the worst-case scenario is world-ending." The simple fact is that it pays to have more concern for existential risks than for ordinary risks. So while I dislike being categorized alongside Scott Alexander, Eliezer Yudkowsky, or other self-styled rationalists, I do have to bite the bullet and agree with them that AI risk is one of the foremost threats against humanity.
Really, there just aren't that many threats against humanity that we could list. Drug abuse, war in Ukraine, and child hunger may be real problems, but they aren't threats against humanity. As you and Anders have pointed out, global warming is a thing, but not THAT much of a thing. The only actual threats to humanity that I can think of are 1. attention from unfriendly and incredibly advanced extraterrestrials, 2. something I prefer not to talk about online (sorry), and 3. the discovery and deployment of a technology with unforeseen consequences. And who are we kidding, right now #3 kind of means "AI."
Your second set of bullet points is numbered 1 and 2 when I think you want 3 and 4.