Wow, I didn't realize you were that involved with AstralCodexTen! I do read it regularly and bring some of the ideas into work. But at the same time, my own attitude towards the rationalist movement is far more muted - it's very obviously a clique, they very obviously aren't clear thinkers, and I never comment there because I don't share any of the core passions. My sense is that they really, really care about certain issues (like AI risk) and aren't interested in hearing ideas or counterarguments that originate from outside their shared frame of reference. For a while I tried posting at r/TheMotte, but I pretty quickly learned that they don't understand the world except through the lens of the rationalist movement. Oh well - they're young!
When the rationalist movement is examined as a group, it's very easy to see the familiar human craziness popping up there. You're right, people tend to focus on their own world and overinflate its importance. And you're right, people have believed the End is Nigh for thousands of years. The rationalists have simply moved on from the Bible's Book of Revelation; now they have Nick Bostrom and his bestselling Superintelligence. And yes, questioning the dogma of the group does have the flavor of an attack on its leaders and a threat to the group.
But for all that, I do think there is a strong case to be made about the danger of AI dragging the world into a horrible singularity: "Sure, the chance of out-of-control AI gaining power over humanity is far out. It might not happen at all. But if it does, the worst-case scenario is world-ending." The simple fact is that it pays to have more concern for existential risks than for ordinary risks. So while I dislike being categorized alongside Scott Alexander, Eliezer Yudkowsky, or other self-styled rationalists, I do have to bite the bullet and agree with them that AI risk is one of the foremost threats against humanity.
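To make that "it pays to care more about existential risk" point concrete, here is a minimal expected-value sketch in Python. All the probabilities and loss figures are invented purely for illustration - they are not estimates anyone here endorses:

```python
# Toy expected-value comparison: an ordinary risk vs. an existential risk.
# All numbers are made up for illustration only.

ordinary_risk = {
    "probability": 0.5,        # quite likely to happen
    "lives_lost": 1_000_000,   # terrible, but humanity continues
}

existential_risk = {
    "probability": 0.001,          # "far out" - very unlikely
    "lives_lost": 8_000_000_000,   # everyone, and no future generations
}

def expected_loss(risk):
    """Expected lives lost = probability * magnitude."""
    return risk["probability"] * risk["lives_lost"]

print(expected_loss(ordinary_risk))     # 500,000
print(expected_loss(existential_risk))  # 8,000,000
```

Even with a probability five hundred times smaller, the expected loss from the existential risk comes out sixteen times larger with these toy numbers, which is the whole argument in one line.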
Really, there just aren't that many threats against humanity that we could list. Drug abuse, war in Ukraine, and child hunger may be real problems, but they aren't threats against humanity. As you and Anders have pointed out, global warming is a thing, but not THAT much of a thing. The only actual threats to humanity that I can think of are 1. attention from unfriendly and incredibly advanced extraterrestrials, 2. something I prefer not to talk about online (sorry), and 3. the discovery and deployment of a technology with unforeseen consequences. And who are we kidding, right now #3 kind of means "AI."
My threshold for being involved in things is not that high. If I see an announcement that there will be an event in Copenhagen, I'm just like "Wow, an opportunity to meet real people, only 200 kilometers away! I'll be there!" I have few other opportunities to meet people outside of my neighborhood.
I agree that technology poses a risk. Maybe not to humanity as a whole, because rural Africa could be a rather technology-free back-up. But technology is certainly risky. Put an AI in charge of it, and we have... risk.
Still, I think it would be more accurate to talk about technology risk than AI risk. This text https://astralcodexten.substack.com/p/deceptively-aligned-mesa-optimizers has made me especially annoyed at the AI risk talk. It tells of a strawberry-picking robot going rogue because it wants so badly to pick red things and throw them at something shiny. So it picks a man's red nose. It throws the strawberries at a streetlight instead of into the bucket. It transforms the whole world into red things to throw at the sun. Maybe it is some kind of joke, but there is serious-seeming text in between. I just can't stand the assumptions it makes about strawberry growers and productive people in general: What is a man with a red nose doing close to an operating machine in the first place? Hasn't he learned that machines are dangerous? Why is a strawberry picker strong? Strawberries need to be handled gently. Why doesn't anybody care that the harvest gets thrown away? And so on and so on. I know my objections are seen as minor. But I don't think it is a minor objection that people who work close to machines know that machines are dangerous, and that people who grow things care about the harvest. That is part of human psychology, and I think it should weigh heavily in discussions about the interactions between humans and machines.
I also don't understand the "singularity" idea. Why would there only be one AI? It seems much more probable to me that there will be many different AI systems.
So I'll focus on the last part of your post. Because of your view that thinking and IQ are different things, you may not find this argument convincing, and that's fine with me - I'm not particularly wedded to this explanation. But here it is:
Generally speaking, it does not make sense for a superintelligent AI to exist on a single platform like a classical robot. Computing power is greater when there's access to more data and more hardware, and optimizing a system is easier the more human interactions there are. Although it might be possible to churn out 10,000 different intelligent computers, there's a strong incentive to link them together and get even more intelligent and well-coordinated behavior.
Now consider the practical applications of such an AI. Having Alexa set your home thermostat and plan your driving route is nice, but the more things it can accomplish, the better for the end users. So humans naturally want Alexa to take over many tasks:
* Appointments and customer service
* Drone deliveries
* Driving and flight control
* Health care
* Surveillance and law enforcement
And so on. It is not hard to imagine something like Alexa 7.0 that books you a doctor's appointment, reminds you of it, brings a car around to pick you up, fills out the paperwork, takes care of the insurance information, and then controls the robotic nurse that checks your vitals and prepares a report for the human physician. Then Alexa 9.0 replaces the doctor, and Alexa 10.0 replaces everything with an AI controlling a single android in your home capable of carrying out the entire process while making you a cup of tea.
The idea that we would have millions of such androids controlled by an unimaginably sophisticated central computer is what has people worried - that is the scenario where we slowly cede control of the entire planet to the equivalent of a robotic god whose eyes are a horde of drones and satellites, and whose minions are a legion of androids and humans carrying out its instructions as part of a global super economy. If the god considers you a threat, rural Africa will not be a safe place for you to run.
The scenario you paint is scarier than the rogue strawberry picker. Maybe I should start worrying a little after all.
I assume that the fundamental idea is that the AI becomes so good that people can't resist it. Most people wouldn't want to be bossed around by a computer. If they still agree to that, it must be because the computer is so much better than the alternatives.
AI is already good enough that many people can't resist it - the youngest generation is addicted to cellphones.
But really, the main idea is even more abstract than this. We can see feedback loops all over the place in nature and economics, and computing is currently in a positive feedback loop - the better our technology, the better we get at making technological breakthroughs. The real question then is whether AI will improve faster than the rate at which improving AI gets harder. This is hard to explain without calculus or at least some graph paper (there is a small numerical sketch further down), but:
A. If, every time AI increases its intelligence by a factor of ten, the next milestone leading to a tenfold increase in intelligence gets only twice as hard, then AI will explode into a singularity.
B. If, every time AI doubles its intelligence, the next milestone leading to a doubling of intelligence gets ten times harder, then AI will stall.
If it helps, you can look at the images you made at https://woodfromeden.substack.com/p/iq-and-intelligence-a-bifurcated and imagine A is blue and B is red as artificial intelligence increases over time.
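Here is the numerical sketch mentioned above - a minimal Python model, under crude assumptions of my own: research effort accrues at a rate proportional to current intelligence, each milestone multiplies intelligence by a fixed factor, and each milestone also multiplies the difficulty of the next one by a fixed factor. The factors are simply the ones from cases A and B; nothing here is a forecast:

```python
# Crude model of recursive self-improvement.
#   intelligence: current capability level (arbitrary units)
#   difficulty:   effort needed to reach the next milestone
#   time per milestone = difficulty / intelligence
#   (effort is assumed to accrue at a rate proportional to intelligence)

def total_time(intel_factor, difficulty_factor, milestones=30):
    intelligence, difficulty, elapsed = 1.0, 1.0, 0.0
    for _ in range(milestones):
        elapsed += difficulty / intelligence  # time spent reaching this milestone
        intelligence *= intel_factor          # milestone reached: smarter AI
        difficulty *= difficulty_factor       # next milestone is harder
    return elapsed

# Case A: intelligence x10 per milestone, next milestone only x2 harder.
# Time per milestone shrinks by a factor of 5 each step, so the total
# converges to a finite limit: milestones pile up ever faster (a singularity).
print(total_time(intel_factor=10, difficulty_factor=2))   # ~1.25, no matter how many milestones

# Case B: intelligence x2 per milestone, next milestone x10 harder.
# Time per milestone grows by a factor of 5 each step, so the total
# diverges: progress stalls.
print(total_time(intel_factor=2, difficulty_factor=10))   # astronomically large
```

In this toy model the whole question reduces to whether each milestone makes the next one relatively easier or relatively harder, i.e. whether difficulty_factor / intel_factor is below or above one.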
I think A is possible, and dangerous, and should be taken seriously. But I think B is more likely, especially now that Moore's Law has ended. Different people in my family have different attitudes, though. My oldest son is much more optimistic about the singularity - he thinks it's coming, and he also thinks it'll be great when it's here. Personally, I doubt it's going to be any better than crabapple pies. Three more in the oven right now!
The disaster scenario requires that technology becomes extremely cheap. For the moment, I have the impression that technology is expensive. Most technology is a privilege for those who can afford it. There is so much technology I know exists but don't use, because I can't afford it. The spelling correction on my touchpad is rather primitive; it can't distinguish between languages, for example. I know I could build a fully automatic irrigation system, but that would be both costly and fragile. Our robotic lawnmower is stupid when it operates, but right now it doesn't operate at all due to a crack in the boundary cable. When we built our house, we installed pipes for electronic control of the ventilation and outdoor lamps. But those pipes will remain empty until we have amazing amounts of time in some distant future.
This basic resistance of technology needs to be overcome in order to create an AI disaster. Maybe people who work in programming feel that is what they do, step by step. But here on the ground technology still looks chunky.
I wonder why you stress this objection. If we were limited to the physical technology that exists today, that would definitely reduce the possibility of a positive feedback loop producing machine superintelligence, but it wouldn't necessarily stop the singularity from occurring. Nor would it shield society from the superintelligent AI that emerges from it. Our local students have cell phones provided by parents and laptops provided by the school. America has almost as many cars as people. As people grow accustomed to banking online, checking the Internet for answers to questions, ordering pizzas through their cellphones, and paying traffic fines assigned by automated systems, they will simply defer to what the computers say. This is the way it is in most civilized societies - people learn to follow the regular procedures and fit into the economic, legal, and social systems of their day and age. No need for robots, drones, or anything else.
Your second set of bullet points is numbered 1 and 2 when I think you want 3 and 4.
Thank you for pointing it out. Fixed it the best I could.