
I'm skeptical about using the economic argument that humans are more 'useful' than AI. I highly doubt that will persist longer than another few decades, max.

If you take this argument to its conclusion, then once AI becomes good enough to do everything humans can do, we're useless. I'd argue that if you don't want AI to become the norm, you have to reject it on religious or philosophical grounds and avoid the economic argument entirely. Humans are done when it comes to economics.


> If everyone has access to a silken tongued AI then silken tongues will no longer be a way to get ahead in life.

Another truth from the column, which can be generalized as a critique of much of AI alarmism: if AI makes it possible to do task X by computer, and (as usually happens) that automation can't be effectively monopolized by a small number of businesses, the price of task X rapidly declines toward zero. This is bad for people who do task X, especially those who are particularly good at it, but it's good for everybody who needs task X done, which is usually everybody else. But the labor income in the economy will simply shift to whatever tasks automation can't do (at that time).

And sometimes a task will be one that "automation can't do" by definition. It has been 5,000 years since the fastest way for a human to get from point A to point B was running, yet throughout those 5,000 years, being one of the fastest human runners has remained lucrative labor. (Consider the ancient Olympic games and the contemporary Usain Bolt.) And the total income of professional human chess players likely well outstrips the total money paid for computer chess-playing software and services.


> Any AI with the intelligence to pose an existential threat to humanity will understand this simple logic. A brain needs its hands. No matter how smart the AI gets, humanity’s future will be assured as long as we can provide vital services to the masters of the digital realm. It might not be the future most AI alarmists envision for humanity, but for most people on Earth it is a future very similar to the present.

I don't think you directly say that AI is not such a big deal, but I get the sense you are trying to suggest it. Are you 100% sure that you are right and there is no chance AI can kill us all? One of the major ideas in the existential-risk literature is that, given how hard such predictions are, we cannot be perfectly sure of our estimates of how likely it is that something poses an existential threat. And since the stakes are high, we should be careful.

What this means is that even if this is mostly true (not perfectly):

> No matter how smart the AI gets, humanity’s future will be assured as long as we can provide vital services to the masters of the digital realm.

We should not conclude that there are no existential risks from AI, or that we shouldn't try to prevent them. To really get proof of what a world with such an AI would look like, we would actually need to run the AI, which could kill us all, so it's better not to!

author

You are right that I believe AI is a big deal. I just do not see how it can pose a serious threat to us. There is always a risk, of course. But there is a risk with everything. AI risk seems quite modest in comparison.

What's more, AI has the power to help us contain multiple other risk scenarios. The risk that AI will, by some unknown magic, turn both almighty and hostile seems very slim compared with the near certainty that AI can help free up the labor of millions or even billions of humans, whose ingenuity and industriousness can then be harnessed to advance humanity.


I'm curious, Anders: have you seen Robert Miles' talk?

https://www.youtube.com/watch?v=pYXy-A4siMw&t=931s

(If you're not interested, the summary is that the AI models we use now - models where we've had amazing gains - are dangerous by default. Their *baseline* behavior is aberrant. I personally doubt these will be the same problems we see in the long run, but so far it seems that the fewest assumptions yield an AI that will be misaligned with humanity's goals.)

author

I am sort of allergic to the spoken word, so I am not going to listen to some unknown person talking about AI (although I must admit he has a very clear and easy-to-understand voice).

My point of view is not really the philosophical angle that seems so pervasive in AI-risk circles. I merely believe that the risk from today's AI does not seem very great. Partly because of the physical-world problem I elaborate on in this article. Partly because I do not see how today's AI can get very much smarter.

What is today called AI is basically a set of statistics engines that mimic human speech. They seem human because they are very good at imitating the way humans talk (or, rather, write). But since they are only imitating, I do not see how they will ever get more intelligent than humans. That would require some entirely new technology. That is of course not impossible, but as far as I know it is not imminent either.
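To make concrete what I mean by a statistics engine, here is a deliberately crude sketch in Python. It is only a toy bigram model, not how modern systems are actually built (they use neural networks trained on vast corpora rather than frequency tables), but the core task of predicting the next word from observed statistics is the same idea:

```python
import random
from collections import defaultdict

# Toy "statistics engine": count how often each word follows each other word.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    options = counts[word]
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate text by repeatedly imitating the statistics of the corpus.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and the cat"
```

The program produces plausible-looking word sequences without understanding anything; it only imitates. That, scaled up enormously, is the gist of my point.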


They seem more intelligent in language because they can complete many language tasks in less time and with better results. My take is that we are seeing what intelligence looks like when it consists purely of language, with no logical reasoning, etc.

It's true that humans are a gatekeeper to the physical world. However, advanced AI could become the gatekeeper to the virtual world, a world that now controls the physical one by remotely operating critical infrastructure such as energy systems. So the AI may be just as capable of holding us hostage as the other way around.


Gotta hand it to you! You have a fresh perspective on AGI. Your fingers are pointing in directions outside the frame the mainstream media points to! ;-)

Nonetheless I'd like to push back a bit on one of your good thoughts.

>The superiority of human dexterity will not disappear anytime soon. One day robots might equal humans in physical ability. But this day is not today. It is not tomorrow either. _The time when robots will outdo humans physically is probably more centuries than decades away, if it ever comes._

I am reminded of the quote apocryphally attributed to Henry Ford: "If I had asked my customers what they wanted they would have said a faster horse”.

In other words, what might happen when AI - as Ford did - changes the rules? For example, suppose AI enables better "just-in-time growing", where little robos bring ripe food from the farm straight to your kitchen? No warehouse needed. Or bricks from the kiln directly to the building site? No hands needed!

With actions like these, the need for human dexterity is reduced. Thus, I have no feeling that "disarming" society will take centuries.

See also:

* https://www.dexterity.ai/

* https://www.universal-robots.com/

* https://scitechdaily.com/columbia-engineers-create-highly-dexterous-human-like-robot-hand-that-can-operate-in-the-dark/

Update 2 2023-05-10

See also: https://substack.com/inbox/post/119020621

AXIS OF ORDINARY ~ Alexander Kruel: Links for 2023-05-10

> Before you say silly things like, "But AI is still bad at physical tasks," stop right there and read about the amazing advances being made in robotics literally every week.

Followed by a dozen useful links.

Update:

BTW, there is a sad but relevant example where the imitation of human dexterity was outwitted by an alternative technology:

https://en.wikipedia.org/wiki/Paige_Compositor versus https://en.wikipedia.org/wiki/Linotype_machine

The author Mark Twain was a major investor in Paige's device. Twain bet and lost a fortune in the process.

author

Of course, if AI turns out to be incomprehensibly more intelligent than human beings, intelligent on some level we can hardly even fathom, then all bets are off. I have read predictions of AI becoming so smart that it can design its own nanobots from the proteins up. And by extension control the physical world on a level far beyond what humans can accomplish with their big, clumsy hands.

For several reasons I do not think that AI will, or even can, achieve intelligence on that scale. I might write another article about that, but right now Tove has banned me from writing about the subject since she has several articles on the intelligence theme coming out in the next few days or weeks.


>> For several reasons I do not think that AI will or even can achieve intelligence on that scale.

You are not alone.

An amusing current thread on the "intelligence explosion" (https://www.lesswrong.com/tag/intelligence-explosion) is accessible from here:

Contra Yudkowsky on Doom from Foom #2

https://www.lesswrong.com/posts/LF3DDZ67knxuyadbm/contra-yudkowsky-on-doom-from-foom-2


This article resonates with my thoughts on how increasingly ignorant of the real world people are becoming. I restacked a quote from the article with a much longer response.

https://substack.com/profile/143639320-nthbridgeburner/note/c-15727707

May 7, 2023·edited May 8, 2023

There is so much truth to this column to be unpacked. As just a start, I've been reading business management magazines on and off for [corrected:] 30 years. In that literature, it's well known that the diffusion of the PC through business was revolutionary. Specifically, before that, many middle managers spent their time (1) taking data from their subordinates, aggregating it, and passing it to their superiors, and (2) taking aggregate commands from their superiors, disaggregating them, and passing those to their subordinates. With the PC, those processes could be largely automated. Similarly, that sort of data became easier to pass "sideways", from one middle manager to another at the same level, rather than having to be passed up to a common superior and back down. The result was that a lot of middle-manager labor was eliminated, and consequently a lot of middle managers were eliminated, leading to the contemporary "flatter organizational structures". But it's not as though people of that social class and education have worse job prospects now than they did 50 years ago.

author

The essence of my article is, I think, that AI will profoundly change the entire global economy, and that it will do so in a way that shifts economic power to manual workers. If so, it will go against the last 200 years of economic development, which is very interesting, to put it mildly.


> shifts economic power to manual workers. If so it will go against the last 200 years of economic development

I would phrase it "workers who do physical things" ... a large fraction of those in the current economy are people who deal with clients in person, and many of those don't need to be particularly strong or dexterous, and the crucial skills are social. Certainly, it will tend to downgrade purely intellectual workers.

Whether the last 200 years have consistently gone against manual workers is a complicated question. My knowledge of economic history runs largely along the UK-US thread. During the earliest parts of the Industrial Revolution, manual workers didn't gain much. But by the mid-1800s, real incomes were rising enough that manual industrial workers could reliably reproduce. By the 1950s in the US, (male) manual workers in large industries were doing very well, and their unions wielded a great deal of political power. But since globalization started biting around 1975, the position of manual workers has declined.


I can't keep up with you and Tove! I have things to say, but it just takes time to say it. Three quarters of the reason is that what I want to say invariably relies on facts most people A) don't know or B) don't understand or worse C) consider taboo.

For now I'll simply say that this is a clear and well written description of the optimistic position. The pessimistic position notes with a shrug that people will do things for money. What is worse, people don't even need to be paid very much if they're asked to do things like:

"Use the spray bottles to disinfect the cutting boards before the morning shift arrives." or "Take this cardboard package and leave it by a corner of the hallway, where someone will pick it up for incineration" or "Place the contents of this package into spray bottles labelled 'disinfectant'" or "Bring these live virus samples into a warehouse and leave them in a cardboard package for disposal" or "Find a package in the corner of the hallway and mail it to a given address" or "Open the package you receive in the mail and pour the contents into spray bottles labelled 'disinfectant.'"

Already people trust AI to tell them where to go and how to get there; when trust in computers becomes a matter of course, will humans remain human, or will their hands become biological components of the machine?

author

My starting point for this article is the Terminator film, which, incidentally, Matt Yglesias wrote about here:

https://www.slowboring.com/p/the-case-for-terminator-analogies

And my argument is that even though today's AI is at least equal, and probably superior, to Skynet in the Terminator film, there is nothing even remotely similar to the T-1000 robots. It is perfectly possible that today's AI would try to wipe us out with nuclear weapons if we ever gave it the launch codes. Or by some other nebulous scheme.

But without a presence in the physical world every AI doomsday scenario will always be a half measure. Plenty of humans will survive. And they will strike back. AI is not invincible. It is totally dependent on physical infrastructure: servers, networks, even electricity.

In a war between humanity and computer code, the odds are heavily stacked in favor of humanity. That seems to be easily forgotten every time ChatGPT expresses another eloquent opinion.


Well, let's assume for the moment that everything you wrote is correct. (It definitely starts off well, given that Terminator 1 was one of the best movies of all time.) But COVID gave humanity the heebie-jeebies, and its death toll was largely confined to the elderly; it didn't even kill 1% of the people alive. In contrast, consider the usual near-future scenario of a war between humans and a rogue AI that wants to cover the arable landmass of the Earth with paperclips. If it were possible for such a conflict to kill a quarter of humanity, it would be worth taking seriously.

But this isn't really what I'm concerned about. The substance of your argument depends on robots not being able to function effectively without human hands. Well, the year is 2023, and here is the level of agility and manual dexterity robots are capable of:

https://www.youtube.com/watch?v=-e1_QhJ1EhQ

https://www.youtube.com/watch?v=y3RIHnK0_NE

The videos are obviously scripted, and the level of *intelligence* they display is questionable. And of course the usual arguments about logistics, production, benefit vs cost, maintenance, and so on all still apply. The jobs of manual laborers are not going to disappear for the next several years, if ever in the foreseeable future. But the fact that robots are approaching human-level *coordination* is extremely difficult to deny. Where will they be in ten or twenty years? As the Singularity draws near, do you think AI will depend *completely* and *totally* on the cooperation of human agents?

author

I do not deny that the Atlas robot can do some impressive acrobatics. But it still has some way to go before reaching truly human levels. What's more, I suspect it will be difficult for it to advance very much further. Recent advances have mostly been in sensors and perception, more or less AI stuff. The physical side of the robot has seen much less development. It still uses a hydraulic system for movement. And while hydraulics are useful, I suspect they have more or less reached their power-to-weight limits.

I cannot find any information about what weights the Atlas can lift, nor about how long its batteries last. But I assume both values, especially the battery life, are very much lower than what humans can achieve.

And then there is cost. Even if a robot like Atlas one day equals humans in mobility and versatility, it is very hard to see it doing so at a cost comparable to human labor. If that day ever comes, I predict it will be due not to an abundance of competent robots but to a lack of humans.

author

>>I have things to say, but it just takes time to say it.

Trust me, writing things often takes us ridiculous amounts of time. This one is no exception.

>>Will humans remain human, or will their hands become biological components of the machine?

This question makes me think of those stories about truck drivers who trusted the GPS and got stuck in the middle of the forest. One of them was fired on the spot: https://sverigesradio.se/artikel/2439827 (link in Swedish; I just had to verify it actually happened). The fact that the employer fired a truck driver for trusting the GPS says something about the expectations placed on human workers: they are supposed to show common sense, as a complement to machines, which are great at data but lack common sense.

Basically, I think that is why humans are hired and will continue to be hired: the combination of hands and common sense. As everybody knows, everything that can go wrong will go wrong. Actually trusting an AI would require people to forget that rule. Common sense is at least as old as hands. I think it is a human killer app just as much as dexterity is.


1. Fine, but would you balk when asked by a supervisor to "Place the contents of this package into spray bottles labelled 'disinfectant'"? And even if you or Anders would, would *everyone* hesitate? A superintelligent AI should be able to put together a plan requiring many steps, and it should be able to notice who does and does not reward it by doing what it wants; this is one of the most basic aspects of these models, that they learn from rewards. Even if most people won't function like robots, some will.

2. And even if it's unlikely they will ever produce hands of their own, and even if it's unlikely that they will ever want to turn us all into nuclear reactor shielding, and even if it's unlikely that anyone will do what they ask, is it so unlikely as to be impossible?

And ultimately, while point 1 doesn't sound like paranoia to me, it isn't even my position! It's more like that of the self-described Rationalists on AstralCodexTen. My position is just point 2. It's easy to focus on the likely future and brush aside the unlikely. *No,* an end-of-humanity Singularity in the next forty years isn't the most likely outcome of present AI research, and *yes,* I tend to agree with you and with Anders when you talk about limits to the possibility (or the devastation) of a coming Singularity. But to justify the position that we shouldn't be concerned about it, you need to show that the likelihood of it happening is indistinguishable from zero. You and Anders haven't done that, and I don't think you can.

author

Already in the 19th century, Karl Marx said workers were alienated. That's entirely possible. But still, I have never heard of anyone in our society who had a job where they didn't know what they were doing. It might exist in the military or the secret services, but I have never heard of it in civilian life. I think there are two reasons for this:

1. People get much more efficient when they know what they are doing

2. People like very much to know what they are doing

During the last few years, Anders and I have been building a house. I'm doing most of the construction work. I don't know how many times I have pestered him for instructions and updated drawings. Time after time, I have explained: I can't work efficiently if I don't know exactly what the result is supposed to be. Not knowing the exact goal of the work is paralyzing because I hesitate when I should be working.

Construction work is rather complicated work. But the same applies when I'm planting things in the vegetable garden. I know that if I plant something in the wrong spot, my vegetable garden and crop rotation will be ruined, and in the worst case I will have to dig it up and plant it again.

Humans don't like doing the same work twice because they did it wrong the first time. I think it is an instinct. We dislike it so much that we instinctively demand to know exactly what we are doing when we do things.

I'm not 100 percent sure that humans will not compromise that instinct even more than they already do. But I think it exists, and that it is important. And I think most AI alarmists have no idea it exists, because they are not manual workers themselves.


Oh, how I dream of employment where I knew what I was doing. At my current place of work, other employees have commented that they were given very clear, direct, low-level instructions by a supervisor, instructions that were then countermanded by that same supervisor over a span of months, weeks, or hours. My position is now totally secure - *now.* But where job security is poor, or where the prospects of advancement are dangled just out of reach, most white-collar workers will bite their lips, look around, and try to figure out how to effectuate the most blatant contradictions and mouth the most obvious absurdities.

https://www.youtube.com/watch?v=BKorP55Aqvg&t=164s

Epicurus said we should quit and grow food in the boonies with our buddies like you and Anders. Live the dream, Tove!

author

>>At my current place of work, other employees have commented that they were given very clear, direct, low-level instructions by a supervisor that were then countermanded by that supervisor over a span of months, weeks, or hours.

An AI boss would never behave that way. Selective forgetfulness is human, all too human.

In general, white-collar workers need to put up with a lot more bullshit than blue-collar workers. That's the price they pay for the privilege of working with a computer screen instead of that messy, heavy physical world. I think everybody agrees that the world would be a better place if a certain percentage of all white-collar people were fired and replaced with hamsters in cages. The problem is that people can't agree on exactly which of them are doing more harm than good.


It can be really bizarre. People knock on my door more than once a day for help or advice outside of my job duties, I go the extra mile to provide additional services, and every day people show gratitude for my presence and tell me they are glad I'm there. (If this seems incredible, note that I don't act the same way at work - I'm polite there.) And yet *all* of this is extraneous to my actual job duties, which I've pared back to the absolute minimum in response to inconsistent direction from above. If their stated goals were the same as their actual goals, they would very wisely replace me with a hamster cage.

As it is, their actual goals are nebulous enough that I doubt even they understand what they are. I keep throwing out feelers higher and higher up in the organization, and I suspect that maybe the problem actually lies at the very top. In other words, maybe my own supervisor zigs and zags so often because he's just terrified of his own boss? A good thing if true - the current upper management is leaving - but my coworkers seem unhappy about the incoming replacement, so sheesh, I don't know.

TL;DR: Live the dream, Tove!


Very, very few people would actually do the weird and shady things hypotheticals like this require, though. They'd assume the computer was glitching and tell their manager. Or maybe just ignore the computer altogether and not even bother telling the manager: "Refill the disinfectant spray bottles from an unmarked container that just showed up in the mail from some random dude? That's dumb, I'll just refill them with the normal stuff and tell the computer I did what it said."


Fine. So what about point 2?


Very few things in life can be guaranteed to be 100% safe. What we normally do is set up safeguards and early-warning systems to prevent problems and/or catch them before they become serious. For example: should one big AI be giving orders to everyone from restaurant workers to high-security bio-lab scientists? Of course not; that's dumb. You run a restaurant with an AI designed to be good at running restaurants, and a high-security bio lab with an AI custom-made to handle both the science and high levels of biological security. Two separate AIs that will normally have no contact with each other and don't understand each other's areas of expertise. Can I guarantee 100% that they will never start talking to each other and come up with your plan to smuggle deadly viruses into restaurants? No, nothing in this life is 100%. But we can make it 99 with a whole lotta 9's after the decimal point, and that's good enough for most of us.


In that case, the only things we seem to disagree about are

1. How easy AI safety will be, and

2. What steps we ought to take.

I'm not at all reassured that AI safety will be easily achieved. Researchers have already casually discovered that AI can be used to find formulas for substances many times more deadly than known chemical weapons:

https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx

Or consider that lethal autonomous weapon systems have already been developed which are capable of killing human targets after deployment; the development has prompted US generals to warn that we may be approaching a new cold war created by emerging technologies.

https://www.weforum.org/agenda/2021/06/the-accelerating-development-of-weapons-powered-by-artificial-risk-is-a-risk-to-humanity/

"We'll just separate the AIs" doesn't solve either of these problems.

Most of all, the numerous unknowns posed by a singularity - a hypothetical time when decades or centuries of technological development can occur in the span of a few days - leave us with serious questions about what we ought to do. Yes, spreading alarmist ideas about how we are doomed isn't very wise, but neither is reassuring everyone that "Oh it will all be fine, this is all merely alarmism." We should work towards risk mitigation rather than minimizing or ignoring the problem.


Thanks. Thought provoking.
