Hands of gods
> If everyone has access to a silken tongued AI then silken tongues will no longer be a way to get ahead in life.
Another truth from the column, which can be generalized as a critique of much of AI alarmism: If AI makes it possible to do task X by computer, and (as usually happens) that automation can't be effectively monopolized by a small number of businesses, the price of task X rapidly declines to zero. This is bad for people who do task X, especially those who are particularly good at it, but it's good for everybody who needs task X done, which is usually everybody else. But the labor income in the economy will simply shift around to whatever tasks automation can't do (at that time).
And sometimes "what tasks automation can't do" will be true by definition. It has been 5,000 years since the fastest way for a human to get from point A to point B was by running, but throughout those 5,000 years, being one of the fastest human runners has been lucrative labor. (Consider the ancient Olympic Games and the contemporary Usain Bolt.) And the total income of professional human chess players likely far outstrips the total money paid for computer chess-playing software and services.
> Any AI with the intelligence to pose an existential threat to humanity will understand this simple logic. A brain needs its hands. No matter how smart the AI gets, humanity’s future will be assured as long as we can provide vital services to the masters of the digital realm. It might not be the future most AI alarmists envision for humanity, but for most people on Earth it is a future very similar to the present.
I don't think you directly say that AI is not such a big deal, but I get the sense you are trying to suggest it. Are you 100% sure you are right and there is no chance AI can kill us all? One of the major ideas in the existential-risk literature is that, given how hard such predictions are, we cannot be perfectly confident in our estimates of how likely something is to pose an existential threat. And since the stakes are high, we should be careful.
What this means is that even if this is mostly true (not perfectly):
> No matter how smart the AI gets, humanity’s future will be assured as long as we can provide vital services to the masters of the digital realm.
We should not conclude that there are no existential risks from AI, or that we shouldn't try to prevent them. To really get proof of what a world with such an AI would look like, we would actually need to run the AI, which could kill us all, so it's better not to!
Gotta hand it to you! You have a fresh perspective on AGI. Your fingers are pointing in directions outside the frame the mainstream media points to! ;-)
Nonetheless I'd like to push back a bit on one of your good thoughts.
> The superiority of human dexterity will not disappear anytime soon. One day robots might equal humans in physical ability. But this day is not today. It is not tomorrow either. _The time when robots will outdo humans physically is probably more centuries than decades away, if it ever comes._
I am reminded of the quote apocryphally attributed to Henry Ford: "If I had asked my customers what they wanted, they would have said a faster horse."
In other words, what might happen when AI, as Ford did, changes the rules? For example, suppose AI enables better "just-in-time growing", where little robots bring ripe food straight from the farm to your kitchen? No warehouse needed. Or bricks from the kiln directly to the building site? No hands needed!
With changes like these, the need for human dexterity is reduced. So I have no sense that "disarming" society will take centuries.
See also:
* https://www.dexterity.ai/
* https://www.universal-robots.com/
* https://scitechdaily.com/columbia-engineers-create-highly-dexterous-human-like-robot-hand-that-can-operate-in-the-dark/
Update 2 2023-05-10
See also: https://substack.com/inbox/post/119020621
AXIS OF ORDINARY ~ Alexander Kruel: Links for 2023-05-10
> Before you say silly things like, "But AI is still bad at physical tasks," stop right there and read about the amazing advances being made in robotics literally every week.
Followed by a dozen useful links.
Update:
BTW, there is a sad but relevant example where the imitation of human dexterity was outwitted by an alternative technology:
https://en.wikipedia.org/wiki/Paige_Compositor versus https://en.wikipedia.org/wiki/Linotype_machine
The author Mark Twain was a major investor in Paige's device. Twain bet and lost a fortune in the process.
This article resonates with my thoughts on how increasingly ignorant people are getting about the real world. I restacked a quote from the article with a much longer response.
https://substack.com/profile/143639320-nthbridgeburner/note/c-15727707
There is so much truth in this column to be unpacked. As just a start, I've been reading business management magazines on and off for [corrected:] 30 years. In that literature, it's well known that the diffusion of the PC through business was revolutionary. Specifically, before that, many middle managers spent their time (1) taking data from their subordinates, aggregating it, and passing it to their superiors, and (2) taking aggregate commands from their superiors, disaggregating them, and passing those to their subordinates. With the PC, those processes could be largely automated. Similarly, that sort of data became easier to pass "sideways", from one middle manager to another at the same level, rather than having to be passed up to a common superior and back down. The result was that a lot of middle-manager labor was eliminated, and consequently a lot of middle managers were eliminated, leading to the contemporary "flatter organizational structures". But it's not like the people of that social class and education have worse job prospects now than they did 50 years ago.
I can't keep up with you and Tove! I have things to say, but it just takes time to say it. Three quarters of the reason is that what I want to say invariably relies on facts most people A) don't know or B) don't understand or worse C) consider taboo.
For now I'll simply say that this is a clear and well-written description of the optimistic position. The pessimistic position notes with a shrug that people will do things for money. What is worse, people don't even need to be paid very much if they're asked to do things like:
* "Use the spray bottles to disinfect the cutting boards before the morning shift arrives."
* "Take this cardboard package and leave it by a corner of the hallway, where someone will pick it up for incineration."
* "Place the contents of this package into spray bottles labelled 'disinfectant'."
* "Bring these live virus samples into a warehouse and leave them in a cardboard package for disposal."
* "Find a package in the corner of the hallway and mail it to a given address."
* "Open the package you receive in the mail and pour the contents into spray bottles labelled 'disinfectant'."
Already people trust AI to tell them where to go and how to get there; when trust in computers becomes a matter of course, will humans remain human, or will their hands become biological components of the machine?
Thanks. Thought provoking.