Discussion about this post

Mikolaj:

> Any AI with the intelligence to pose an existential threat to humanity will understand this simple logic. A brain needs its hands. No matter how smart the AI gets, humanity’s future will be assured as long as we can provide vital services to the masters of the digital realm. It might not be the future most AI alarmists envision for humanity, but for most people on Earth it is a future very similar to the present.

I don't think you directly say that AI is not such a big deal, but I get the sense that you are trying to suggest it. Are you 100% sure that you are right and there is no chance AI can kill us all? One of the major ideas in the existential-risk literature is that, given how hard such predictions are, we cannot be perfectly confident in our estimates of how likely something is to pose an existential threat. And since the stakes are high, we should be careful.

What this means is that even if the following is mostly (though not perfectly) true:

> No matter how smart the AI gets, humanity’s future will be assured as long as we can provide vital services to the masters of the digital realm.

We should not conclude that there are no existential risks from AI, or that we shouldn't try to prevent them. To really get a proof of what a world with such an AI would look like, we would actually have to run the AI, which could kill us all, so it's better not to!
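To make the "high stakes" point concrete, here is a minimal sketch in Python of the expected-value reasoning behind this precautionary argument: even a small chance of being wrong can dominate the decision when the downside is enormous. All numbers (`p_wrong`, `harm_if_wrong`, `cost_of_caution`) are made-up placeholders for illustration, not estimates from any paper.

```python
# A minimal sketch of the expected-value reasoning in the comment above.
# All numbers are hypothetical placeholders, not estimates from any source.

p_wrong = 0.01            # assumed chance the "a brain needs its hands" argument fails
harm_if_wrong = 1e9       # assumed harm if it fails (arbitrary units, but enormous)
cost_of_caution = 1e3     # assumed cost of taking precautions (same arbitrary units)

expected_harm_if_dismissed = p_wrong * harm_if_wrong   # 1e7 in these units
expected_harm_if_careful = cost_of_caution             # 1e3 in these units

print(f"Expected harm if we dismiss the risk: {expected_harm_if_dismissed:,.0f}")
print(f"Expected harm if we take precautions: {expected_harm_if_careful:,.0f}")
# Even with only a 1% chance of being wrong, the expected harm of dismissing
# the risk dwarfs the assumed cost of caution under these illustrative numbers.
```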

Theo Armour:

Gotta hand it to you! You have a fresh perspective on AGI. Your fingers are pointing in directions well outside the mainstream media's frame! ;-)

Nonetheless I'd like to push back a bit on one of your good thoughts.

> The superiority of human dexterity will not disappear anytime soon. One day robots might equal humans in physical ability. But this day is not today. It is not tomorrow either. _The time when robots will outdo humans physically is probably more centuries than decades away, if it ever comes._

I am reminded of the quote apocryphally attributed to Henry Ford: "If I had asked my customers what they wanted they would have said a faster horse."

In other words, what might happen when AI - as Ford did - changes the rules? For example, suppose AI enables better "just-in-time growing", where little robots bring ripe food straight from the farm to your kitchen? No warehouse needed. Or bricks from the kiln directly to the building site? No hands needed!

With actions like these, the need for human dexterity is reduced. Thus, I have no feeling that "disarming" society will take centuries.

See also:

* https://www.dexterity.ai/

* https://www.universal-robots.com/

* https://scitechdaily.com/columbia-engineers-create-highly-dexterous-human-like-robot-hand-that-can-operate-in-the-dark/

Update 2 2023-05-10

See also: https://substack.com/inbox/post/119020621

AXIS OF ORDINARY ~ Alexander Kruel: Links for 2023-05-10

> Before you say silly things like, "But AI is still bad at physical tasks," stop right there and read about the amazing advances being made in robotics literally every week.

Followed by a dozen useful links.

Update:

BTW, there is a sad but relevant example where the imitation of human dexterity was outwitted by an alternative technology:

https://en.wikipedia.org/wiki/Paige_Compositor versus https://en.wikipedia.org/wiki/Linotype_machine

The author Mark Twain was a major investor in Paige's device. He bet and lost a fortune in the process.
