In my previous post, I accused computers of not thinking, and people with extremely high IQs of not necessarily being extremely skilled thinkers. It got no comments. Probably because almost nobody reads this blog. So I have to write my own comment to my own blog post. My comment is: What is thinking, then? If a computer can't think, how do we humans do it when we think?
Philosopher Immanuel Kant answered that question back in 1781: We think through categories. We can only see the world through a lens of quantity, quality, relation and modality, cause and effect, substance and accident…
Exactly which categories Kant saw is not the important thing. Other people have refined and developed the idea since. Infamous neo-Kantians like Michel Foucault took the idea to a new level and said that we can only perceive the world through a vast web of social categories. After his death, Foucault got some blame for being a forerunner of woke ideology. But his thinking could also be read as a forerunner of rationalist ideas about biased thinking and how to avoid it.
Through the lens
We can never perceive the thing in itself, Kant said, because the only way we can perceive things is through our categories. Computers and humans see the world through fundamentally different categories.
AI systems see colors (certain wavelengths of electromagnetic radiation), they hear sounds (different waves of air pressure), and they count, count, count, piling those numbers together into statistics. Everything is appearances and relations between appearances. A computer has, for example, no idea of cause and effect. It only has a lot of information about what usually appears in a certain order.
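To make that concrete, here is a minimal sketch in Python of a model that only knows what usually appears in a certain order. It is a deliberately crude toy (no real AI system works at this level, and the corpus below is made up for illustration), but the principle is the point: it counts which word tends to follow which word and predicts accordingly. Nothing in it represents causation, only observed sequence.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, what usually appears right after it."""
    follows = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower: correlation, not causation."""
    candidates = follows.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

# Toy "training data" (a made-up example, not a real corpus)
corpus = "dark clouds bring rain and rain makes streets wet and dark clouds gather"
model = train_bigrams(corpus)

print(predict_next(model, "clouds"))  # 'bring': it has seen that order before
print(predict_next(model, "wet"))     # 'and': no idea *why* streets get wet
```

Scale that idea up by many orders of magnitude and you get something like the piling-up of appearances described above: impressive pattern-matching, but still only a record of what came after what.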
Good thinking, bad thinking
We know a lot more about how a computer processes data than about how we do it ourselves. That is not strange, since we made the computers but not ourselves. Kant was fascinated by thinking as such. The rationalist community seems more interested in good thinking and bad thinking, and in how to promote the former and avoid the latter.
The bad thinking tends to be of the human, all too human kind. We do sloppy statistics because we like our biases, we over-apply our instinct for perceiving cause and effect and see causation where we should only see correlation. In some respects, computers really are our superiors. Is that why many people in the rationalist community fear that computers could become our superiors in every respect, destroying us all? With enough data processing capacity, computers will be capable of everything we are capable of, but better, the reasoning goes. With enough data, computers too will learn to be Kantians.
This is where the weak spot is: Do we humans even learn to be Kantian? Are we born without, for example, a sense of cause and effect? Does it develop during infancy as we gather more and more data? Or is it already there, in our hardware, just like social abilities, social needs, sexual drives and jealousy? We don't become jealous because we add up data and conclude it is wise to be jealous. We are jealous for the same reason apes are jealous (apes are not monogamous, but that doesn't mean they like sexual competition).
Most evidence points towards the hardware. And that hardware is not easy to replicate. Millions of years of evolution made us into Kantians. Biased, jealous, religious Kantians. Our ability to think is flawed. It is biased by our feelings. More than that: it requires feelings. In Descartes' Error (1994), neuroscientist Antonio Damasio argued convincingly that functional thinking requires feelings: people who lose part of their ability to feel because of brain injury can become unable to make everyday decisions, even though their intellectual abilities are intact. To put it another way, all their computational power remains, but some of their human hardware has been destroyed.
Computers can't think because they don't have the hardware for thinking. The only thing an advanced AI can do is estimate, from statistics, what an average thinking person would have said or done in this or that situation. The AI can keep a huge database of conclusions. It can browse through them extremely quickly. An average person with average thoughts might take years to reach a certain average conclusion. The AI reaches it instantly, since it learned from other average people. Still, the AI can't produce any new thoughts of its own. It is just a huge database of past human thinking.
Our ability to think is biased, obscure, and wonderful. To become like us, a computer not only needs to copy our virtues. It also needs to acquire our vices. And those vices are firmly seated in our flesh.
I feel a little bit unsure about your reasoning. You made the same point a few times ("Computers can't think because they don't have the hardware for thinking") and based it on two things: the example of the man who lost his feelings and therefore his agency, and the idea that computers can only copy from previously provided data.
The fact that somebody lost his agency by losing his feelings says very little about how an AI can work, since as far as I know current AI systems are not that close to how our brains work. AI also seems to be getting quite creative with games: https://www.youtube.com/watch?v=Lu56xVlZ40M. I think that not having millions of years of evolution and little sociological context is an advantage for AI, since it isn't confined to conventional solutions.
I get the argument in general: AI is definitely going to be different from humans. It might not grasp the concept of art (or a lot of other concepts we use daily), and even if it does, after learning from millions of examples, it will feel superficial to us.