Discussion about this post

Todd

The introduction of LLMs dropped the marginal cost of producing bullshit to zero. The supply is therefore limitless. People will notice and adjust their subjective valuations of bullshit accordingly. My hypothesis is that this will lead to a retrospective revaluation of much of what we have taken to be progress over the last quarter century, and we will see widespread acceptance of the astronaut meme: "It always was."

Unirt

About information content, can we put it like this: When looking at an old photograph, you get information about the things those long-dead people used and considered worth showing, but none about the color scheme of their surroundings (because it's black-and-white). When a child draws a picture of a man, you get all kinds of interesting information about the development of human abstract imagination, but not much about the anatomy of an actual man (it's drawn with just a head and two long legs). Looking at an advertisement photo of happy people having a good time, you learn what the ad experts think people like, but not much about the actual people using their product. And looking at AI art, you can't get much good information about the real world, but you do get some glimpses into how the AI's mind works. For example, if you ask for a painting in the style of the 18th-century Netherlands, which characteristics of that period's paintings will it find necessary to replicate? Will it always draw a Santa Claus if you ask for a man and a deer? How much more compute does it need to stop messing up the number of legs? Does it get less and less dreamlike with more compute and training data?
