I usually use AI to generate artwork that wouldn't be seen as real; photographs of real world objects and events are usually uninteresting to me. For free photographs you can host on your blog, it's hard to beat www.pexels.com, which is where I get over 80% of the images for my own blog posts.
>>I usually use AI to generate artwork that wouldn't be seen as real
I'm doing my best to avoid looking at them.
>>For free photographs you can host on your blog, it's hard to beat www.pexels.com
Really nice pictures! But no descriptions? I can't find out the geographic origins of the photos.
The introduction of LLMs dropped the marginal cost of producing bullshit to 0. The supply is therefore limitless. People will notice and adjust their subjective valuations of bullshit accordingly. My hypothesis is that this will lead to a retrospective revaluation of much of what we have taken to be progress over the last quarter century, and we will get the widespread acceptance of the astronaut meme: “It always was.”
Remember when the machines learned to make cheap lace? Lace went out of fashion, and people now wear less of it than in the olden days when it was hard to make. Hand-made crochet lace, though, remains expensive, and some people still make it. Perhaps there's some hope that AI art will likewise go out of fashion, while some human artists, alongside the machines, still remain.
About information content, can we put it like this: when looking at an old photograph, you get information about the things those long-dead people used and considered worth showing, but none about the color scheme of their surroundings (because it's black-and-white). When a child draws a picture of a man, you get all kinds of interesting information about the development of human abstract imagination, but not much about the anatomy of an actual man (it's drawn with just a head and two long legs). Looking at an advertisement photo of happy people having a good time, you learn what the ad experts think people like, but not much about the actual people using their product. And looking at AI art, you can't get much good information about the real world, but you do get some glimpses into how the AI's mind works. For example, if you ask for a painting in the style of the 18th-century Netherlands, which characteristics of paintings from that period will it find necessary to replicate? Will it always draw a Santa Claus if you ask for a man and a deer? How much more compute does it need to stop messing up the number of legs? Does it get less and less dreamlike with more compute and training data?
>>looking at AI art, you can't get much good information about the real world, but you do get some glimpses into how the AI's mind works. For example, if you ask for a painting in the style of the 18th-century Netherlands, which characteristics of paintings from that period will it find necessary to replicate? Will it always draw a Santa Claus if you ask for a man and a deer? How much more compute does it need to stop messing up the number of legs? Does it get less and less dreamlike with more compute and training data?
Yes! AI images do contain information: about the machine that made them. For that reason I would feel better about people's AI images if they wrote the prompt and the name of the machine that made them as a caption. Then I could at least learn something from looking at the images: if you tell such-and-such a program to make an image of this or that, it can look a bit like this picture.
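To make that captioning idea concrete, here is a minimal sketch, assuming a blog built with a Python-based pipeline; the function names, file path, prompt, and model name are all invented for illustration, not taken from any real tool:

```python
# A sketch of one way a blogger could disclose the prompt and the model
# behind an AI-generated image, so readers can learn "prompt X on model Y
# can look like this". All names and values below are made-up examples.

def ai_image_caption(prompt: str, model: str) -> str:
    """Build a caption string disclosing the prompt and the generating model."""
    return f'AI-generated image. Model: {model}. Prompt: "{prompt}"'

def figure_html(src: str, prompt: str, model: str) -> str:
    """Wrap the image and its disclosure caption in an HTML <figure> block."""
    caption = ai_image_caption(prompt, model)
    return (
        "<figure>\n"
        f'  <img src="{src}" alt="{caption}">\n'
        f"  <figcaption>{caption}</figcaption>\n"
        "</figure>"
    )

if __name__ == "__main__":
    print(figure_html(
        src="images/man-and-deer.png",  # hypothetical file path
        prompt="a man and a deer, in the style of an 18th-century Netherlands painting",
        model="example-image-model-v1",  # hypothetical model name
    ))
```

The particular markup doesn't matter; the point is simply that the prompt and the model name travel with the image.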
Thanks for sharing these thoughts. I haven’t been consuming much AI art, and I don’t think I’m missing much. The value of art is in connecting to humanity, in the present and in the past. Making art, and seeing and taking it in for all it’s worth, involves a lot more work and self-reflection (in good and rewarding ways!) than I think many people realize. I find that writing and making art myself is one way to deepen this appreciation, an appreciation both for humanity and for the past.
Yes. Whatever "art" means, I think it must include a high degree of effort. "The art of writing" means making an effort to write well. "The art of cooking" means making an effort to cook well. And Marcel Duchamp made an effort to provoke well. For that reason, I'm a bit reluctant to call most AI images "art". Just like "the art of writing" means doing one's best, "the art of making pictures" should mean "creating the best pictures possible".
A lot of interesting points. Ironically, illustrations were added to "media" (printed words) to provide the reader with *more* information, or at least the product of an artist's imagination marshaling details he'd seen previously. Now we can generate illustrations to fill column-inches without any work at all.
Similar thoughts apply to AI-generated words. A great deal of the text written each year is essentially wallpaper, given the price-quality tradeoff the consumer will support.