We seem to be in one of those recurrent waves of panic about whether AI will learn to make art and thus put all of us artists — painters, novelists, etc. — out of business. One useful article on this subject, this one focused on painting, is from William Collen:
What about writing, and novels in particular? Actually, the idea of novels being written by machines is not new. In his novel 1984, George Orwell portrayed trashy novels and newspaper articles being written by a machine called the versificator. At the time he wrote this, in 1948, he was perhaps commenting more on the mechanical nature of trashy novels and newspaper articles. But I suspect he would feel his words apply to the present moment as well.
Here is how I see it. Art is composed of two main elements: vision and execution. Mastering execution, be it in music, painting, or writing, takes time, patience, and study. Vision is more elusive. I don’t know that there is much training for vision, except perhaps living attentively and consuming great art with care, though I don’t think either of those things is certain to give you vision, particularly a vision worth sharing. But it is vision — seeing something in the world or in human experience that matters, that enriches our lives or our appreciation of them — that is the reason for having art in the first place.
So, the first question is, can AI do execution? As William Collen shows, the current answer is that it has made impressive strides in some respects, but it’s not there yet. Will it get there? Having spent 30 years in the technology industry (mostly as a technical writer), I have observed that there are a number of problems in tech that I think of as 95% problems. That is, 95% of the desired functionality can be achieved by diligently working the problem using existing methods and knowledge and wads of VC cash. The final 5%, though, is not solvable by currently known methods.
People confidently assume that when they get to 95%, the final 5% will fall into place. At least, that is what they tell the VCs when they apply for funding. And showing VCs the 80% functionality you have already achieved (the part that comes easily and relatively cheaply) is a good way to get them believing it too.
And occasionally the final 5% gets solved and the world changes. But a lot of the time the 5% remains stubbornly unsolvable, and all the time, hope, and cash that went into the other 95% are wasted. Essentially, the entrepreneurs and engineers have built a really expensive plinth on which to mount the wondrous new thing they hoped to invent but never found the solution for. This happens a lot in AI research. It is why your car still does not drive itself and why the voice interface to the infotainment system misunderstands any but the simplest and clearest commands.
Is AI’s execution of art a 95% problem, or does nothing but refinement and another round of funding stand in the way of solving the problems that Collen identifies? I haven’t the slightest idea. I suspect no one else does either. That is the nature of 95% problems. You usually don’t know what the final 5% is, or even that there is an intractable 5% lurking in the dark until you have solved everything else and you still can’t make the project work. In many cases a technology development project is a process for finding out that the problem is intractable.
But let’s assume that AI execution of art is not a 95% problem and that the engineers will get it nailed someday soon. What about the other aspect of art? What about vision?
Vision, it seems to me, is a non-starter for AI. Quite simply, AI does not have vision. It sees nothing, hears nothing, knows nothing. It is algorithms playing with patterns. To have vision, it would need more than eyes. Vision is about more than recording photons. It is about forming an experience of the world with all its pain and wonder and making something of it, something that can be communicated to other people through words, images, and sound. Vision of this type requires consciousness. And the problem with consciousness is that we don’t even know what it is, let alone how to build it. Until that problem is solved, if it is indeed solvable, AI will not be doing art.
But AI has demonstrated that it is capable of making pretty pictures and stringing together comprehensible words. So if it is not doing art, what is it doing? Actually, there is a word for execution without vision in the art world. It is called pastiche.
Can AI do pastiche? Yes, it can. Orwell’s versificator was doing pastiche. The AIs that Collen describes are doing pastiche. It is not great pastiche yet. We marvel at it the way we marvel at a dog walking on its hind legs. It is not done well; but you are surprised to find it done at all.
But perhaps the boffins of AI are only months or a few years away from solving the last 5% of the execution problem and getting really good at pastiche.
Here’s the thing about pastiche: In many ways it has all of the effects of art. It can move us the way art can move us, because it is a copy of art. It brings nothing new, but it reproduces the effects of the art that has come before it. Perhaps it does not reproduce the full effect, but a muted one, like eating frozen pizza from the supermarket rather than fresh from a pizza parlor. But most people are satisfied with that, most of the time.
Pastiche is the backbone of the publishing industry. When a literary agent asks a querying author to list their comps (comparative titles), they are asking which books their work is a pastiche of. Writing good pastiche of best-selling books is a reliable way to get a publishing contract and reasonably reliable book sales. Fan fiction and corporations turning literary works into “franchises” are both examples of the power of pastiche at opposite ends of the market.
And this arrangement serves the appetites of the public quite well. Once we find a book we like, after all, we want more of the same (as we do with all of our appetites). Readers looking for another book the same as the one they just read (only different) are asking for a pastiche. “The same, only different” is the essence of what the publishing industry is looking for. (This is not a slight. It is what they say they are looking for, sometimes in those exact words.) “The same, only different” is also the essence of pastiche. And it is the essence of AI art, which “learns” to make art by consuming vast libraries of content and extracting patterns from them to create something the same, only different. Art AIs are designed from the ground up to be pastiche generators.
AI pastiche generation isn’t quite there yet, and I suspect it may turn out to be a 95% problem, despite the rapid progress made to date. Rapid progress in the middle phase of a project is the hallmark of a 95% problem. If I’m wrong about that, though, AI could conceivably have a serious impact on the pastiche publishing industry, and on the many authors who make a living writing great pastiches of other authors. But it won’t be doing art, and the rare and special people among us who actually possess vision and the technique and patience to execute their vision have nothing to worry about.
Actually, I don’t think any of us have anything much to worry about, and I’ll get to that in a minute. But first an aside, because I am wondering what the purpose of this line of AI research is. What problem are they actually trying to solve?
It is not like there is a desperate shortage of novels or paintings in the world. There is a horrendous glut of them with more being created all the time. The last thing we need is a more efficient way to make more.
And it is not like the writing of novels, or the painting of pictures, is a noxious chore that people are anxious to rid themselves of. Countless people happily type and daub away, creating the art glut with little thought or hope of reward. The last thing they want is a machine to do it for them.
And much as they enjoy pastiche, readers are not yearning for the purity and consistency of machine-made art either. Quite the contrary: readers value knowing and meeting their favorite authors and interacting with them in person and online.
There is, in short, no demand for AI art either as a product or as a process.
And let us consider the most oft-bragged-about triumph of AI, its mastery of chess and similar games. Yes, AI now plays chess better than the greatest of grandmasters. Does this mean that people now prefer watching machines play chess, rather than humans? No. They still prefer to watch humans play each other. Because they are, themselves, humans.
The reason for trying to make AI do art — indeed, the reason for much of the AI project — is actually philosophical. It is an attempt to prove the reductionist philosophy that sees human beings as automatons. As long as there are things that humans can do that machines cannot do, there remains room for the soul. If it can be shown that there is nothing that humans can do that machines cannot do also, then the case would be made (or would seem to be made) that humans are no more than machines themselves, and the soul would disappear in a puff of logic (or so the reductionist philosopher would argue).
If achieved, this would strike me as the hollowest of hollow victories, since it would mean that the proof itself was nothing more than the product of a machine. And if that were true, what grounds would we have for regarding it as true? Machines, after all, are not good at recognizing their own mistakes, as the Pentium FDIV bug of a few years back demonstrated. If we are merely machines, who is to say there are no bugs in our firmware?
None of this dreary philosophical project has anything to do with our desire for art made by humans. Even if machine-made art were by some measure better than art produced by humans, we still would not care. Machines have been exceeding human capacities for centuries. A cannon can throw a shot farther than a shot-putter can. It does not mean that cannons have replaced athletes in the Olympics.
Machines could win pretty much every Olympic sport these days, if they were allowed to compete. But they are not allowed to compete because, unless we happen to be artillery officers, we don’t actually care how far a cannon can throw a shot. But we do care, for a couple of weeks every four years, how far a human can throw one. Because we are humans too.
And that, I suspect, is the real answer to all the fears about AI and art. Even if AI learns to make art better than humans can, even if it masters not only execution but vision as well, we won’t care, because we want to see how good a book a human can write, how fine a painting a human can paint, because we are humans too.
That was thoroughly well-put, thank you.
I love these two lines: "it is a copy of art" and "what problem are they trying to solve?" Some of my kids are reading a book for school about the child laborers at the beginning of the twentieth century. When machines were built to do the work of child machine-tenders in textile mills, kids were able to live a childhood free of the worry of having their fingers ripped off in a machine at work. Is there a pressing need to get people away from having to do artistic activity? The answer is, as you are aware, obvious.
i agree. tasking a machine with making art misses the point of both machine and art. AI can be programmed to simulate a lot of things, but at the end of the day, it doesn't feel--it'll never desire, fear, love, hate, grieve, etc. it'll never be inspired. so, any art it creates will always be an imitation, however convincing. is anyone asking for this?