Paintbrushes at dawn as artists feel the pressure of AI-generated art – TechCrunch

If you have been anywhere near the interwebs lately, you will have heard of DALL-E and Midjourney. The kind of art that neural networks can now generate, combined with a deeper understanding of the technology’s strengths and weaknesses, means we’re facing a whole new world of hurt. Artists are often the butt of tasteless jokes (How do you get an artist’s attention? Shout “Hey, waiter!”), and computer-generated art is another chapter in the man-versus-machine “they took our jobs” narrative.

To me, the interesting part of this is that robots and machines taking over certain jobs has been accepted with little fuss because those jobs are repetitive, boring, dangerous, or just generally awful. Car chassis welding robots do a much better, faster and safer job than humans ever could. Art, however, is another matter.

In the recent movie “Elvis,” Baz Luhrmann puts a line in the mouth of Colonel Tom Parker: a great act “gives the audience feelings they weren’t sure they should experience.” To me, that’s one of the best statements about art I’ve heard in a while.

Commercial art is nothing new; whether your mind wanders to the movies, music, or prints that come with frames at Ikea, art has been distributed on a large scale for a long time. But what they all have in common is that they were created by people who had some kind of creative vision.

The image at the top of this article was created using Midjourney, after I gave the algorithm a slightly ridiculous prompt: A man dances like Prozac is a cloud of laughter. As someone who has had a lifetime of mental health swings, including depression and fairly severe anxiety, I was curious what a machine would create. And, my god, none of the graphics it produced are anything I would have come up with myself. But I won’t lie: they did something to me. I feel better represented, graphically, by these machine-generated artworks than by almost anything else I’ve seen. And the wild thing is, I did it. These illustrations were not drawn or conceived by me; all I did was type a weird message on Discord. But these images wouldn’t have existed without my odd little idea. Not only did Midjourney come up with the image at the top of this article, it spit out four completely different, and surprisingly fitting, illustrations of a concept that’s hard to wrap my head around:

It’s hard to say exactly what this means for conceptual illustrators around the world. When anyone can, at the click of a button, generate artwork of anything, imitating any style, creating almost anything they can think of, in minutes, what does it mean to be an artist?

Over the past week or so, I may have gone a little overboard, creating hundreds and hundreds of Batman images. Why Batman? I have no idea, but I wanted a theme that would help me compare the different styles Midjourney is able to create. If you really want to go deep down the rabbit hole, check out AI Dark Knight Rises on Twitter, where I’m sharing some of the best pieces I’ve come across. There are hundreds and hundreds of candidates, but here’s a selection that shows the breadth of styles available:

Generating all of the above, and hundreds more, ran into only three constraints: the amount of money I was willing to spend on my Midjourney subscription, the depth of creativity I could muster for the prompts, and the fact that I could only generate 10 designs at a time.

Now, I have a visual mind, but there isn’t an artistic bone in my body. It turns out I don’t need one. I come up with a prompt (for example, Batman and Dwight Schrute are in a fist fight) and the algorithm outputs four versions of something. From there, I can re-roll (i.e., generate four new images from the same prompt), render a high-resolution version of one of the images, or iterate on one of the versions.

The only real shortcoming of the algorithm is that it favors a “you’ll take what you’re given” approach. Of course, you can get much more detailed with your prompts to gain more control over the final image, both in terms of what’s going on in it, its style and other parameters. If, like me, you have a strong visual idea of what you want, the algorithm is often frustrating: my creative vision is hard to capture in words, and even harder for an AI to interpret and render. But the scary thing (for artists) and the exciting thing (for non-artists) is that we are at the very beginning of this technology, and we will only gain more control over how images are created.

For example, I tried the following prompt: Batman (left) and Dwight Schrute (right) are in a fistfight in a parking lot in Scranton, Pennsylvania. Dramatic lighting. Photo realistic. Monochrome. High detail. If I had given this prompt to a human, I expect they would have told me to stop talking to them as if they were a machine, but even if they did sit down to create a drawing, I doubt many humans would be able to interpret the prompt in a way that makes conceptual sense. I gave it a bunch of tries, but there weren’t many illustrations that made me think “yeah, that’s what I was looking for.”

What about copyright?

Here is another interesting wrinkle: many of the styles are distinctive, and some of the faces are, too. Take this one, for example, where I had the AI imagine Batman as Hugh Laurie. I don’t know about you, but I’m impressed; it has the Batman style, and Laurie is clearly recognizable in the drawing. What I have no way of knowing, though, is whether the AI ripped off another artist wholesale, and I wouldn’t want to be Midjourney or TechCrunch in a courtroom trying to explain how that went horribly wrong.

Hugh Laurie as Batman

Image credits: Midjourney, from a prompt by Haje Kamps, under a CC BY-NC 4.0 license.

This kind of problem comes up in the art world more often than you might think. One example is the case of Shepard Fairey, where the artist allegedly based his famous Barack Obama “Hope” poster on a photograph by AP freelance photographer Mannie Garcia. It all became a fantastic mess, especially when a bunch of other artists started creating art in the same style. Now we have a multi-layered plagiarism sandwich in which Fairey allegedly plagiarized someone else and was plagiarized in turn. And, of course, it’s possible to generate Fairey-style AI art, which complicates things infinitely further. I couldn’t resist giving it a whirl: Batman in Shepard Fairey style with the text HOPE at the bottom.

AI HOPED

AI HOPED. A great example of how AI can come close, but no cigar, to the specific vision I had for this image. And yet, the style is close enough to Fairey’s that it’s recognizable. Image credits: Haje Kamps / Midjourney.

Kyle has many more thoughts on where the legal future lies for this technology:

So where does this leave artists?

I think the scariest thing about this development is that we’ve moved very quickly from a world where creative pursuits like photography, painting and writing were safe from machines to a world where that’s no longer as true as it once was. But, as with all technology, there will soon come a time when you can no longer trust your eyes or ears; machines will learn and evolve at breakneck speed.

Of course, it’s not all doom and gloom; if I were a graphic artist, I’d start using the latest generation of tools for inspiration. Many times I’ve been surprised at how well something turned out, only to think to myself, “but I wish it was a little more [insert creative vision here].” If I had graphic design skills, I could take what I have and turn it into something closer to my vision.

This may not be as common in the art world, but in product design these technologies have been around for a long time. For PCBs, machines have been generating first drafts of the trace layout for many years, to be refined by engineers, of course. The same goes for the design of physical products; as early as five years ago, Autodesk was showing off its generative design capabilities:

It’s a brave new world for any job (including mine; I had an AI write most of a TechCrunch story last year) as neural networks get smarter and smarter and the data sets they work from grow more and more comprehensive.


Let me close with this extremely disturbing image, in which some of the people the AI placed are recognizable to me and other members of the TechCrunch staff:

The Midjourney images used in this post are all licensed under Creative Commons Attribution-NonCommercial licenses and are used with express permission from the Midjourney team.
