Pedestrianizing Pike Place Market: it's a not-so-hot take found everywhere from city planner threads on Twitter to Seattle Times editorial board meetings.
But what would one of our city’s sacred landmarks actually look like without cars (and drivers cursing their GPS directions)?
A few weeks ago, a viral rendering emerged not from the hand of a human but from the growing imagination of artificial intelligence. The image, posted by an open streets advocate, depicted a dramatically transformed, more attractive street beside the market.
Pike Place Market (Seattle, Washington) pic.twitter.com/gD7a0bO7wm
— AI-Generated Street Transformations (@betterstreetsai) July 30, 2022
This, it should be noted, is probably one of the more practical deployments of an AI art generator. Often, Dall-E Mini images look amusingly unsettling (type "Pete Carroll fishmonger" if you don't believe me). Other times they're boring or miss the mark entirely; "Seattle Freeze" is lost on AI.
But people can't get enough of these meme machines and their capacity to reimagine familiar scenes in seconds. The "Seattle is dying" crowd can appreciate this hellscape on the horizon from a Midjourney user. A mash-up of a Space Needle photograph and a galactic watercolor painting produced this beauty. Recently, Dall-E fashioned a Banksy-style look for the Seahawks logo.
While some of these generators' creations are alarming, many are remarkably accurate. "Dall-E was very surprising to many of us," says Tanmay Gupta, a research scientist at the Allen Institute for AI.
Founded in 2014 by the late Paul Allen, the Lake Union nonprofit has been studying and driving advances in AI for nearly a decade. Gupta is on the PRIOR (Perceptual Reasoning and Interaction Research) team, which has examined the relationship between text and visuals for years. Gupta remembers working with The Flintstones cartoons a few years ago to create videos with AI. "You'd say something like, 'Fred is sitting on the sofa, next to Wilma, who's reading the newspaper.' And then this model would go and create a scene where these things were actually happening."
But it wasn't until Dall-E Mini arrived about a year ago that AI visuals took off and set the field alight. "Not only was the image generation already surprising, but also the fact that it was good at composing things that are very unique – for example, like a person in a space suit, sitting on a horse, on the moon." Several colleagues tested the new generator with meta-prompts such as "A computer that can see and understand everything" and "A humanoid robot agent lying helpless on the floor of a house." Dall-E handled them with ease.
Which raises all kinds of concerns, Gupta admits. As AI videos improve, how will we tell a politician's real speech from a deepfake? How do we compensate artists from whom AI learns how to create, say, a Banksy-like piece? And how does the technology avoid imitating the worst of humanity? "While the capabilities of image-generating models are impressive, they can also reinforce or exacerbate societal biases," Dall-E Mini's site says in a disclaimer. "While the extent and nature of the Dall-E Mini model's biases have yet to be fully documented, given the fact that the model is trained on unfiltered data from the Internet, it may generate images that contain stereotypes against minority groups."
In the short term, Gupta says researchers are now looking for more ways for users, often artists, to control and modify these images. Meanwhile, AI will continue to run wild.
Bits and bytes. Dan Price’s bombshell. Bill Gates’ TerraPower raises $750 million. A pet dating app, Offleash’d, launches in Seattle.