AI Art vs. AI-Generated Art: Everything You Need to Know

Artists have been experimenting with artificial intelligence for years, but the practice has reached new levels of visibility with the release of increasingly powerful text-to-image generators like Stable Diffusion, Midjourney, and OpenAI’s DALL-E.

Similarly, the genre of generative art has gained a cult-like following over the past year, especially among NFT artists and collectors. 

But what’s the difference? Does the category of generative art also include art made with super-charged AI art generators?

From an outsider’s point of view, it’s easy to assume that all computer-generated artwork falls under the same umbrella. Both types of art use code and the images generated by both processes are the result of algorithms. But despite these similarities, there are some important differences in how they work — and how humans contribute to them.

Generative art vs. AI art generators

There are a few ways one can interpret the differences between generative art and AI-generated art. The easiest way to begin is by looking at the technical foundations before expanding into the philosophical practice of art-making and what defines both the process and result.  

But, of course, most artists don’t start with the nuts and bolts. More commonly, they rely on a shorthand.

So, in short, generative art produces outcomes — often random, but not always — based on code developed by the artist. AI generators use proprietary code (developed by in-house engineers) to produce outcomes based on the statistical dominance of patterns found within a data set.

Technically, both AI art generators and generative artwork rely on the execution of code to produce an image. However, the instructions embedded within each type of code often dictate two completely different outcomes. Let’s take a look at each.

How generative art works

Generative art refers to artworks built in collaboration with code, usually written (or customized) by the artist. “Generative art is like a set of rules that you make with code, and then you give it different inputs,” explains Mieke Marple, cofounder of NFTuesday LA and creator of the Medusa Collection, a 2,500-piece generative PFP NFT collection. 

She calls generative art a kind of “random chance generator” in which the artist establishes options and sets the rules. “The algorithm randomly generates an outcome based on the limits and parameters that [the artist] sets up,” she explains.

Erick Calderon’s influential Chromie Squiggles project arguably solidified generative art as a robust sector of the NFT space with its launch on Art Blocks. Since its November 2020 launch, Art Blocks has established itself as the preeminent platform for generative art. Beyond Chromie Squiggles, generative art is often associated with PFP collections like Marple’s Medusa Collection and other popular examples like Doodles, World of Women, and Bored Ape Yacht Club.

In these scenarios, the artist creates a series of traits, which may include the eyes, hairstyle, accessories, and skin tone of the PFP. When these traits are fed into the algorithm, it generates thousands of unique outcomes.
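
To make that concrete, here is a minimal Python sketch of how a trait-based generator might work. The trait lists are hypothetical, and the simple random seed stands in for the on-chain hash or token ID a real project would use; it is not any particular collection’s actual code.

```python
import random

# Hypothetical trait options; real collections define far more per category
TRAITS = {
    "eyes": ["round", "sleepy", "laser", "winking"],
    "hairstyle": ["bob", "mohawk", "braids", "bald"],
    "accessory": ["none", "earring", "crown", "glasses"],
    "skin_tone": ["peach", "olive", "bronze", "stone"],
}

def generate_pfp(seed: int) -> dict:
    """Map a seed (standing in for a token ID or on-chain hash)
    to one option per trait, the way a generative script assigns traits."""
    rng = random.Random(seed)
    return {trait: rng.choice(options) for trait, options in TRAITS.items()}

# Each seed deterministically yields one combination of traits
for token_id in range(3):
    print(token_id, generate_pfp(token_id))
```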

Chromie Squiggle #795

Most impressive is the total number of potential combinations that the algorithm is capable of generating. In the case of the Medusa Collection, which featured 11 different traits, Marple says the total number of possible permutations was in the billions. “Even though only 2,500 were minted, that’s a really small fraction of the total possible unique Medusas that could be generated in theory,” she said.
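
As a back-of-the-envelope illustration of how quickly trait counts multiply, consider a small calculation with hypothetical option counts (not the Medusa Collection’s actual numbers):

```python
from math import prod

# Hypothetical number of options for each of 11 trait categories
options_per_trait = [8, 10, 6, 12, 7, 9, 5, 11, 8, 10, 6]

total = prod(options_per_trait)
print(f"{total:,} possible combinations")  # about 9.6 billion
```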

However, generative algorithms aren’t only for PFP collections. They can also be used to make 1-of-1 artwork. The Tezos-based art platform fxhash is currently exploding with creative talent from generative artists like Zancan, Marcelo Soria-Rodríguez, Melissa Wiederrecht, and more.

Siebren Versteeg, an American artist known for abstracting media stock images through custom-coded algorithmic video compilations, has been showing generative artwork in galleries since the early 2000s. In a recent exhibition at New York City’s bitforms gallery, Versteeg’s code generated unique collage-like artworks by pulling random photos from Getty Images and overlaying them with algorithmically produced digital brushstrokes. 

Once a work was generated, viewers had a short minting window to collect it as an NFT. If it went unclaimed, it would disappear, while the code continued generating an endless stream of new pieces.

How AI art generators work

On the other hand, AI text-to-image generators pull from a defined data set of images, typically gathered by crawling the internet. The AI’s algorithm is designed to look for patterns and then attempt to create outcomes based on which patterns are most common among the data set. Typically, according to Versteeg and Marple, the outcomes tend to be an amalgamation of the images, text, and data included in the data set, as though the AI is attempting to determine which result is most likely desired.

With AI image generators, the artist is usually not involved in creating the underlying code used to generate the image. They must instead practice patience and precision to “train” the AI with inputs that resemble their artistic vision. They must also experiment with prompting the image generators, regularly tweaking and refining the text used to describe what they want.
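
For a sense of what that prompt-and-refine loop looks like in practice, here is a minimal sketch using Hugging Face’s open-source diffusers library to run a Stable Diffusion model. The model ID, prompt, and GPU assumption are illustrative, not a description of any particular artist’s workflow:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an open text-to-image model (model ID is illustrative)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA-capable GPU

# Artists iterate on the prompt, re-running and refining the wording
# until the output approaches their vision
prompt = "a weathered fresco portrait, soft morning light, muted colors"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("draft_01.png")
```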

“That’s been my favorite part of playing with DALL-E […] — where it goes wrong.”

Siebren Versteeg

For some artists, this is part of both the fun and the craft. Text-to-image generators are designed to “correct” their mistakes quickly and continually incorporate new data into their models so that the glitches get smoothed out. Of course, there’s always trial and error. At the beginning of the year, news headlines critiqued AI image bots for always seeming to mess up hands. By February, image generators had made noticeable improvements in their hand renderings.

“The larger the data set, the more surprises might happen or the more you might see something unforeseen,” said Versteeg, who is not primarily an AI artist but has experimented with AI art generators in his free time. “That’s been my favorite part of playing with DALL-E or something like it — where it goes wrong. [The errors] are going to go away really quickly, but seeing those cracks, witnessing those cracks, being able to have critical insight into them — that’s part of seeing art.”

Australian AI artist Lillyillo also reported a similar fascination with AI’s so-called errors during a February 2023 Twitter Space. “I love the beautiful anomalies,” she said. “I think that they are just so endearing.” She added that witnessing (and participating in) the process of machine learning can teach both the artist and the viewer about the process of human learning.

“To some extent, we’re all learning, but we’re watching AI learn at the very same time,” she said.

Concerns over AI-generated art

That said, the speed with which AI art generators ingest and process massive amounts of data raises concerns among artists and technologists. For one thing, it’s not exactly clear where the images used to train these models come from. Critics argue that it is now too easy to replicate the signature styles of living artists, and that the resulting images may sometimes border on plagiarism.

Secondly, given that AI image generators rely on statistical dominance to generate their outcomes, we’ve already begun to see examples of cultural bias emerge through what could seem like innocuous or neutral prompts.

For instance, a recent Reddit thread points out that the prompt “selfie” automatically generates photorealistic images of smiles that look quintessentially (and laughably) American, even when the images represent people from different cultures. Jenka Gurfinkel — a healthcare user experience (UX) designer who blogs about AI — wrote about her reaction to the post, asking, “What does it mean for the distinct cultural histories and meanings of facial expressions to become mischaracterized, homogenized, subsumed under the dominant dataset?”

Gurfinkel, whose family is of Eastern European descent, said she immediately experienced cognitive dissonance when viewing the photos of Soviet-era soldiers sporting huge, toothy grins.

“I have friends in Eastern Europe,” said Gurfinkel. “When I see their posts on Instagram, they’re barely smiling. Those are their selfies.”

She calls this type of statistical dominance “algorithmic hegemony” and questions how such bias will shape an AI-driven culture in the coming generations, particularly at a time when book bans and censorship are occurring around the world. How will the acceleration of statistical bias influence the artwork, stories, and images generated by fast-acting AI?

“History gets erased from history books. And now it gets erased from the dataset,” Gurfinkel said. Considering these concerns, tech leaders recently called for a six-month pause on releasing new AI technologies to allow the public and technologists to catch up with the pace of development.

Regardless of this criticism — whether from the more than 26,000 individuals who signed the open letter or those in the NFT space — artificial intelligence isn’t going anywhere anytime soon. And neither is AI art. So it’s more important than ever that we continue to educate ourselves on the technology.
