
Why Does So Much AI Art Look Like Comic Books? AMA with Amelia Winger-Bearskin

AI art hype has been around for a while, and after experimenting with AI images for some of our articles, our team wondered: why does so much AI art look like comic books? We turned to an expert to ask why certain aesthetics proliferate in AI libraries. In this interview, Amelia Winger-Bearskin answers our questions and offers a deep dive into other cutting-edge issues in the development of AI image-making technologies. What new possibilities does this medium create for artists, and what are its setbacks?

Below is a summarized transcript of an interview between Amelia Winger-Bearskin and Katherine Jemima Hamilton. For the complete conversation, please join the free course and watch the full interview.

WATCH THE FULL INTERVIEW

Amelia Winger-Bearskin is the Banks Family Preeminence Endowed Chair and Associate Professor of Artificial Intelligence and the Arts at the Digital Worlds Institute at the University of Florida. She is also the founder of the AI Climate Justice Lab, the Talk To Me About Water Collective, and the Stupid Hackathon. In 2022 she was awarded a MacArthur Foundation Award as part of the Sundance AOP Fellowship cohort for her project CLOUD WORLD / SKYWORLD, which was part of The Whitney’s Sunrise/Sunset series. In 2021 she was a fellow at Stanford University as their artist and technologist in residence, made possible by the Stanford Visiting Artist Fund in Honor of Roberta Bowman Denning (VAF). In 2020 she founded Wampum Codes, an award-winning podcast and an ethical framework for software development based on Indigenous values of co-creation, while a Mozilla Fellow at the MIT Co-Creation Studio.

In 2019 she was a delegate at the Summit on Fostering Universal Ethics and Compassion for His Holiness the 14th Dalai Lama, at his world headquarters in Dharamsala, India. In 2018 she was awarded the 100k Alternative Realities Prize from Engadget and Verizon Media for her virtual reality project Your Hands Are Feet. That was also the year the nonprofit IDEA New Rochelle won the $1 Million Bloomberg Mayor’s Challenge for their VR/AR Citizen toolkit to help the community co-design their city.

Amelia is an enrolled member of the Seneca-Cayuga Nation of Oklahoma, Deer Clan on her mother’s side; her late father was Jewish/Baha’i.

Eser Çoban: Every week our team has a content meeting, and we've noticed this trend, especially with the Lensa AI app: people were putting up selfies, and they all looked a little more like heroes or comic book characters. If you generate anything with Midjourney or Stable Diffusion, you get this comic book-style art. We were discussing where this trend comes from and why AI tends to make everything look like it's from a comic book. Then we read your article on visual trends in AI art, and it blew my mind. I was excited to see someone categorizing these trends. We wanted to talk to you specifically about aesthetics in AI. That's our theme, and I'll leave it to Katherine to conduct the interview with you.


Katherine Jemima Hamilton: We've prepared a few questions on that topic, and we can get started if it's okay with you, Amelia.


Amelia Winger-Bearskin: Sounds great.


KJH: So how would you characterize the style of AI art produced by popular image-making AI technologies?


AWB: AI, especially diffusion models and GANs (Generative Adversarial Networks), relies on a backward-looking approach. These models use vast amounts of data to approximate and recreate pixel maps, making images that resemble their training data but with variations. A GAN consists of two parts, a generator and a discriminator, and the generator's variations can even fool the discriminator. This is how we get these fake images: the goal is to make the computer believe they are real. When we think of the bulk of those images, many are frames from movies, like Marvel films. If you imagine how many frames there are in each Marvel film, each one is an individual image.
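As a rough illustration of the generator-discriminator dynamic described above, here is a minimal GAN training loop in PyTorch. It is a sketch only: the network sizes are arbitrary, and the "real" batch is random noise standing in for actual training images.

```python
# Minimal GAN sketch (PyTorch): a generator learns to produce images that a
# discriminator cannot tell apart from real ones. Sizes and data are toy
# placeholders, not a production model.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),        # fake image, pixels in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),           # probability "this is real"
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(32, img_dim) * 2 - 1  # stand-in for a training batch

for step in range(100):
    # Train the discriminator to label real images 1 and fakes 0.
    fake_images = generator(torch.randn(32, latent_dim)).detach()
    loss_d = bce(discriminator(real_images), torch.ones(32, 1)) + \
             bce(discriminator(fake_images), torch.zeros(32, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator to make the discriminator call its fakes "real".
    loss_g = bce(discriminator(generator(torch.randn(32, latent_dim))),
                 torch.ones(32, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```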

Additionally, commercial designers, animators, and editorial teams pull images from sources like Getty Images, and then people, influencers for instance, create images in a similar style. This convergence optimizes for current beauty and marketing standards. It's driven by marketing and media, reinforcing what's considered desirable or beautiful, by current standards at least, to convince people to purchase or buy into an idea, product, system, or lifestyle. You also see influences from fan art and online communities such as DeviantArt contributing to the uniformity of these images. I see a lot of fan art imagery coming through; I myself got started as an artist after being discovered online and getting some shows through that.

That also happened in the nineties, at the very beginning of sharing art online, so we're seeing similar stories echoing through time. Then there's what happens when we connect language models to tags, anything combining language and images, such as a database like Getty Images, where you can search for a model holding a glass and looking at the sunset: you'll get an image based on those tags. If you mention "sunset," you'd expect an image of a sunset. However, when using models like Stable Diffusion and asking for a painting of a sunset, the result typically defaults to something resembling Western painting styles. It may not consider how sunsets are depicted in Indigenous practices or other parts of the world. It tends to focus on what's heavily represented in mainstream international media.

So I think you are absolutely right: there are strong currents of comic books, of fan art, and of Western tagging conventions. Western European painting is always the default, not woodblock printing from Japan.
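To make the tagging point concrete, here is a hedged sketch using the open-source diffusers library. The checkpoint ID is just one public example (an assumption, any Stable Diffusion checkpoint would do), and the prompts are illustrative: an unqualified prompt inherits whatever aesthetic dominates the training data, while naming a tradition pulls the output toward it only as far as that tradition was tagged.

```python
# Sketch with Hugging Face's open-source `diffusers` library. The checkpoint
# ID is one public example, and a CUDA GPU is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# An unqualified prompt falls back to whatever dominates the training data,
# typically Western painting conventions.
default_image = pipe("a painting of a sunset").images[0]

# Naming a tradition pulls the output toward it, but only as far as that
# tradition was represented and tagged in the training set.
woodblock_image = pipe("a sunset in the style of a Japanese woodblock print").images[0]

default_image.save("sunset_default.png")
woodblock_image.save("sunset_woodblock.png")
```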

This convergence optimizes for current beauty and marketing standards, driven by marketing and media, reinforcing what's considered desirable or beautiful to convince people to purchase or buy into an idea, product, system, or lifestyle.

KJH: It feels much clearer when you put it that way. As a follow-up, you started writing a series of articles on visual trends in AI art in 2022, categorizing them as Particle Systems, Data, 3D Hyperreal, and Nightmare Corp. Could you expand on these categories and perhaps mention any other emerging trends?


AWB: Certainly. Particle Systems have become a dominant way of thinking about information. Think of data as tiny particles that move in a flow state. Since about 2012, particle systems have been easy to generate, even within browsers or other types of animation. We see this in films, and we also see it in art. It's interesting that we treat "data points" as actual, literal points, because that is not necessarily the way data has always been thought of, and it's not necessarily the way that data works. Data doesn't necessarily work as one individual point whose place in a massive group gives it relevance or information.

Oftentimes, its relationship to other things gives it more coherence than its individual self. And again, I would say that's a Western way of thinking about something. If we had data points about all of the people in a certain area and how much water they used in a single day, it wouldn't be relevant to look at the individual points. It's interesting and important to look at the aggregate information and its internal relationships, how people are using water, because water is a shared resource among all of those groups. So it is interesting to me that this is a type of philosophy about what is important about data, the individual data points, and then how it's represented.

The difficulty with this approach is that data visualizations are meant to make complex information easier to understand, but representing data as millions of dots might not necessarily enhance comprehension. While it's aesthetically pleasing, it might not aid in understanding relationships. Data visualizations should help people grasp complex data in a clear manner, not merely be visually appealing, as they can be manipulated to skew perceptions or present biased information.

The purpose of these visualizations is to help people understand relationships on a larger scale, something more abstract or embodied. Humans don't just rely on their eyes and ears; we also experience and understand things physically. This is why we still practice building models before creating larger systems. Models help us grasp how things work together. So, when it comes to particle systems, why does big data translate into tons of tiny dots? I pose a lot of questions about that.
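The aesthetic in question is easy to reproduce. Below is a minimal, self-contained particle-system sketch in Python with matplotlib, where each record becomes one drifting dot; the flow field and particle count are arbitrary choices, not any real dataset.

```python
# Minimal particle-system sketch (matplotlib): each "data point" becomes one
# dot drifting in a flow field. Purely illustrative; no real dataset involved.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(0)
n = 5_000                                   # one dot per "record"
pos = rng.uniform(0, 1, size=(n, 2))

def flow(p):
    """A simple swirling flow field over the unit square."""
    x, y = p[:, 0], p[:, 1]
    return np.stack([np.sin(2 * np.pi * y), np.cos(2 * np.pi * x)], axis=1)

fig, ax = plt.subplots(figsize=(5, 5))
scat = ax.scatter(pos[:, 0], pos[:, 1], s=0.5, alpha=0.5)
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_axis_off()

def update(_frame):
    global pos
    pos = (pos + 0.002 * flow(pos)) % 1.0   # drift, wrapping at the edges
    scat.set_offsets(pos)
    return (scat,)

anim = FuncAnimation(fig, update, frames=200, interval=30, blit=True)
plt.show()
```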

Moving on to the hyperreal category and looking at deepfakes, we've witnessed an explosion of this phenomenon recently. People create hyper-realistic versions of nonsensical, funny, or bizarre images. For instance, you might see the Pope in a puffy coat or Gwyneth Paltrow in nighttime dash-cam sightings. This changes what we think about when we think of conceptual art. In the 1960s, conceptual artists would write or speak as a form of art. People would describe actions they performed, and the work would resolve within the minds of the audience. Now, we can type these concepts into language models and see visual interpretations, like Gwyneth Paltrow on a dash cam at night. It alters the imagination process and allows people to bring their visions to life.

Oftentimes, my students use diffusion models because they don't feel confident in their drawing skills. These tools help them turn their imaginative ideas into tangible creations. However, it's essential to question who decided some drawing styles are "good" while others aren't. Rather than saying, "I'm bad at drawing, and AI fixes that," I'd prefer my students to recognize that their unique ways of expressing themselves through art are valid and valuable. Being able to use our hands is something we have evolved to do, and we need it in our cultures; there are a lot of theories about why we need this, but the first images of hands date back some 45,000 years. We've had the capability to make representational images for millennia, and that is deeply ingrained in us as a necessity. The same thing happened with the advent of photography: today anyone in the world can be a photographer, not just professionals. People said, "This is the end of painting, nobody is ever going to pay again to have their portrait painted," and so on. While it is true that people no longer paint to get an accurate representation of a person, the reason to paint shifted into a different reason for making things. I like the way AI has democratized art creation, much like the camera did in its time, making it accessible to anyone.

This is what I would like to see with AI - a shift to a space where our ideas are centered and valued in a new way. A technique that is easily merged between technology and our human hands.


KJH: Transitioning from democratization to the ethics of diffusion models, I wanted to ask about data scraping for AI image generators. Is it ethical, considering issues artists face with a lack of consent in how these generators gather images?


AWB: This issue recently came up in my alumni group, perhaps before Stable Diffusion was widely available. People wondered how to credit work created with Stable Diffusion. Some argued that since the model made the images, the artists deserved no credit, while others claimed they should receive credit for crafting prompts, editing, and upscaling the images. I chimed in, mentioning that the model was trained on a vast dataset that included images from our community, and I say my community because I was among the first wave of digital artists, which raises questions about ownership and credit.

It seems rather one-sided to give credit that way, doesn't it? In reality, it belongs to all of us. When I interact with it and it has been trained on my data, it's not solely theirs. The ethical aspect here concerns digital artists who worry that our decades of work might now be overshadowed, with models taking over. We contributed to shaping those aesthetics and providing the information.

There's also the issue of opting in or out. When we posted images to Tumblr or Instagram, we never imagined they'd be appropriated and used in a manner that raises copyright questions. We need new laws and regulations to address this. Who owns the work, the original artist, the collaborator, or the system itself? It's likely a combination. We should strive for fairness and equity, seizing this opportunity to create standards that benefit all involved.

Many artists, myself included, have seen our work stolen, copied, and used for commercial purposes without any compensation or acknowledgment. I had to chase down those who used my work without permission. People often assume they can use whatever they find on the internet because it's there. We need to define ownership and ensure that creators can earn a living from their work and inventions.

It's still very much the norm to think that anything on the internet is available for free. So we need to come together as a community and think about how we can make a living from our work. Communities that engage in this should uphold ethical values and ensure fairness and equity. Ideally, we can improve on what the music industry has done with sampling and address the copyright issues around art in the US, as both of these areas currently don't always benefit artists, particularly with streaming services not compensating artists adequately for their work.

In the US, copyright laws differ from those in other countries. Artists often have fewer rights, and this is crucial to remember. Artists who own their intellectual property bear the burden of trying to secure funding for their work, without many safety nets to rely on.

We need to define ownership and ensure that creators can earn a living from their work and inventions.

KJH: Absolutely. Shifting from the ethical dimension, I'd like to explore the role of artists. Artists and storytellers have historically defined eras and social norms. Consider Charles Dickens, who shaped the collective imagination of Christmas dinners. Can AI images from diffusion models help create new social norms or redefine our collective perception of reality?


AWB: Without a doubt, yes. During my visits to events like Basel Miami, I observed thousands of artworks spanning various art forms. Many pieces were crafted by hand but resembled pixelated or AI-generated objects. They ventured into 3D models reminiscent of video games. The digital realm has already integrated seamlessly into the 'real world.' These AI-influenced images have permeated our culture across the board, fascinating and captivating people. We're witnessing new VFX techniques that incorporate AI-style morphing. Even if not directly using AI, artists are manually animating to achieve this unique style, given its novelty and creativity. As a result, we see artists responding aesthetically in the AI vernacular.

I'm excited to observe how this trend will continue to evolve and shape our culture. Additionally, I hope artists will push back against it by exploring more experiential dimensions of art. Focusing on audience engagement, connecting deeply within communities, and experimenting with hyper-local approaches might become more prevalent. This shift could usher in a new era of artistic expression and connection. Because, if we can generate text quicker than we can read it, and if we can create images faster than we can view them, then how does that transform our relation to material and how do artists push back?

We're witnessing new VFX techniques that incorporate AI-style morphing. Even if not directly using AI, artists are manually animating to achieve this unique style, given its novelty and creativity.

KJH: So, it's intriguing to ponder how art will coexist with these advances. If we can generate text faster than we can read it, or churn out images faster than we can process them, how does it shape our experience of art, reading, and connection?


AWB: Well, Marshall McLuhan is the one who said that artists are the only ones who can approach technology with impunity. I agree with that. Hopefully we will deepen our understanding of what it is to be human and what it is to be in the world at this time. We view technology as a means to an end, a tool for expressing our internal lives, forging connections with our communities, and deepening our understanding of humanity. If that's your intent, you'll quickly focus on tools that serve that purpose. I believe it'll be interesting to see how artists harness these tools.

My students and many peers already incorporate these tools into their art. While some might be drawn to flashy aesthetics, most aim to convey something timeless or reflective of our values in 2023. Anything that doesn't align with those objectives tends to fade away. As these tools mature, artists will likely adapt them better to their creative processes. Currently, there are limitations, particularly for non-programmers: APIs and hosted tools often impose low resolutions and time restrictions, even in paid versions. Training models locally, as many of my students do, offers more flexibility for commercial or gallery use. We'll see if software evolves to cater to artists or if this remains a bespoke skill.

What captivates me about these technologies is their potential. They open doors to new possibilities, new ways to express ideas and emotions, and new connections with audiences. These tools are transforming the artistic landscape, offering fresh avenues for storytelling, and I'm excited to be part of this exploration.

KJH: We've spoken a lot about what these tools do for artists in general, but what about you? What first drew you to digital art?


AWB: I really enjoy collaborating with machines. My fascination with this began when I started coding around the age of six or seven. My father used to bring home old computers from his job at Eastman Kodak, and even though those computers were considered high-performance in the 1980s, they were nothing compared to today's cell phones. Nevertheless, I found them intriguing. My dad would say, "We're getting rid of these, what do you think?" And I would have to figure out how to code on each one so they could communicate and create art together.

What I love about collaborating with machines is that they excel at things I'm not so great at, and vice versa. For example, I can draw something, but I couldn't reproduce it a million times. Computers, on the other hand, excel at that. I could also draw something and animate it, but the way computers can recolor, retouch, and reimagine it is beyond my abilities. Computers also enable me to create interactive elements for storytelling in new ways. Back then, I used to connect to the early internet via my phone receiver, calling IP addresses directly. I'd send a game or an image to whoever answered. It was a time when the internet was less centralized, and the concept of peer-to-peer communication was fascinating.

Unfortunately, the internet has become increasingly centralized over time. Most people now believe we must go through software systems or specific apps to communicate with one another. The idea of cold calling a random number and sharing something peer-to-peer is no longer the norm. However, the Web 3 movement is exploring ways to decentralize spaces, and I find that intriguing. Creating a community of artists who share their work directly between each other and collaborate is powerful. A lot of the AI technology I'm interested in helps me find that community and collaborate to create remarkable things.

In my artwork, I often specify the algorithms I use. For instance, I might mention that a video uses pixel morphing or image inpainting. I do this to demystify AI: it's not some all-powerful, ubiquitous, sentient being; it consists of various individual algorithms. I also prefer not to promote specific corporations that use these algorithms. Many of them were developed in universities and are open source, accessible to anyone who reads the white papers.

Anyone can implement these algorithms in their own code; you don't need to pay a specific company for this. You can obtain this information and manipulate pixels without any difficulty. Additionally, during my talks about AI, I often receive questions like, "What does the AI think about this?" or "How do you envision the AI's future actions?" It's as if people believe AI is sentient or intelligent, which it's not. AI is created by people, for people, and thousands, or even hundreds of thousands, of people are involved in moderating it and making critical decisions.
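As one small example of this point, the inpainting algorithm from Telea's 2004 paper ships in the open-source OpenCV library and can be run without any company's product. A minimal sketch, with placeholder filenames and mask coordinates:

```python
# Inpainting with a classical open-source algorithm (Telea 2004) as shipped
# in OpenCV. Filenames and mask coordinates are placeholders.
import cv2
import numpy as np

img = cv2.imread("photo.jpg")                    # image to repair
mask = np.zeros(img.shape[:2], dtype=np.uint8)   # white marks pixels to fill
cv2.rectangle(mask, (100, 100), (160, 160), 255, thickness=-1)

# Fill the masked region from its surroundings (radius of 3 px).
restored = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("photo_inpainted.png", restored)
```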

Sadly, many of the workers behind these AI models, especially those involved in moderation and refinement, have gone unnoticed and non-unionized. People mistakenly think it's all magic happening in the machine, but when you look closer, you see hundreds of thousands of workers worldwide, particularly in the Global South, who are underpaid for this vital but precarious labor. They work hard to render themselves obsolete: they teach the machines what kinds of selections they make so that, as soon as the machines can do it alone, those workers won't have a job anymore. Even human moderators on platforms like Facebook and Twitter, who keep feeds clean, face similar circumstances. The algorithms we talk about are just another way of masking this highly precarious and invisible labor.

I make a point to reveal the algorithms I use and explain how AI operates whenever possible. It's essential to dispel two myths: first, that it's too late, and AI already controls everything like an unstoppable force; and second, to bring visibility to the hundreds of thousands of people within this system, who aren't all equally compensated but play a crucial role in making AI systems function. AI isn't any different from other aspects of our world; it's created by numerous individuals. So, it's crucial not to grant AI so much power, as it's not the all-knowing entity it's often portrayed to be. We should question who benefits from this narrative. Clearly, it's not the workers, nor is it necessarily artists. So, artists should contemplate how they benefit from believing in this idealized version of AI.

I make a point to reveal the algorithms I use and explain how AI operates whenever possible. It's essential to dispel two myths: first, that it's too late, and AI already controls everything like an unstoppable force; and second, to bring visibility to the hundreds of thousands of people within this system, who aren't all equally compensated but play a crucial role in making AI systems function.

KJH: Yes, so many huge questions in that answer. You use AI systems in your artwork, and I want to break it down a bit for people like me who may be confused about AI systems and algorithms, plural. So I wanted to know: in the AI systems that you use in your work, how are they trained? Do you train them, and would they be trained differently from diffusion models such as DALL-E or ChatGPT? If those even are diffusion models; I'm not sure.


AWB: Yeah, thank you so much for that question. You know, Sky World and Death World was shown at the Whitney Museum of American Art, curated by Christiane Paul. I was inspired by looking back at some of the early digital artwork I had made in the nineties, and I had no record of these works, even though they had been exhibited in big museums and held in collections. When I tried to locate this information, it was challenging because technological systems change so frequently. Any archive created at a museum or gallery might not last five years, because the focus in our tech-driven world is often on chasing the new. This constant pursuit of the new made me reflect on the Wayback Machine from the Internet Archive, which lets you explore snapshots of websites over time. I highly recommend checking it out; you can revisit past versions of websites like Facebook or Twitter. I used it to recover many older digital artifacts, effectively talking to my past self.
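For readers who want to try this, the Internet Archive offers a public "availability" endpoint that returns the archived snapshot closest to a given date. A minimal sketch in Python; the URL and timestamp below are placeholders.

```python
# Query the Internet Archive's public Wayback Machine "availability" API for
# the snapshot closest to a given date.
import requests

resp = requests.get(
    "https://archive.org/wayback/available",
    params={"url": "example.com", "timestamp": "19990101"},
    timeout=10,
)
closest = resp.json().get("archived_snapshots", {}).get("closest")
if closest and closest.get("available"):
    print("Snapshot from", closest["timestamp"], "at", closest["url"])
else:
    print("No archived snapshot found.")
```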

It made me question why the art and technology world frequently claims to be the "first-ever" in various domains. My students often say this too, but it's not always accurate. Why do we constantly reinvent things and declare them as entirely new? Does this obsession with the new align with our values? I don't think it aligns with mine. We haven't seen truly ethical technology yet. Current technologies have significant environmental and societal impacts. Technology isn't neutral; it's causing harm in many ways, from environmental destruction to threats to democracy. But that doesn't mean we can't design ethical technology. Technology reflects our values, and if constantly chasing the new erases our memory of what came before, we need to ask who benefits from that.


EÇ: Yeah, Amelia, thank you so much. This was beyond anything I could ever dream of. You provided us with such great information.

End.
