
Understanding the Impact of AI Art on Political Discourse

With the 2024 U.S. presidential election on the horizon, the role of AI-generated political content has sparked intense debate. From a video of Vice President Kamala Harris saying words she never uttered to images of former President Donald Trump surrounded by black voters who don’t exist, AI-generated content is blurring the lines between political commentary and misinformation.

The Power and Peril of Political Representation

Political art has long been a tool for shaping public opinion. Historically, this has often taken the form of propaganda—communication designed to influence beliefs and behaviors, sometimes by misleading or presenting biased information. The concept dates back thousands of years, with one of the earliest examples being an inscription from 515 BCE on Mount Behistun, where Darius I of Persia proclaimed his greatness.

Visual propaganda has proven especially powerful because images can evoke emotions more directly than words.

A manipulated photo from 1860, for example, combined the body of politician John Calhoun with the head of Abraham Lincoln, setting a precedent for the use of altered imagery in politics.

Today, the tools of manipulation have evolved, but the goals remain the same: to shape opinions and, ultimately, votes.

2024 Elections: A New Era of AI Influence

What sets today’s political landscape apart is the ease with which AI can generate these images. No longer requiring teams of experts or expensive equipment, AI can create convincing images in moments. As Carolina Klint, a risk management leader at Marsh, warns, you don’t need to be a tech genius to produce deceptive content that can sway public opinion.

The World Economic Forum's Global Risks Report, produced in partnership with Marsh McLennan and Zurich Insurance Group, underscores this concern. The report identifies misinformation and disinformation as critical risks in the coming years, especially as the world's largest tech companies scramble to address the threat. In February 2024, executives from Adobe, Amazon, Google, and other major players signed an accord at the Munich Security Conference to curb the misuse of AI in elections.

Their statement highlights the stakes: with more than 40 countries holding elections in 2024, involving over four billion voters, the rapid development of AI presents both opportunities and challenges for democracy. The tech giants committed to managing the risks associated with deceptive AI election content on their platforms.

The Fear Factor: Why AI-Generated Content Matters

The primary concern with AI-generated political content is its potential to deceive and emotionally manipulate voters. A photo or video that appears real but is entirely fabricated can lead people to believe in events that never occurred or in a version of a candidate that does not exist. That belief makes viewers more likely to be swayed by what they saw, changing their minds or, more importantly in politics, their votes.

A Spectrum of AI-Generated Political Images

AI-generated images exist on a spectrum, from seemingly plausible photos to overtly fictional scenes. Some are so realistic that they could easily be mistaken for actual events, while others—like images of politicians with lightsabers or in superhero costumes—are clearly fantasy.

Yet, even these less believable images can have a significant emotional impact, stirring feelings and influencing perceptions in ways that are difficult to quantify.

AI Photorealism

Photorealistic Scenes That Look Plausible

These photographs were called out as AI-generated almost immediately after they were posted. The concern was that they are so convincing that voters might be persuaded they are real and reflect actual events, when of course they do not.

Just to be clear, none of these images are real. They are all AI-generated.

It is easy to understand, and worry about, how someone could believe an AI-generated photo, especially when it displays none of the typical AI giveaways. These images have consistent lighting, the correct number of arms and legs, and only a few issues with fingers.

Discerning viewers can certainly spot the odd variations in skin tone and age, and no robot built to date matches the one shaking Harris' hand. But these images could be convincing, especially if seen only in a 20-second short on YouTube, Instagram, or TikTok.

Photorealistic Scenes That Look Implausible

Would you believe these images of Trump and Harris?

No, probably not.

But what about this one?

So maybe a person or two might see this photo and exclaim, “Oh, how nice that they are getting along!” But perhaps most would immediately doubt it.

In these cases, the imagery retains some level of believability, though not as much as the first group of images. The emotional impact, however, may be higher precisely because the scenes are so unlikely.

Does that imply these images are fine to share on social media? And just because you don't believe an image, is it therefore harmless in a political race? Maybe, maybe not.

Photorealistic Scenes with Implausible Elements

Images featuring a lightsaber or superhero costumes, even when they look like high-resolution photographs, are immediately recognizable as fiction. Because of those elements, most people would quickly and easily identify them as AI-generated.

No one would believe any of those, right?

But what about any of these?

These images are made to look realistic, but their extreme elements make them less believable. At the same time, those same elements, such as Harris holding two AK-15 semi-automatic rifles or Trump in a wheelchair, tend to heighten the emotional impact.

While it is tempting to believe that danger only exists when the imagery looks like a possible photograph of an event, discounting emotional impact would be shortsighted.

After all, propaganda is known for manipulating emotions. Science has taught us that emotions can significantly impact decision-making, and research suggests that they are a crucial part of the rational decision-making process.

AI Paintings

Portrait Paintings

Images that look drawn or painted, rather than photographed, are not trying to trick the viewer into thinking they represent reality. In most cases, it is clear the images are constructed. But does the painted quality give the imagery more impact, or less?

Many would argue that an image that looks like a painting is more compelling, not less.

“…visual arts can elicit a broad range of emotional feelings that go significantly beyond the canonical ‘basic’ emotions,” according to Lauri Nummenmaa, a professor who leads the Human Emotion Systems laboratory at Turku PET Centre, and Riitta Hari, a neuroscientist, physician, and professor emerita at Aalto University.

So the believability of an AI-generated painted image would be low, but the emotional impact could be quite high.

Paintings of a Scene

AI-generated paintings evoke different responses than photographs. They can feel timeless and inspirational, evoking a sense of elegance and beauty.

In their research article “Bodily Feelings and Aesthetic Experience of Art,” Nummenmaa and Hari explored how art impacts emotions, particularly in pieces where human figures are prominent. They found that art rarely evokes negative emotions, even when it deals with themes like death and grief. Instead, viewers often associate the sadness stirred by such artworks with a sense of joy or being deeply moved.

Given that art, especially paintings, often elicits positive emotional responses, should AI-generated political art in this style be dismissed simply because it doesn’t depict actual events? While these artworks don’t claim to represent reality, their ability to evoke positive emotions could still influence public perception.

AI Graphic Art

Cartoons

Since Benjamin Franklin's first political cartoon, "Join, or Die," appeared in the Pennsylvania Gazette in 1754, political cartoons have offered a visual expression of opinions about society, government, politics, politicians, issues, and current events. They typically appeared in the opinion section of a newspaper, which gave them proper context, and they tend to combine three elements:

  • Satire
  • Humor
  • Criticism

Political cartoons are known for their ability to persuade and compel discussion. While they fall within the realm of artistic expression, their influence on political opinions can be just as powerful as that of more realistic images. AI can generate such images much faster and can serve the same function as a political cartoon, but does AI-generated imagery belong solely in that category?

Digital Illustrations

Digital illustrations have the power to shape elections by quickly conveying complex political messages through striking imagery and concise design. These visuals can capture attention and resonate with a broad audience, making them effective tools for influencing voter behavior.

By evoking strong emotions like outrage, pride, or hope, digital illustrations can mobilize supporters, sway undecided voters, and shift the focus of public discourse, playing an active role in shaping election outcomes.

Navigating the Future of AI and Political Art

Right now, the primary concern and focus is on realistic images, because viewers are more likely to believe them. The main goal is to ensure that images do not try to portray a reality that never occurred.

As AI technology continues to advance, the challenge lies in distinguishing between artistic expression and potentially harmful misinformation. Realistic AI-generated images are a particular concern because of their ability to deceive. However, even less realistic images can sway emotions and opinions, raising important questions about the role of AI in politics.

Should AI-generated political imagery be subject to censorship, or should it be allowed under the protection of free speech? How can we balance the need to protect democratic processes with the freedom of artistic and political expression? These are complex questions that society will need to address as AI continues to play an ever-growing role in our lives.

Questions to consider

  • How do we balance the freedom of speech with the need to protect the integrity of elections?
  • What safeguards should be in place to prevent AI-generated content from misleading voters?
  • Should all photorealistic, high-resolution AI-generated political imagery be clearly labeled to distinguish it from reality?
  • How do we address the emotional impact of AI-generated content?
  • Are controversial images good for political discussion and debate?

As we move forward, these questions will be crucial in navigating the intersection of AI, art, and politics. The stakes are high, and the impact on democracy could be profound. ◼