Don’t Blame the AI, Blame the Humans.

March 7, 2024 · Generative AI · Culture Wars · Kyle Chayka

Ethnically diverse Nazis and our collective contribution.

Back when I wrote about AI Inflection Points, it was about instances when AI-generated artifacts drove the cultural conversation. The novelty has certainly worn off (“AI is boring”, as some observers remarked at the end of 2023) but AI still produces the occasional shock to the system—just a more negative one than before.

Last week, Google faced a minor online scandal when its Gemini image generator created “diverse” images of infamously un-diverse people: Ask it for a picture of the Founding Fathers or German soldiers from the 1940s and Gemini would output people of different ethnic backgrounds, styled like pilgrims or Nazis.

This was the rare case where both ends of the political spectrum were upset: The right-wingers decried an overly woke technology, the left-wingers a blatant misrepresentation; and whichever way you looked at it, you had to contend with a tool automatically revising history. It was a bad look.

Automatically, not autonomously

I don’t want to be a technology apologist, particularly not on behalf of Google. But I do want to challenge how we—and I include myself in that—think about the way this technology works. Here’s software that literally “learns” from its input. It replicates whatever it’s fed, and in the majority of cases that’s human behavior.

Back in 2016, Microsoft launched a chatbot called Tay that they had to shut down just hours later. It turned out that letting people on the open web interact with the bot—and having the bot learn from the input—meant that Tay quickly started parroting the absolute worst things you can imagine.1

The technology has made enormous leaps since 2016, but the fundamental truth is the same: Whatever you feed the bot will—in some shape or form—determine what it generates. The actions may be automatic but they certainly aren’t autonomous.

Twisted mirrors

Unfortunately, we have been talking about AI all wrong from the beginning: As the saying goes, AI is neither artificial nor intelligent; and yet the tools pretend to be autonomous, masquerading as chat partners or semi-cognizant robots. Ultimately, each bot is an intricate algorithm that regurgitates its own input—for better or worse.

It’s easy to blame Google, AI enthusiasts, or even overzealous developers trying to “solve” the common problems of letting people play with their creation. And they should absolutely be blamed! If they put this technology into the hands of millions, they bear responsibility for whatever it generates.2

But I find it much more interesting to think about how this technology reflects the human biases and stereotypes that it learned from. The first versions of many AI tools reflected so many stereotypes that Google’s engineers had to try to compensate for them. The result we’re seeing now is arguably worse—but the fundamental problem isn’t the technology; it’s the biased data, based on biased ways of thinking.

Each of these scandals effectively holds a mirror up to our society: You can draw a straight line from human behavior to ugly outcomes. And while the tech gets better over time, I’m not sure society will.


  1. Tay turning racist was one of those cases that companies now try to defend their technology against—but sometimes that defense goes overboard. The “diverse Nazis” therefore aren’t another case of “woke gone wild” but simply a safety system going haywire.↩︎

  2. If that sounds too much like “Guns don’t kill people, bad guys do”, remember that guns are fundamentally made to kill whereas AI is made to… well, what exactly, we’re still finding out.↩︎



