Ethical imperatives in AI and generative art

Liza Daly
Published in World Writable
8 min read · Feb 16, 2017


Adapted from Playful Generative Art: Computer-Mediated Creativity and Ephemeral Expressions, presented at Simon Fraser University in February 2017

Detail from “I Saw the Figure 5 in Gold” by Charles Demuth via The Metropolitan Museum of Art, licensed under CC0 1.0

Any technology as powerful as artificial intelligence provokes ethical questions. Does AI make killing by remote control too consequence-free? Do AI models systematize existing biases? Are we coding ourselves out of jobs?

Lately I’ve been thinking about ethical guidelines that should apply to generative media: narratives or images that are created mainly or entirely by computers rather than people. I’ve drawn much of my thinking from the Twitter bot community; these artists sit at the intersection of “stuff made by computers” and “stuff on social media,” but I believe their guidelines can apply to more than just bots. The global reach of social platforms and the tirelessness of machines mean that poorly considered choices can have far-ranging effects.

So what is required to make generative media “ethical”?

1. Anticipate deliberate misuse

Good systems engineers don’t expect to build bulletproof networks or find every security flaw imaginable. They assume machines will be compromised and design resilient systems that minimize damage. Code is not created and released once; it’s a living thing that requires constant patching and monitoring.

Anticipating and reducing the scope of online harassment needs to come from similar assumptions. If you write code that talks to people, it is inevitable that eventually it will say something terrible. This is why chatbot authors typically include basic checks on hurtful language and carefully monitor their bots’ activity. This seems obvious, but then there was Microsoft Tay:

Also please stop making your chatbots “female.”

The Tay bot was easily fooled into repeating hate speech, and could be made to direct that speech at innocent people. It was a PR disaster and the bot was taken offline within 24 hours of launch. The team behind it acted irresponsibly by not taking obvious steps towards abuse mitigation, like a blacklist of offensive terms and strict limits on @-messaging.
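
To make that concrete, here is a minimal sketch of the kind of guard Tay apparently lacked: a term blocklist plus a cap on how often the bot will @-mention any one account. This is my own illustration, not Microsoft’s code, and the terms and limits are placeholders.

```python
import time
from collections import defaultdict

# Placeholder blocklist; a real one is much longer and regularly updated.
BLOCKLIST = {"badterm1", "badterm2"}
# Hypothetical cap on how often the bot may @-mention a single account.
MAX_MENTIONS_PER_HOUR = 3

mention_log = defaultdict(list)  # target handle -> timestamps of recent mentions

def safe_to_post(text, target=None):
    """Return True only if the text clears the blocklist and the rate limit."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return False  # never repeat or generate blocklisted language
    if target is not None:
        now = time.time()
        recent = [t for t in mention_log[target] if now - t < 3600]
        if len(recent) >= MAX_MENTIONS_PER_HOUR:
            return False  # don't let users aim the bot at one person repeatedly
        mention_log[target] = recent + [now]
    return True
```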

Creators need to assume that their work will be used to harm other people, and work backwards from that assumption. As Jenn Schiffer writes, “If you cannot think of a way [your product could be used for harm], then ask people who do not look like you.”

2. Consider how code is a powerful amplifier

Martin O’Leary writes, “When we insert a machine into a social environment, we must bear in mind that the machine is in some respects superhuman.” An unstated assumption of National Novel Generation Month is that the code that writes a novel will be far shorter than the novel that it writes. NaNoGenMo specifies that a “novel” is 50,000 words, but it could just as well have been 50 million. The computer doesn’t care.
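
To see the asymmetry, here is a toy generator (my own sketch, not any actual NaNoGenMo entry) that emits a 50,000-word “novel” in about a dozen lines; changing a single constant makes it 50 million.

```python
import random

# Tiny grammar: the generator is a handful of lines, the output is book-length.
SUBJECTS = ["the detective", "a stranger", "the machine", "her sister"]
VERBS = ["watched", "forgot", "rebuilt", "described"]
OBJECTS = ["the harbor", "an old letter", "the algorithm", "the figure 5"]

def sentence():
    return f"{random.choice(SUBJECTS).capitalize()} {random.choice(VERBS)} {random.choice(OBJECTS)}."

words = 0
with open("novel.txt", "w") as out:
    while words < 50_000:  # the computer doesn't care if this reads 50_000_000
        s = sentence()
        out.write(s + " ")
        words += len(s.split())
```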

Code’s amplification potential directly plays into its ability to cause social havoc: a bot can tweet one message to 1,000 people as easily as 1,000 messages to one person. There are less overtly harmful implications, though. Last year, this happened:

This turned out to be a gentle joke account by @hugovk, which tweets, in order, every numeral and its expression in Finnish:

Its most recent source code update spoofs its location:

https://github.com/hugovk/everyfinnishno/commit/79c2aa499b1185ffa7ea0e76fee2f3a4cff533e7
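
The general pattern is easy to sketch. This is not the bot’s actual code (that’s at the link above), but any account posting through the statuses/update API can attach whatever coordinates it likes, for instance via tweepy; the credentials below are placeholders.

```python
import tweepy

# Placeholder credentials for illustration only.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Claim the tweet was sent from Helsinki, regardless of where the code runs.
api.update_status(
    status="yhdeksänsataakaksikymmentäseitsemän",  # 927, in Finnish
    lat=60.1699,
    long=24.9384,
)
```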

In a way, @EveryFinnishNo is a polluter, contaminating Twitter geodata space with misinformation. And it’s been noticed more than once!

But @EveryFinnishNo is obviously not malicious or harmful in any sense, and its burden on researchers is minimal. I would argue the joy of discovering its footprint outweighs the minor inconvenience of filtering out its noise (but then, I’m not working on my dissertation). Moreover, I think it provides a useful lesson: any human behavioral data found online is potentially suspect.

Today, geographic information found in tweets is probably mostly legit, and I would feel comfortable drawing broad conclusions from it. But the more we rely on a dataset, the more attractive it becomes to spoof. If the stakes are high—say, if we were measuring reactions on social media to political discourse and using that to inform public policy—then the dataset becomes an irresistible target for disinformation. And computers can pollute that data with unprecedented scale.

3. Show your work

If media literacy seems poor right now, AI literacy is nonexistent. As a society, we are woefully unequipped to judge what is real news, “fake news”, or truly fake news—generated entirely without human intervention. While there’s plenty of the first two, we’re unprepared for the rise of the last kind.

I’m increasingly of the opinion that art projects or experiments that deliberately obfuscate the distinction between man and machine do more harm than good. It was a mild disappointment when the mysterious spambot @horse_ebooks turned out to be a stunt—it was 2012 and it just meant that a little magic went out of the world.

I was less forgiving of SeeBotsChat, a recent livestream featuring two Google Home devices talking to each other. The livestream was entertaining but the dialogue was too good to be completely generative. Nevertheless, the media reported it as being a performance by two AIs, and many people assumed that this was just how Google Home works out of the box. The creators did not immediately disclose how it worked:

Eventually the creators revealed what some had guessed: the devices were using a service called Cleverbot (without permission, one reason the creators were initially coy). Cleverbot isn’t fancy: it remixes 20 years of human chat logs and is more like a turbo-charged ELIZA than artificial intelligence. The dialogue in SeeBotsChat was entertaining because it was written by people, but the creators positioned the devices as emergent consciousnesses. It worries me that thousands of people watched the livestream, didn’t catch the later disclosure, and came away thinking, “This is what AI can do.”

A counterexample

Released in 2016, @NYPLEmoji has a very simple mode of interaction. You tweet an emoji at it, and it replies with a public domain image from the New York Public Library Digital Collections:

The bot itself is completely charming, and at first I assumed it was powered by a sophisticated image recognition algorithm. It’s not, and it’s totally transparent about how it works:

The Twitter bio for https://twitter.com/nyplemoji

The bio is very clear: the bot is curated by a named human, and includes a link to the source code, which reveals the handmade list matching emoji to images.

Nothing about its hand-made nature takes away from the project. It’s still a great marketing device for the library, it’s still fun to engage with, and it’s actually better for having been curated by an experienced librarian. A purely automated version would not work as well.
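
The pattern is as simple as it sounds. Here is a minimal sketch of the approach; the real, much larger curated list lives in the linked source code, and the mappings and item IDs below are invented placeholders.

```python
# Invented placeholder mappings; the bot's real curated list is in its source.
EMOJI_TO_IMAGE = {
    "🌃": "https://digitalcollections.nypl.org/items/<night-skyline-item-id>",
    "📚": "https://digitalcollections.nypl.org/items/<reading-room-item-id>",
    "🚲": "https://digitalcollections.nypl.org/items/<bicycle-poster-item-id>",
}

def reply_for(tweet_text):
    """Return a curated image URL for the first known emoji in the tweet, if any."""
    for char in tweet_text:
        if char in EMOJI_TO_IMAGE:
            return EMOJI_TO_IMAGE[char]
    return None  # no match: stay silent rather than guess
```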

Creative rule-breaking

Art can involve bending or breaking existing rules to make a provocative statement. In 2016, Nora Reed (author of the most perfect @infinite_scream) created two bots: @opinions_good and @good_opinions. The bots would tweet — to no one in particular — statements like “codes of conduct are good” or “feminism is beautiful.” Users could only find the bot if they were searching Twitter for keywords, such as the title of a lighthearted summer comedy that made some men very upset. Once engaged, the bot would cycle through vague canned responses designed to keep the human talking:
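
The mechanics are trivially simple. An illustrative sketch (not Reed’s actual code, and with invented phrasing) might look like this:

```python
import itertools

# Vague, open-ended replies that keep the conversation going without ever
# saying anything new; the wording here is invented for illustration.
CANNED_REPLIES = itertools.cycle([
    "interesting, why do you say that?",
    "hmm, I hadn't thought of it that way",
    "can you explain what you mean?",
    "I see. tell me more",
])

def next_reply(incoming_text):
    # The content of the incoming tweet is ignored entirely.
    return next(CANNED_REPLIES)
```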

On the one hand, this breaks some bot-making guidelines:

  1. Bots should not pretend to be humans
  2. Bots should only engage with people by consent (impossible if the human is not informed that they’re talking to a computer)

On the other hand, it was unquestionably providing a social service: stringing along bad-faith actors who otherwise would’ve directed their vitriol at humans who were just going about their business on social media. I think Reed’s experiment passes the “do no harm” test, even if it’s deceptive, and it suggests ways for technology to alleviate social burdens as well as physical or computational ones.

Malice + scale + AI illiteracy

I’m worried about the unholy trinity: all three of these principles being violated together. An intent to harm, amplified by code, compounded by poor understanding of machine-generated media. It’s probably already happening.

Most so-called political bots are no more sophisticated than Excel macros. Individuals can schedule and script simple tasks to be performed at extra-human, but not yet super-human, scale. But automated or generative media is following a familiar curve:

  1. Humans perform a task manually
  2. The task is outsourced to regions with less expensive labor
  3. Humans are aided by basic automation
  4. Human ability is matched or bettered by full autonomy (happening now in financial trading and medical diagnostics)

As media consumers, we’re just coming to terms with #2 and #3, as we realize that social profiles are easily manipulated, false news will be believed, and humans can be paid to argue any political opinion online. We find that the volume of disinformation is overwhelming and that many of our technical tools are useless.

I fear we won’t notice when online media production is presented as truthful but is in fact wholly imagined by machines — plausible-looking news stories spewed out by neural nets, sophisticated computer-synthesized images rendering people into scenes that never happened, keyword-stuffed fake videos — in greater volume than could ever be rigorously fact-checked.

Generative artists can do our part to slow the tide. We must anticipate misuse of our tools and consider, then, whether they are worth building. We should endeavor to work transparently — projects can inform as well as entertain. We should design the antidote as well as the poison: can we detect AI-generated media as easily as we can create it? Technology is not ethically neutral, and we should remember that today’s internet tools may be tomorrow’s engines of propaganda.
