MIT’s “Moral Machine” helps users reflect on the ethical difficulties of AI decision-making

Moral Machine lays out different scenarios in which users can interact with AI

Think you have what it takes to play God?  MIT has a “Moral Machine” that allows you to submit your decisions on potential ethical dilemmas, especially those faced by AI technology.  From self-driving cars to deep fake media, the Moral Machine makes it easy to understand the double-edged sword that is AI.

Self-driving cars: Whom to kill?

Based on the classic trolley problem, the self-driving car scenario imagines that you are the brainpower behind a self-driving vehicle with sudden brake failure.  You are shown 13 scenarios in which the car can choose to swerve or keep going straight.  In some cases, for example, going straight will cause the car to hit a concrete barrier, killing the occupants of the vehicle, while swerving will kill pedestrians in the other lane but spare the occupants.

The scenarios include information about the demographics of the potential victims, highlighting the difficulties of training AI to make informed decisions.  Is it more ethical to kill two adults or one child?  Is it unethical to deliberately sacrifice homeless people rather than people of higher social standing, or overweight people rather than athletes?  Can we ever expect AI to make the right decision every time?  Indeed, can we as humans boast of making the right decision every time?
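To make the training problem concrete, here is a minimal, hypothetical sketch (not MIT’s actual code) of how one of these dilemmas might be encoded as data for a decision policy.  The `Character` and `Dilemma` structures and the naive “minimize deaths” rule are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Character:
    """One potential victim in a dilemma (fields are illustrative)."""
    age: str     # e.g. "child", "adult", "elderly"
    role: str    # e.g. "pedestrian", "passenger"

@dataclass
class Dilemma:
    """A single swerve-or-stay scenario."""
    stay_victims: list[Character]    # killed if the car goes straight
    swerve_victims: list[Character]  # killed if the car swerves

def naive_policy(d: Dilemma) -> str:
    """A toy rule that simply minimizes the body count.

    A real system would also have to weigh the demographic
    attributes above, which is exactly where the ethical
    difficulty begins.
    """
    return "stay" if len(d.stay_victims) <= len(d.swerve_victims) else "swerve"

# One scenario: two adult passengers vs. one child pedestrian.
dilemma = Dilemma(
    stay_victims=[Character("adult", "passenger"), Character("adult", "passenger")],
    swerve_victims=[Character("child", "pedestrian")],
)
print(naive_policy(dilemma))  # -> "swerve": fewer deaths, but is it right?
```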

DeepMoji: Investigating AI in social media

DeepMoji asks users to sign in with their Twitter accounts and report how they felt while writing their last three tweets.  The data is sent to scientists at MIT as part of a research study.  Presumably, the researchers will use these self-reported labels to teach AI to recognize emotion in social media posts.
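As a rough illustration of what such a study might collect, the sketch below pairs tweet text with self-reported emotion labels, the raw material of a supervised classifier.  The field names, example tweets, and label set are assumptions, not DeepMoji’s actual schema:

```python
from collections import Counter

# Hypothetical records of the kind the study might collect:
# each tweet paired with the emotion its author reported feeling.
labeled_tweets = [
    {"text": "Finally finished my thesis!!!", "emotion": "joy"},
    {"text": "Train cancelled again. Great.", "emotion": "anger"},  # note the sarcasm
    {"text": "Missing everyone back home today.", "emotion": "sadness"},
]

# Supervised learning needs exactly this shape: (input, label) pairs.
training_pairs = [(t["text"], t["emotion"]) for t in labeled_tweets]
print(Counter(label for _, label in training_pairs))
```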

My Goodness: Donating to charity

My Goodness is not so much focused on AI as on understanding people’s decision-making when giving to charity (there’s a nice link at the end where you can donate to reputable charities).  However, completing the survey and paying attention to the explanation of the results afterwards can yield some interesting food for thought.

The results are broken down into factors such as recipient identity, charity name, effectiveness of the giving, location bias, and most- and least-saved character, and participants can see how their decisions compared with those of others.  In many cases, participants may never have consciously chosen to save a young man over an old man, yet their decisions may reflect this trend “by accident” if other factors consistently led them to choose that way.  This highlights a potential problem for AI, which must base its decisions on the raw data it is fed without necessarily having the cognitive capacity to understand the reasoning behind them.
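The point is easy to demonstrate: a model fit to raw choice data picks up whatever correlations are present, intended or not.  The sketch below, using made-up choices, tallies an implicit age preference from decisions that may never have been consciously made on the basis of age:

```python
from collections import Counter

# Hypothetical choice log: each entry records who was saved and who
# was not.  Age was never the participant's stated criterion.
choices = [
    {"saved": "young man",   "not_saved": "old man"},
    {"saved": "young woman", "not_saved": "old man"},
    {"saved": "young man",   "not_saved": "old woman"},
]

# A model trained on this log sees only the raw outcomes...
saved_ages = Counter("young" if "young" in c["saved"] else "old" for c in choices)
print(saved_ages)  # Counter({'young': 3}) -- a 100% "preference" for the young

# ...and would infer an age bias even if every choice was actually
# driven by some other factor that merely correlated with age.
```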

Deep Angel: Investigating the potential of deep fake media

The MIT team, led by Matt Groh, also recently produced Deep Angel: AI technology that allows users to erase objects from photographs.  Users can then upload their altered images to the Deep Angel database for others to see, or browse the gallery to see if they can detect “fake” photos.  It’s not perfect yet, and in many instances it’s quite easy to see where the image has been manipulated.  But the project hopes to teach people how to identify when an image has been altered and, perhaps more importantly, to cultivate a healthy distrust of images in this age of digital manipulation.
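Deep Angel’s own model isn’t reproduced here, but classical image inpainting gives a feel for how object removal works: mask the region you want gone and let an algorithm fill it in from the surrounding pixels.  A minimal sketch using OpenCV’s built-in inpainting, with placeholder file names and a hypothetical mask region:

```python
import cv2
import numpy as np

# Load a photo and build a binary mask marking the object to erase
# ("photo.jpg" and the mask coordinates are placeholder inputs).
image = cv2.imread("photo.jpg")
mask = np.zeros(image.shape[:2], dtype=np.uint8)
mask[100:200, 150:300] = 255  # hypothetical bounding box of the object

# Fill the masked region from surrounding pixels (Telea's method).
# Deep-learning approaches like Deep Angel's produce far more
# convincing fills, which is what makes detection hard.
result = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("erased.jpg", result)
```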

As MIT notes, it is only by interacting with AI technology that we can understand its possibilities and limitations when it comes to manipulating the media.  In another article, Groh expands on the aesthetics of absence in AI and the media, and the video below gives a hint of what we can expect from deep fake videos in the very near future.