Recap: AI developments in 2018 and predictions for AI ethics in 2019

AI in 2018: The most important developments

AI has advanced by leaps and bounds this year, bringing us breakthroughs in medicine, art, the automotive industry and countless other fields.  In this article we'll take a look at some of the most significant contributions to AI development in 2018.

Deep fakes and AI art blur the border between reality and fiction

Leaked deep fake algorithms have made it possible for laypeople to create their own deep fake videos; NVIDIA announced a system capable of generating photorealistic images of imaginary people; China’s Xinhua News Agency has developed an AI-powered news anchor; the first AI-generated work of art to be auctioned off at Christie’s sold for a whopping $432,500 in October; and Lexus used AI to create an ad for their latest car.

Algorithms are blurring the borders between what’s real and what’s fake, raising all sorts of ethical questions about the validity of news stories and the definition of creativity.  Meanwhile,  Google’s new Duplex tool, which emulates a human voice to near-perfection, even inserting umm’s and aah’s, sparked a moral outcry from Twitter users who pointed out the inherent dangers in using AI bots so sophisticated that they can be mistaken for humans.  A Washington Post article suggested it might be time to demand that AI reveal itself as AI so humans will not be duped.

AI innovations in medicine continue at a breakneck pace

A number of medical breakthroughs were also made possible thanks to AI.  Researchers have made strides in using AI to predict Alzheimer’s disease, identify missing links across various cancer studies and predict polypharmacy side effects, among other things.  AI’s ability to analyze huge data sets and narrow down the pertinent information is a great asset in fields like medicine, where it might otherwise be years before humans can make the same connections.

On a more ethically questionable note, a Chinese scientist announced in November that he had succeeded in creating the world’s first genetically altered babies, twin girls called Lulu and Nana who will be HIV-resistant thanks to modifications he made in their genetic makeup.  Not surprisingly, this caused a massive outcry from people around the world who are concerned this will take us one step closer to designer babies.

China is bringing AI into schools

This May, the Beijing News reported that a high school in Hangzhou was experimenting with AI in the classroom, using a facial recognition-based “intelligent classroom behavior management system” to monitor students’ behavior and attendance.  While the data is purportedly collected to manage the behavior of the class as a whole rather than individual students, the ethical implications of subjecting students to constant biometric surveillance are worrying.

Another ethically dubious issue that came to light around the same time concerns the use of artificial intelligence to mark students’ essays.  The technology is reportedly comparable in accuracy to a human reader, and although it’s designed to help teachers, some have expressed concerns that it might eventually replace them.

Future of Life Institute calls for a ban on the development of lethal autonomous weapons

Following the launch of a new facility for research and development of AI-based military innovations by a leading South Korean university, the Future of Life Institute rallied together organizations worldwide to sign a pledge stating they will not develop lethal autonomous weapons.  Citing the ethical concerns that arise when machines are given the responsibility of taking a human life, the pledge called on governments to create strong regulations surrounding the development of lethal autonomous weapons.

Ethics of AI: Biased algorithms and a changing workforce

Efforts to govern the ethics of AI are scattered and full of holes, ranging from well-meaning pledges from non-governmental organizations, to vague statements released by governments, to frantic efforts by leading tech companies to assure their users their AI won’t run amok.

One major theme this year has been the issue of biased algorithms, which have been seen to unfairly reject passport applications and skew job search results based on the gender of the searcher.  A number of companies are developing fairness tools to help combat this problem, and there is a slow but steady push to involve more women in AI research and development in hopes this will help reduce bias in the algorithms’ output.
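Fairness tools typically begin with simple group metrics.  As a purely hypothetical illustration (not drawn from any specific tool mentioned above), the sketch below computes the demographic parity gap, i.e. the difference in positive-outcome rates between groups, for a set of algorithmic decisions:

```python
# Hypothetical illustration: measuring demographic parity in a set of
# algorithmic decisions.  Each record pairs a group label with a binary
# outcome (1 = approved, 0 = rejected).
def demographic_parity_gap(decisions):
    """Return the absolute gap in approval rates between groups."""
    totals, approvals = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + outcome
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy data: group A is approved 75% of the time, group B only 25%.
sample = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
print(demographic_parity_gap(sample))  # 0.5
```

A gap of zero would mean both groups receive positive outcomes at the same rate; real fairness toolkits compute this and related metrics across many protected attributes at once.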

Another big issue in the ethics of AI is the disruption to the workplace.  That AI will fundamentally alter the look of the workplace is indisputable; whether the change will be positive or negative remains to be seen.  Some say AI will replace human workers and leave us all out of a job, while more optimistic people hope to see AI help with mundane tasks so human workers have more time to focus on generating creative output.

Looking ahead: AI ethics in 2019

So, where do we currently stand in the development of AI?  Elon Musk suggested we’re not far away from the point where we should consider merging AI with the human brain.  China is in the process of developing a social credit system that would see AI infused in every facet of a citizen’s life.  And the AI-infused Internet of Things is becoming ubiquitous, found everywhere from our Fitbits to our Alexas.  Whether we realize it or not, we can no longer escape the reach of AI.

NGOs, governments and private companies will undoubtedly continue to forge guidelines surrounding the responsible use of AI; it is imperative that we develop detailed regulations as soon as possible if we are to avoid an AI catastrophe.  As individuals become accustomed to having AI play an increasingly important role in their lives, we will surely gain a deeper understanding of these ethical issues and hopefully become empowered enough to contribute to the discussion.

AI is on the brink of cracking the X factor that differentiates humans from machines, and we need to be very careful how we proceed.  2019 will be a crucial year in determining whether we want to continue with black-box AI algorithms or take a step back and work towards explainable AI.
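The contrast between black-box and explainable AI can be made concrete with a toy example.  A linear scoring model is inherently explainable because each feature's contribution to a prediction can be read off directly; deep neural networks, by contrast, offer no such breakdown.  The following sketch is purely illustrative, with made-up feature names and weights:

```python
# Illustrative sketch: a linear scoring model is "explainable" because
# each feature's contribution to the final score is directly visible.
# Feature names and weights here are invented for this example.
weights = {"income": 0.6, "debt": -0.8, "age": 0.1}

def explain(features):
    """Return each feature's contribution and the total score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return contributions, sum(contributions.values())

applicant = {"income": 1.0, "debt": 0.5, "age": 0.3}
contribs, score = explain(applicant)
# Print contributions from most to least influential.
for name, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"total score: {score:.2f}")
```

With a black-box model there is no equivalent of the `contributions` dictionary above, which is precisely what explainable-AI research tries to recover.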