Reframing the question of “good”: AI ethics as a human rights problem

A messy start

We all know the risks of AI – for a quick overview, the recently published report “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” lays them out well.  Its authors acknowledge the benefits of AI, but stress the importance of recognizing its dual-use applications and working to combat malicious uses of this emerging technology.

The seemingly obvious approach to ensuring ethical AI is to draft laws governing its use.  Last year, for example, Germany became the first nation in the world to release ethical guidelines for self-driving cars.  The EU, the US and various other nations are also working to create a legal basis for the ethical development and use of AI.  However, current efforts are disjointed, vaguely written and come from a variety of levels, with different countries progressing at different paces.  The result is a patchwork of AI ethics regulation, and it seems safe to say that comprehensive AI legislation is still a few years away.

Towards a common approach to AI ethical guidelines

Public policy and accords such as the EU declaration signed in April 2018 aim to fill the gaps, as do the various non-profit AI ethics boards that have sprung up in recent years.  Guidelines within private companies, such as Google’s “AI Principles”, offer another way to guide the ethical development of AI.  One of the main challenges in regulating AI is that any set of rules must remain relevant across contexts, from the workplace to the personal computer to the motorway.

Some argue that the problem lies not in this disparate approach to regulation, but in the fundamental way we frame the issue.  A panel of attorneys at the annual Association of Corporate Counsel meeting recently proposed that, in the absence of concrete guidance, the best way for lawyers to ensure ethical practices in AI may be to adhere to the Rules of Professional Conduct.  They cited four duties that can also apply to AI: competence, communication, confidentiality and supervision.  This is an interesting route because, while it pertains specifically to AI in legal practice, the underlying principles apply to AI in any context.

Reframing the question: AI ethics as a human rights issue

SAP, which recently became the first company in Europe to establish an AI ethics board, has stated that it views ethics in AI as a question of how best to protect human rights, especially as defined by the United Nations, which stresses the need to protect and respect human rights and fundamental freedoms.  Mark Latonero echoes this point of view in his 2018 study, “Governing Artificial Intelligence: Upholding Human Rights & Dignity,” pointing out that the already-established set of basic human rights is a useful, global tool for identifying and protecting our fundamental rights and freedoms.

It may seem obvious which rights we need to protect, but amid the confusion surrounding AI ethics, we have already seen far too many examples of companies trespassing on these rights in the absence of clear guidelines.  A human rights-based framework would not only provide a solid foundation from which to make ethical AI decisions; it would also provide a robust lens through which to evaluate those decisions.  Latonero suggests that companies could perform Human Rights Impact Assessments at regular intervals, but underlines that governments must also uphold their responsibility to protect human rights.  Treating AI ethics as a human rights issue makes it easier to envision a framework relevant to both public and private practice, whatever the industry.

Framing the issue in terms of “human rights” rather than “ethics,” which people tend to regard as subjective, may drive the point home more effectively.  A company that dismisses the confidentiality of user data as gray ethical ground, not worth unduly worrying about, might be more readily shamed into taking adequate preventative steps if told it was trespassing on a person’s basic human right to privacy.

Could the answer to creating a sound ethical code for AI research and development lie in something as simple as piggybacking off existing human rights norms?