EU High-Level Expert Group calls for input on AI Ethics Guidelines

Draft AI Ethics Guidelines up for review by citizens

The European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) recently published a draft of its AI Ethics Guidelines. The AI HLEG is accepting feedback on the draft from European AI Alliance stakeholders until January 18, 2019. The final version will be presented to the European Commission in March 2019; cooperation will subsequently be opened to non-EU countries that share the same values, and stakeholders will be invited to voluntarily endorse and sign up to the Guidelines.

The AI HLEG is an independent panel of 52 high-level experts from academia, business and civil society, created in 2018 to ensure that AI is developed in a way that respects the fundamental rights, regulations and core principles of the European Union. In contrast to the vaguer AI ethics documents other groups have released in the past, the AI HLEG’s Ethics Guidelines are meant to provide concrete, actionable advice on the ethical use of AI. To that end, they describe both non-technical methods (such as regulation) and technical methods (such as testing exercises or built-in constraints in an AI system’s architecture) for achieving it. The guidelines are not meant to replace existing policies or regulations, nor to prevent new ones from being drafted in the future.
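
The guidelines themselves stay at the level of principles, but to make the idea of a “testing exercise” concrete, here is a minimal, purely illustrative Python sketch of a pre-deployment fairness check. The toy model, sample data and four-fifths threshold are assumptions made for this example; the AI HLEG does not prescribe any particular implementation.

```python
# Illustrative sketch of one kind of "testing exercise": a pre-deployment
# check that a model's approval rate does not differ too much between
# demographic groups (a demographic-parity test). Everything below is a
# toy example, not anything mandated by the AI HLEG guidelines.

from typing import Callable, Sequence

def demographic_parity_ratio(
    model: Callable[[Sequence[float]], int],
    samples: Sequence[Sequence[float]],
    groups: Sequence[str],
) -> float:
    """Return the ratio of the lowest to the highest per-group approval rate."""
    rates = {}
    for group in set(groups):
        members = [x for x, g in zip(samples, groups) if g == group]
        rates[group] = sum(model(x) for x in members) / len(members)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Stand-in for a real classifier: "approve" if the first feature > 0.5.
    toy_model = lambda x: int(x[0] > 0.5)
    samples = [[0.2], [0.7], [0.9], [0.4], [0.8], [0.6]]
    groups = ["a", "a", "a", "b", "b", "b"]
    ratio = demographic_parity_ratio(toy_model, samples, groups)
    # Block deployment if the ratio falls below the common "four-fifths"
    # rule of thumb used in fairness auditing (an assumed threshold here).
    assert ratio >= 0.8, f"Fairness check failed (ratio={ratio:.2f})"
    print(f"Fairness check passed (ratio={ratio:.2f})")
```

A built-in architectural constraint would work similarly, except the check would run inside the system at decision time rather than as an offline audit.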

AI Ethics Guidelines urge focus on “trustworthy AI”

The Ethics Guidelines acknowledge the contributions of AI in fields like education, cybersecurity and healthcare, as well as the important role AI can play in addressing challenges such as global health and wellbeing, climate change and the other United Nations Sustainable Development Goals. Under the proposed seven-year EU budget for 2021-2027, €7 billion would be allotted to AI development.

On the flip side, the Ethics Guidelines warn of the risks inherent in AI if it is not properly managed. The panel recommends a human-centric approach focused on trustworthy AI, which it defines as AI that has an ethical purpose – i.e. it follows regulations while respecting fundamental rights, core principles and values – and is technically robust and reliable enough to avoid causing unintentional harm. The document places particular emphasis on transparency and explicability, naming the latter a precondition for obtaining informed consent from people interacting with AI technology. Other areas covered include user privacy, human autonomy, algorithmic bias and the potential weaponization of AI.

Europe establishing itself as the gold standard for big-data ethics

Europe has taken major steps in recent years to establish itself as a moral compass in the realm of new technology. The implementation of the General Data Protection Regulation (GDPR) in May 2018 set a worldwide standard for data privacy regulation, and the finalized version of the AI Ethics Guidelines will likely do the same for AI.

To anyone worried that the guidelines might stifle AI innovation in Europe, the panel is quick to respond that it intends the opposite effect: by stimulating the creation of AI that users can trust, the document should speed up adoption of the technology.

Ultimately, the goal is to use AI as part of the collective thrust towards increased individual and societal well-being.