What’s the current situation of ethics regulations in AI?
The obvious challenge in AI ethics governance is the global nature of the technology and the apparent impossibility of addressing its issues in a coherent, comprehensive manner. The current state of AI ethics regulation is messy and disparate, leaving plenty of room for gaps and lags.
Federal policies addressing the ethics of AI simultaneously address the need for a given nation-state to outpace other nation-states in AI research and development, as a matter of national security and economic interest. It’s easy to see that countries like China have a clear advantage in AI development thanks to the sheer amount of user data they have been able to collect, in part because of their laxer regulations. This puts ethics in inherent tension with a nation-state’s competitive drive.
National policies for AI ethics in North America, Europe and Asia
The UK, France, India, China and the EU have all updated their national policies and/or implemented advisory boards in recent years to include a nod to ethics, especially the importance of privacy. The EU also signed a Declaration of Cooperation on Artificial Intelligence in April 2018, whereby 24 member states will work together to address the opportunities and challenges of AI. In a fast-changing world where security breaches and biases are cropping up faster than they can be foreseen, it’s more important than ever to find a way to work together across borders to preserve ethical guidelines in AI development.
The recently introduced AI in Government Act in the US aims to provide a space for the government to work together with industry and academia to promote responsible AI development. Meanwhile, the National Defense Authorization Act (NDAA) calls for the government to develop ethical policies around AI, especially as they relate to national security and defense. While this is a laudable effort, the exact reach of this mandate is left fairly ambiguous, and some say the current setup in the US lacks the scientific expertise to adequately regulate ethical issues in AI.
While certain countries are signing laws left and right in an effort to rein in the privacy issues of AI and the IoT, many concerned voices remind us that other countries are not so scrupulous about regulating data privacy when it comes to processes like facial recognition technology. Since technology and data have a tendency to bleed across borders, it’s crucial to find a way to work together as we continue to develop technology that poses an increasingly real risk to individual privacy and independence.
Alternatives: Private sector self-regulation and non-profit ethics boards
Another option for AI ethical regulation would be self-regulation by the private sector, such as the general AI principles adopted by Google or Microsoft. Like the regulations laid down by nation-states, these AI principles tend to be vague at best, with worrying loopholes. A slightly more promising path is that of non-profit research organizations like OpenAI, the AI Ethics Initiative or the Partnership on AI, where leading thinkers and large corporations work together to develop AI best practices. The problem with these platforms, however, is that they focus more on research and dialogue and lack direct involvement with AI regulation.
The future is here: How do we ensure responsible AI development?
While policymakers and non-profit ethics boards continue to meet and draft guidelines for responsible AI development, AI development itself continues at a frantic pace all over the world, heedless of the ever-evolving guidelines. A simple way to steer AI development in the right direction is by putting money behind the projects we believe to be ethical, safe and helpful. After all, if we’ve learned anything from Facebook’s repeated success in avoiding crippling legal sanctions despite its cavalier carelessness with user data, it’s that money talks louder than almost anything else. Put your money to work where it counts!