Fairness tools and explainable AI: Companies start to invest in the ethics of AI


IBM and Accenture at the forefront of research into algorithm fairness

The neural networks that power modern AI are notorious “black boxes”: so opaque that it is hard to tell which factors went into a given decision.  As accountability and explainability become increasingly important in this digital age, a company’s reputation stands to suffer enormously if its algorithms are shown to be biased.  IBM and Accenture have been developing tools to check for and correct bias in their business users’ algorithms.

IBM’s Trust and Transparency capabilities make it easy for businesses to understand AI decisions

In response to the growing legal requirements for transparency in AI-saturated fields like call centers, tax filing, insurance and data privacy compliance, IBM has developed a set of Trust and Transparency capabilities for the IBM Cloud.  The bias-checking algorithms give IBM’s business users a better real-time understanding of the AI’s decision-making process, and provide recommendations for reducing bias that may be detrimental to their organization.
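IBM has not published the internals of these checks, but a minimal sketch of the kind of test such a dashboard could run in real time is the disparate-impact ratio below; the column names and data are purely illustrative:

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     privileged, unprivileged) -> float:
    """Ratio of favorable-outcome rates, unprivileged / privileged.
    A value near 1.0 suggests parity; below ~0.8 is a common red flag
    (the "four-fifths rule")."""
    rate_unpriv = df.loc[df[group_col] == unprivileged, outcome_col].mean()
    rate_priv = df.loc[df[group_col] == privileged, outcome_col].mean()
    return rate_unpriv / rate_priv

# Hypothetical log of decisions from a deployed model
decisions = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "M", "F"],
    "approved": [0,   1,   1,   1,   0,   1,   0,   1],
})

print(disparate_impact(decisions, "gender", "approved",
                       privileged="M", unprivileged="F"))  # 0.5 / 0.75 ≈ 0.67
```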

As well as detecting bias, the system provides corrective data for retraining the model to mitigate future bias.  Thanks to the simple interface, businesses no longer have to ask data scientists to wade through reams of data to explain an algorithm’s output.  The dashboard shows which factors went into a decision and the confidence level of each decision the AI makes, and lets users compare how the AI performs against a human.
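The article does not say what form the corrective retraining data takes; one standard technique for this step is reweighing (Kamiran and Calders), sketched below with hypothetical column names, which reweights training examples so that the protected group and the outcome look statistically independent:

```python
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Weight each row by P(group) * P(label) / P(group, label), so
    that every (group, label) combination contributes to training as
    if group and label were independent."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    return df.apply(
        lambda r: p_group[r[group_col]] * p_label[r[label_col]]
                  / p_joint[(r[group_col], r[label_col])],
        axis=1,
    )

# The weights plug into most training APIs, e.g.:
# model.fit(X, y, sample_weight=reweighing_weights(df, "gender", "approved"))
```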

Accenture’s Fairness Tool uses predictive parity to evaluate algorithm fairness

In a similar vein, Accenture recently developed a Fairness Tool that gives algorithms a predictive parity rating, a measure of fairness that checks whether the algorithm’s decisions remain equally accurate across gender, race and other demographics.  This was one of the methods discussed in DeepMind’s study, titled Path-Specific Counterfactual Fairness, in which researchers also proposed using counterfactual fairness to judge the fairness of an algorithm’s decisions.
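Predictive parity itself is simple to compute: within each demographic group, take the cases the model flagged as positive and measure how often it was right.  A minimal sketch on made-up data:

```python
import pandas as pd

def predictive_parity(df: pd.DataFrame, group_col: str,
                      pred_col: str, truth_col: str) -> pd.Series:
    """Positive predictive value per group: of the cases the model
    flagged positive, how many were actually positive?  Predictive
    parity holds when these rates are (roughly) equal across groups."""
    flagged = df[df[pred_col] == 1]
    return flagged.groupby(group_col)[truth_col].mean()

# Made-up predictions with ground-truth outcomes
scores = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "predicted": [1,   1,   0,   1,   1,   1],
    "actual":    [1,   0,   0,   1,   1,   0],
})

print(predictive_parity(scores, "group", "predicted", "actual"))
# A: 0.50, B: 0.67, a gap that a parity rating would penalize
```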

Accenture’s tool also examines algorithms to see how certain data fields correlate with others in ways that might compound bias, such as race and postal code.  Pinpointing these interdependencies can help data scientists reprogram algorithms to be less biased.
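Accenture has not detailed how the tool measures these interdependencies, but one common way to flag such proxy fields is Cramér’s V, which scores the association between two categorical columns; the sketch below uses the article’s own race and postal-code example:

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(a: pd.Series, b: pd.Series) -> float:
    """Cramér's V between two categorical columns: 0 means independent,
    1 means perfectly associated.  A high score against a protected
    attribute marks the other field as a likely proxy for it."""
    table = pd.crosstab(a, b)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    return float(np.sqrt(chi2 / (n * (min(table.shape) - 1))))

# e.g. cramers_v(df["race"], df["postal_code"]) close to 1.0 would mean
# postal code effectively encodes race and can smuggle the bias back in
# even after race itself is dropped from the model.
```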

AI ethics was a central focus in Accenture’s 2018 Tech Trends Report, which urged businesses to “raise” their AIs to be responsible members of society in light of AI’s increasing importance to our lives.  In an interview with Bloomberg, Rumman Chowdhury, who leads Responsible AI at Accenture, said, “Our clients are telling us they are not equipped to think about the economic, social and political outcomes of their algorithms and are coming to us for help with checks and balances.”

She went on to say that it is unrealistic to expect data scientists to develop an algorithm that is 100% fair, especially since enforcing fairness tends to reduce an algorithm’s accuracy, but that Accenture is working toward a solution that makes its algorithms as fair and as effective as possible.  In the meantime, clients can at least see clearly which factors go into an algorithm’s choices, and decide for themselves whether to keep using an unfair algorithm when correcting it would mean a decline in accuracy.

Algorithms must be able to correct for bias on the go

The focus is on finding quick solutions: algorithms are a fast-evolving tool, and business users want fixes they can implement immediately.  As users become more familiar with the kinds of bias that are inadvertently programmed into algorithms, the hope is that future bias can be avoided and responsible AI developed.  Microsoft and Facebook have also been developing tools to catch bias in their algorithms, and other companies will surely follow suit.