• Alexander Lowe

Removing Bias from Artificial Intelligence

Whether they’re automatically completing emails, recognizing faces, or assessing criminal risk factors, AI systems can incorrectly favor certain results.

Should these AI systems be more transparent in how they formulate recommendations? Better yet, how can we mitigate biased AI in the first place?

Why AI is susceptible to bias

Most AI technologies rely on one of two approaches to learning about the world around them:

  • Trust the Data. By providing examples — often hundreds of millions to billions of data points — machine learning algorithms learn to predict future outcomes. Often, providing more training data incrementally improves results.

  • Trust the Humans. This “brute force” approach relies on experts manually writing rules and building ontologies that define how to interpret incoming data. The experts then examine outputs on test data and hand-tweak the rules to get more accurate results.
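To make the contrast concrete, here is a minimal sketch of the two approaches on a toy sentiment task. Everything here is invented for illustration — the word lists, the training examples, and the function names are not from any real system:

```python
from collections import Counter

# "Trust the Humans": experts hand-write a rule by enumerating cue words.
POSITIVE_WORDS = {"great", "love", "excellent"}

def rule_based_sentiment(text):
    """Classify by checking for expert-chosen positive words."""
    words = set(text.lower().split())
    return "positive" if words & POSITIVE_WORDS else "negative"

# "Trust the Data": learn the cue words from labeled examples instead.
def train_from_data(examples):
    """Count which words appear under each label."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def data_driven_sentiment(counts, text):
    """Pick the label whose training words overlap the input most."""
    scores = {label: sum(c[w] for w in text.lower().split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

examples = [("i love this", "positive"),
            ("great product", "positive"),
            ("terrible service", "negative"),
            ("i hate waiting", "negative")]
counts = train_from_data(examples)
```

Note that both versions inherit bias from their source of truth: the rule-based classifier reflects whatever words its authors thought to include, and the data-driven one reflects whatever skew exists in its training examples.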

An AI system trained in this way inherits information, biased or otherwise, from either its trusted data or its trusted humans. In some cases, machine learning can even amplify bias because a system trained on slightly skewed data can produce greatly skewed predictions.

Ways to combat AI bias

How can humans remove bias from AI when we carry biases ourselves?

AI practitioners, especially those using AI for natural language processing, have taken notice of biases such as gender stereotypes and begun offering solutions to promote AI fairness. For instance, one natural language training approach actively prevents the model from learning correlations with gendered words, so that it avoids inheriting gender stereotypes.
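One simplified way such an approach can work is to project the gendered component out of a word's vector, so the word ends up equidistant from male and female terms. This is a sketch in the spirit of "hard debiasing" of word embeddings; the tiny 3-dimensional vectors below are invented for illustration, not taken from a real model:

```python
import numpy as np

def neutralize(vec, gender_direction):
    """Remove the component of vec along the gender direction."""
    g = gender_direction / np.linalg.norm(gender_direction)
    return vec - np.dot(vec, g) * g

# Toy embeddings (invented for illustration).
he = np.array([1.0, 0.2, 0.1])
she = np.array([-1.0, 0.2, 0.1])
gender_direction = he - she          # axis capturing gendered usage

nurse = np.array([-0.4, 0.8, 0.3])   # skewed toward "she" in biased data
nurse_fixed = neutralize(nurse, gender_direction)

g = gender_direction / np.linalg.norm(gender_direction)
print(np.dot(nurse_fixed, g))        # ~0: gender component removed
```

After neutralization, a downstream model can no longer use the gendered component of "nurse" to make predictions, which is the correlation the training approach described above is trying to avoid.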

But another powerful way to mitigate bias is to avoid relying exclusively on trusted training data or trusted human supervisors by introducing a background knowledge base. Teams can then bootstrap new AI systems with previously created, general-domain data points. Starting deep learning models off with millions of “common sense” facts, instead of starting from nothing, can offset the bias otherwise introduced by a domain-specific training corpus.
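The bootstrapping idea can be sketched as follows: instead of initializing a model's embedding layer randomly, seed it from a general-domain knowledge source, and fall back to random vectors only for words the background knowledge doesn't cover. The dictionary of "background vectors" below is a placeholder standing in for a real knowledge base, and all names and values are invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4

# Placeholder for a general-domain knowledge base of "common sense"
# word vectors (in practice, built from millions of background facts).
background_vectors = {
    "doctor": np.array([0.1, 0.9, 0.2, 0.0]),
    "nurse":  np.array([0.1, 0.8, 0.3, 0.0]),
}

def init_embeddings(vocab, background):
    """Seed each word from background knowledge when available;
    use a small random vector only for unknown words."""
    emb = {}
    for word in vocab:
        if word in background:
            emb[word] = background[word].copy()
        else:
            emb[word] = rng.normal(scale=0.01, size=DIM)
    return emb

emb = init_embeddings(["doctor", "nurse", "newword"], background_vectors)
```

Because the seeded vectors come from broad, general-domain data rather than the narrow training corpus, the model starts from a less skewed position before any domain-specific fine-tuning begins.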

Read the complete article on AI Business.com.

About the Author: Jeff Foley, the head of marketing at Luminoso, has been working with CX, CRM, and natural language technologies since 1996.

© 2020 by Luminoso Technologies, Inc.