At the very core of every machine learning application or engine is the algorithm. Algorithms carry out the necessary calculations, data processing, and automated reasoning to produce a desired outcome. Without them, there is no machine learning, no artificial intelligence.

The problem with algorithms is that they’re created by people. And people can be biased. Even the data they select can be biased. When bias enters the algorithmic equation, whether intentionally or not, unexpected and unfortunate things can happen.

As defined on Wikipedia, “Algorithmic bias occurs when a computer system reflects the implicit values of the humans who are involved in coding, collecting, selecting, or using data to train the algorithm.”

Algorithmic bias can have a negative impact on processes and people in very real terms. It can be the reason a bank loan is denied or a credit card application is rejected. At worst, algorithmic bias can even lead to racial and gender discrimination.

Take, for example, Amazon’s recent termination of an AI-powered HR recruiting tool, uncovered by Reuters. No stranger to automation, Amazon set out back in 2014 to develop a way to “mechanize the search for top talent” by creating a machine learning tool that reviewed resumes faster and identified the best candidates.

The outcome: The recruiting engine was discriminating against women.

Much to their surprise, Amazon’s research scientists discovered that the tool gave lower scores to resumes containing the word “women’s” and even downgraded candidates who had graduated from a women’s university. The problem was that the machine learning engine Amazon developed was analyzing applicant data from the previous 10 years, reaching back to the company’s earlier days when the tech industry was more male-dominated. The model learned that male applicants had historically been the more successful candidates, the ones who landed the jobs, and accordingly scored women applicants lower. To its credit, Amazon shut down the project immediately.
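To make the failure mode concrete, here is a minimal, deliberately synthetic sketch in Python (using scikit-learn) of how a text model trained on skewed historical hiring outcomes can learn a negative weight for a word like “women’s” that merely correlates with who was hired in the past. The resumes, labels, and model below are illustrative assumptions, not Amazon’s actual data or system.

```python
# Minimal synthetic sketch (not Amazon's actual system): when historical
# hiring outcomes are skewed, a text model can learn to penalize a word
# that merely correlates with the under-hired group.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" resumes and outcomes (1 = hired). The skew is deliberate:
# in this fabricated history, resumes mentioning "women's" were rarely hired.
resumes = [
    "captain men's chess club, software engineer",
    "software engineer, men's rugby team",
    "software engineer, women's chess club captain",
    "data scientist, women's coding society",
    "data scientist, hackathon winner",
    "software engineer, open source contributor",
]
hired = [1, 1, 0, 0, 1, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the token "women": it comes out negative,
# even though the word says nothing about job performance.
women_idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][women_idx])
```

Nothing in this toy pipeline is told to penalize women; the bias arrives entirely through the historical labels the model is asked to reproduce.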

So, what are some solutions?

A recent Wired article with the headline “What Does a Fair Algorithm Actually Look Like?” discusses the notion of “algorithm transparency,” a relatively new concept that advocates that companies be open about how their algorithms actually work and make decisions. While most agree that some level of algorithmic transparency is required, there is still plenty of debate on how far it needs to go, and we’re a long way off from any standard or code of ethics being agreed upon.

McKinsey, on the other hand, offers more practical advice in a recent article titled “Controlling Machine-Learning Algorithms and Their Biases.” The piece offers three safeguards for companies that want to minimize algorithmic bias now and in the future:

  1. Know an Algorithm’s Limitations. According to McKinsey, don’t ask questions whose answers can be invalidated. Algorithms are designed for very specific purposes, and understanding how an algorithm produces its output is critical to identifying bias in its results (a minimal sketch of such a check follows this list).
  2. Use Broad Data Samples. As the Amazon example shows, relying only on historical data or a single data source can lead to algorithmic bias. Combining historical data with newer, “fresh” data can help reduce it.
  3. Know When to Use Machine Learning (and When Not to). Not every decision is best left to an algorithm. For the right tasks and business processes, big data analytics and machine learning offer “speed and convenience,” but they are not a one-size-fits-all approach. When more transparency is required around decision-making or more flexibility is needed, manually crafted decision models and human decision-making must still be part of the mix.
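One lightweight way to act on the first safeguard, sketched here under the assumption that a model’s raw scores and the applicant groups of interest are available, is to compare average scores across groups and flag large gaps for human review before the output is trusted. The audit_scores helper and its threshold below are hypothetical, not something McKinsey prescribes.

```python
# A hypothetical pre-deployment audit (a sketch, not a standard): compare a
# model's average scores across groups and flag large gaps for human review.
from statistics import mean

def audit_scores(scores, groups, threshold=0.1):
    """Return per-group average scores, the largest gap, and whether it exceeds threshold."""
    by_group = {}
    for score, group in zip(scores, groups):
        by_group.setdefault(group, []).append(score)
    averages = {g: mean(vals) for g, vals in by_group.items()}
    gap = max(averages.values()) - min(averages.values())
    return averages, gap, gap > threshold

# Made-up model outputs and group labels, purely for illustration.
scores = [0.82, 0.77, 0.54, 0.49, 0.80, 0.58]
groups = ["A", "A", "B", "B", "A", "B"]
averages, gap, needs_review = audit_scores(scores, groups)
print(averages, "gap:", round(gap, 2), "needs review:", needs_review)
```

A gap on its own does not prove bias, but it is exactly the kind of signal that should route a decision back to humans rather than straight through automation.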

As the article concludes, “The good news is that biases can be understood and managed — if we are honest about them. We cannot afford to believe in the myth of machine-perfected intelligence.”

To learn more, read the full McKinsey article here.