Examples of algorithmic bias—when a computer system reflects the implicit values of the humans who built it—can pop up at virtually any company carrying out data analytics initiatives. While often unintentional and sometimes inconsequential, the ramifications of algorithmic bias are very real for its victims: rejected loan applications, gender discrimination on the job, and racial inequality are just a few examples. To help companies address this, we’ve offered advice on managing algorithmic bias, leading your company to responsible AI, and navigating the slippery slope of tech responsibility here on the APEX of Innovation.

Below, we take an unconventional view of how algorithms can actually help reduce bias and offer examples of organizations and companies that are doing it. A recent Harvard Business Review article, titled “Want Less-Biased Decisions? Use Algorithms,” looks at algorithmic bias by comparing it to the bias of humans. Specifically, the researchers at HBR looked at examples where algorithmic decision making delivered consistently better results than humans—with less bias. These examples included:

  • Mortgage underwriting: According to the HBR article, a 2002 study of underwriting algorithms in the mortgage industry found that automated systems predicted defaults more accurately than human underwriters, resulting in higher borrower approval rates and benefiting typically underserved home buyers.
  • Job candidate screening: A study from Columbia University looked at a job-screening algorithm and found that it favored non-traditional applicants more than human screeners did. According to the article, “Compared with the humans, the algorithm exhibited significantly less bias against candidates that were underrepresented at the firm.”
  • Choosing company directors: According to the article, a team of finance professors developed an algorithm to select the “best” board members for companies. The team found that firms using the algorithm to select board members performed better than those that did not, and the algorithm showed less of a tendency to choose male candidates who were already serving on multiple boards.

So, why do the above examples go against the conventional wisdom that all algorithms produce some level of bias in their results? The researchers at HBR conclude that while algorithms are indeed biased, they are typically much less biased than the humans they are replacing.

To learn more, check out the complete HBR article here.
