According to PwC, in 2021, just 20 percent of companies had an AI ethics framework in place, and only 35 percent had plans to improve the governance of AI systems and processes. But as organizations look to operationalize AI, they must also pay greater attention to strategies for preventing AI from making potentially harmful decisions.

There are numerous examples of what these harmful decisions could look like, including deepfake technology providing visual “evidence” of something that never occurred. Companies could also misuse AI by buying or selling AI-based assumptions about consumers. And as we’ve written about previously at the APEX of Innovation, AI can also cause harm when it is biased against people based on race, gender, or other attributes.

As companies develop their AI ethics frameworks, numerous considerations must be factored in, including:

  • Bias
  • Explainability
  • Data handling
  • Transparency on data policies

It’s also critical that policies have provisions for understanding the technology’s impact on society, mitigating unintended consequences, and supporting further innovation toward more ethical AI. 
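
To make a consideration like bias concrete, some teams script lightweight checks against a model’s outputs. The sketch below is purely illustrative and rests on assumptions not drawn from this post: a hypothetical loan-approval model, made-up predictions, and column names such as “gender” and “approved.” It compares positive-outcome rates across groups, a simple demographic-parity-style check, and is not a substitute for a full fairness review.

```python
# Illustrative sketch only: compare positive-outcome rates across groups
# for a hypothetical model's predictions. Column names are assumptions.
import pandas as pd

def selection_rates(preds: pd.DataFrame,
                    outcome_col: str = "approved",
                    group_col: str = "gender") -> pd.Series:
    """Positive-outcome rate for each group in the predictions."""
    return preds.groupby(group_col)[outcome_col].mean()

def parity_gap(rates: pd.Series) -> float:
    """Difference between the highest and lowest group rates."""
    return float(rates.max() - rates.min())

# Made-up predictions standing in for a real model's output.
preds = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [1,   0,   1,   0,   1,   1,   1,   0],
})

rates = selection_rates(preds)
print(rates)                                  # approval rate per group
print("parity gap:", parity_gap(rates))       # large gaps warrant investigation
```

A check like this does not prove or disprove bias on its own; it flags disparities that the stakeholders responsible for the policy should then investigate.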

As companies evolve their AI ethics policies, there needs to be human oversight at every stage of the process, including the creation of the policies themselves. The stakeholders engaged in the development, sales, and deployment of AI systems should assess the risks that might arise and examine their projects through the lenses of explainability, transparency, and the other considerations outlined above.
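
One way to put the explainability lens into practice during such a review is to ask which inputs a model actually relies on. The sketch below is an illustration under assumptions of our own: a synthetic dataset and a generic scikit-learn classifier stand in for a real system, and permutation importance is just one of several explainability techniques a team might choose.

```python
# Illustrative sketch: permutation importance as a first-pass view of
# which features a model relies on. Synthetic data is used as a stand-in.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for real application data.
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt held-out accuracy?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance={score:.3f}")
```

Scores like these are only a starting point; documenting how they are produced and interpreted feeds directly into the transparency considerations above.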

In addition to designing AI projects with an ethical mindset from the outset, companies must continue building AI trust as those initiatives progress. For example, stakeholders should encourage collaboration throughout the business so that issues can be identified, discussed, and resolved in a cooperative environment.

For more on what companies can do to engender AI trust, check out this previous APEX of Innovation post. You can also read more about the current state of AI ethics in this InformationWeek article.