According to a recent CIO piece by John Edwards, “The risks of getting AI wrong are real and, unfortunately, they’re not always directly within the enterprise’s control.” Edwards came to this conclusion after reviewing a new Forrester report that stresses the need for third-party accountability in artificial intelligence (AI) tools.

Customer attrition, regulatory fines, and brand damage are just a few of the potential consequences when companies fail to address AI accountability. And as the technology matures and adoption widens, organizations must ensure that AI solutions are developed and deployed responsibly.

The Data Factor

Edwards writes, “Most enterprises partner with third parties to create and deploy AI systems because they don’t have the necessary technology and skill in house to perform these tasks on their own.” However, this arrangement can inadvertently lead to issues if the company in question doesn’t fully grasp the complexity of the AI supply chain. Chief among these complexities? Data. As Forrester analyst and report author Brandon Purcell noted, “Incorrectly labeled data or incomplete data can lead to harmful bias, compliance issues, and even safety issues in the case of autonomous vehicles and robotics.” As such, a large part of addressing AI accountability is ensuring that the data environment is designed to help, rather than hinder, the responsible use of the technology.
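To make the data risk concrete: a simple slice-level audit of training data can surface exactly the kinds of labeling gaps Purcell describes. The sketch below is purely illustrative (the column names and data are hypothetical, not from the report) and uses pandas to compare label rates and missing values across groups.

```python
import pandas as pd

# Hypothetical training set: the columns ("group", "label", "income")
# are stand-ins for illustration, not taken from the Forrester report.
df = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B"],
    "label":  [1, 0, 1, 0, 0, None],       # a missing label in group B
    "income": [52000, 61000, None, 48000, None, 45000],
})

# Positive-label rate per group: large gaps can signal labeling bias.
label_rates = df.groupby("group")["label"].mean()
print("Positive-label rate by group:\n", label_rates)

# Share of missing values per group: incomplete data concentrated in
# one group can quietly skew any model trained on it.
missing = df.drop(columns="group").isna().groupby(df["group"]).mean()
print("Missing-value share by group:\n", missing)
```

An audit like this won't prove a dataset is safe, but it makes "incorrectly labeled or incomplete data" something a team can measure rather than merely worry about.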

Buyer Beware

When reviewing third-party AI solutions, companies should never assume that a tool will be objective simply because it’s marketed as such. Performing due diligence on AI vendors early and often is a critical step. Just as it has become commonplace in the manufacturing industry for companies to document each step in the supply chain, AI vendors must be held to similar expectations of transparency. Many organizations are vetting AI solutions by creating a task force to evaluate the fallout from any potential AI slip-up. As Purcell told Edwards, “Some firms may even consider offering ‘bias bounties,’ rewarding independent entities for finding and alerting you to biases.”
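As a purely illustrative sketch of what that due diligence might look like in practice: treat the vendor’s model as a black box and check its positive-outcome rates across groups before signing off. The `disparate_impact` helper, its parameters, and the informal 0.8 threshold below are assumptions for the example, not anything the report prescribes.

```python
from typing import Callable, Sequence

def disparate_impact(predict: Callable[[dict], int],
                     cases: Sequence[dict],
                     protected: str,
                     reference_group: str) -> dict:
    """Black-box check of a (possibly third-party) model: the ratio of
    each group's positive-outcome rate to a reference group's rate.
    Ratios below roughly 0.8 are a common, informal red flag."""
    rates: dict = {}
    for case in cases:
        g = case[protected]
        hits, total = rates.get(g, (0, 0))
        rates[g] = (hits + predict(case), total + 1)
    ref_rate = rates[reference_group][0] / rates[reference_group][1]
    return {g: (h / t) / ref_rate for g, (h, t) in rates.items()}

# Usage with a stand-in for a vendor model (hypothetical):
vendor_model = lambda case: 1 if case["score"] > 600 else 0
test_cases = [
    {"group": "A", "score": 700}, {"group": "A", "score": 650},
    {"group": "B", "score": 590}, {"group": "B", "score": 620},
]
print(disparate_impact(vendor_model, test_cases, "group", "A"))
```

Running checks like this yourself, rather than trusting the marketing copy, is the same instinct behind bias bounties: actively hunting for disparities instead of waiting for them to surface.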

Establish a Strong Partnership

The Forrester report stresses that organizations embarking on an AI initiative should select partners that share their vision for responsible use. According to Edwards, “Most large AI technology vendors…have already released ethical AI frameworks and principles.” As part of the selection process, companies should thoroughly review these materials and determine how well they align with the organization’s own values.

Accountability Is a Team Sport

It’s also important that companies look internally once AI solutions have been deployed to ensure that responsible usage remains a top priority. While some companies are hiring Chief Ethics Officers to oversee this area, in most organizations the responsibility is shared across multiple roles. Purcell advises “…data scientists and developers to collaborate with internal governance, risk, and compliance colleagues to help ensure AI accountability.”

For more of Purcell’s thoughts on what companies must do to prioritize AI accountability, you can read Edwards’ article in its entirety here. To explore the ethics of AI, check out this recent APEX of Innovation post.