In previous APEX of Innovation posts, we’ve explored the importance of responsible AI and the role we all play in ensuring that models don’t perpetuate biases and injustices. It’s a complex journey, but one that is critical for companies to undertake as AI and machine learning (ML) become more pervasive.

Below are four important things to keep in mind as you work toward building more responsible AI:

Allow People to Express Their Questions and Concerns

The first stop on the road to more responsible AI is acknowledging that biases appear in your data, your models, and yourself. The strongest teams recognize these uncomfortable truths and are given the space to consider how those biases affect the world around them. Companies should encourage transparent conversations and enable team members to speak openly about potentially controversial topics.

Know What to Look For

The best way to spot biases or other potential issues in your model is to pay attention and be intentional in your training. The academic community is increasingly pushing for dataset datasheets, which encourage more responsible AI by clarifying what is and is not included in a dataset. With that information, teams can confirm that the data they use fits their purpose and represents their user base.
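As a rough illustration, a datasheet can be as simple as a structured record that a team fills in and verifies before training. The field names below are hypothetical, chosen to reflect the kinds of questions a datasheet answers; they are not a formal standard:

```python
# A minimal sketch of a dataset datasheet as a plain dict, with a check
# that flags any required field left undocumented. Field names are
# illustrative, not part of any formal datasheet specification.

REQUIRED_FIELDS = [
    "intended_use",         # what tasks the dataset was built for
    "collection_method",    # how the data was gathered
    "populations_covered",  # who is (and is not) represented
    "known_gaps",           # documented blind spots or exclusions
]

def missing_fields(datasheet: dict) -> list:
    """Return the required datasheet fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not datasheet.get(f)]

example_sheet = {
    "intended_use": "sentiment analysis of product reviews",
    "collection_method": "scraped public reviews, 2018-2020",
    "populations_covered": "English-language reviewers only",
    "known_gaps": "",  # left blank, so the check below flags it
}

print(missing_fields(example_sheet))  # -> ['known_gaps']
```

A check like this won’t catch bias on its own, but it forces the team to write down, and question, what the dataset does and doesn’t contain before a model is trained on it.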

Meet People Where They Are—Not Where You Want Them to Be

Chances are your team members will have different thoughts about the chief ethical concerns surrounding AI based on their age, experiences, and background. In the same vein, some are likely to be more passionate about and better read on the topic than others. It’s the company’s responsibility to make sure every voice on the team is heard, and that the team works together to create a common language and framework for discussing the issues, key terms, and ideas related to building ethical AI.

Adapt as You Learn

It should go without saying that it’s important to stay abreast of current topics in social justice and AI. Yet it’s equally critical to embrace the unknown: building responsible AI means anticipating change, being open to continuous learning, and recognizing that problems may arise without any clear-cut answer.

For more on these considerations and other things to be mindful of as you encourage more responsible AI, check out this recent VentureBeat article.