As companies worldwide adopt artificial intelligence (AI), they face a whole host of new considerations, from helping employees upskill into new roles to questions of AI ethics and accountability. Below we look at AI regulations, which are beginning to receive significant attention from government bodies across the globe, including U.S. financial regulators, the U.S. Federal Trade Commission, and the European Commission.

A recent Harvard Business Review article unpacks some of the most recent developments related to AI regulations and what business leaders can expect next, predicting that “New laws will soon shape how companies use artificial intelligence.” 

Following a slew of new directives from financial regulators, the U.S. Federal Trade Commission (FTC) recently issued new guidelines concerning “truth, fairness, and equity” in AI, which the article describes as “uncharacteristically bold.” The European Union has followed suit, proposing fines of up to six percent of a company’s annual revenues for non-compliance, higher than the fines imposed for GDPR violations.

So, what exactly can you do to make sure your AI initiatives stay on track?

According to the article, the answer is not that simple. Most companies striving to improve and remain competitive already find the move to AI challenging enough without also having to account for current and future government regulations. To help you navigate these uncharted waters, the Harvard Business Review offers three “concrete actions” your company can take:

  1. Conduct Risk Assessments

Understanding what the article refers to as “high-risk” algorithms is quickly becoming a requirement for using AI. For example, when companies process large amounts of personal data as part of AI initiatives, they need to identify potential unintended outcomes and document both the potential problems and how they will be resolved.

  2. Accountability and Independence

According to the HBR, companies can minimize risks and increase AI accountability by testing systems for risks and involving stakeholders with differing incentives for success. That means going beyond data scientists to include lawyers as well as other technical and non-technical roles.

  3. Continuous Review of AI Systems

Risk management for AI is a continuous process: because AI systems are “brittle” and risks can grow over time, companies must keep reviewing systems long after the initial impact assessments are completed. Since AI risks can never be fully mitigated, comprehensive and ongoing system reviews are essential.

To learn more about incoming AI regulations, read the complete HBR article.