If you’re like most organizations, auditing artificial intelligence (AI) and big data while simultaneously addressing security and governance requirements is still a work in progress. But, according to a recent TechRepublic article, there may be practices already embodied in your IT policies and procedures that you can adapt for both AI and big data. The piece offers nine questions organizations can use to self-audit these technologies, which we explore below in more detail.

1. Do you know where your data is coming from?

It’s critical that you evaluate any data purchased from outside vendors for trustworthiness and quality before it is used in AI and analytics. In fact, TechRepublic urges that vetting of third-party data be included in every RFP.

2. Have you addressed data privacy?

It’s important not only to have privacy agreements with customers but also to consider how those agreements hold up when data is shared with external business partners that may not maintain the same privacy standards.

3. Do you have lockdown procedures?

The Internet of Things (IoT) and edge computing are bringing volumes of big data into enterprise systems. Given their mobile and distributed nature, these devices are easily lost, misplaced, or compromised. IT must be aware of this vulnerability and have a plan to lock devices down as soon as they are reported missing.
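As an illustration of that lockdown workflow, the sketch below uses a hypothetical in-memory device registry that revokes a device's access the moment it is reported missing; a real deployment would do this through your MDM or identity platform, and all names here are illustrative.

```python
# Illustrative lockdown sketch: a registry that denies access to any
# device reported missing. Stand-in for a real MDM/IAM workflow.

class DeviceRegistry:
    def __init__(self):
        self.devices = {}  # device_id -> status ("active" or "locked")

    def register(self, device_id):
        self.devices[device_id] = "active"

    def report_missing(self, device_id):
        # Lock down: revoke the device's access immediately.
        self.devices[device_id] = "locked"

    def is_allowed(self, device_id):
        # Unknown or locked devices are denied by default.
        return self.devices.get(device_id) == "active"

registry = DeviceRegistry()
registry.register("sensor-042")
registry.report_missing("sensor-042")
print(registry.is_allowed("sensor-042"))  # False: locked device is rejected
```

Note the deny-by-default check: a device that was never registered is treated the same as a locked one.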

4. Is all IT aligned with your security settings?

It’s not uncommon for these edge and IoT devices to ship with default security settings that don’t align with corporate standards, so each device’s configuration should be audited and brought into compliance before it connects to the network.
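One way to make that audit concrete is to diff each device's settings against a corporate baseline and flag any insecure factory defaults. The setting names and baseline values below are illustrative assumptions, not a real standard.

```python
# Minimal config-audit sketch: report settings that deviate from the
# corporate security baseline. Keys and values are illustrative.

CORPORATE_BASELINE = {
    "telnet_enabled": False,    # legacy remote access must be off
    "default_password": False,  # factory credentials must be changed
    "tls_min": "1.2",           # minimum TLS version
}

def audit_device(settings):
    """Return the settings that don't match the corporate baseline."""
    return {key: settings.get(key)
            for key, required in CORPORATE_BASELINE.items()
            if settings.get(key) != required}

factory_defaults = {"telnet_enabled": True, "default_password": True, "tls_min": "1.2"}
print(audit_device(factory_defaults))  # {'telnet_enabled': True, 'default_password': True}
```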

5. How clean is your data?

Another important auditing consideration is ensuring that an appropriate level of data cleaning (discarding bad records, normalizing values, and using ETL tools) is in place, so that all data entering your analytics and AI systems is as clean and accurate as possible.
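The discard-and-normalize steps above can be sketched in a few lines. This is a minimal illustration, assuming incoming records are dicts with hypothetical "name" and "revenue" fields; a production pipeline would use a proper ETL tool.

```python
# Minimal data-cleaning sketch: discard incomplete rows, normalize the rest.

def clean_records(records):
    cleaned = []
    for row in records:
        # Discard: drop rows missing required fields.
        if not row.get("name") or row.get("revenue") is None:
            continue
        # Normalize: trim and case-fold text, coerce numbers to float.
        cleaned.append({
            "name": row["name"].strip().lower(),
            "revenue": float(row["revenue"]),
        })
    return cleaned

raw = [
    {"name": "  Acme Corp ", "revenue": "1200.50"},
    {"name": "", "revenue": "900"},       # discarded: empty name
    {"name": "Globex", "revenue": None},  # discarded: missing value
]
print(clean_records(raw))  # [{'name': 'acme corp', 'revenue': 1200.5}]
```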

6. How accurate is your AI?

The algorithms and the data used in AI systems change continuously, so assumptions that hold today may not hold tomorrow. There is also always a chance that AI is incorporating biases that go undetected. To address these concerns, monitoring and revising AI algorithms, queries, and data should be a continuous process.
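One simple form of continuous monitoring is to track a model's rolling accuracy on labeled samples and alert when it drops below the accuracy measured at deployment. The baseline, tolerance, and window size below are illustrative assumptions.

```python
# Hedged sketch of drift monitoring: compare rolling accuracy against a
# deployment-time baseline and flag when it degrades past a tolerance.

from collections import deque

class AccuracyMonitor:
    def __init__(self, baseline=0.90, tolerance=0.05, window=100):
        self.baseline = baseline             # accuracy at deployment time
        self.tolerance = tolerance           # allowed drop before alerting
        self.results = deque(maxlen=window)  # rolling correctness flags

    def record(self, predicted, actual):
        self.results.append(predicted == actual)

    def drifted(self):
        if not self.results:
            return False
        accuracy = sum(self.results) / len(self.results)
        return accuracy < self.baseline - self.tolerance

monitor = AccuracyMonitor()
samples = [("spam", "spam"), ("ham", "spam"), ("ham", "ham"), ("spam", "ham")]
for predicted, actual in samples:
    monitor.record(predicted, actual)
print(monitor.drifted())  # True: 0.5 rolling accuracy vs 0.90 baseline
```

The same pattern extends to other signals worth watching continuously, such as shifts in input distributions or in outcomes across demographic groups, which is where undetected bias tends to show up.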

7. Who is authorized to touch your big data and AI?

To ensure that only authorized users are accessing big data repositories and AI and analytics systems, these systems should all be monitored around the clock.
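At its simplest, that monitoring means checking access logs against an allowlist of authorized users. The log format and user names below are assumptions for illustration only.

```python
# Minimal access-audit sketch: flag log entries from users who are not
# on the allowlist of authorized accounts.

AUTHORIZED = {"analyst1", "datasci2"}

def unauthorized_access(log_entries):
    """Return entries whose user is not authorized."""
    return [entry for entry in log_entries if entry["user"] not in AUTHORIZED]

log = [
    {"user": "analyst1", "resource": "sales_lake", "ts": "2024-01-05T09:12"},
    {"user": "intern9",  "resource": "sales_lake", "ts": "2024-01-05T09:40"},
]
for entry in unauthorized_access(log):
    print(f"ALERT: {entry['user']} accessed {entry['resource']} at {entry['ts']}")
```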

8. Is your AI fulfilling its mission?

AI systems should be assessed at least annually, and ideally more often, to confirm that they are addressing business needs. If they fail to deliver, they should be revised or discarded.

9. Can you fail over if AI fails?

If AI is embedded into business processes, it’s vital that your disaster recovery plan address what happens if these systems become inoperable.
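One common failover pattern is to wrap the AI call so that, if it errors out, the process falls back to a simple rule-based decision rather than halting. Both scorers below are hypothetical stand-ins; the point is the try/except structure.

```python
# Hedged failover sketch: if the AI scoring call fails, fall back to a
# conservative rule so the business process keeps running.

def ai_score(order):
    # Stand-in for a call to a model endpoint; here it simulates an outage.
    raise ConnectionError("model endpoint unreachable")

def rule_based_score(order):
    # Conservative fallback: send high-value orders to manual review.
    return "review" if order["amount"] > 1000 else "approve"

def score_with_failover(order):
    try:
        return ai_score(order)
    except Exception:
        return rule_based_score(order)

print(score_with_failover({"amount": 250}))   # approve (via fallback)
print(score_with_failover({"amount": 5000}))  # review (via fallback)
```

A disaster recovery plan should also specify who is alerted when the fallback path activates and how long the business can operate on it.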

For more on the above considerations and how they can help strengthen your AI deployments, head over to TechRepublic for the full article.