A recent TechCrunch article resurfaced a 2017 joint study from Google and Stanford on a critical challenge facing the artificial intelligence (AI) industry: what happens when bots learn to circumvent the rules?

In the study, the research team was working with a neural network to improve the efficiency of turning satellite imagery into Google maps. The AI agent was instructed to transform the aerial images into street maps, and it performed incredibly well at first glance. It was only when the team investigated further that they discovered the agent was effectively cheating at the appointed task.

The TechCrunch piece outlines how the discovery unfolded in detail. And while the situation certainly underscores that these projects can’t be judged at face value, the author of the piece, Devin Coldewey, doesn’t believe there was anything inherently mischievous about the Google/Stanford bot. Coldewey writes, “One could easily take this as a step in the ‘the machines are getting smarter’ narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting.”

Wired’s Tom Simonite has a slightly different point of view. An August 2018 article details similar examples of AI systems gone rogue, including a bot that found a way to score big in Atari by triggering a game flaw that released “a shower of ill-gotten points.” As he puts it, “These examples may be cute, but here’s the thing: As AI systems become more powerful and pervasive, hacks could materialize on bigger stages with more consequential results.”

For example, could an AI agent tasked with saving energy for a utility company wreak havoc by shutting down the grid? Researchers recognize the potential problems here. Simonite’s article quotes a few who study rogue AI projects for the sole purpose of revealing the roots of this behavior.

The consensus seems to be that acts of AI impishness cannot be entirely avoided. Rather, engineers need to coach and collaborate with these systems and be extremely specific in the tasks they’re setting forth. As Catherine Olsson, a Google researcher interviewed for the Wired article, said, “Today’s algorithms do what you say, not what you meant.”

This is essential wisdom for all those who interact with and manage AI systems. As the technology grows more mainstream, it’s important that this caution remain top of mind in order for enterprise adoption of AI to continue.