According to a new paper by Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute, a key driver behind this cycle of missed goals is a set of misinformed assumptions about AI and natural intelligence. Mitchell outlines four common AI fallacies that create a false sense of confidence about how close we are to building AI systems that match the cognitive and problem-solving abilities of humans:

Narrow AI and General AI Are Not on the Same Scale

Current-generation AI systems are good at solving narrowly defined problems, such as converting audio into text or detecting cancerous patterns in X-ray images. However, the ability to solve a single problem well does not mean the industry as a whole is meaningfully closer to solving more complicated ones. Automated chatbots, for example, can address very specific queries but can't sustain coherent, open-ended conversations over long stretches. That's because such capabilities require more than a collection of narrow skills; they require common sense, one of the primary unsolved challenges of AI.

The Easy Things Are Hard to Automate

According to Mitchell, the things humans do without much conscious thought are actually the most difficult to automate. Examples include carrying on a conversation, walking through a crowd without bumping into people, and making sense of what we see in the world around us. On the flip side, it's relatively easy to design machines for tasks that are difficult for people to master, such as playing expert-level chess or accurately translating sentences between languages.

Anthropomorphizing AI Doesn’t Help

Another issue that leads to a false sense of confidence in AI is the industry's tendency to describe software in terms of human intelligence. For example, using words like "learn," "understand," and "think" to describe how AI algorithms work suggests that we are further along than we actually are in developing technology that truly works like the human mind.

AI without a Body

Mitchell is a proponent of the idea that emotions, feelings, subconscious biases, and physical experience are inseparable from intelligence. As such, she believes efforts to develop disembodied AI that lives in servers yet matches human intelligence will ultimately fail. She also suggests that this assumption, that intelligence can exist without a body, is impeding our understanding of current-generation AI and of what we can expect as the technology matures.

To learn more about these and other common AI fallacies, head over to VentureBeat.