The More People Understand AI, the Less They Like It: Unpacking the Growing Disconnect Between Awareness and Acceptance

The rapid growth of artificial intelligence raises a question: Who is most likely to integrate AI into their daily lives? Many assume it’s the tech-savvy—those who understand AI and how it works—who would be the first to adopt it.

However, recent research published in the Journal of Marketing reveals a surprising truth: those with less knowledge about AI are often more open to using it. This phenomenon, which we term the “lower literacy-higher receptivity” link, suggests that a lack of technical understanding can actually make people more willing to embrace AI.

This pattern holds true across various groups, contexts, and even countries. For instance, our analysis of data from Ipsos covering 27 countries shows that people in nations with lower average AI literacy are generally more open to adopting AI technologies than those in countries with higher literacy.

Similarly, a survey of U.S. undergraduate students found that those with less understanding of AI were more likely to express interest in using AI for tasks like academic assignments.

So why does this happen? The answer lies in how AI now performs tasks traditionally associated with humans. When AI generates art, writes a thoughtful response, or plays music, it can seem almost magical—like it’s stepping into the realm of human creativity and emotion.

Of course, AI doesn’t actually possess human-like qualities. A chatbot might produce an empathetic response, but it doesn’t truly feel empathy. People with more technical knowledge of AI understand this and recognize the underlying algorithms, training data, and computational models at play, which demystifies the technology.

On the other hand, those with less understanding may perceive AI as more magical and awe-inspiring, which makes them more open to using it. Our studies suggest that this sense of wonder is especially strong when it comes to using AI for tasks that involve human-like traits, such as providing emotional support or counseling. However, when AI is used for tasks that don’t evoke human qualities—like analyzing test results—the pattern reverses. People with higher AI literacy are more receptive to these uses because they appreciate AI’s efficiency, not its “magical” aspects.

What’s interesting is that this link persists even though people with lower AI literacy are more likely to view AI as less capable, less ethical, and even somewhat frightening. Despite these concerns, their openness to using AI seems to stem from their sense of awe about its potential.

This finding sheds new light on why people have such varied reactions to emerging technologies. Some studies show that consumers favor algorithmic recommendations over human ones, a phenomenon known as “algorithm appreciation,” while others document the opposite tendency, “algorithm aversion.” Our research suggests that the perception of AI’s “magicalness” plays a key role in shaping which reaction wins out.

For policymakers and educators, this insight presents a challenge. Efforts to increase AI literacy might inadvertently reduce enthusiasm for AI by making it seem less magical. This creates a delicate balance between educating people about AI and keeping them open to its adoption.

To maximize AI’s potential, businesses, educators, and policymakers need to understand how perceptions of AI’s “magicalness” influence its adoption. By doing so, they can help create AI products and services that not only align with people’s understanding of AI but also maintain the sense of wonder that drives many to embrace the technology. Ideally, this balance will enable people to better understand both the benefits and risks of AI, without diminishing the excitement that fuels their willingness to adopt it.
