Shaping the Future of Generative AI: How Cultural Narratives and Ideologies Influence Public Expectations

As society continues to grapple with the rapid evolution of generative artificial intelligence, a new study published in *Frontiers in Human Dynamics* sheds light on how public expectations surrounding the technology are being shaped—not by technical details, but through broader cultural narratives. The article, titled “Promissory Discourses of Generative Artificial Intelligence—Between Ideological and Imaginary Dimensions,” explores the powerful role of language, media, and collective imagination in defining the social meaning of generative AI tools such as ChatGPT, DALL·E, and Stable Diffusion.

The study, authored by Dionysios Kapsaskis, investigates how both ideological stances and cultural imaginaries influence the way generative AI is discussed in public spheres. Rather than focusing solely on the capabilities or limitations of the technology, the analysis delves into the framing devices and rhetorical strategies that shape societal expectations. This includes both utopian visions of AI-driven progress and dystopian fears of technological disruption.

Kapsaskis identifies two key mechanisms at play in the formation of what he terms “promissory discourses.” The first, the “ideological dimension,” captures how political and economic interests contribute to shaping narratives around AI, often emphasizing efficiency, innovation, and productivity. The second, the “imaginary dimension,” is rooted in collective visions of the future—stories that blend fantasy and speculation, offering visions of transformation that resonate with broader cultural anxieties and desires.

These discourses, the study finds, are not merely abstract or theoretical. They have concrete implications for how generative AI is developed, adopted, and governed. By framing the future as already written, whether as a technological utopia or an inevitable peril, these narratives exert pressure on policy decisions, public attitudes, and economic investments. In effect, they help define which futures are considered possible or desirable, while eclipsing alternative perspectives.

One of the central arguments is that such discourses blur the boundary between what AI is and what it is imagined to be. This fusion, according to Kapsaskis, risks bypassing critical engagement with the actual social, political, and ethical consequences of AI technologies. As generative AI becomes increasingly visible and accessible, with tools like large language models entering educational, creative, and commercial domains, the need for a more democratic and reflexive public conversation becomes urgent.

The article calls for a reevaluation of how society talks about AI. Rather than allowing promissory narratives to monopolize the conversation, the study advocates for inclusive and critical forms of dialogue that recognize the technology’s uncertainties, limitations, and broader impacts. Kapsaskis suggests that a more participatory framing of AI futures—one that actively engages diverse social voices—could help counterbalance the top-down narratives currently dominated by corporate and techno-optimistic perspectives.

The research offers a timely intervention into public discourse, emphasizing the profound influence of storytelling and imagination in shaping technological reality. As debates over the regulation and ethical use of generative AI intensify, the study encourages journalists, policymakers, and technologists alike to look beyond surface-level assumptions and interrogate the deeper cultural forces that guide collective expectations.
