Between Alarm and Evidence: Misinformation in the Age of AI

In the age of Artificial Intelligence, misinformation is widely perceived as a major threat to democracies. However, empirical evidence paints a more nuanced picture: it is not only the spread of misinformation, but above all the sharp decline in the consumption of reliable news that deserves greater attention.

Aarauer Demokratietage

Misinformation is currently discussed in politics, the media, and international organizations as one of the central threats to democratic systems. The World Economic Forum's Global Risks Report, for example, identifies misinformation as one of the greatest risks to global stability. Public discourse is thus often shaped by a strongly problem-oriented perspective.

This raises the question of the extent to which this perception is supported by empirical evidence, and whether misinformation truly represents the central challenge for democratic public spheres. Without downplaying the problem, this article argues that a more differentiated perspective is required and that another structural phenomenon deserves greater attention: news deprivation, i.e. the growing lack of exposure to news.

Perception studies: Misinformation as a major problem

Representative surveys show that misinformation is perceived by the public as widespread and socially consequential. In Switzerland, studies indicate a high level of problem awareness, particularly in the context of social platforms (Vogler et al., 2021). Comparable findings are observed in other Western democracies (Van der Meer & Hameleers, 2025). At the same time, misinformation is associated with significant consequences for democratic processes and institutions, for instance with regard to trust and legitimacy (McKay & Tenove, 2021). Perception studies thus point to a high subjective relevance of the phenomenon.

Tracking studies: Limited exposure to misinformation

Studies based on more robust measurement approaches paint a considerably more nuanced picture. Behavioral tracking studies consistently show that the share of content from sources classified as unreliable in the digital information environment is comparatively low. Depending on the approach, these studies measure either exposure to such sources – i.e. content users could potentially see in their feeds – or actual usage, such as clicks or visits to misinformation websites.

For the United States, estimates within the news domain are in the low single-digit percentage range and vary depending on the measurement approach and platform studied (e.g. Twitter, Facebook), typically between one and six percent (Grinberg et al., 2019; Guess et al., 2020; Acerbi et al., 2022). When expanding the perspective beyond news to total media consumption, the share is substantially smaller: the use of misinformation websites accounts for only about 0.15 percent of daily media use (Allen et al., 2020).

Tracking data for European countries show similarly low levels. In France, roughly four to six percent of visits to news websites are directed to misinformation sources, depending on the period studied, while in Germany the share is just under one percent (Altay et al., 2022). Comparable data for Switzerland are currently lacking.

Distribution and effects: Concentration and contextual dependence

Overall, a clear discrepancy emerges between the perceived ubiquity of misinformation and its empirically measured prevalence in tracking studies. Exposure to misinformation is also highly unevenly distributed and concentrated. A relatively small group of users accounts for a large share of contacts and interactions with such content (Budak et al., 2024; Christner et al., 2025).

This pattern is also evident in our own study on interactions with misinformation during the COVID-19 pandemic: only between 0.3 and 1.9 percent of observed Twitter users interacted with misinformation content, and corrective responses (debunking) to misinformation were more frequent than supportive ones (Rauchfleisch et al., 2020).

It is also often implicitly assumed that exposure to misinformation necessarily leads to attitudinal or even behavioral change. However, empirical evidence does not support this assumption. Reactions to such content are frequently characterized by indifference or skepticism (Rauchfleisch et al., 2020). This is partly due to the relatively high level of distrust toward digital platforms – where misinformation is more prevalent – a distrust that tends to increase further in the age of AI (Camila et al., 2022).

Vulnerability: Individual and societal factors

Although the overall prevalence and impact of misinformation are often overestimated, this does not mean that the phenomenon is unproblematic. Its relevance is highly context-dependent and increases particularly where specific individual or societal vulnerabilities exist.

At the individual level, motivated reasoning – the tendency to interpret information in line with existing beliefs – plays a key role (Zeng & Brennen, 2023; Osmundsen et al., 2021). When misinformation aligns with prior attitudes, it is more likely to be believed or shared. Empirical studies also show increased vulnerability among groups with extremist political orientations (Törnberg & Chueri, 2025). Moreover, misinformation is not always shared out of conviction but is sometimes used strategically to harm political opponents (Petersen et al., 2018).

At the societal level, vulnerability is shaped by structural conditions. These include polarization, low trust in news media, fragmented media use, a high reliance on social media as a news source, and – crucially – the presence of established elites who spread misinformation. Based on these factors, countries differ significantly in their resilience to misinformation. While Switzerland appears comparatively resilient, other countries such as the United States are considerably more vulnerable (Humprecht et al., 2020).

Artificial intelligence: Ambivalent effects

As shown, misinformation is less widespread than often assumed. It is a phenomenon concentrated among specific groups and varies across national contexts. However, generative artificial intelligence is changing the conditions of information production and dissemination.

On the one hand, it lowers the costs and technical barriers for creating manipulative content, for example in the form of deepfakes (Karaboga et al., 2024). It also enables new forms of coordinated and scalable dissemination of misinformation (Schroeder et al., 2026). Experimental studies further suggest that AI-generated content can be persuasive and thus has the potential to influence opinions (Goldstein et al., 2024).

On the other hand, AI also offers opportunities for countermeasures, for example in automated fact-checking or dialog-based correction of misinformation (Lee & Fussell, 2025).

A bigger problem? News deprivation

Misinformation should not be considered in isolation but rather in the context of broader changes in information use. In this regard, another issue stands out: in many Western democracies, news consumption has declined significantly (Toff et al., 2023; Skovsgaard & Andersen, 2020).

Globally, around 40 percent of the population now belong to so-called “news avoiders” (Newman et al., 2025). In Switzerland, as many as 46 percent are already considered “news deprived,” meaning they are persistently underexposed to news (Eisenegger et al., 2025). The consequences for democratic societies are substantial: low exposure to journalistic content is associated with lower political knowledge, reduced democratic participation, and weaker identification with democratic systems (Eisenegger et al., 2025).

Against this backdrop, it appears plausible that for many democracies, the greatest risk lies not primarily in exposure to false information, but in the declining consumption of reliable information provided by professional news media.

Conclusion: A stronger focus on news deprivation is needed

Downplaying misinformation is just as dysfunctional as adopting an overly alarmist perspective. Unwarranted dramatization can itself reduce satisfaction with democracy (Jungherr & Rauchfleisch, 2024).

Misinformation is undoubtedly a relevant issue whose importance may increase further with technological developments such as artificial intelligence. At the same time, there are clear differences across countries and platforms: certain contexts are particularly vulnerable, and some platforms expose users more strongly to problematic content. Moreover, misinformation is a temporally dynamic phenomenon that intensifies particularly in times of crisis (Christner et al., 2025).

In this context, a differentiated perspective is essential. Alongside efforts to address misinformation, another issue – potentially even more significant for many Western democracies – should receive greater attention: news deprivation. To improve the quality of information environments, it appears more important in many countries to strengthen exposure to reliable journalistic content, and thereby enhance societal resilience, than to focus solely on combating misinformation (Altay, 2026; Acerbi et al., 2022).


Note: This article is based on the presentation “Beyond Misinformation: Perception, Reality, AI, and the Challenge of News Deprivation”, delivered by Mark Eisenegger at the Aarau Democracy Days, 12–13 March 2026.

References

  • Acerbi, A., Altay, S., & Mercier, H. (2022). Research note: Fighting misinformation or fighting for information? Harvard Kennedy School (HKS) Misinformation Review, 3(1), 1–15. https://doi.org/10.37016/mr-2020-87
  • Allen, J., Howland, B., Mobius, M., Rothschild, D., & Watts, D. J. (2020). Evaluating the fake news problem at the scale of the information ecosystem. Science Advances, 6(14), eaay3539.
  • Altay, S. (2026). Rethinking the problem of misinformation and its solutions. New Media & Society, 0(0). https://doi.org/10.1177/14614448261428635
  • Altay, S., Nielsen, R. K., & Fletcher, R. (2022). Quantifying the “infodemic”: People turned to trustworthy news outlets during the 2020 coronavirus pandemic. Journal of Quantitative Description: Digital Media, 2. https://doi.org/10.51685/jqd.2022.020
  • Budak, C., Nyhan, B., Rothschild, D. M., Thorson, E., & Watts, D. J. (2024). Misunderstanding the harms of online misinformation. Nature, 630(8015), 45–53. https://doi.org/10.1038/s41586-024-07417-w
  • Camila, M., Badrinathan, S., Ross Arguedas, A., Toff, B., Fletcher, R., & Nielsen, R. K. (2022). The trust gap. Reuters Institute for the Study of Journalism. https://doi.org/10.60625/risj-skfk-h856
  • Christner, C., Makhortykh, M., & Gil-Lopez, T. (2025). Populist radical-right attitudes, media trust, and social media reliance: Combining survey and tracking data to investigate predictors of online exposure to disinformation. Telematics and Informatics, 98, 102250. https://doi.org/10.1016/j.tele.2025.102250
  • Eisenegger, M., Vogler, D., Ryffel, Q., Udris, L., Rieser, R., & Schneider, J. (2025). Wissen ohne Nachrichten? Wie News-Deprivation mit Wissen, demokratierelevanten Einstellungen und dem Informationsverhalten zusammenhängt. Schwabe Verlag.
  • Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 U.S. presidential election. Science, 363(6425), 374–378. https://doi.org/10.1126/science.aau2706
  • Guess, A. M., Lockett, D., Lyons, B., Montgomery, J. M., Nyhan, B., & Reifler, J. (2020). “Fake news” may have limited effects on political participation beyond increasing beliefs in false claims. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-004
  • Hameleers, M., Van der Meer, T. G. L. A., & Brosius, A. (2020). Feeling “disinformed” lowers compliance with COVID-19 guidelines: Evidence from the US, UK, Netherlands and Germany. Harvard Kennedy School (HKS) Misinformation Review. https://misinforeview.hks.harvard.edu/article/feeling-disinformed-lowers-compliance-with-covid-19-guidelines-evidence-from-the-us-uk-netherlands-and-germany/
  • McKay, S., & Tenove, C. (2021). Disinformation as a threat to deliberative democracy. Political Research Quarterly, 74(3), 703–717. https://doi.org/10.1177/1065912920938143
  • Newman, N., Ross Arguedas, A., Robertson, C. T., Nielsen, R. K., & Fletcher, R. (2025). Digital news report 2025. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2025-06/Digital_News-Report_2025.pdf
  • Petersen, M. B., Osmundsen, M., & Arceneaux, K. (2018). A “need for chaos” and the sharing of hostile political rumors in advanced democracies. Paper presented at the 114th Annual Meeting of the American Political Science Association.
  • Rauchfleisch, A., Vogler, D., & Eisenegger, M. (2020). Wie das Coronavirus die Schweizer Twitter-Communitys infizierte. In fög (Ed.), Qualität der Medien. Jahrbuch (pp. 61–75). https://www.foeg.uzh.ch/dam/jcr:e591f77b-8507-4272-819c-11ed9af1f0d6/Studie_03_2020.pdf
  • Skovsgaard, M., & Andersen, K. (2020). Conceptualizing news avoidance: Towards a shared understanding of different causes and potential solutions. Journalism Studies, 21(4), 459–476. https://doi.org/10.1080/1461670X.2019.1686410
  • Toff, B., Palmer, R., & Nielsen, R. K. (2023). Avoiding the news: Reluctant audiences for journalism. Columbia University Press.
  • Van der Meer, T. G. L. A., & Hameleers, M. (2025). Perceptions of misinformation salience: A cross-country comparison of estimations of misinformation prevalence and third-person perceptions. Information, Communication & Society, 28(4), 575–596. https://doi.org/10.1080/1369118X.2024.2375256
  • Vogler, D., Schwaiger, L., Rauchfleisch, A., Marschlich, S., Siegen, D., Udris, L., Eisenegger, M., & Schneider, J. (2021). Wahrnehmung von Desinformation in der Schweiz. https://doi.org/10.5167/uzh-210613
