THE ALGORITHM OF INSURGENCY: UNDERSTANDING THE EFFECT OF AI-MEDIATED DISINFORMATION ON THE COGNITIVE SECURITY OF NATIONS
DOI: https://doi.org/10.54658/ps.28153324.2026.15.1.pp.35-45
Keywords: Cognitive Security, Artificial Intelligence, Disinformation, Hybrid Warfare, National Security Strategy, Computational Propaganda, Information Integrity
Abstract
The proliferation of sophisticated artificial intelligence (AI) systems, in particular Large Language Models (LLMs) and generative adversarial networks capable of producing synthetic media (deepfakes), has opened a new and dangerous era in the history of information conflict. This article examines the central issue raised by this technological inflection point: the emergence of AI as a force multiplier for disinformation campaigns that are high-frequency, hyper-personalized, and increasingly indistinguishable from genuine human communication, bypassing traditional epistemological safeguards and threatening national stability. Drawing on a multidisciplinary approach that combines security studies, behavioral psychology, and computational linguistics, the article advances two main arguments. First, AI-driven disinformation represents a qualitative paradigm shift in information warfare toward a more insidious form in which the strategic target is not physical or cyber infrastructure but the individual citizen's perception of reality and the epistemological foundations of democratic society. Second, existing national security doctrines, grounded in notions of territorial sovereignty and the deterrence of kinetic threats, are fundamentally unprepared to counter this automated, scalable, and largely attribution-resistant insurgency against the public mind. The article concludes by proposing a comprehensive framework of Cognitive Defense, arguing that national resilience in the twenty-first century will require the systemic integration of technological countermeasures, the elevation of media and information literacy to a core security imperative, and the creation of binding international norms governing the use of AI in the information space.
License
Copyright (c) 2026 Herasym Dei

This work is licensed under a Creative Commons Attribution 4.0 International License.

