Abstract
The rapid development of artificial intelligence (AI) and synthetic media (deepfakes) in the 21st century has triggered a systemic crisis in the reliability of public information. The paper analyzes the technological, ethical and, above all, legal dimensions of this problem. Particular attention is paid to gaps in the current legislation of Georgia with respect to controlling deepfakes and algorithmic manipulation. The European Union's Digital Services Act (DSA) is examined as a potential framework for regulating platform liability. The research presents integrated recommendations, including the adoption of special legislation, the strengthening of media literacy, and the adaptation of technological identification mechanisms to the national context.
References
Castells, M. (2009). Communication Power. Oxford University Press.
Chesney, R., & Citron, D. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107(5), 1753-1823.
European Parliament & Council of the European Union. (2022). Regulation (EU) 2022/2065 on a Single Market for Digital Services (Digital Services Act).
Graves, L. (2016). Deciding What’s True: The Rise of Political Fact-Checking in American Journalism. Columbia University Press.
Kshetri, N. (2017). Blockchain’s roles in strengthening cybersecurity and protecting privacy. Telecommunications Policy, 41(9-10), 875-889.
Shu, K., Sliva, A., Wang, S., Tang, J., & Liu, H. (2017). Fake News Detection on Social Media: A Data Mining Perspective. SIGKDD Explorations, 19(1), 22-36.
Sunstein, C. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton University Press.
Vosoughi, S., Roy, D., & Aral, S. (2018). The Spread of True and False News Online. Science, 359(6380), 1146-1151.