Opinion by: Roman Cyganov, founder and CEO of Antix
In the fall of 2023, Hollywood writers took a stand against AI’s encroachment on their craft. The fear: AI would churn out scripts and erode authentic storytelling. Fast forward a year, and a public service ad featuring deepfake versions of celebrities like Taylor Swift and Tom Hanks surfaced, warning against election disinformation.
Now, a few months into 2025, AI’s promised role in democratizing access to the future of entertainment instead illustrates a rapid evolution: a broader societal reckoning with distorted reality and mass misinformation.
Despite this being the “AI era,” 52% of Americans are more concerned than excited about AI’s growing role in daily life. Add to this the findings of another recent survey: 68% of consumers globally are “somewhat” to “very” concerned about online privacy, driven by fears of deceptive media.
This is no longer just about memes or one-off deepfakes. AI-generated media is fundamentally altering how digital content is produced, distributed and consumed. AI models can now generate hyper-realistic images, videos and voices, raising urgent questions of ownership, authenticity and ethical use. The ability to create convincing synthetic content with minimal effort has profound implications for industries that rely on media integrity. Without a secure method of verification, the unchecked spread of deepfakes and unauthorized reproductions threatens to erode trust in digital content altogether. That erosion hits the core base of users first: content creators and businesses, who face mounting risks of legal disputes and reputational harm.