When even lawmakers are fooled by AI, laws can’t fix it
An AI-generated video circulating on the internet over the weekend appeared to have misled Senator Ronald “Bato” dela Rosa into sharing what he apparently believed was a genuine youth voice against the impeachment trial of Vice President Sara Duterte.
When it comes to information shared or posted on social media, I always take a skeptical attitude, sometimes even assuming that everything online is fake until proven otherwise. So I also treated the news about netizens ridiculing the good senator with skepticism. Upon verification, I learned that the AI-generated video was indeed shared on the senator’s verified social media page. There were also reports about it in at least two national newspapers with an online presence.
As a media literacy advocate and mentor, my advice to anyone today when it comes to using the internet and social media is this: assume that any information shared electronically is fake until proven otherwise, or verified by reliable, institutional sources. That same skeptical attitude I mentioned earlier is what guides how I navigate the digital world.
I don’t even take the comments section on any platform seriously. One time, an activist friend asked me if I had ever been red-tagged online for my views or for the controversial cases I’ve handled. My answer was a simple no, because I never really pay attention to the comments section or any online noise.
Anyway, that a senator of the republic fell for an AI-generated video aimed at making it appear that the youth support the vice president is no surprise. Even your uncle with an engineering degree, your retired aunt with a double PhD, or your colleague at the university can easily fall for fake news and disinformation these days. This only shows that intelligence and professional standing alone are not guarantees against manipulation in the age of algorithmic persuasion and synthetic media.
And with artificial intelligence (AI), manipulation is taken to yet another level. A report by PhilStar.com last week confirmed that online trolls are now using generative artificial intelligence “to influence political discussions through inauthentic comments on social media platforms in the Philippines.”
These developments are worrisome, of course, but what is even more worrisome is what we are doing, or not doing, about them as a society. Our responses seem fragmented: some express anger online, others share fact-checks, and still others call strongly for legislation. Meanwhile, our institutions (schools, media, government, and civil society) are slow to adapt.
Once again, we tend to fall into a familiar pattern: when something goes wrong, our instinct is to pass a law that punishes someone. And when the problem gets worse, we simply raise the penalties, as if harsher punishment were a substitute for real solutions. I oppose punitive legislation that seeks to penalize AI-generated manipulation per se, or that attempts to restrict the use of AI without a nuanced understanding of the technology’s nature and potential. Instead, we should address the root of the problem by implementing real, systemic solutions.
The root of the problem of online and AI-generated misinformation and disinformation is the public's inability to critically evaluate the information they consume. Thus, the more effective and sustainable approach is to build a media-literate public. Studies show that media literacy education significantly impacts students' ability to analyze and evaluate media messages. Students who received media literacy education are “more likely to evaluate media messages critically and less likely to accept them at face value.” Existing media literacy education programs have also shown promising results in improving individuals' ability to identify and resist misinformation (Washington, 2023).