In the early morning of June 5, 2005, Irish* opened a picture on her phone and felt sick to her stomach. The image showed a naked woman lying on her back with her legs spread wide. The body belonged to someone else, but the face was unmistakably hers.

The message came from Rustan Ang, an ex-boyfriend she had met while studying at a university in Aurora Province. Irish broke things off with Ang after discovering he had gotten another woman pregnant. When she refused to get back together with him, Ang began harassing her with text messages, telling her he could easily create more scandalous photos and spread them online.

Experts later testified in court that the obscene image had clearly been manipulated, since the face differed in proportion and color from the rest of the body. Ang was found guilty of violating the Anti-Violence Against Women and Their Children Act by committing psychological and emotional abuse. The case, Ang v. Court of Appeals, G.R. No. 182835 (April 20, 2010), is considered a Philippine landmark ruling on violence against women through electronic harassment.


At the time, creating a convincing manipulated photo required a significant amount of time and some technical expertise. Today, advances in artificial intelligence (AI) have made it possible for anyone with a smartphone to generate fake images and videos that are nearly indistinguishable from reality.

Easy targets: women and minors 

Collage: Nicole Almero/Allure Philippines; Source images: Envato Images

Deepfakes are artificial images or videos produced by machine learning algorithms to convincingly mimic a real-life person or scenario. They can be fabricated entirely from scratch or created by manipulating a real person’s existing photos and videos.


According to Mathieu Simon, a senior network engineer specializing in IT infrastructure, the danger lies not only in the technology itself but in how quickly AI development is outpacing human capacity to detect visual deception. “We come from a world where we trusted what we saw, but that world no longer exists,” he said. “The performance of today’s tools is remarkably impressive. It’s becoming increasingly difficult to distinguish fake images from real ones, and many people don’t have the skills or the habits yet to verify the source of a video or picture.”

While deepfake technology has entertainment and creative uses, research shows that its most widespread applications are far more insidious, with women as the overwhelming targets of abuse. A 2023 report by digital security firm Security Hero found that 98 percent of deepfake videos online are non-consensual pornography, and that 99 percent of those victimized are women.

Minors are also particularly vulnerable. A 2024 investigation into a pornography ring in South Korea uncovered countless deepfake images and videos that placed the faces of girls and young women onto sexually explicit bodies. These images were being circulated within school communities and private online groups, causing lasting reputational and psychological harm.


According to Dr. Erika Fille Legara, a Filipina scientist and one of the Philippines’ leading voices in AI, women tend to be disproportionately targeted because deepfakes exploit long-standing social double standards. “In many societies, a woman’s reputation, especially around sexuality, still carries heavier consequences,” Legara shares. “That makes manipulated images a powerful tool for intimidation and control.”

Atty. Army Padilla-Santos, a legal and policy advocate for women and children’s rights, warns that the growing believability of deepfakes makes them particularly easy to weaponize. “Deepfakes are being used to discredit women in politics or leadership, and against women who are activists, journalists, or professionals,” Atty. Padilla-Santos said. “They are being used as tools for harassment campaigns by circulating altered images to suggest inappropriate or sexual conduct. These attacks are often coordinated and very damaging.”

All three experts stress that individuals, especially women, must be vigilant about their digital footprint. In an era where images can be manufactured at scale and distributed globally in seconds, Simon says a person’s likeness has effectively become “an identity asset that must be protected.”


How to protect yourself—and fight back 

Collage: Nicole Almero/Allure Philippines; Source images: Envato Images

Protection begins with being careful about what you choose to upload and controlling who has access to it. “High-quality pictures and videos are the ones that will be most useful to a harasser creating deepfakes,” Simon advised. “Protect the perimeter where the information is accessible; do not share it with anyone outside your control. Make it unusable by limiting the quality of your images as much as possible. Finally, do periodic cleaning by removing old content and reducing the amount of accessible information.”
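For readers who want to put Simon’s image-quality advice into practice with a script, the following is a minimal sketch in Python, assuming the open-source Pillow imaging library (installed with pip install Pillow); the file names and size cap are illustrative placeholders, not a prescribed standard.

    from PIL import Image

    MAX_SIDE = 1080  # cap on the longest side; lower resolution means less raw material for a face swap

    img = Image.open("original_photo.jpg")

    # Downscale in place, preserving the aspect ratio.
    img.thumbnail((MAX_SIDE, MAX_SIDE))

    # Re-saving through Pillow drops EXIF metadata (location, device details)
    # unless it is explicitly passed back in, and moderate JPEG compression
    # removes fine detail.
    img.convert("RGB").save("shareable_photo.jpg", format="JPEG", quality=70)

The idea mirrors Simon’s advice: a smaller, more compressed copy is still perfectly usable for social feeds but gives a would-be harasser far less detail to work with.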

Dr. Legara said what’s particularly alarming is the move toward real-time deepfakes: manipulated video generated live during calls. People must also be mindful of the audio they share, because recordings can be used to clone a person’s voice.


If abuse does happen, Dr. Legara said, there are tools available. “Services like Google’s Results About You and platform-based image removal tools from companies like Meta allow individuals to request the removal of harmful content more quickly.”

Atty. Padilla-Santos outlines the steps victims should take to pursue legal action. “When victims come to us with situations like this, the first thing we advise them to do is gather everything they have. That includes screenshots, photos, text messages, emails, and even testimonies from friends who may have seen the posts. Give everything to your lawyer and let them review it carefully.”

She also cautions against common mistakes. “One is immediately deleting posts, messages, or accounts before preserving evidence. Another is publicly confronting the perpetrator online without documentation, because there is always the risk that a cyber libel case could be filed in response.”


Early reporting is crucial because waiting too long can make a case more difficult to investigate. “Consulting a lawyer early on is important so you can determine what laws may apply—whether that involves violence against women, cybercrime, extortion, or other offenses,” Atty. Padilla-Santos says. Depending on the case, victims may also be advised to file complaints with the Philippine National Police Anti-Cybercrime Group or the National Bureau of Investigation Cybercrime Division.

Still, Dr. Legara emphasizes that responsibility should not fall solely on those who are harmed. “It begins with the people designing these systems,” she said. “Developers who understand the real-world harms their technologies could enable make very different design decisions. That awareness has to exist from the beginning, not after something goes wrong.”

A global concern merits global action

Amid all the challenges, what gives Dr. Legara hope is the growing response from governments and courts to address the issue, as well as the increasing number of women entering the AI field who can help change the way systems are built. “AI itself isn’t inherently harmful. The real question is how we design it, govern it, and who gets a seat at the table when those decisions are made. The more diverse that table becomes, the more likely we are to build technologies that work for everyone.”


*not her real name
