
Gendered Realities: Speaking Back to Disinformation

“Safety Should Be Baked Into the Design of Technology from the Beginning”

Rohini Lakshané cuts through the confusion: gendered disinformation is not trolling, it is targeted propaganda. She traces how abuse has evolved from voyeurism to political deepfakes, how AI now enables impersonation and stalking, and why the manosphere blames women for structural problems like unemployment. Platforms have abandoned moderation, AI operates without limits, and digital literacy alone cannot solve a problem rooted in cultural contradictions. The answer is systemic: safety must be designed into technology, not added later.


Writing: Sejal

Illustration credit: Upasana Agarwal, 31 Fantastic Adventures in Science

About the series:

Gendered Realities: Speaking Back to Disinformation is a three-part interview series bringing together journalists, researchers, and digital rights experts who have been documenting and challenging gendered disinformation. From tracing how hate is seeded and amplified online to examining its real-world consequences for women, particularly those from marginalised communities, the series explores the architecture of online harm and the urgent need for systemic change.

In Part 2, technologist Rohini Lakshané unpacks the ecosystem of gendered online harm. She frames it not as isolated abuse, but as a politically engineered, incentivised network that thrives on disinformation.

Based on years of research, Rohini argues that gendered disinformation must be understood in relation to political propaganda, ideological trolling, and the profit models of Big Tech. She critiques the tendency to blur terms like trolling, fake news, and disinformation. These distinctions, she says, are crucial when we talk about harm, accountability, and regulation. Collapsing everything into the idea of “online toxicity” hides the fact that many attacks are strategic and organised, especially against feminists, journalists, and women from marginalised groups.

Despite the challenges, Rohini points to a path forward. She calls for stronger alliances, sharper policies, and a shift in responsibility toward those with power, whether in tech, media, or governance.

Technologist, researcher and long-time digital rights advocate, Rohini Lakshané has spent over a decade working at the intersection of gender, technology and civil liberties. From her early days as a tech journalist to her research on online violence, patent reform and openness, Rohini has consistently spotlighted how internet infrastructures such as Wikipedia and FOSS communities reflect and reinforce social inequities. In this conversation, she reflects on how disinformation disproportionately targets women, and why technologists must approach design with lived experience, transparency and ethics at the centre. Rohini toots @rohini, blogs at Aarohini, and maintains an author page at SSRN.

Defining the Ecosystem of Online Harm: Trolling, Disinformation, and TFGBV

KL: You’ve worked at the intersection of technology, gender and online harm for years. How do you distinguish between gender disinformation, trolling, and tech-facilitated gender-based violence (TFGBV)? Why is it important to name these differently, especially for policy?

Rohini: The confusion over these terms trivializes serious abuse.

Trolling originally referred to people who disrupted and derailed conversations in early internet forums. Now it has become a generic term for all kinds of abuse. That is unhelpful: it likens serious violations to the childish act of a mythical creature and minimizes the actual harm caused. The term should not be used to describe serious violations.

Gender disinformation is a form of tech-facilitated gender-based abuse; I see it under the larger umbrella of TFGBV. Disinformation is insidious: it is propaganda disguised as factual information. Its goal is to discredit an individual, a group, or an entire movement, such as the feminist movement.

TFGBV is the broadest term. Violence here includes both physical and digital abuse. Gender disinformation is a specific tool or method within the TFGBV framework.

Using the right language is essential because different facets of a problem require different remedies. To understand, mitigate, prevent, and remedy the harm, we need to differentiate these problems.

The Evolving Intent Behind Image-Based Abuse in India

KL: You’ve extensively written about different forms of image-based sexual abuse. In your view, how has the intent behind such abuse evolved over the last decade in the Indian context?

Rohini: I take a longer view, having first published on this in 2014. The arc has moved from voyeurism to organized political targeting.

Initially, it was voyeurism, driven by a demand for unauthorized sexual content at a time when cheap, widespread internet access did not exist. As demand grew, websites began popping up to profit from it via ads or subscriptions, and content was even distributed offline on DVDs. Alongside this, there were cases of personal revenge, like sharing intimate photos after a breakup.

Political Motives and Deepfakes

Then came its use for political motives, such as when a morphed video of journalist Rana Ayyub was circulated. These weren’t random cases; they had clear ideological or political goals.

Now we’re in the age of deepfakes. The technology makes it incredibly easy to create non-consensual images. It doesn’t always have to be explicitly sexual. Even an image of a woman drinking alcohol can be used to ruin reputations in conservative contexts. The arc has moved from voyeurism, to profit-driven abuse, to personal vendettas, to organized political targeting, and now to hyper-realistic, AI-generated imagery. The motives have expanded, and the consequences have deepened.

AI as a Tool for Sophisticated Violence

KL: You described how AI-facilitated gender-based violence has become more sophisticated. Could you walk us through some lesser-known forms?

Rohini: There are two concerning trends I’ve observed.

AI Chatbots for Stalking and Impersonation

The first is the use of AI chatbots for stalking and impersonation. Users can configure LLM-based bots, feeding them personal information about a victim. In one reported case, a stalker created a bot and fed it the victim’s home address. The chatbot then invited others to visit, and people actually showed up at the woman’s door. This shows how user-defined LLM-based bots can impersonate real people.

Misuse of AI Entertainment Services

The second trend involves AI-based entertainment services marketed for creating promotional videos or ads. You upload photos and choose from pre-set templates, for example, two people hugging. This can be easily misused to create photorealistic doctored visuals that are circulated with any false narrative. This doesn’t fall under current moderation guidelines or laws because the content isn’t nude, but it is still sexually exploitative and deeply damaging.

KL: So basically, I can feed information about someone into a chatbot, and the chatbot will impersonate them?

Rohini: Exactly. That’s what happened in the professor’s case I described earlier.

The Manosphere: Misdirected Anger and Political Narratives

KL: Your piece with GenderIT delves into the manosphere in India. How are these online communities organizing and influencing public discourse?

Rohini: These communities are huge and not a monolith, but they constantly interact with other groups that share similar worldviews. They often cherry-pick incidents, like a case of intimate partner violence, to promote the narrative that we need to go back to “Indian family values.”

Displaced Anger and False Claims

Their discourse is often built on displaced anger. Unemployment is high, but the frustration gets misdirected onto women: “How are women getting jobs? Why do they have quotas?” They ignore that these reservations are limited and that overall unemployment is the structural issue.

They hold public protests based on false claims, such as objecting to free public transport for women by claiming the money comes only from taxes paid by men. This ignores that women pay taxes too, and that when women save money on transport, it is often reinvested in their families, indirectly benefiting men. The narrative stays stuck on the idea that women are being pampered and that the world is gynocentric, that men are suffering while women get undeserved advantages.

Misplaced Frustration

The reasons are many. There’s a dissonance because women have progressed, but men haven’t been adequately sensitized. They are unable to accept that a wife or daughter can no longer be treated as their mothers or grandmothers were. This creates a shock. Some men also face problems like loneliness or lack of social skills and have no support system, unlike women who form organic support groups. Instead of seeking help, this frustration often turns into misdirected hate and blame toward women.

Discrediting Women in Power and Platform Accountability

KL: What are the common strategies misogynistic groups use online to discredit women who are in power, or who are journalists and activists?

Rohini: The most common accusation is that these women have a foreign-funded agenda and are out to destroy the country, its culture, or religion. Feminism is portrayed as a Western import. They claim, without evidence, that women activists or journalists are being paid, are becoming rich, and are trying to destabilize the country. These allegations are deliberately vague, using undefined terms like “Indian culture” or “values.” In political terms, women’s rights activism is often dismissed as a facade.

KL: How is gender disinformation different from online harassment, and is that distinction recognized in the Indian legal framework?

Rohini: Gender disinformation can overlap with harassment, especially if it’s targeted at an individual and harms their reputation or job. However, more broadly, disinformation is a kind of propaganda. It’s subtle, meant to work over time, and aims to shape public opinion slowly. Online harassment is more direct. The other key point is that disinformation always involves malicious intent. It’s not a misunderstanding; someone knows it’s false and spreads it to cause harm. I haven’t noticed this distinction clearly made in the Indian policy framework.

KL: Do you think transparency and accountability on platforms like Meta and Google have improved?

Rohini: They have definitely not improved. Trust and safety teams have been cut down or laid off. Content moderation teams have been reduced, meaning some smaller languages may now have no human moderators at all. Platforms often rely on automated, AI-generated decisions when you report abuse, and I have seen those systems immediately dismiss reports of clearly violative posts.

While some platforms have small tech fixes, it’s nowhere near the scale needed. There needs to be far more collaboration across departments like legal, technical, and policy, and also with civil society and users. That comprehensive dialogue is missing.

The Unacceptable Scope of Generative AI

KL: Do you think part of the problem is a lack of data for AI training?

Rohini: I’m an engineer by training. AI should serve a defined, limited purpose. The issue is that AI tools like LLMs, Grok being an example, are being used in completely open-ended ways. I’ve seen disturbing examples where Grok responded to abusive prompts in Hindi, picked a side in a dispute, and used slang.

Whether or not AI is trained on bias-reducing content is beside the point. It simply should not comply with prompts to abuse, impersonate, or create sexually suggestive images. These large language models are doing everything: writing code, manipulating images, answering emails. The scope is too wide, and it is often in the hands of users with no relevant expertise. When a user tells a bot, “My wife didn’t cook dinner after a 12-hour shift, so I cheated on her,” and the AI validates that feeling, it fails to challenge the user or hold them accountable.

Digital Literacy is Insufficient: The Need for Cultural Change

KL: Given the limited agency and heightened vulnerability of young girls and women in small towns and rural areas, can digital and media literacy alone be considered a sufficient solution?

Rohini: I’ve grown cynical about literacy and security programs because I’ve seen them treated like checkboxes. You can’t just give someone a rulebook; you have to teach people how to think so they can adapt to evolving technology.

We need to shift the focus from “don’t talk to strangers online” to “if you talk to a stranger and it goes wrong, here’s what you can do.” If a child is too scared to tell their parents they are being abused, the problem is with how parenting is happening.

The problem is deeply cultural. Families want the benefits of modern life, but their values are rooted in the past. They want their daughters to succeed, but operate with the mindset that women should stay inside to stay safe. Taking away a phone is like saying, “If you get harassed on the road, stop stepping outside.”

If we treat digital literacy as the one big fix, it can backfire. When someone is still harmed after training, they are blamed: “You knew what to do. Why didn’t you follow it?” We may need something radically different, something we haven’t figured out yet.

A Path Forward: Systemic Safety by Design

KL: If you had to recommend one key intervention, what would it be?

Rohini: I can’t pinpoint one single response, because an intervention is only effective if it’s multi-pronged and coordinated. If you implement only one strategy, it might be ineffective or even counterproductive. The law making dowry illegal, for example, hasn’t eradicated the practice because the social movement to change norms didn’t progress as much as the legal framework did.

The solution must be systemic. We should have privacy by design and safety by design. That means mandatory checks and balances for all technology: a needs assessment (did the affected people ask for this solution?), a privacy audit, an independent security audit, a human rights assessment, and a data protection impact assessment. These checks and balances, these guidelines and markers for accountability and transparency, are currently missing, and they are what should be in place.

KL: What gives you hope in the field?

Rohini: I want to be hopeful, but I don’t see a lot of hope. Things like AI have completely steamrolled whatever we were trying to do. You now have technology whose makers themselves don’t know what it’s capable of. That’s the very nature of this technology. It is a black box. The uncertainty is immense. Safety should be baked into the design of technology from the beginning, not patched up later with band-aid solutions.

This is Part 2 of our interview series on Gendered Disinformation, where we speak with leading researchers, activists, and thinkers on what it means to be targeted, how resistance is built, and what needs to change across tech, media, policy, and society. Produced by Chambal Media in collaboration with the Association for Progressive Communications (APC). Stay tuned for the next conversation.

 


If you want to support our fearless rural feminist journalism, subscribe to our premium product, KL Hatke.
