Content Moderation as a Public Health Concern

When LGBTQ+ People Are Harmed Online

Rebecca Sanaeikia, M.A.

As a bioethicist, what comes to mind when you hear “public health”? Maybe you think of the COVID-19 pandemic or an HIV outbreak in Africa. For years, those images have shaped what it means to protect a community. But lately I’ve been wondering: what do we need protection from in the digital age? The biggest threats aren’t always in the physical realm; sometimes, they’re viral in a whole new way.

The loss of Pranshu, a queer teenager driven to suicide by relentless online bullying, haunts the queer community not as a single tragedy but as a sign of something bigger. Our digital world, built on platforms where viral content and outrage dominate, has become a new battleground for public health. The grief carried by stories like Pranshu’s points to a collective responsibility: the health of our communities now depends on what happens online as much as on what happens offline.

Public health is about more than physical disease. The World Health Organization defines health to include social well-being, and the CDC Foundation emphasizes that public health is about protecting people and their communities. These commitments matter here because digital harms, including hate speech, misinformation, and targeted harassment, are among the most dangerous threats we face, spreading invisibly across borders and identities.

Social media platforms like Facebook, X, and Instagram share responsibility for setting content moderation policies. Yet those policies are often enforced carelessly, prioritizing engagement and profit over people’s well-being. The Petrie-Flom Center and others have urged platforms to rethink their role, but the situation has not improved; it has worsened. Meta, for example, has retreated from protecting trans community members against online harassment and now permits content describing LGBTQ+ people as mentally ill.

Every week seems to reveal new evidence of how hostile the online world has become for marginalized users. Recently, trans people on Bluesky were suspended for criticizing J.K. Rowling after she targeted a trans public figure, Jessie Earl. The irony? Those who spoke out were punished, while hostility and harassment often went unmoderated. Bluesky, a platform once praised for its safety, has become yet another online space where speech is censored without regard for justice or transparency, and where trans people are especially vulnerable to arbitrary silencing.

That injustice is not a minor frustration or “online drama.” It is part of a mounting public health emergency. When online hate campaigns target gender-affirming care providers, the violence doesn’t stay online: Boston Children’s Hospital has faced bomb threats, clinics have shuttered, and families have lost access to care. The CDC tracks how these waves of online hostility directly damage queer youths’ mental health, safety, and access to care. In moments like these, digital harm means real-world crisis.

And moderation failures extend beyond hate speech. Instagram’s own algorithms, for example, facilitated networks tied to child sexual abuse. These aren’t glitches; they’re the predictable result of platforms prioritizing engagement over ethics and concealment over accountability.

Why is this so ethically egregious? Bioethics has long supplied principles for researchers and policymakers: respect for persons, beneficence, justice, and transparency. In our online commons, those pillars are brushed aside: supportive queer voices are muted while harassment gets a pass, algorithms amplify harm for profit, and “justice” is reserved for the loudest and most privileged.

Imagine demanding the same from tech platforms that we demand from public health agencies: open, public disclosure of decision-making; co-governance with victimized communities (GLAAD); independent audits of harm; and real crisis support (The Trevor Project). These are not luxuries, but the bare minimum if we take collective safety seriously.

A healthy online space requires active management, inclusion, and attention to injustice before it hardens into irreversible damage. Online attacks on trans people are a public health concern. Respect and justice aren’t just ideas; they are the foundation of healthy communities. Until platforms recognize that their policies shape the well-being of vulnerable communities, we will continue to face tragedies like Pranshu’s.

If we want thriving communities, we must start treating our online world as a space worth safeguarding—just as urgently as any street, school, or hospital.

Rebecca Sanaeikia, M.A., is a philosophy Ph.D. student at the University of Rochester.
