INFO Blog 1: Misinformation and Moderation

Reflecting on how our online spaces shape what we believe, who we trust, and why moderating misinformation is never as simple as it sounds.

Case Study: Censorship of Misinformation and Freedom of Speech on Social Media

Every time I open YouTube, or even Instagram Reels, there’s some new “expert” trying to convince me of something. One week it’s a conspiracy about 15-minute cities, the next it’s wellness influencers telling people to stop wearing sunscreen. I used to think we were dealing with a misinformation crisis. After reading the case study, I’m starting to think we’re actually dealing with a trust crisis.

The whole point of the article is basically this: misinformation isn’t new, but our information environment has changed so dramatically, and our trust in institutions has fallen so sharply, that false claims spread differently now, and attempts to control them often make things worse.

This resonated with me way more than I expected.

What Misinformation Looks Like Today

The examples the article gave were vaccine misinformation and election conspiracies. Those feel familiar at this point. The one that hit me, though, was something I saw online recently: the claim that “15-minute cities” are government control centers. I remember scrolling through Instagram Reels and seeing these flashy videos with red warning emojis and dramatic music. And it wasn’t coming from political accounts; it was lifestyle creators, travel bloggers, fitness influencers.

It reminded me why misinformation feels worse today: not necessarily because the claims are more extreme, but because they spread fast through people we don’t expect to be “political.” A dramatic Reel hits harder than an old-school Facebook rant ever did. So yes, I’d say misinformation feels worse today, not because humans changed, but because the architecture of the internet amplifies emotional content.

Are Our Information Institutions Still Working? (Kind of… Not Really.)

The article talked a lot about the “high modernist” era, when people trusted universities, journalists, and regulatory agencies. I can see why it worked: life was simpler when we all consumed information from the same handful of sources.

Today, it feels like almost every institution is struggling with credibility.

From my own experience:

  • People trust TikTok health advice more than CDC recommendations.
  • “Academic research” gets dismissed because universities are viewed as biased.
  • Even journalism is treated like fan culture: people pick outlets that confirm their worldview.

So yes, institutions still exist, and many of them do important work. But the relationship between institutions and the public feels fractured. If people don’t trust the messenger, it doesn’t matter how reliable the message is.

Would Perfect Censorship Ever Be Justified?

This one made me think harder than I expected. Part of me wants to say yes: if a system could perfectly identify harmful misinformation with zero bias and instantly remove it, who wouldn’t want that? Less panic, less harm, fewer people getting scammed or endangered. But the reality is that censorship never lands in a vacuum. People react to it. During COVID, whenever Instagram put a small warning label under someone’s story, their followers immediately took it as a sign that the person was “speaking the truth the government doesn’t want you to hear.” Censorship actually boosted their credibility inside their own bubble.

So even if censorship worked flawlessly… in our current environment, it probably wouldn’t achieve the desired effect. It would just further inflame distrust.

Has Technology Made Information Better or Worse? Honestly… Both.

This is the part where I realized I’m just as shaped by algorithms as everyone else.

Ways tech has improved information flow:

  • Wikipedia is one of the greatest human inventions.
  • Long-form explainers make complicated topics accessible.
  • Marginalized voices finally have public platforms.

Ways it has worsened things:

  • Algorithmic echo chambers isolate people.
  • The incentive structure rewards drama, outrage, and fear.
  • False information spreads faster because it’s more entertaining.

Different demographics feel this differently too. Older people struggle with platform literacy, younger people get stuck in anxiety loops crafted by algorithms, and teens trust influencers more than institutions. So… is it net-positive or net-negative? Personally, I think tech makes learning easier but truth harder.

A New Question I’d Ask Readers

How do we rebuild trust online without simply returning to an era where a few powerful institutions controlled all information? I chose this because the case study made it clear that censorship isn’t the fix; the actual issue is that people don’t trust the institutions meant to keep us informed. But I don’t want to go back to a world where only a few broadcasters, newspapers, or government agencies get to define truth. So what’s the alternative? That’s what I’m still trying to figure out.

Reflection

This reading honestly made me confront my own habits. I always assumed I was “good” at spotting misinformation, but now I’m starting to realize how much of my worldview depends on who I choose to trust. And I also realized that I’m quick to ignore information that doesn’t fit what I already believe, even if I’d never call that misinformation. If anything, this exercise reminded me that misinformation isn’t purely a tech problem. It’s a social problem. A trust problem. A human problem.

And unlike a false TikTok video, you can’t fix that with a “report” button.