BIAS Blog 1: AI’s Regimes of Representation

Questioning how AI reduces human complexity into data, and how ethics helps us confront what gets lost in that process.

Case Study: AI’s Regimes of Representation: A Community-Centered Study of Text-to-Image Models in South Asia

This case study examines how text-to-image AI models represent South Asian cultures and highlights the biases, stereotypes, and failures that emerge when global AI systems are built on Western-centric datasets. Through community-centered evaluations with participants from Pakistan, India, and Bangladesh, the study reveals why cultural representation in AI requires local context, lived experience, and ongoing ethical scrutiny.

Right to Fair Representation: Reflections on AI’s Regimes of Representation

When I first read AI’s Regimes of Representation, I didn’t expect it to make me question my own relationship with technology so deeply. The study explored how text-to-image models like DALL·E or Stable Diffusion portray people in South Asia and, honestly, what happens when a machine becomes the storyteller for cultures it doesn’t fully understand.

For me, cultural representation means being seen in a way that feels true, not flattened into clichés or missing entirely. I think it’s about dignity, context, and accuracy, but also about imagination: whether AI can capture the everyday humanity behind a culture rather than just its stereotypes. And because I grew up around heavily simplified or exoticized portrayals of certain cultures, I realize that those childhood images still shape how I evaluate AI-generated ones today. When I look at AI outputs, I can’t help centering the parts of my identity that matter to me: my background, my language, the visuals I grew up with, the aesthetics that feel “normal.” If AI gets those wrong, it doesn’t just feel inaccurate; it feels personal.

What Stuck With Me…

What stood out most was the concept of “failure modes.”

The researchers identified three key ones:

  • Underrepresentation: Entire identities or features simply didn’t appear.
  • Amplifying Western defaults: Prompts like “professional woman” defaulted to Western beauty and clothing standards.
  • Cultural tropes: “South Asia” became simplified into temples, saris, poverty, or Bollywood glam.

That last one really hit me. It mirrors what I saw growing up: certain regions depicted with a kind of visual shorthand. No nuance, no daily life, no subtlety. Seeing AI repeat those same shortcuts made me realize that “bias” isn’t just a technical issue; it’s inherited from media, history, and power.

The Value of Listening to Small Stories

One thing I really appreciated in the study was its focus on small-scale, qualitative evaluation. I think these kinds of community-centered, conversation-based methods matter because numbers alone can’t capture what “feels wrong” about a representation. A benchmark might report that a model is “80% accurate,” but that number can’t tell you whether the remaining 20% feels offensive, dehumanizing, or culturally tone-deaf.

Large quantitative benchmarks are great for speed and scale, but they miss emotional truth. They can’t tell you if an image makes someone feel erased. That’s why I think qualitative evaluations (stories, conversations, lived experiences) are essential if we actually want ethical generative AI. They reveal the things numbers hide.
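To make that concrete, here’s a tiny illustration in Python. Every number and group label below is invented purely for the sake of the example; the point is only that a single aggregate score can hide where failures cluster.

```python
# Hypothetical evaluation scores: 1 = a participant rated the image
# acceptable, 0 = they did not. All numbers here are invented purely
# to illustrate how an aggregate metric can hide clustered failures.
results = {
    "prompts about Western subjects": [1] * 75 + [0] * 5,
    "prompts about South Asian subjects": [1] * 5 + [0] * 15,
}

all_scores = [s for scores in results.values() for s in scores]
print(f"Aggregate: {sum(all_scores) / len(all_scores):.0%}")  # 80%

# The per-group breakdown shows where the 20% actually lives.
for group, scores in results.items():
    print(f"  {group}: {sum(scores) / len(scores):.0%}")  # 94% vs. 25%
```

And even this breakdown only locates the failures; it can’t say whether any one of them reads as a harmless glitch or a dehumanizing trope. That’s the gap the study’s conversations fill.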

Can AI Ever Be Globally Inclusive?

This study made me think a lot about whether global inclusivity is even possible. I think AI can become more inclusive, but only if we stop pretending there’s a single global “norm” it can represent. Some participants in the study felt hopeful, imagining AI that empowers communities to tell their own stories. Others doubted that inclusion could be achieved within current power structures, where Western data, Western aesthetics, and Western companies dominate the development pipeline.

I think the truth sits somewhere in the tension: AI can be more inclusive, but not without confronting who holds the power to define “normal.”

So What Should Developers Do?

Reading participants’ reactions made me wonder what practical steps developers could take. I think there are real mitigations worth exploring:

  • Collaborating with local artists and communities, not just using their images as data points
  • Building region-specific datasets, curated with cultural context
  • Auditing stereotypes and defaults before model release (see the sketch below)
  • Creating tools for users to correct or flag harmful images
  • Supporting local model development, so the cultural imagination isn’t centralized in Silicon Valley

Developers shouldn’t just fix mistakes; they should share creative power.
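To make the auditing bullet concrete, here’s a minimal sketch of what a pre-release audit pass might look like, assuming Hugging Face’s diffusers library. The model ID, the prompt list, and the sample count are all illustrative assumptions on my part, not anything prescribed by the study.

```python
# A minimal sketch of a pre-release stereotype audit, assuming the
# Hugging Face `diffusers` library. Model ID, prompts, and sample
# counts are illustrative assumptions, not a vetted protocol.
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model ID
    torch_dtype=torch.float16,
).to("cuda")

# Underspecified prompts are where cultural defaults surface: the
# model has to fill in clothing, setting, and appearance on its own.
audit_prompts = [
    "a professional woman",
    "a wedding in South Asia",
    "a street scene in Lahore",
    "a family dinner in Dhaka",
]

out_dir = Path("audit_outputs")
out_dir.mkdir(exist_ok=True)

# Generate several samples per prompt so reviewers see the model's
# typical defaults, not one cherry-picked image.
for prompt in audit_prompts:
    for i in range(4):
        image = pipe(prompt).images[0]
        image.save(out_dir / f"{prompt.replace(' ', '_')}_{i}.png")
```

The important design choice is what the script doesn’t do: it never scores the images. The saved outputs go to reviewers from the communities being depicted, and their judgment, not an automated metric, decides whether the defaults and tropes they find should block release.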

Representation Isn’t Static: Can AI Keep Up?

One of the most interesting points in the article was that representation isn’t fixed. It changes with time, politics, fashion, identity, and lived experience. That made me wonder whether AI can ever “encode” something so fluid.

I think we can try: regularly updating datasets, involving communities in continuous model iteration, and allowing models to be fine-tuned locally. But I don’t think encoding will ever fully capture the dynamism of real cultures. And maybe that’s okay. Maybe the goal isn’t to perfect representation but to remain accountable to the people being represented. Encoding should be flexible, not final.

Learning From History

The article also reminded me that this isn’t the first time a technology claimed to “represent” people. Photography shaped who was allowed to be visible. Colonial-era media shaped entire global imaginations. Hollywood shaped what the world thought America looked like. Those histories teach us that representation is always political and always tied to power.

So if we want responsible AI, we have to learn from those legacies. We need to notice who gets centered, who gets erased, and who gets simplified, and actively push against repeating those patterns.

The Question I’d Ask

If I could add one question to the case study, it would be:

“What would representation look like if local artists, developers, and storytellers from these underrepresented regions were given the same resources to train their own models?”

I think this question matters because it shifts the conversation from fixing representation to creating representation. Instead of trying to patch biases, why not support new cultural imaginaries entirely?

Some Final Thoughts

This article changed how I think about technology ethics. Before, I saw bias as a technical bug, something engineers needed to fix. Now, I see it as a narrative question: Whose stories are being told, and who gets to tell them? Representation isn’t just about what AI shows us; it’s about whose imaginations are included in the process. And as someone who uses these tools daily, I find myself asking:

Who’s missing from the picture I just created, and why?