Blog Post 1: What AI Thinks It Knows About You
Exploring what AI thinks it knows about us, and how ethics helps us make sense of it.
News Article: “What AI Thinks It Knows About You” by Jonathan L. Zittrain
Why I Chose This Article
I picked Jonathan L. Zittrain’s “What AI Thinks It Knows About You” because I already use AI tools in school and daily life. I wanted to look more closely at the assumptions these systems make about me, especially since I don’t always think about how those “invisible guesses” can affect my decisions. The article felt relevant because it shows that AI is not just a neutral tool; it’s shaping behavior, relationships, and even trust.
Main Ethical Concerns
Zittrain raises some big concerns:
- Bias and Stereotypes: LLMs pick up on patterns that can reinforce gender, class, and racial assumptions.
- Manipulation of Users: AI suggestions can subtly push people toward certain choices (like buying luxury goods or spending more money).
- Privacy Risks: Remembering past conversations helps personalization, but it also raises risks around how that data is stored and used.
- Trust and Dependency: People often accept AI’s output as if it’s always correct, which makes them vulnerable to mistakes or manipulation.
- Protection of Sensitive Conversations: Just like doctor-patient or lawyer-client relationships, some AI interactions may need strict confidentiality.
For Whom? (Stakeholders)
- Everyday Users: May be nudged into decisions that don’t reflect their real needs or values.
- AI Companies (OpenAI, Google, Anthropic, etc.): Profit from personalization, but risk losing public trust if stereotypes or harms spread.
- Researchers: Work to expose bias and improve transparency, but worry about misuse or oversimplified “fixes.”
- Governments and Regulators: Face pressure to create rules quickly but carefully.
- Vulnerable Groups (children, marginalized communities): More likely to be misrepresented or harmed by biased outputs.
Ethical Frameworks
- Virtue Ethics: A virtuous company would design AI to be honest and fair, not manipulative. Exploiting stereotypes for profit would be “wrong.”
- Care Ethics: The “right” action is protecting vulnerable groups and promoting empathy. Ignoring how bias hurts marginalized users is “wrong.”
- Utilitarianism: If AI genuinely improves access to knowledge and minimizes harm, that’s the “right” outcome. But when harms (like stereotyping or privacy loss) outweigh benefits, it’s “wrong.”
I personally think Care Ethics and Virtue Ethics are the most important here. AI should be built with empathy and integrity, not just efficiency or profit.
Reflection
This exercise pushed me to think about AI less as a cool technology and more as something deeply tied to values and ethics. What stood out most is that AI doesn’t just mirror the world; it can change it by nudging people’s choices. That means ethical responsibility isn’t optional. I came away realizing that the future of AI isn’t just about how smart it gets, but about how trustworthy and humane it remains.