Blog Post 2: Is ChatGPT Making Us “Dumber” or Just Lazy Thinkers?
Published on:
The article warns that over-relying on AI can actually weaken our thinking skills. To avoid this kind of “mental atrophy,” the authors suggest four simple habits: draft first (then prompt), use AI as a tutor, take timeouts and use checklists, and schedule “AI-free” periods.
News Article: “How to Make Sure ChatGPT Doesn’t Make You Dumber” by Paul Rust & Nina Vasan, The Wall Street Journal (Sept 3, 2025)
When AI Starts Thinking for Us
I read an article recently that asked a strange but uncomfortable question: What if ChatGPT is quietly making us worse thinkers? It’s from the Wall Street Journal, and it argues that when we outsource too much of our mental effort to AI, our thinking skills might actually start to fade. That idea hit me harder than I expected, maybe because I see pieces of it in myself.
The authors describe this slow mental weakening as “cognitive atrophy.” They suggest four habits to fight it: write a first draft before prompting, use AI like a tutor rather than a shortcut, take intentional breaks, and schedule “AI-free” time. At first, that all sounds sensible. Humans have always looked for easier ways to get things done, but with AI we’re not just cutting corners; we’re letting the tool make choices our brains used to handle.
The Case They Make, and Where It Slips
According to the article, early studies show that heavy AI use can lead to weaker recall, lower independent performance, and less critical thinking. There’s also a warning about automation bias: that subtle habit of trusting whatever the AI says because it sounds confident. In their view, structured limits on AI are what keep our minds strong.
But here’s where I start to hesitate. The argument relies on small, short-term studies and comparisons that don’t really fit creative or academic work; it’s a bit of a hasty generalization. Comparing writing to a pilot over-relying on autopilot doesn’t hold up, because creative reasoning isn’t mechanical. Correlation doesn’t always mean causation, and the evidence feels more like a reminder to use AI mindfully than proof that it’s making us “dumber.”
Maybe Offloading Is Just Evolving?
If we zoom out, history actually shows that smart offloading helps us think better. Calculators didn’t ruin math; they freed people to focus on deeper concepts. Coding tools, search engines, even note-taking apps were all once accused of “making us lazy.” The truth is, tools like ChatGPT can strengthen learning if we use them intentionally.
The real issue isn’t the AI itself; it’s the habits around it. When we copy answers blindly or skip the reflection step, the problem isn’t that AI replaces our brains; it’s that we stop engaging them. Used well, AI can be a partner for deeper thinking, not a shortcut around it.
Beyond Habits
Even if individuals learn to use AI responsibly, there’s a bigger system at play. Schools and workplaces still reward speed and flawless output more than depth or originality. Productivity metrics push people toward efficiency, not reflection. So even if we want to think critically, we’re trained to pick the fastest option, and AI just happens to be the fastest.
That’s why the real solution isn’t just “take more AI breaks.” It’s changing how we define good work. Critical thinking takes time, and our systems don’t always give us that.
Finding My Own Balance
Here’s what’s been helping me:
- I make myself think or outline for at least 10–20 minutes before using ChatGPT.
- I use “tutor mode,” asking it to guide me instead of solve things for me.
- I re-check what I write, asking: What’s the claim? What’s the evidence? Am I skipping my own reasoning?
- I schedule AI-free blocks to work through ideas without the safety net.
These aren’t rules so much as reminders: small ways to keep my brain active instead of passive.
Where This Left Me Thinking
There’s also an ethical layer to all of this that’s easy to overlook. If we let AI handle most of our reasoning, we start giving up a piece of our intellectual autonomy: the ability to form, test, and trust our own thoughts. Ethically, that matters because thinking isn’t just a skill; it’s a responsibility. When we let AI do our thinking, we risk losing ownership of our ideas and accountability for their consequences. Using AI responsibly isn’t just about protecting our grades or creativity; it’s about protecting our role in the moral loop. The moment we stop questioning how an answer was formed, we stop being ethical participants in our own decision-making.
Writing this post made me realize how quietly dependency can creep in. The scary part isn’t that AI will take over our thinking overnight; it’s that we might stop noticing when we’ve handed too much of it away. I don’t think the goal is to squeeze every ounce of efficiency out of ChatGPT. For me, it’s about using it with intention: as a collaborator, not a crutch.
In the end, I want to keep what makes thinking feel human: curiosity, patience, and the satisfaction of figuring something out on my own.
