The GPS Test: AI, cognitive atrophy, and the quiet surrender of critical thinking
I can't read a map anymore.
Not "I prefer not to" – I'm pretty sure I actually can't. All my orienteering training in Boy Scouts? Gone. Fifteen years of GPS dependency has left me a husk of the outdoor adventurer and road warrior I once was. And while this sounds very "haha, hilarious" – my journey to this dependency on GPS for navigating the natural world has been gnawing at me more and more.
Not because I miss the good old days of paper maps strewn across the dashboard. I don't. And not because I'm nostalgic for the MapQuest era – cursing my uncooperative printer for its inexplicably low magenta ink levels.
GPS solved the immediate problem – getting from Point A to Point B – but we never thought to ask what kinds of problem-solving, spatial awareness, or all-around worldliness we might be trading away.
And now we're about to make the exact same trade with AI. Except this time, the stakes are considerably higher than missing a freeway exit.
Welcome to the Deskilling Era
We're all worried about whether AI will "take our jobs." It's a good question.
But that's downstream from the more immediate danger. The better question – the one that matters for your career in the next five years, assuming any of us have careers after that – is this: Which of your skills are you willing to lose?
That's why I've been asking people to take what I call the "GPS Test."
Consider this: Would you feel comfortable driving out of state to see friends and family without GPS? I posed this to a group of young leaders and students recently. Most of the room said no.
This isn't me advocating we go back to worse solutions. Rather, as users of AI, it's about being intentional about which skills we're willing to lose.
Here's what I've learned helping organizations navigate AI adoption and having studied the history of disruptive automation: you're going to trade something away. The only question is whether what you gain outweighs what you lose – and whether you're making that choice consciously or letting it happen to you.
Replacers vs. Enhancers
In that work, I'm watching two types of AI users emerge.
There are people using AI to avoid difficult thinking. They offload their core competencies, rely on AI for fundamental decision-making, and risk becoming helpless when the tools aren't available. They're the professional equivalent of someone who can't find their way home without turn-by-turn directions – left with all the anxiety of getting lost, and none of the ability to find their own way.
Call them the Replacers. They're deskilling themselves in real-time.
Then there are people using AI strategically to eliminate routine work so they can focus energy on complex, high-value thinking. They maintain their core competencies while augmenting both their capabilities and their capacity – doing more, applying their skills faster, expanding what's possible. They work effectively with or without AI – but certainly faster with it.
Call them the Enhancers. They're building antifragile careers – careers that don't just withstand the coming disruption, but get stronger because of it.
The difference isn't about being "pro-AI" or "anti-AI." It's about what you're choosing to protect – and what you're willing to trade away.
The Corrosive Effects of Over-reliance on AI
Recent research reveals what happens when people become Replacers. Three patterns keep showing up:
Your diagnostic thinking atrophies. A recent study in The Lancet found that endoscopists' diagnostic skills actually declined (i.e., they "deskilled") after using AI. In business contexts, I'm seeing analysts lose the ability to design their own studies. Executives lose source-evaluation and synthesis capabilities. In my classroom, I see students struggle to write outlines or five-paragraph essays sans AI. AI becomes a crutch, and the cognitive muscle atrophies.
Your creative problem-solving declines. When you consistently outsource idea generation and iterative problem solving to AI, you lose what learning scientists call "productive struggle" – the cognitive friction that builds creative muscle. Original thinking requires practice. Solving complex problems requires pattern recognition and the earned confidence of having navigated your way out of tricky situations before (and knowing you can do it again). Skip the practice, lose the capability – and confidence.
Your critical evaluation gets weaker. Here's the scariest one to me as a former journalist – AI produces confident-sounding but incorrect outputs all the time. Hallucination is what GenAI does best. Without strong domain knowledge, you can't distinguish AI accuracy from AI confidence. We've already seen this pattern with media literacy – people's widespread inability to parse fact from fiction in their media diet. If that isn't a scary harbinger of what's coming with AI-generated content, you're not paying attention (see the bizarre use cases OpenAI chose to showcase for Sora 2, or the recent Department of Health and Human Services report that cited studies and papers that don't exist). Armed with AI, people become confidently wrong at scale – and a whole new generation of bullshit artists and snake oil salesmen escape Arkham Asylum with no Dark Knight to corral them.
The TRUST Framework: Being Intentional in an AI World
So how do you avoid becoming a Replacer?
I've been using a framework to help people decide when to use AI versus when to rely on their own thinking – when to invite "productive struggle" versus when to take the path of least resistance with ChatGPT, Claude, Gemini, or your magic text box of choice.
I call it the TRUST framework (cute, I know):
T - Type of Learning Required: Skill-Building vs. Task-Completion
Are you in learning mode? Minimize AI assistance to build cognitive muscle.
Or are you in production mode? Use AI to augment established capabilities.
Example: Writing your first research report vs. generating your hundredth report.
R - Risk of Skill Atrophy: Core vs. Peripheral Skills
Protect your hard-won core skills: critical thinking, domain expertise, creative problem-solving.
Automate skills on the periphery of your work: formatting, routine calculations, basic research.
GPS Test: Would losing this skill leave you helpless when the technology fails?
U - Understanding Depth Required: Surface vs. Deep Comprehension
Getting an answer? AI can help.
Understanding why and how? Human thinking required.
The danger: Using AI without understanding reduces your ability to evaluate AI outputs. You become confidently wrong.
S - Stakes of Being Wrong: Low-Stakes Experiments vs. High-Stakes Decisions
Low stakes (draft emails, brainstorming)? Experiment freely.
High stakes (medical diagnoses, strategic planning)? Lead with human judgment.
A recent MIT study found that 95% of AI pilots fail because organizations skip the hard thinking about what problem they're actually trying to solve. They start with "Hey, let's use AI" before asking "Wait, what's the actual problem?" Solve the right problem before solving the problem right.
T - Time Horizon: Short-Term Speed vs. Long-Term Capability
AI can accelerate immediate tasks, but over-reliance creates dependency.
Ask yourself: If AI disappeared tomorrow, who would you be? Could you still do the work that matters most to you? If you don't like the answer, that's a tell you're using AI wrong.
Don't let AI do your homework. What you don't practice, you lose.
What You Can Do
Here's what keeps me up at night – for my own career, my students, and my children: the job market is already shifting. Writing jobs are down roughly 30 percent. Software development roles have contracted post-ChatGPT. Meanwhile, jobs that successfully integrate AI are becoming more complex and paying more on average. These roles require genuine skill enrichment, not just prompt engineering.
The professionals who will thrive aren't those who avoid AI or those who rely on it completely. They're the ones being thoughtful about what they gain and what they trade away.
So here's the choice: Replacers pocket the efficiency gains (Microsoft research estimates 29% time savings on routine tasks) and call it a day. Enhancers reinvest those gains into skills AI can't commoditize:
Deep domain expertise that lets you know what "good" looks like – what we're increasingly calling "taste"
Strategic thinking about systems, second-order effects, and long-term implications
Ethical reasoning for ambiguous, values-driven decisions
Creative synthesis – connecting disparate ideas and facilitating breakthrough thinking
Relational intelligence – the interpersonal skills that build trust and influence
Think of your skills as an investment portfolio. Routine cognitive tasks are depreciating assets – AI is commoditizing them in real time. The capabilities above? Those are appreciating assets that become more valuable as AI handles the routine work.
The Bottom Line
How many days can you skip the cognitive gym before you're suddenly out of shape?
Your competitive advantage in an AI world isn't just what you know – it's your capacity to keep learning, thinking, growing, and connecting to other humans even when machines can do the immediate work for you.
AI companies are asking you to make a trade – even if it's not immediately apparent. Take a longer view and ensure that trade doesn't include your hard-earned skills or your intellectual identity.
Mark Micheli is a consultant and researcher focused on human-centered AI and experience innovation. He helps organizations navigate the intersection of emerging technology and human experience. Connect with him on LinkedIn or learn more at saydo.design