
The Great Unlearning

Article
Monday, August 18, 2025
Photo by iStock/Yurii Karvatskyi
The most dangerous thing about AI isn’t that it might replace human thinking. It’s that the more we use it, the less we might think for ourselves.
  • The greatest risk of generative AI is that it will reward humans for mechanical thinking—exactly what machines do best—rather than cultivate the reflective, ethical, and creative thinking only humans can offer.
  • Business schools must teach students to engage critically with AI and lead through collaboration, not competition, with machines.
  • Reimagining education for the AI age means questioning old frameworks, embracing uncertainty, and preparing students to think like humans, not like better machines.

 
Administrators and faculty at innovative business schools are devoted to cultivating higher-level human skills: ethical insight, communication and collaboration, systems thinking and critical engagement, and a curiosity that stays alive in the face of complexity.

But walk into some MBA classrooms and you’ll see a curious phenomenon. Students are learning how to optimize, systematize, and automate their thinking in precisely the ways machines now do effortlessly. Speed is rewarded over depth and reflective inquiry, certainty over curiosity. Pattern recognition is privileged over genuine insight. Students are celebrated for quickly categorizing case studies (“This is clearly a Porter’s Five Forces situation”) and applying standard frameworks with mechanical precision.

This is exactly what large language models (LLMs) do—faster, more consistently, and with nearly instant access to thousands more frameworks than any human could memorize. To teach an algorithmic style of thinking is to prepare students for a competition they’ve already lost.

Nothing encapsulates this more painfully than the challenge posed by generative AI (GenAI) to established models of instruction and assessment. Within a few years, the technology has developed the ability to simulate knowledge and understanding of almost any topic while possessing neither. Students can use freely available tools to complete conventional assignments and assessments with ease. In the process, they do not gain the very skills required to use AI adeptly: critical discernment, domain expertise, research and verification competencies, and analytical reasoning.

I call this the algorithmic trap.

While assessment is in crisis, however, the greatest risks are bound up with the unique nature of learning as a human activity—and what this implies for the future of human skills. As the author Nicholas Carr puts it, “to automate learning is to subvert learning.” By definition, learning is an activity that cannot be outsourced. Thus, preparing today’s students for tomorrow’s world means teaching them where and how automation can empower human thought and action.

What Machines Cannot Do

As I argue in my new white paper for Sage, none of the concerns mentioned above should be a cause for despair. Seen clearly, the capacities of LLMs and human minds are complementary: radically different forms of intelligence that are increasingly entwined across the worlds of work and learning alike.

As the philosopher and psychologist Alison Gopnik argues, systems like LLMs are at root cultural and social technologies: the latest in a long line of innovations that include “writing, print, libraries, the Internet, and even language itself.” In their article in Perspectives on Psychological Science, Eunice Yiu, Eliza Kosoy, and Gopnik stress that AI is a way of making information gathered or created by other human beings useful. Doing so wisely ultimately relies upon human acts of reflection, collaboration, and direction.


Current GenAIs, for example, can generate brilliantly structured business analyses, craft compelling-seeming strategies, and produce presentation-ready frameworks at superhuman speed. But ask them to question whether the problem they’re solving is worth solving, and they stumble. Machines excel at answering questions; they struggle profoundly with asking better ones.

This distinction isn’t technical—it’s philosophical. When ChatGPT analyzes market entry opportunities, it draws on patterns from thousands of successful strategies. But it cannot ask: Should we enter this market at all? Does this expansion serve human flourishing? What are we optimizing for, and why?

These aren’t limitations that will disappear with the next model update. They’re fundamental features that govern how current AI systems work—that limit AI to optimizing responses within given parameters without questioning those parameters in meaningful ways.

The Foundations of Human Difference

This creates both challenge and opportunity. In my white paper, I outline the DUAL framework—four foundational capabilities that business schools must cultivate as AI advances.

1. Demystifying AI through critical engagement. Students need to understand AI not as magic, but as an immensely powerful yet limited tool. LLMs are dazzling yet fragile: They are prone to hallucinating convincing falsehoods, enacting the biases and limitations of training data, and engendering wishful forms of anthropomorphism in their users. Future leaders must know when AI’s assistance can accelerate routine tasks and make new kinds of work and interaction possible—and when fidelity to real-world experiences, evidence, and interactions demands the human touch.

2. Upskilling critical thinking as a form of cognitive quality control. In an era where AI can make any argument sound compelling, critical thinking becomes humanity’s essential firewall against error. This means that humans must question AI’s outputs systematically, interrogate its initial assumptions by exploring problems from multiple perspectives, and identify the structural limitations of what AI cannot “see” or know.

3. Augmenting human abilities rather than replacing them. The future will not be about humans versus task-driven, automated machines, but about humans working ever more closely with intelligent, adaptive machines. Like a well-designed airplane cockpit, a platform that supports effective AI integration requires deep operator training, thoughtful interface design, and balanced control. Used to best advantage, AI weaves together data and processes while humans retain strategic control, so that machine feedback elevates rather than undermines human insights.

4. Leading through collaborative intelligence. Tomorrow’s leaders must orchestrate what researchers H. James Wilson and Paul R. Daugherty, in a 2024 article in Harvard Business Review, call collaborative intelligence. These leaders will need to combine human and machine insights while maintaining ethical awareness. This means creating environments where diverse human perspectives intersect productively with AI capabilities, and the prize is a system that balances and maximizes the contributions of each.

The Cognitive Revolution We Deserve

These capabilities point toward a fundamental reorientation of business education. Traditional approaches teach students to apply established frameworks to novel situations. But when a Porter’s Five Forces analysis suggests entering a market, the crucial question isn’t whether the analysis is correct—it’s whether competitive advantage is the right lens for this decision at all. Students need to learn frame-breaking, not just framework application.

We’ve trained generations of students to optimize within existing systems—but AI will soon do that far better than humans ever could. What AI cannot do is question whether we’re optimizing for the right things in the first place. Should companies maximize shareholder value or stakeholder welfare? How should we define concepts such as efficiency and fairness? These aren’t technical questions with algorithmic answers—they require exactly the contextual, ethical reasoning that makes humans irreplaceable.

Most fundamentally, we must abandon our obsession with the myth of the lone genius/CEO making brilliant decisions. Tomorrow’s leaders will succeed by orchestrating intelligence—both human and artificial—at scale. They’ll convene diverse perspectives and synthesize conflicting insights. They’ll navigate the complex, generative dynamics that emerge when smart people (armed with ever-smarter algorithms) disagree about what matters most.

What This Actually Looks Like

Reimagining business education for an algorithmic age goes far beyond sprinkling AI tools across existing courses. It entails fundamentally reconsidering what business thinking means.

From this perspective, case studies should spend less time analyzing what happened when companies addressed specific problems and more time questioning whether companies addressed meaningful problems in the first place. Assessment should reward students for changing their minds when presented with new evidence rather than defending initial positions with rhetorical skill.

Group projects should encourage genuine cognitive diversity, bringing together students with fundamentally different worldviews. These collaborations should teach them to think together in ways that amplify rather than average their differences.


Observations like this have, of course, become increasingly commonplace, as has the realization that the hardest task isn’t identifying what needs to change. It’s overcoming the inertia and constraints of systems that perpetuate the teaching of 20th-century thinking skills in a 21st-century world: accreditation requirements, industry expectations, faculty trained in approaches we need to transcend.

Yet these constraints are also opportunities. Schools that move first—that genuinely prepare students for a world where human intelligence matters precisely because it’s different from artificial intelligence—will attract the best faculty, the most innovative students, and the most forward-thinking employers.

Moving forward is necessarily an iterative process, requiring ongoing effort, adaptation, and collaboration. But it begins with a mental shift toward treating AI as a component of teaching and learning systems rather than as a replacement or solution for them—and toward embracing a careful, hype-free exploration of its capabilities, limitations, and societal context.

Beyond Human Versus Machine

Ultimately, avoiding the algorithmic trap is about discovering what human thinking could become when liberated from the mechanical tasks that we’ve mistaken for intelligence. It’s about no longer using mechanical demonstrations of recall and pattern-matching as proxies for learning and understanding.

For decades, we’ve constrained human cognitive potential by forcing it into algorithmic shapes. We’ve taught students to suppress curiosity, ignore intuition, and distrust their capacity for creative synthesis.

The rise of AI demands a different path. It requires us to unleash the full spectrum of human intelligence: to teach students to think slowly as well as quickly, to value questions as much as answers, and to see uncertainty not as a problem to be solved but as a space where genuine insight becomes possible.

The business leaders who thrive in an AI-saturated future won’t think like better machines. They’ll think like the kind of humans that machines will never become: curious, creative, and ethically engaged.

Let the great unlearning begin.

Authors
Tom Chatfield
Tech Philosopher and Author