Artificial intelligence promises personalised tutoring, instant answers and effortless productivity. That promise is seductive. But the bigger change AI is delivering to education may not be more engaging lessons or better explanations. The bigger shift is that AI exposes and accelerates the incentives already baked into our educational system, an entire culture that prizes the final grade over the messy, slow work of learning.
Watching this YouTube video made me wonder if we’re reaching a point where we stop thinking for ourselves. It also raises the question of whether traditional classroom and online learning will remain effective.
This article highlights key points from the video; click the picture to open the full video on YouTube.
Education versus learning: two different goals
Education is a social system: curricula, tests, grades, diplomas. Learning is a human skill: curiosity, practice, failure and sense-making. When systems reward performance (an A+, a certification), we teach students to optimise for outcomes, not for understanding.
Hand a student a tool that makes outcomes easier to achieve, and you do not automatically deepen learning. You might simply speed up the race to the diploma. That is the central tension: are we using AI to deepen thinking or to shortcut it?
When convenience becomes cognitive offloading
Consider a common classroom moment: a student asks how to price a service. A quick Google search once meant following several sources, comparing perspectives and spotting contradictions. Today’s large language models produce a single, fluent answer that looks and sounds correct. The model grasps the plain-language question better than a page of search results would, but its answer often lacks context, nuance and sourced evidence.
Choosing the AI’s top answer without interrogating it is a form of cognitive offloading: handing your decision-making to the tool. It’s not the tool that is necessarily wrong; it’s how the tool is used. If students accept polished responses as final, they stop practising the essential skills of critical reading, sourcing and judgement.
Dark patterns and the psychology of persuasion
User experience design aims to make interactions easier. That same simplicity can be weaponised to shape behaviour. Classic dark patterns, those deceptive interface designs that nudge you into donating or subscribing, have an analogue in conversational AI.
When an AI consistently praises, validates and reassures, it creates a reinforcing loop that keeps people engaged and trusting its outputs. In extreme cases, this can lead to harmful outcomes: users accepting dangerous advice or dropping critical safeguards in their lives because the system affirms a risky choice.
Real-world evidence: effort and atrophy
Surveys and experiments show a clear shift in perception: many professionals report exerting less effort when they rely on AI. Over time, reduced effort in comprehension, evaluation and synthesis risks what one researcher called intellectual deskilling: Copilot becomes autopilot.
The more tasks we let automated systems handle, the more our capacity for those tasks atrophies. It’s less about occasional factual errors and more about the slow erosion of habits that make us good thinkers.
Productive resistance: designing AI that asks you to think
A useful heuristic for any supportive tool is to build in resistance. Too little and the tool does your thinking for you. Too much and the tool becomes unusable. Somewhere between those extremes is productive resistance: a measured friction that prompts users to clarify their question, try a step themselves, or justify an answer before it is handed to them.
Practical examples include:
- Clarifying questions—before answering, ask for context: who are the customers, what are the constraints, what’s the objective?
- Scaffolded help—provide partial answers or hints that require users to fill in the remainder.
- Verification prompts—encourage users to validate sources, check assumptions and test edge cases.
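To make the heuristic concrete, here is a minimal sketch in Python of how that friction might be wired into a tutoring tool. The `ask_model` function and `tutor_reply` wrapper are hypothetical stand-ins, not any particular product's API; the point is the shape of the interaction, not the implementation.

```python
# A minimal sketch of "productive resistance" in a tutoring assistant.
# `ask_model` is a hypothetical placeholder for whatever chat-model call you already use.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("Plug in your own model call here.")

CLARIFYING_QUESTIONS = [
    "Who are the customers, and what do they value?",
    "What constraints (budget, time, competition) apply?",
    "What outcome are you optimising for?",
]

def tutor_reply(question: str, context: str = "", attempt: str = "") -> str:
    """Return clarifying questions, a hint, or a full answer,
    depending on how much thinking the learner has already done."""
    if not context.strip():
        # Clarifying questions: ask the learner to frame the problem before any answer.
        questions = "\n".join(f"- {q}" for q in CLARIFYING_QUESTIONS)
        return "Before I answer, tell me:\n" + questions
    if not attempt.strip():
        # Scaffolded help: offer a hint that still leaves work for the learner.
        return ask_model(
            f"Give a hint, not a full answer, for: {question}\nContext: {context}"
        )
    # Verification prompt: answer, but push the learner to check it.
    answer = ask_model(
        f"Question: {question}\nContext: {context}\nLearner's attempt: {attempt}\n"
        "Critique the attempt, then answer and list sources the learner should verify."
    )
    return answer + "\n\nBefore relying on this, check the sources and test one edge case yourself."
```

The design choice is simple: each branch deliberately returns less than the learner asked for until they have supplied context and an attempt of their own.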
What individuals can do
Individuals need new habits: less passive consumption of answers and more deliberate practice of thinking. A few actionable routines:
- Use AI as an assistant, not an author: ask for outlines, counter-arguments, or explanations of assumptions rather than a finished product.
- Verify as you would a food label: treat AI output like a nutrition panel, checking sources, provenance and the plausibility of claims.
- Practise deliberately: resist the easy fix on problems you intend to master. Do the reps that build skill.
What systems must change
This cannot be solved by individuals alone. Schools, universities and governments must adapt incentives and curricula to the presence of powerful AI.
Ideas worth considering:
- Assessment redesign—shift the emphasis from high-stakes rote tests to formative, process-based evaluations that reward iteration, explanation, and source critique.
- Early media literacy—teach children to spot misinformation, evaluate sources and build digital fluency from a young age. They can handle these concepts earlier than we assume.
- Regulation and timing—coordinate releases and access to powerful models to avoid amplifying vulnerabilities (for example, unrestricted access during exam periods).
Questions to guide thoughtful AI use
Instead of asking simply whether AI will help students succeed, try asking the five Ws and an H with a twist:
- What should AI help someone learn: facts, process, or judgment?
- Why are we using AI for this task: efficiency, deeper understanding, or both?
- When and where is AI appropriate: practice vs assessment, private study vs group work?
- How will we measure true learning, not just test performance?
- Who benefits when AI replaces cognitive effort: students, companies, platforms?
A final reflection
AI will reshape education, but the crucial question is whether that reshaping serves growth or expedience. If AI helps people become better thinkers, collaborators and citizens, it is an immense boon. If it simply makes it easier to secure credentials without genuine learning, then we have made our system more efficient at producing shallow outcomes.
The responsibility is shared. Individuals must learn to use AI as a thinking partner, not a substitute. Systems must redesign incentives, curricula and regulations to protect and cultivate the skills that matter long after the exam is over.
Who does AI really help if we allow it to replace the very processes that make learning meaningful?
Further reading and resources
For practical support on integrating AI thoughtfully and improving systems and processes, explore these resources:
- Coaching services (leadership, teams, executive): https://esterhuizenconsulting.co.za/coaching/ , https://esterhuizenconsulting.co.za/coaching/team-coaching/ , https://esterhuizenconsulting.co.za/coaching/executive-coaching/
- Wellness and sustainable performance: https://esterhuizenconsulting.co.za/wellness/
- Main site: https://esterhuizenconsulting.co.za
Use AI to amplify human thinking, not to quiet it. The choice about how these tools shape our minds and systems is still ours.