If a student can get a top grade using ChatGPT, what exactly are we assessing?
This isn't a hypothetical anymore. Across the world, students are submitting AI-generated essays that look brilliant on the surface - sophisticated analysis, elegant prose, compelling arguments. Teachers are awarding top marks without realising the work was created by feeding plot summaries to an AI with instructions like "write like a GCSE student who's really into Shakespeare."
The system works perfectly - except it's completely broken.
We're not just facing a cheating crisis. We're facing an educational reckoning. After decades of rewarding students for memorising, regurgitating, and reproducing information, AI has called our bluff. The game has changed, but we're still playing by the old rules. The uncomfortable truth is that most of what we've been calling ‘education’ is actually just an elaborate memory exercise. Sit still, listen carefully, remember precisely, reproduce reliably. It worked fine when information was scarce and workers needed to follow instructions. But AI can now do all of this better than any human ever could.
What AI can't do - at least not yet - is think critically about the information it processes. It can't question its own assumptions. It can't recognise when it's been fed rubbish. It can't pause and wonder if an elegant answer is actually addressing the wrong question.

These distinctly human capabilities have been shoved to the margins, treated as ‘soft skills’ or enrichment activities rather than survival skills. That was always a mistake. Now it's a crisis.
The Great Exposure
Here's what keeps me awake at night: we've built an education system that rewards compliance over cognition. Students learn to perform understanding rather than actually understand. They get brilliant at giving us back what we want to hear, rather than thinking through what they actually believe.
Let’s take AI hallucination, for example - when ChatGPT confidently tells you that Harold II survived Hastings and ruled England for another decade. It delivers this complete fiction with exactly the same confidence it uses for actual historical facts. Without critical thinking skills, students have no way to tell the difference between accurate information and convincing BS.
This isn't just an academic problem. Ofcom research shows that more than four in ten UK adults say they have seen a story on social media in the last year that looked deliberately untrue or misleading. When you combine AI's talent for generating persuasive content with our students' underdeveloped thinking skills, you've got a perfect storm brewing.
I keep coming back to what John Dewey wrote over a century ago: “We do not learn from experience... we learn from reflecting on experience.”
Today's students are drowning in experiences - digital ones, AI-generated ones, virtual ones - but they lack the tools to learn from them meaningfully. We've accidentally been training students to compete with machines at what machines do best: processing information quickly and accurately. The future belongs to something else entirely.
What Makes Humans Irreplaceable
Here's what makes humans uniquely valuable in an AI world: not our ability to find answers, but our capacity to question them. Not our speed at processing information, but our wisdom in knowing when to slow down and think harder.
Daniel Kahneman talks about System 1 thinking (fast, automatic, intuitive) and System 2 thinking (slow, deliberate, analytical). AI excels at System 1 processes. Our advantage lies in System 2 - the ability to pause, question, analyse, and evaluate.

Yet most educational tasks have historically targeted System 1: quick recall, pattern recognition, immediate response. We've been preparing students to lose to machines at their own game.
The future belongs to System 2 thinkers. People who can spot when a confident-sounding AI response contains logical fallacies. People who can trace arguments back to their assumptions and forwards to their implications. People who can, as Socrates might say, know what they don't know.
But teaching critical thinking isn't straightforward. Librarians at California State University, Chico developed the CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose) for evaluating sources. It's useful, and I still use it, but it was designed for a pre-AI world. Today's students need something more sophisticated - something designed for an age when AI can generate academic papers, news articles, and research studies that look completely legitimate but are entirely fabricated.
That's where CLEAR thinking comes in - a framework I've developed and am now refining with educational institutions across the world.
Introducing the CLEAR Framework
The CLEAR framework is my attempt to give educators a practical approach to critical thinking that's actually designed for the AI era. Each element builds on the previous one, creating a scaffold that grows more sophisticated as students develop.
C – Curiosity: Start by asking better questions
L – Logic: Learn to spot patterns, fallacies, and cause/effect relationships
E – Evidence: Distinguish between opinion and proof
A – Argumentation: Build, challenge, and defend ideas clearly
R – Reflection: Pause, review, and adjust your thinking
This isn't just another acronym to forget by half-term. It's a systematic way of developing the intellectual habits that AI can't replicate: questioning assumptions, following logical threads, weighing evidence critically, engaging in reasoned discourse, and changing minds when confronted with better arguments.
Let me walk you through each element.

C - Curiosity: The Question Revolution
The first element addresses what might be education's biggest oversight: we've taught students to seek answers without teaching them to question the questions themselves.
In an AI world, this flips completely. Machines excel at providing answers. Humans excel at wondering whether those answers are addressing the right questions in the first place.
I'm not talking about idle wonder here. I'm talking about intellectual courage - the willingness to challenge received wisdom, to probe beneath surface explanations, to ask uncomfortable questions especially when they threaten settled conclusions. This means moving from "What's the capital of France?" (googleable) to "Why do capital cities matter?" (un-googleable, or at least less so). It means creating classroom cultures where "I don't understand" is celebrated as the start of enquiry, not a sign of failure.
Imagine this transforming a Year 8 geography class studying climate change. Students could move from simply identifying causes and effects to questioning who benefits from particular climate narratives, how historical power structures influence environmental policy, and whether ‘natural disaster’ might be a problematic term. These deeper questions emerge when students develop genuine curiosity about complexity.
The ancient Greeks called it aporia - that productive state of puzzlement when you realise you don't understand something you thought you understood. Teaching curiosity means creating these moments deliberately.
You can use AI to enhance this if you're clever about it. Instead of asking ChatGPT for answers, ask it to generate ten different questions about a topic from ten different perspectives. Then evaluate which questions are worth pursuing. Turn AI from an answer machine into a curiosity catalyst.
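As a rough sketch of that idea - the perspective list and wording below are my own illustration, not a prescribed template - you could build the ten-perspective question prompt programmatically and paste it into any chat model:

```python
# A "curiosity catalyst" prompt: instead of asking an AI for answers, ask it
# to generate questions from different perspectives. The perspectives here
# are illustrative - swap in whatever suits your subject and year group.

PERSPECTIVES = [
    "a historian", "an economist", "a sceptical scientist", "an ethicist",
    "a local resident", "a policymaker", "an artist", "a statistician",
    "a futurist", "a student meeting the topic for the first time",
]

def curiosity_prompt(topic: str) -> str:
    """Build a prompt asking for one probing question per perspective."""
    lines = [
        f"Do not answer anything. Instead, pose one probing question "
        f"about '{topic}' from each perspective below:"
    ]
    lines += [f"{i}. As {p}:" for i, p in enumerate(PERSPECTIVES, start=1)]
    lines.append(
        "Finally, note which single question is hardest to answer "
        "with a search engine, and why."
    )
    return "\n".join(lines)

print(curiosity_prompt("climate change"))
```

Students then do the genuinely human part: evaluating which of the ten questions is actually worth pursuing.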
But curiosity without direction becomes mere speculation. That's where logic comes in.

L - Logic: Spotting the Patterns and the Pitfalls
Logic provides the structural foundation for reasoning. In an age of information abundance, the ability to trace connections, spot contradictions, and identify flawed reasoning becomes a survival skill. This isn't about formal logic or symbolic manipulation. It's about developing what philosophers call "natural logic" - the everyday reasoning skills that help us navigate complex arguments and competing claims.
AI makes this particularly urgent because AI systems can produce responses that are logically coherent but factually wrong. They excel at maintaining internal consistency within a piece of text while building that consistency on completely false premises.
Think about how misinformation spreads on social media. The arguments are often well-structured - if A, then B; A is true; therefore B - but they rest on false or misleading starting points. Without logical thinking skills, students become sitting ducks for persuasive but flawed reasoning.
One simple example I used in my philosophy classes:
All bachelors are male.
John is a bachelor.
Therefore, John is male.
That is logically valid. However, a similar-looking argument is flawed:
All bachelors are male.
John is male.
Therefore, John is a bachelor.
John could be male without being a bachelor - married men are male too. The premises simply never rule that out.
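The difference between the two arguments can even be checked mechanically. This small sketch (my illustration, not part of any curriculum) brute-forces every possible situation and looks for a counterexample - a world where the premises hold but the conclusion fails. With one individual (John), "All bachelors are male" reduces to the conditional: if John is a bachelor, then John is male.

```python
from itertools import product

def counterexamples(premises, conclusion):
    """Return every truth assignment where all premises are true
    but the conclusion is false. An empty list means the argument is valid."""
    found = []
    for bachelor, male in product([True, False], repeat=2):
        world = {"bachelor": bachelor, "male": male}
        if all(p(world) for p in premises) and not conclusion(world):
            found.append(world)
    return found

all_bachelors_male = lambda w: (not w["bachelor"]) or w["male"]  # B -> M

# Valid: B -> M, B, therefore M. No counterexample exists.
print(counterexamples([all_bachelors_male, lambda w: w["bachelor"]],
                      lambda w: w["male"]))       # []

# Flawed: B -> M, M, therefore B. John could be male but married.
print(counterexamples([all_bachelors_male, lambda w: w["male"]],
                      lambda w: w["bachelor"]))   # [{'bachelor': False, 'male': True}]
```

The valid form yields no counterexamples; the flawed form yields exactly the married-man case - which is all students need to see that the reasoning pattern, not the facts, is broken.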
The psychologist Keith Stanovich researches what he calls "dysrationalia" - the inability to think rationally despite adequate intelligence. Even highly intelligent people can be poor reasoners if they haven't developed specific logical thinking habits. They might ace IQ tests but fall for conspiracy theories or statistical fallacies.
“To be rational, a person must have well-calibrated beliefs and must act appropriately on those beliefs to achieve goals - both properties of the reflective mind. The person must, of course, have the algorithmic-level machinery that enables him or her to carry out the actions and to process the environment in a way that enables the correct beliefs to be fixed and the correct actions to be taken. Thus, individual differences in rational thought and action can arise because of individual differences in intelligence (the algorithmic mind) or because of individual differences in thinking dispositions (the reflective mind). To put it simply, the concept of rationality encompasses two things (thinking dispositions of the reflective mind and algorithmic-level efficiency) whereas the concept of intelligence - at least as it is commonly operationalized - is largely confined to algorithmic-level efficiency.” Keith Stanovich
Teaching logic in the CLEAR framework means helping students recognise common reasoning patterns and their pitfalls. This includes obvious fallacies (attacking the person rather than the argument) but focuses more on everyday reasoning errors: false dilemmas, slippery slope arguments, appeals to emotion rather than evidence. I'll explore the different fallacies and logical mistakes people make in arguments in a future article.
I love doing ‘logical autopsy’ exercises with students - examining reasoning mistakes with curiosity rather than judgement. When someone argues "all politicians are corrupt because the ones in the news are corrupt," we can explore why easily recalled examples might mislead us. When another claims "if we allow phones in school, we'll end up allowing anything," we can examine slippery slope thinking.
The key insight from cognitive psychology is that these aren't character flaws - they're features of human thinking that usually serve us well but can lead us astray in certain contexts. Teaching logic means helping students recognise when their fast, intuitive reasoning needs slower, more deliberate analysis.
But even perfect logic leads to wrong conclusions if it's based on dodgy evidence.

E - Evidence: Separating Signal from Noise
Evidence evaluation addresses perhaps the most practical challenge students face: distinguishing reliable information from convincing rubbish. When AI can generate academic papers and news articles indistinguishable from human-created content, traditional source evaluation isn't enough. The CRAAP model still has a place here, but as a starting point rather than a complete answer.
Students need to evaluate different types of evidence - statistical data, expert testimony, historical records, scientific studies - and understand their strengths and limitations. They need to recognise when evidence is missing, cherry-picked, or manipulated.
Karl Popper argued that good evidence isn't just evidence that supports a claim - it's evidence that could potentially disprove it. Teaching students to ask "What would change my mind?" transforms them from passive consumers of information into active evaluators.
Consider AI-generated academic papers. These can contain fabricated citations, non-existent studies, and fictional data that nonetheless appear academically rigorous. Students need skills beyond checking whether sources exist - they need to evaluate whether sources are appropriate, methodologies are sound, and conclusions are justified.
Psychologist Chip Heath's research explains why some false information feels more credible than accurate information. Content that's concrete, emotional, and story-driven tends to be more memorable and persuasive than abstract statistical evidence - even when the statistics are more accurate.
“If a message can’t be used to make predictions or decisions, it is without value, no matter how accurate or comprehensive it is.” Chip Heath
This isn't about creating sceptics who doubt everything. It's about developing what Richard Paul called "strong-sense critical thinking" - applying critical thinking tools fairly to ideas you disagree with as well as those you favour. Practical evidence evaluation means teaching students to trace information back to primary sources, recognise correlation versus causation, identify inadequate sample sizes, spot statistical manipulation, evaluate expertise claims, and recognise cherry-picked evidence.
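To make "cherry-picked evidence" concrete, here is a small illustration with synthetic data (invented for this sketch, not drawn from any real study): keeping only the observations that fit a rising trend can manufacture a strong correlation that the full dataset doesn't support.

```python
import math

def pearson(points):
    """Pearson correlation coefficient for a list of (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cov = sum((x - mx) * (y - my) for x, y in points)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in points))
    sy = math.sqrt(sum((y - my) ** 2 for _, y in points))
    return cov / (sx * sy)

# Eight synthetic observations with no convincing linear trend...
full = [(1, 1), (2, 6), (3, 2), (4, 5), (5, 1), (6, 6), (7, 2), (8, 5)]
# ...versus the four points someone kept because they "fit the story".
cherry_picked = [(1, 1), (3, 2), (4, 5), (6, 6)]

print(f"full dataset:  r = {pearson(full):.2f}")           # weak (~0.21)
print(f"cherry-picked: r = {pearson(cherry_picked):.2f}")  # strong (~0.94)
```

A student who asks "what was left out?" - rather than just "is the source real?" - is doing exactly the evidence evaluation this element aims for.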
But evidence evaluation without the ability to construct and challenge arguments remains incomplete.

A - Argumentation: The Lost Art of Productive Disagreement
Argumentation moves beyond individual reasoning to collaborative thinking. In an AI world where echo chambers can be algorithmically reinforced, the ability to engage constructively with opposing viewpoints becomes a democratic survival skill.
This isn't debate for its own sake. It's what philosophers call ‘charitable interpretation’ - representing others' arguments in their strongest form before attempting to challenge them. It's the opposite of the straw man fallacy that dominates online discourse, where opposing views are caricatured to make them easier to dismiss.
“There is a principle in philosophy and rhetoric called the principle of charity, which says that one should interpret other people’s statements in their best, most reasonable form, not in the worst or most offensive way possible.” Jonathan Haidt
I love the concept of ‘steel-manning’ - constructing the strongest possible version of an argument you disagree with. This requires genuine understanding and intellectual humility. It's particularly valuable when AI can generate sophisticated-sounding arguments for any position, regardless of validity.
Daniel Dennett suggests that before dismissing someone's position, you should be able to restate it so accurately that they'd say "Thanks, I wish I'd thought of putting it that way." This transforms argumentation from combat to collaboration.
Jonathan Haidt's research shows that people often form judgements intuitively and then construct post-hoc rationalisations. Effective argumentation requires recognising this pattern and creating space for genuine belief revision rather than mere position defence.
Teaching argumentation means helping students distinguish arguments from assertions, identify unstated assumptions, recognise valid but unsound reasoning, practise charitable interpretation, develop skills for productive disagreement, and learn when and how to change their minds.
This connects to the broader challenge of democratic discourse in the digital age. Social media algorithms reward engagement over accuracy, promoting content that generates strong reactions rather than thoughtful reflection. Students need skills for engaging constructively across ideological divides.
But even sophisticated argumentation can become rigid without the final element.

R - Reflection: The Courage to Change Your Mind
Reflection addresses perhaps the most challenging aspect of intellectual development: the willingness to revise beliefs when confronted with better arguments or evidence. This isn't about lacking conviction - it's about intellectual humility. Mark Leary's research found that people high in intellectual humility are better at distinguishing accurate from inaccurate information, more willing to seek disconfirming evidence, and more capable of revising beliefs when presented with compelling counterarguments.
“People who are really low in intellectual humility go through life not considering the possibility that they could be wrong.” Mark Leary
This goes deeper than Carol Dweck's growth mindset. It's not just believing abilities can be developed - it's believing that beliefs themselves should be held tentatively, subject to revision based on new evidence or better reasoning. Teaching reflection means creating classroom cultures where changing your mind is celebrated rather than seen as weakness. It means using thinking routines like "I used to think... now I think..." to make belief revision visible and valued.
Karl Popper argued that intellectual progress depends on "conjecture and refutation" - making bold hypotheses and subjecting them to rigorous testing. Students need experience with this cycle: forming judgements, gathering evidence, testing against counterarguments, revising when necessary.
This is particularly challenging in educational contexts that prioritise certainty over enquiry. Students learn to perform confidence even when uncertain, defend positions they're unsure about, avoid admitting ignorance or confusion. But reflection requires intellectual courage.
Making It Real: CLEAR Across the Curriculum
The power of CLEAR thinking lies not in teaching it as a separate subject, but embedding it across everything we do. In English, students can examine how AI might interpret texts differently from humans, explore how algorithmic recommendations shape reading choices, or analyse AI-generated poetry. They can question canonical assumptions: why these texts? Whose voices are missing?
Science offers natural opportunities for logic and evidence evaluation. Students can examine AI in research, from drug discovery to climate modelling. They can explore ethics of AI-generated hypotheses or debate the role of artificial intelligence in scientific knowledge. History can address how AI changes research and interpretation. Students can examine AI-generated narratives, consider algorithmic bias in databases, explore how deepfakes challenge traditional evidence.
Even PE can incorporate CLEAR thinking through sports analytics, AI training systems, or algorithmic decision-making in sport. Students might question performance measurement assumptions or reflect on how technology changes athletic achievement. Pattern recognition has caused real harm in elite sport (think of Caster Semenya, as one sad example). Navigating this kind of ethical and philosophical challenge with students would be a brilliant direction for a curriculum focused on more than memorisation.
However, time pressure remains the biggest barrier. Teachers feel they barely have time to cover content, let alone add thinking skills. But this reflects a false choice between content and thinking. CLEAR thinking isn't extra - it's a way of engaging more deeply with existing content.
Assessment presents another challenge. Current exams reward quick recall over careful reasoning. Until assessment changes to value thinking processes, teachers face competing pressures. I have long been banging this drum, and I hope and pray that change is coming. It's the overreliance on quantitative exam data that stops CLEAR-style thinking from becoming mainstream and normalised.
Teacher confidence is a third obstacle. Many educators feel unprepared to teach critical thinking explicitly, especially when it might challenge their subject expertise or institutional practices. There's also resistance from communities worried that critical thinking might undermine traditional values or authorities. This concern isn't unfounded - critical thinking does empower students to question received wisdom.
But as Paulo Freire argued, true education is necessarily transformative. It changes students and communities. This transformation should be guided by ethical principles: respect for human dignity, commitment to justice, dedication to truth.
“If the structure does not permit dialogue the structure must be changed.” Paulo Freire
The Stakes
We're at an educational inflection point. AI has exposed the limitations of information-transmission education. Students can access vast knowledge and sophisticated reasoning tools. What remains uniquely human is the wisdom to question deeply, reason carefully, evaluate evidence critically, argue constructively, and reflect honestly.

The future doesn't belong to those who use AI most effectively - it belongs to those who know when not to use it. Who can evaluate its outputs critically. Who can ask questions AI cannot answer. Who can navigate the complex ethical challenges AI creates. In a world where machines process information faster, store facts longer, and execute procedures more accurately than humans, our advantage lies in capacities that make us human: curiosity about complexity, logical analysis, careful evidence evaluation, respectful disagreement, and humble willingness to change our minds.
These capacities cannot be automated. They must be cultivated. And the window for doing so is narrowing rapidly.
The choice is ours: continue preparing students for a past that AI has already rendered obsolete, or embrace the harder but more hopeful task of preparing them for a future where clear thinking remains humanity's essential contribution.
Key Takeaways
- Start with questions, not answers: In an AI world, the ability to ask un-googleable questions becomes more valuable than finding googleable answers.
- Embed CLEAR thinking across subjects: Don't treat critical thinking as a separate subject - make it the lens through which students engage with all content.
- Teach logic explicitly: Students need to recognise common fallacies, trace assumptions, and distinguish valid from sound arguments.
- Move beyond fact-checking to evidence evaluation: Develop sophisticated skills for analysing sources, methodologies, and statistical claims in an age of AI-generated content.
- Foster productive disagreement: Teach steel-manning, charitable interpretation, and the lost art of changing your mind when presented with better arguments.
- Model intellectual humility: Create classroom cultures where admitting ignorance, expressing uncertainty, and revising beliefs are celebrated rather than penalised.
- Reform assessment to value thinking: Develop approaches that capture reasoning processes, not just final answers or memorised content.