What Happens to Critical Thinking When AI Can Summarise?
AI tools can summarise articles, simplify texts, and condense complex arguments, often in seconds. But when students rely on AI-generated summaries, what happens to the cognitive work of comprehension?
This post is part of a series exploring how schools can integrate AI meaningfully, ethically and strategically. It offers insights and strategies for educators across all curricula and contexts, from Dubai to Dublin, Delhi to Durban and everywhere in between.
Subscribers get exclusive access to CPD slides, planning templates, and classroom questioning frameworks for developing AI-informed critical thinking.
Why This Matters
Critical thinking has always been central to academic success. It is the ability to analyse information, evaluate arguments, weigh evidence and form independent judgements. AI's summarising capabilities risk flattening this process by pre-packaging information, potentially bypassing the struggle and synthesis that build deep understanding.
AI tools also reflect their training data. Bias, perspective, and knowledge gaps baked into large language models may shape what is included or excluded from any AI summary. If students are not taught how to engage critically with AI outputs, we risk creating surface-level comprehension rather than robust reasoning. The challenge is not whether AI summarises. The challenge is whether students know how to critique, question and build upon those summaries.
Where AI Helps and Where Caution Is Needed
AI summarisation tools offer real potential as scaffolds for comprehension, especially when dealing with complex or high-volume reading. However, when used without structure or oversight, these tools risk limiting the very cognitive skills we aim to build in students. Teachers must guide students to understand the strengths and limitations of AI-generated summaries before adopting them as reliable learning aids.
AI can:
📝 Generate first-draft summaries of complex readings
💡 Identify key themes, arguments or concepts
📊 Extract and organise supporting evidence
🎯 Provide multiple perspectives or interpretations to explore
🔍 Suggest guiding questions for further analysis
AI cannot:
🧠 Replace close reading for nuance, subtext and tone.
AI processes words literally but does not truly understand implied meaning, emotional undertones, or complex narrative shifts that readers extract through close reading.
⚠️ Detect bias, fallacies or flawed reasoning in original texts.
While AI can surface patterns, it cannot independently evaluate argument validity or logic. Its outputs reflect its training data rather than independent reasoning.
🎭 Interpret authorial intent, irony, or rhetorical strategies.
AI may label devices but cannot fully grasp the writer's deeper purpose, persuasive moves, or intended audience effects.
🔬 Weigh competing arguments with real-world context.
AI lacks the lived experience, ethical judgement, and real-world grounding to evaluate which arguments are stronger in practical, social, or moral terms.
🎯 Build independent synthesis of ideas across multiple sources.
AI can merge content but struggles to prioritise evidence, reconcile contradictions, or form a nuanced synthesis without teacher-guided judgement.
📚 Preserve disciplinary literacy features like subject-specific terminology and conventions.
AI often oversimplifies technical vocabulary and may lose key language features critical for academic rigour within disciplines.
🚩 Prevent hallucinations or guarantee accuracy.
Summaries may confidently present false or fabricated details. Even advanced models still generate plausible but incorrect information and cannot guarantee factual accuracy or source verification.
Unchecked AI use risks training students to accept answers rather than interrogate them. Summaries simplify, but they can also distort.
The 'Read-Critique-Extend' Model for AI-Assisted Thinking
To make AI-supported summarisation serve real learning, schools need intentional models that strengthen rather than shortcut thinking. The Read-Critique-Extend structure offers a simple, practical classroom approach that ensures students remain actively engaged with both AI outputs and original sources.
Read — Students engage with the original source material, identifying key questions before seeing any AI summary.
Critique — Students compare the AI summary against the original text.
What’s missing?
What’s oversimplified?
What bias might the AI be reflecting?
Does the summary reflect authorial intent?
Has any key subject-specific terminology been lost?
Extend — Students use AI-generated summaries as a springboard for deeper tasks.
Build alternative interpretations
Identify counterarguments
Weigh the validity of evidence
Compare multiple AI summaries across platforms or prompts
Connect across texts or disciplines
AI should serve as a partner in metacognition, not a substitute for it.
In Practice: Real Classroom Examples
Schools across phases are beginning to embed these critical thinking routines using AI in subject-specific and cross-curricular ways. The examples below show how teachers are already scaffolding AI-assisted thinking.
English (Language of Instruction): Students compare AI-generated summaries of persuasive essays to analyse how well nuance, tone and rhetorical devices are captured.
History: Students use AI to generate opposing interpretations of historical events, then critique the accuracy and fairness of each version.
Science: Students prompt AI to explain conflicting scientific theories, then evaluate strengths, weaknesses and evidence.
IB Extended Essay: Students review AI-summarised articles to identify gaps, misrepresentations or missing perspectives before final research drafting.
Primary Reading Comprehension: Teachers use AI to generate different versions of a story summary (factual, opinionated, biased), and students analyse differences.
Exam Preparation: Students cross-check multiple AI-generated summaries of complex topics to evaluate consistency, omissions, and inaccuracies before exam practice.
Whole-School Debates: AI is used to generate opposing argument frameworks, which students then research and challenge using source material.
Next Steps for Leaders
Leaders play a vital role in protecting curriculum depth as AI summarisation becomes more accessible. School policies, training, and classroom practice must work together to ensure AI tools serve as cognitive scaffolds, not cognitive shortcuts.
Curriculum Audit – Review where critical thinking outcomes may be undermined if AI summaries replace deeper engagement.
Staff CPD – Train teachers on classroom routines for interrogating AI-generated summaries.
Assessment Design – Ensure exams, coursework and projects reward evaluation, not recall of AI outputs.
Academic Integrity – Embed clear guidelines on acceptable AI assistance for research and analysis tasks.
Departmental Case Studies – Facilitate subject teams to trial and document best practices for AI-supported analysis.
Student Voice – Involve students in discussions around AI’s role in developing independence versus overreliance.
Parental Communication – Share how AI tools are positioned to support, not replace, student cognitive effort.
Explainability Focus – Build student understanding of how AI generates summaries and why outputs may vary across different platforms or prompts.
Useful Links
1. Common Sense Education — 3 Core Skills Before AI Use
🔗 https://www.commonsense.org/education/articles/3-core-skills-before-ai-use
Practical classroom guide identifying key cognitive and metacognitive skills students need before engaging with AI tools, directly aligned to critical thinking development.
2. Forbes — In the Age of AI, Critical Thinking Is More Needed Than Ever
🔗 https://www.forbes.com/sites/roncarucci/2024/02/06/in-the-age-of-ai-critical-thinking-is-more-needed-than-ever/
A global perspective on why AI makes deep reasoning, independent judgement and critical questioning more essential than ever in both education and leadership.
Reflective Question
Are we teaching students how to challenge AI, or simply how to consume it?
AI in Education Blog Series – Full List
This 4-week series explores how schools can embed AI meaningfully, ethically and strategically across curriculum, CPD, leadership and inclusion. New posts are published four times a week throughout June and July 2025.
Week 1: Orientation – Understanding the Shift
1. Why AI in Schools Is a Pedagogical Shift, Not a Tech Trend
2. How to Talk to Students About AI (Even When You’re Not an Expert)
3. Bridging the Gap: What Parents and Teachers Need to Understand About AI
4. How Ready Is Your School for AI? A Leadership Reflection
Week 2: Teaching, Equity and Ethics
5. Planning with AI Without Losing Professional Judgement
6. Are We Teaching Students to Think Ethically About AI?
7. What Inclusive AI Use Looks Like in EAL and SEND Contexts
8. Keeping Students Safe: The New Rules of AI and Safeguarding
Week 3: Teaching Across Subjects
9. Reimagining Reading and Writing: AI in English Classrooms (and Beyond)
10. AI in Math and Science: From Calculation to Simulation
11. (You are here) What Happens to Critical Thinking When AI Can Summarise?
12. Creativity and Authenticity in the Age of AI
Week 4: Strategy, Assessment and Future Readiness
13. What Every School Needs Before Saying “We Use AI”
14. Why CPD on AI Should Start with Questions, Not Tools
15. What Does “AI Literacy” Really Mean, and How Do We Know Students Are Gaining It?
16. From Pilot to Policy: Embedding AI in the School Development Plan