With the use of LLMs, we enter a new terrain of critical thinking. The challenge is no longer just having the right answer, but understanding the topography of AI's thinking landscape—identifying peaks of superhuman competence and valleys where systems confidently generate plausible-sounding errors. Critical thinking now rests less on producing answers and more on knowing the fundamentals well enough to interrogate the answers we are given.
Ethan Mollick calls AI's uneven capabilities the "Jagged Frontier": the same system can draft sophisticated prose about technical subjects while failing at tasks that seem elementary to humans. This irregular landscape becomes even more complex when combined with Kambhampati's concept of "Fractal Intelligence", which suggests AI capabilities shift unpredictably even across similar tasks. This isn't merely a technical curiosity; it fundamentally reshapes what critical thinking means in the AI era.
Every AI interaction becomes a test of our ability to ask fundamental questions (see the sketch after this list):
- Is this output correct?
- Is this output relevant to the actual, nuanced problem rather than just the superficial prompt?
- What isn't this AI telling me? What are its inherent biases, data limitations, or the unknown unknowns that require human tacit knowledge?
- If this output is wrong, why? Is it a simple data error, a misinterpretation, or a fundamental limitation in its understanding, such as mistaking correlation for causation with unwarranted confidence, or lacking common-sense grounding?
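To make that habit concrete, here is a minimal sketch of the questions as a review checklist in Python. The `AIOutputReview` class and its fields are hypothetical illustrations of mine, not any real tool's API:

```python
from dataclasses import dataclass, field

# Hypothetical checklist mirroring the four questions above;
# an illustration of the review habit, not a reference to any real tool.

@dataclass
class AIOutputReview:
    output: str
    is_correct: bool | None = None    # Is this output correct? (None = not yet verified)
    is_relevant: bool | None = None   # Does it address the nuanced problem, not just the prompt?
    known_gaps: list[str] = field(default_factory=list)          # biases, data limits, unknown unknowns
    failure_hypotheses: list[str] = field(default_factory=list)  # if wrong, why? data error, misreading, deeper limit

    def ready_to_accept(self) -> bool:
        """Accept only after correctness and relevance are explicitly confirmed."""
        return self.is_correct is True and self.is_relevant is True


# Usage: the default state is "unverified", so acceptance requires a human judgment.
review = AIOutputReview(output="Model-drafted summary of new tax rules")
review.known_gaps.append("Training data may predate the current tax year")
assert not review.ready_to_accept()
```

The design choice worth noting is the `None` default: unverified is the starting state, so accepting an output has to be a deliberate act rather than the path of least resistance.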
This cognitive work is expensive. It's tempting to accept AI's fluent output and outsource all cognitive effort, especially when it arrives wrapped in well-structured prose that appears authoritative. But the cost of uncritical AI adoption is a gradual erosion of genuine understanding, potentially leading to significant errors downstream. Think of it as cognitive leverage: AI amplifies production capacity, but it also amplifies risk if the underlying reasoning is flawed.
The New Shape of Thinking
Traditional critical thinking often involved solitary contemplation, deep research, and careful argument construction from first principles. We may now be outsourcing much of the application of the work, but not its curation. Valuable thinking becomes more expensive precisely because the sheer volume of plausible-sounding AI-generated content makes genuinely insightful human contributions harder to identify and more crucial to cultivate. While AI commoditises the grunt work of idea generation and drafting, the premium shifts to uniquely human abilities. We find ourselves spending more time at the top end of higher-order thinking as we:
- Define novel problems in ways AI can approach, or recognise when problems fall outside AI's current capabilities
- Synthesise insights across disparate AI outputs, identifying connections or contradictions that siloed AI processing might miss
- Apply ethical frameworks and real-world context that AI inherently lacks
- Take ultimate responsibility for final outputs: AI won't face boards, regulators, or clients when decisions go wrong!
Understanding Tacit Knowledge
We can know more than we can tell
— Michael Polanyi
But curation is difficult. This challenge connects directly to Polanyi's paradox: the observation that "we can know more than we can tell." This tacit dimension creates fundamental obstacles for both AI development and human learning, since programming automated systems or teaching skills requires explicit descriptions that tacit knowledge resists providing.
As a quick refresher, I've borrowed from José Luis Ricón Fernández de la Puente's Nintil blog on the different types of knowledge (see the sketch after this list):
Explicit Knowledge
- Public: Facts readily available in textbooks and online
- Private: Information restricted by NDAs or competitive advantage but theoretically shareable
Tacit Knowledge
- Public: Skills acquirable through publicly available resources (like learning guitar from videos)
  - Motor skills: Physical capabilities like bike riding
  - Intellectual skills: Pattern recognition like judging when food is properly cooked
- Private: Knowledge requiring apprenticeship or community embedding
  - Individual: Skills gained through one-on-one teaching
  - Social: Knowledge embodied in networks rather than individuals
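As a toy rendering of that taxonomy (my own framing, not from the Nintil post), the split can be expressed as two small Python types. The deliberate absence of a `content` field on the tacit side is the whole point:

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    PUBLIC = "public"    # freely available to anyone
    PRIVATE = "private"  # gated by NDAs, apprenticeships, or community membership

@dataclass
class ExplicitKnowledge:
    visibility: Visibility
    content: str  # can be written down and transferred directly

@dataclass
class TacitKnowledge:
    visibility: Visibility
    kind: str  # e.g. "motor", "intellectual", "individual", "social"
    # No `content` field: there is nothing to copy. Transfer happens
    # through exposure, practice, and feedback, not through text.

textbook_fact = ExplicitKnowledge(Visibility.PUBLIC, "The compound interest formula")
bike_riding = TacitKnowledge(Visibility.PUBLIC, kind="motor")
```

The asymmetry is deliberate: explicit knowledge behaves like data, while tacit knowledge behaves like a process that has to be re-run in each new head.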
The Tacit Knowledge Problem in Practice
Tacit knowledge creates fundamental challenges for automation and learning. We can see this in fields like taxation and medicine: even when models do exceptionally well on benchmark exams in math, law, and biology, that performance does not translate into results in the field.
There are precedents, even among humans, for scaling tacit knowledge. For example, immersed language learners achieve near-native proficiency in about a year through massive contextual exposure, and chess masters often develop expertise through solitary practice with puzzles rather than gameplay. These top performers are relentlessly resourceful in their drive to improve, and their trajectories show how much resources, rather than direct instruction, can contribute to development.
Does this mean AI can replace us simply by having better resources? I don't believe so (for now). As José notes, the path to scaling tacit knowledge combines several approaches:
- Libraries of expert performances with commentary (sketched below)
- Rich simulations for experiential learning
- Documentation of difficulty and failure patterns
- Online communities providing field immersion benefits
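To make the first item concrete, here is a hypothetical sketch of what one record in a library of expert performances might look like. Every name, field, and path below is an invented assumption for illustration, not an existing schema:

```python
from dataclasses import dataclass, field

# Hypothetical record type for a "library of expert performances".
# Fields are illustrative assumptions, not an existing standard.

@dataclass
class AnnotatedPerformance:
    domain: str     # e.g. "tax structuring", "chess endgames"
    recording: str  # pointer to the raw performance: video, transcript, game log
    expert_commentary: list[str] = field(default_factory=list)  # why each move was made
    difficulty_notes: list[str] = field(default_factory=list)   # where learners typically go wrong

# A library is just a growing collection of such annotated records.
library: list[AnnotatedPerformance] = [
    AnnotatedPerformance(
        domain="chess endgames",
        recording="games/example_endgame.pgn",  # illustrative path, not a real file
        expert_commentary=["Trades into a won king-and-pawn ending rather than keeping pieces on"],
        difficulty_notes=["Learners often miscount tempo in pawn races"],
    )
]
```

The commentary and difficulty notes carry the part that raw recordings leave out: not just what the expert did, but why, and where a novice would have gone wrong.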
This is a gargantuan task. For both AI and humans it is a noble goal: we want to reduce expertise development from decades to years. Even partial success in transmitting tacit knowledge offers significant value in our increasingly automated world.
Mo’ AI, Mo’ Thinking
As AI systems become more capable, the premium on human judgment isn't disappearing; it is being redirected. The price of expertise is rising precisely because the ability to evaluate AI outputs, synthesise across domains, and apply contextual understanding becomes more valuable as routine tasks are automated.
Some knowledge may inherently resist codification. We must develop better methods for transmitting experiential understanding while recognising that human expertise remains the sine qua non precisely where knowledge resists formal specification.
In this new landscape, critical thinking evolves from solitary analysis to orchestrating hybrid human-AI systems. Success requires understanding both the peaks and valleys of AI capability while cultivating the uniquely human abilities that no amount of pattern matching can replicate. The tacit dimension that Polanyi identified isn't just an obstacle to overcome; it's the source of enduring human value in an automated world.