Why Structured Training Will Triumph Over AI Edutainment


Recent discussions on platforms like Hacker News and in my friend group got me thinking about how learners experience AI-assisted education versus structured training platforms. Users of Math Academy consistently report that the platform provides "infinitely better" learning outcomes compared to prompting ChatGPT or Claude for mathematical instruction. This isn't merely a matter of content quality or accuracy—though AI systems do struggle with mathematical precision—but rather points to fundamental differences in how human cognition processes new information and develops expertise.

Math Academy succeeds not through revolutionary content, but by implementing what "every good application or service does: make things convenient." This convenience, however, runs deeper than simple user interface design. It reflects a profound understanding of cognitive architecture and the scaffolding necessary for genuine skill development. As one user noted, Math Academy incorporates "spaced repetition, interleaving, etc. the way a dedicated tutor would, but in a better structured environment."

The Connectivist Versus Instructionist Divide

At the heart of this educational technology revolution lies an old pedagogical debate with new technological implications. Large Language Models are inherently Connectivist in their design philosophy—they excel at providing access to vast networks of information, supporting learner-driven inquiry, facilitating connections between concepts, and adapting to individual questions and interests. The learner navigates this web of knowledge, making connections and constructing understanding through exploration and discovery.

However, serious skill development often requires Instructionist design principles: sequential, scaffolded learning progressions, enforced mastery checkpoints before advancement, systematic identification and remediation of gaps, and external validation of competency. These approaches recognise that expertise develops through deliberate practice within carefully structured environments, not merely through exposure to information or even sophisticated explanations.

The tension becomes apparent when we consider what each approach demands of the learner. Connectivist AI tools assume the learner possesses sophisticated meta-learning skills: the ability to generate productive questions, recognise knowledge gaps, sequence learning appropriately, and self-assess understanding accurately. Instructionist platforms, by contrast, externalise much of this cognitive overhead, allowing learners to focus their mental resources on domain-specific skill development.

The Enforcement Problem: Why AI Cannot Replace Structure

LLMs face a fundamental enforcement problem that reveals their limitations as primary educational tools. Unlike structured training platforms, AI systems cannot force learners to struggle through difficulty—users can simply ask for answers instead of working through problems independently. They cannot prevent premature advancement, as nothing stops someone from opening a new chat and jumping to advanced topics without mastering prerequisites. They cannot ensure genuine comprehension, since users can appear to understand by repeating AI explanations without true internalisation. Most critically, they cannot create productive failure experiences, as the temptation to get immediate help short-circuits the beneficial cognitive load of grappling with problems.

This enforcement problem extends beyond simple cheating or shortcuts. Even well-intentioned learners using AI tutoring systems may inadvertently undermine their own learning by seeking help too quickly, accepting explanations without sufficient processing, or advancing before achieving true automaticity with foundational concepts. The AI system, designed to be helpful and responsive, has no mechanism to resist these counterproductive behaviours.

Structured training platforms, by contrast, can enforce mastery through design. They can require demonstration of competency before advancement, identify and remediate specific gaps, and ensure learners experience the productive struggle necessary for deep learning. This external scaffolding becomes particularly crucial for novice learners who lack the expertise to effectively guide their own learning process.
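To make the contrast concrete, mastery enforcement can be sketched as a simple gate that refuses advancement until prerequisites clear an accuracy bar. This is a minimal illustration, not any platform's actual algorithm; the class names, threshold, and minimum-attempt count are all assumptions:

```python
from dataclasses import dataclass

MASTERY_THRESHOLD = 0.85  # assumed pass rate required before advancement
MIN_ATTEMPTS = 10         # assumed minimum attempts before mastery can be judged


@dataclass
class TopicProgress:
    correct: int = 0
    attempts: int = 0

    def record(self, was_correct: bool) -> None:
        self.attempts += 1
        self.correct += int(was_correct)

    def mastered(self) -> bool:
        return (self.attempts >= MIN_ATTEMPTS
                and self.correct / self.attempts >= MASTERY_THRESHOLD)


class MasteryGate:
    """Blocks advancement until every prerequisite topic is mastered."""

    def __init__(self, prerequisites: dict[str, list[str]]):
        self.prerequisites = prerequisites
        self.progress: dict[str, TopicProgress] = {}

    def record_attempt(self, topic: str, was_correct: bool) -> None:
        self.progress.setdefault(topic, TopicProgress()).record(was_correct)

    def can_advance_to(self, topic: str) -> bool:
        # Every prerequisite must have a demonstrated mastery record.
        return all(
            self.progress.get(p, TopicProgress()).mastered()
            for p in self.prerequisites.get(topic, [])
        )
```

The point is structural: unlike a chat window, there is no way to "open a new conversation" around the gate; `can_advance_to` stays false until the prerequisite record clears the bar.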

Recent randomised controlled evidence supports this distinction: a carefully engineered AI tutor that embeds sequencing, step-by-step worked solutions, timed feedback, and explicit load-management outperformed in-class active learning in an authentic university physics course, with large effects (~0.7–1.3 SD) and lower median time-on-task (~49 vs 60 minutes). The gains came from the pedagogical control system implemented through AI—not from free-form chat. [Kestin et al., 2025 RCT]

The Meta-Learning Skills Gap

The assumption that learners can effectively self-direct their education using AI tools reveals a critical oversight: the meta-learning skills required for effective self-direction are themselves complex competencies that must be developed through practice and guidance. These include metacognitive awareness (knowing when you don't actually understand something), strategic knowledge (understanding which learning strategies work best for different types of content), calibration (accurately assessing your own competence level), and transfer recognition (seeing how concepts apply across contexts).

These sophisticated abilities typically develop through guided practice with feedback over extended periods. They cannot be acquired simply through casual interaction with AI systems. In fact, the learners who succeed with pure AI approaches likely already possess these meta-learning skills and substantial domain knowledge—they are essentially experienced autodidacts using a powerful new tool rather than novices being effectively taught.

This creates a paradox: effective self-directed learning requires substantial domain knowledge, prior learning experience, or both, and beginners by definition lack these. Asking novices to simultaneously learn new domain content while also managing their own learning process places impossible demands on their cognitive resources.

The Bottom Rungs Problem: How AI Removes Essential Scaffolding

AI systems, in their eagerness to be helpful, systematically remove what could be called the "bottom rungs" of the learning ladder—the foundational scaffolding that novices need most. This includes structured problem sequences that build systematically, imposed cognitive constraints that focus attention appropriately, external validation loops that catch misconceptions early, and productive failure experiences that build both knowledge and meta-learning skills.

When AI provides instant answers and explanations, it short-circuits the very struggle that develops both domain mastery and learning expertise. The learner becomes dependent on external support for processes they need to internalise. Rather than developing the capacity to work through difficult problems independently, they learn to prompt AI systems effectively—a valuable skill in some contexts, but not equivalent to domain mastery.

This scaffolding removal is particularly problematic because it feels helpful in the moment. Receiving clear explanations and immediate answers creates a sense of understanding and progress. However, this apparent learning often lacks the depth and retention that comes from effortful processing and independent problem-solving.

By contrast, engagement-optimised systems like Duolingo deliberately smooth difficulty to avoid churn and constrain their LLM interactions (for example, vocabulary-bounded chats). This can improve short-term motivation, but unless it is coupled to mastery gates, it risks bypassing the productive struggle needed for durable skill (see The Verge's interview with Luis von Ahn and The Duolingo Handbook).

Why Mastery Training Wins: Design Principles for Cognitive Architecture

Effective training platforms succeed by recognising human cognitive architecture and designing around its constraints rather than ignoring them. They implement single-task focus, allowing learners to concentrate on domain content while the system manages meta-learning concerns. They provide external scaffolding through systematic pacing, sequencing, and assessment. They offer constrained choices that limit decision fatigue and cognitive overhead. Perhaps most importantly, they implement graduated autonomy, transferring self-direction skills only as domain knowledge solidifies and cognitive resources become available.

These platforms understand that automaticity in foundational skills is not merely convenient but necessary for higher-order learning. When basic operations become automatic, working memory is freed for more complex processing. This is why structured training that ensures mastery of prerequisites consistently outperforms more flexible approaches that allow premature advancement.

The spaced repetition and interleaving implemented by platforms like Math Academy are not arbitrary pedagogical preferences but evidence-based techniques that align with how human memory and skill development actually function. They force the kind of distributed practice and varied retrieval that builds robust, transferable knowledge.
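As an illustration of how these techniques combine, here is a Leitner-style sketch of a spaced-repetition scheduler with interleaved review. Math Academy's actual algorithms are more sophisticated and not public; the intervals, box logic, and shuffle-based interleaving below are generic assumptions:

```python
import random
from collections import defaultdict

# Hypothetical review intervals (in days) for each Leitner box:
# an item that keeps being answered correctly is seen less and less often.
INTERVALS = [1, 3, 7, 16, 35]


class Scheduler:
    """Leitner-style spaced repetition with interleaved review selection."""

    def __init__(self):
        self.box = defaultdict(int)   # item -> current box index (0 = newest)
        self.due = defaultdict(int)   # item -> day the item is next due
        self.topic = {}               # item -> topic label

    def add(self, item: str, topic: str) -> None:
        self.topic[item] = topic      # new items are due immediately (day 0)

    def review(self, item: str, correct: bool, today: int) -> None:
        if correct:
            self.box[item] = min(self.box[item] + 1, len(INTERVALS) - 1)
        else:
            self.box[item] = 0        # lapse: restart the schedule for this item
        self.due[item] = today + INTERVALS[self.box[item]]

    def next_items(self, today: int, n: int) -> list[str]:
        due = [i for i in self.topic if self.due[i] <= today]
        random.shuffle(due)           # interleave: mix topics rather than blocking by topic
        return due[:n]
```

Note that a lapse resets the item's schedule rather than merely repeating it, which is what forces the distributed, varied retrieval the paragraph above describes.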

The False Promise of Personalisation

Much of the enthusiasm for AI in education centres on promises of personalisation—adaptive systems that respond to individual learning needs, preferences, and pace. However, this personalisation often operates at the wrong level of analysis. True personalisation in learning is not about adjusting explanations to individual preferences or allowing learners to pursue whatever interests them in the moment. Rather, it involves identifying specific knowledge gaps and skill deficits, then providing precisely targeted instruction to remediate these issues within a coherent developmental sequence.

AI systems excel at surface-level personalisation—adjusting language, providing varied examples, or adapting to stated preferences. But they struggle with the deeper personalisation that effective teaching requires: recognising subtle misconceptions, identifying prerequisite gaps, and designing instructional sequences that build systematically on existing knowledge while addressing individual deficits.

Structured training platforms can implement this deeper personalisation through careful assessment and adaptive sequencing while maintaining the instructional backbone necessary for systematic skill development. They personalise the path through well-defined learning objectives rather than abandoning structure in favour of individual exploration.

Effective personalisation looks like targeted, timely feedback; diagnosis of specific misconceptions; self-pacing; and mastery gating, all inside a coherent sequence.
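To make "diagnosis of specific misconceptions" concrete, here is a toy sketch in which predictable wrong answers map to targeted feedback. The catalogue, the problem, and the messages are invented for illustration; real systems diagnose from much richer response data:

```python
# Hypothetical misconception catalogue: each entry maps a predictable
# wrong-answer pattern to a diagnosis label and a targeted remediation hint.
MISCONCEPTIONS = {
    ("-3 + 5", "-8"): ("sign_magnitude",
                       "Review: adding a positive moves right on the number line."),
    ("-3 + 5", "-2"): ("direction_flip",
                       "Review: |5| > |-3|, so the sum takes the sign of 5."),
}


def diagnose(problem: str, answer: str, correct: str) -> tuple[str, str]:
    """Return (diagnosis, feedback): a specific diagnosis for a known
    wrong-answer pattern, generic feedback otherwise."""
    if answer == correct:
        return ("correct", "Well done.")
    return MISCONCEPTIONS.get(
        (problem, answer),
        ("unknown", "Not quite; try working it step by step."),
    )
```

The design point is that a wrong answer is treated as evidence about *which* prerequisite idea is broken, not merely as an occasion for a friendlier re-explanation.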

The Emerging Hybrid Model: AI as Enhancement, Not Replacement

The future of educational technology lies not in choosing between AI and structured training, but in understanding their complementary roles. The emerging model positions training platforms as the instructional backbone—enforcing mastery, managing progression, conducting assessment—while AI serves as connectivist enhancement, providing explanation, supporting exploration, and offering personalisation within the structured framework.

This hybrid approach leverages the strengths of each system while mitigating their weaknesses. Structured platforms ensure systematic skill development and appropriate cognitive load management, while AI provides the flexibility and responsiveness that can enhance motivation and understanding. Critically, the AI component operates within the constraints imposed by the training system rather than replacing its essential functions.

This model also recognises that different learners require different balances of structure and flexibility at different stages of their development. Beginners need heavy scaffolding with minimal AI enhancement, while advanced learners can benefit from more AI interaction as they develop the meta-learning skills necessary for effective self-direction.
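Graduated autonomy could be expressed as a simple policy that widens AI freedom as mastery accumulates. The tiers and thresholds below are purely illustrative assumptions, not any platform's actual policy:

```python
def ai_assistance_level(mastered: int, total: int) -> str:
    """Map a learner's progress through a course to the degree of AI
    freedom permitted (tier names and thresholds are hypothetical)."""
    frac = mastered / total if total else 0.0
    if frac < 0.3:
        return "hints_only"        # heavy scaffolding: no full solutions
    if frac < 0.7:
        return "guided_dialogue"   # AI may explain, but only within the current topic
    return "open_exploration"      # learner can range freely with the AI
```

The training system, not the learner's impulse in the moment, decides when the constraint loosens.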

Vendors are converging on this backbone-plus-AI pattern: mastery platforms like Math Academy pair knowledge graphs and schedulers (spaced repetition, interleaving) with tightly constrained practice flows, while tools like Claude for Education introduce "Learning Mode" and LMS integrations that can slot AI explanations and feedback inside institutional rails. The lesson is clear: AI serves as the enhancement engine; the training system remains the governor. [The Math Academy Way (Readwise); Anthropic — Claude for Education updates; Use of Generative AI Tools to Support Learning (Readwise)]

Implications for Educational Technology Development

The triumph of structured training over AI edutainment has significant implications for how educational technology should be developed and deployed. Rather than focusing on ever-more sophisticated conversational AI tutors, developers should prioritise intelligent training systems that implement proven pedagogical principles while leveraging AI for targeted enhancement.

This means building systems that can accurately assess learner knowledge states, identify specific gaps and misconceptions, sequence instruction appropriately, enforce mastery requirements, and provide targeted practice opportunities. AI can enhance these systems by generating varied practice problems, providing alternative explanations when learners struggle, and identifying patterns across large numbers of learners to improve the underlying instructional design.
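The gap-identification and sequencing step above can be sketched as a frontier computation over a prerequisite graph: teach exactly those topics whose prerequisites are mastered but which are not yet mastered themselves. The graph here is a toy; real platforms maintain far richer knowledge graphs:

```python
def next_targets(prereqs: dict[str, list[str]], mastered: set[str]) -> list[str]:
    """Return the learning frontier: topics whose prerequisites are all
    mastered but which are not themselves mastered yet."""
    return sorted(
        topic for topic, ps in prereqs.items()
        if topic not in mastered and all(p in mastered for p in ps)
    )
```

Everything behind the frontier is remediation territory; everything past it is the "premature advancement" the gate exists to prevent.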

The focus shifts from making AI systems better teachers to making training systems more intelligent and adaptive while maintaining their essential structural integrity. This approach is more likely to produce measurable learning gains because it aligns with rather than fights against human cognitive architecture.

© Oisín Thomas 2025