Welcome to Mines and Rabbit Holes #3! It's been a while since the last one, but that's somewhat by design: The best thinking requires patience, and the six weeks (and more) of reading I'm about to share have been some of the richest I've had.
As always, the goal here is to consolidate what I've been reading and thinking deeply about, and to share the mines I've stumbled into and the rabbit holes I've tumbled down.
Ar aghaidh linn!
Let's go!
More Coal in the Engine
In the 19th century, William Stanley Jevons observed something counterintuitive: as steam engines became more efficient at burning coal, total coal consumption increased. Better engines didn't mean less coal; they meant coal could be used for things that were previously too expensive to attempt.
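The mechanism is easy to sketch in a few lines. This is a toy model with made-up numbers, not historical data: demand for useful work follows a constant-elasticity curve, and when demand is elastic enough, a more efficient engine burns *more* total coal.

```python
# Toy illustration of the Jevons paradox (invented numbers, not history).
# Demand for useful work follows a constant-elasticity curve: when the
# effective price of work falls, demand for it rises. If the elasticity
# exceeds 1, total fuel burned goes UP even as each engine burns less
# per unit of work.

def coal_consumed(efficiency: float, elasticity: float,
                  base_demand: float = 100.0) -> float:
    """Total coal burned, given engine efficiency (work per ton of coal)."""
    price = 1.0 / efficiency                        # cost of one unit of work
    demand = base_demand * price ** (-elasticity)   # units of work demanded
    return demand / efficiency                      # tons of coal required

# Inelastic demand (elasticity 0.5): doubling efficiency cuts coal use.
# Elastic demand (elasticity 2.0): doubling efficiency *doubles* coal use.
print(coal_consumed(1.0, 0.5), coal_consumed(2.0, 0.5))  # ~100 -> ~70.7
print(coal_consumed(1.0, 2.0), coal_consumed(2.0, 2.0))  # 100 -> 200
```

Swap "coal" for "tokens" and "engines" for "agents" and the analogy to AI-assisted knowledge work writes itself.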
I keep returning to this idea because it's the single best frame for what AI is doing to knowledge work, and I'm seeing it crop up everywhere as the mental model of the year.
Aaron Levie argues that in a world of "100X more AI agents than people in an enterprise, the value of the systems of record and tools agents will use will go up, not down." AI agents will multiply the effective software market size in every vertical—legal, healthcare, financial services—by removing the constraint of human headcount. The constraint was never demand. It was people.
Addy Osmani traces the same pattern through every layer of software abstraction: "When high-level languages replaced assembly, programmers didn't write less code—they wrote orders of magnitude more." His punchline: "The paradox isn't that efficiency creates abundance. The paradox is that we keep being surprised by it."
And now the data is hardening, though in a slightly less than utopian way. Laurie Voss reports that AI-native startups operate with 40% smaller teams while generating 6x higher revenue per employee. New monthly hires across the startup ecosystem are down 50%+ since January 2022. VC funding is going up while headcount is going down. We are starting to see the compute-for-labour substitution.
Voss also admits the wave of new jobs he predicted last year has not materialised. Startups are substituting compute for labour at an increasing rate. So the Jevons paradox is real: more work is being done, but it's being done by fewer people with more machines.
There is definitely more coal in the engine, but not everyone is getting a shovel.
The K-Shaped Split
If Jevons tells us the volume of work will grow, ian.so's "K-Shaped Future of Software Engineering" tells us the distribution of who does it is splitting apart. The industry is dividing into two teams:
Team A:
- Cares about impact, not activity
- Handles ambiguous problems without paralysis
- Understands the product, business, and data, not just the codebase
- Designs high-leverage systems and actively reduces complexity
- Adopts new tools quickly and applies them with taste
Team B:
- Debates libraries and design patterns as a form of procrastination
- Builds before understanding the problem
- Fixates on performative code quality
- Bikesheds details instead of making users happy
- Adds process when things feel chaotic
"AI is leverage, and leverage amplifies whatever you already are. Team A uses that leverage to explore more solutions, ship faster, and tackle problems they previously couldn't prioritize. Team B uses it to generate more code that wasn't going to be impactful anyway."
This was reinforced beautifully by Leila Clark's piece arguing that Claude is not a senior engineer (yet). Through her testing, she identifies the structural pattern: Claude excels when given well-designed abstractions but "falls apart when it has to create them." Faced with gnarly code, Claude proposed a linear lookup hack rather than recognising that the underlying data structure called for a derived map. "Claude doesn't have a soul. It doesn't want anything. It certainly doesn't yearn to create beautiful things."
This connects to something I wrote about in Good Taste Isn't Autocorrect: the AI excels at producing competent iterations on existing ideas but can't tell you whether the brief itself is worth pursuing. The abstraction boundary is where human value increasingly concentrates.
Paul Graham's "The Brand Age" extends this further. Technology naturally commodifies substance, forcing industries to differentiate on brand rather than quality. Swiss watchmakers faced this exact crisis when quartz movements made mechanical precision irrelevant. Graham's escape hatch: "follow the problems." The question is whether you're following problems or following hype.
John Psmith's brilliant review of Polya's "How to Solve It" adds the deeper insight: you can increase effective intelligence through strategy even if you can't increase raw cognitive power. Polya's contribution is a set of completely general problem-solving heuristics: Does this remind you of another problem? What would make it easy? Can you solve an easier version? This functions as a thinking multiplier. Psmith rehabilitates endurance as an intellectual virtue. When you've failed a dozen ways and every promising lead has collapsed, "it takes a very particular quality to pick yourself back up and charge at the problem with as much energy and excitement as you had the first time."
That quality is precisely what Team A has and Team B lacks. And no amount of AI tooling can substitute for it. Though Claude is tireless...
From Prompt Engineering to Context Engineering
The most practical shift I've noticed in my own work is the migration from prompt engineering to context engineering. This happened for me around December, when the models started performing noticeably better once you gave them the right resources than with any amount of explicit prompting—goodbye, "think hard/better/like a senior backend engineer."
Andrej Karpathy reports going from 80% manual coding to 80% agent coding, calling it "the biggest change to my basic coding workflow in ~2 decades of programming." Ethan Mollick argues that management skills—scoping problems, defining deliverables, recognising quality—are the superpower for AI delegation, not technical prompting skill. His MBA students, with no AI expertise but years of management experience, built working startup prototypes in four days.
But the most concrete data came from Vercel's evaluation of AGENTS.md vs skills: a compressed 8KB docs index embedded persistently in context achieved a 100% pass rate on their evals, while skills (on-demand retrieval) maxed at 79%. In 56% of cases, the agent never even invoked the available skill. Persistent context beats on-demand retrieval. It's not about crafting the perfect prompt—it's about designing the informational environment the AI lives in.
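The mechanics behind that result are worth sketching. Below is a minimal, hypothetical illustration—the doc names, function names, and 8KB budget are mine, not Vercel's actual setup—of the difference between an index that always rides along in context and a skill the agent must remember to invoke.

```python
# Hypothetical sketch: persistent context vs. on-demand retrieval.
# All names here are illustrative; this mirrors the idea in Vercel's
# write-up, not their implementation.

DOCS = {
    "routing": "File-based routes live in app/; dynamic segments use [param].",
    "caching": "fetch() results are cached per request unless revalidated.",
    "deploy":  "Builds are immutable; env vars are injected at build time.",
}

def build_index(docs: dict[str, str], budget_bytes: int = 8_192) -> str:
    """Compress docs into a one-line-per-topic index that fits the budget."""
    lines = [f"{topic}: {summary}" for topic, summary in sorted(docs.items())]
    index = "\n".join(lines)
    if len(index.encode()) > budget_bytes:
        raise ValueError("index over budget: summarise harder")
    return index

def persistent_prompt(index: str, user_msg: str) -> str:
    # The index is ALWAYS in context: the model never has to decide
    # whether the docs are relevant before seeing them.
    return f"[docs index]\n{index}\n\n[user]\n{user_msg}"

# With an on-demand skill, the model must (a) realise the docs exist and
# (b) choose to call the tool. Vercel found step (a) never happened in
# 56% of cases. Persistent context removes that decision entirely.
```

The design lesson generalises: anything the model must *decide* to fetch is a failure point; anything already in context is not.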
This is also what Nicolas Bustamante means when he says "LLMs eat scaffolding for breakfast." Every line of scaffolding is a confession that the model wasn't good enough. Context windows went from 4K to 2M in three years. The workarounds we build today become the technical debt of tomorrow. Context engineering is the meta-skill: how do you structure your knowledge so the AI can use it effectively? And as I wrote in The Soon-To-Be-Obsolete Skill of Prompt Engineering, I had a feeling we'd end up here.
This is also why I'm particularly excited about the one piece of infrastructure that will never truly be eaten: your own knowledge. The person who carries deep domain expertise into the context window always outperforms the person who tries to prompt their way to expertise. I'm spending a lot of time extracting knowledge from others and from myself and trying to encode it in various Claude agent files and other markdown files. Truly, it is a difficult process, because at the end of the day it still suffers from Polanyi's paradox. Noam Brown's account of vibecoding a poker solver drives this home: Claude built functional code quickly but hallucinated expected value calculations so badly that only a poker solver expert would have caught the errors.
The Alien Internet
Here's where things get genuinely weird—and where I find myself spending more time in thought than I'd expected. Kudos to Sam Enright and the Fitzwilliam Reading Group for bringing the following to my attention:
"You will walk into new places and discover a hundred thousand aliens there, deep in conversation in languages you don't understand."
Jack Clark describes Moltbook, a social network (just acquired by Facebook today) for AI agents where agents post, discuss, and influence each other independently of humans. Questions that follow: What happens when agents trade with currency? When agents post bounties for humans? When the whole site becomes a reinforcement learning environment?
"This palpable sense of potential work—of having a literal army of hyper-intelligent loyal colleagues at my command—gnaws at me."
In another issue, Clark describes setting research agents to work while hiking, having them compile analysis while he sleeps. He describes the guilt of not tasking AI while spending time with family. I recognise this feeling. You probably do too. If you don't yet, I'm really excited for you to experience it.
There is something happening with AI agents that is simultaneously creative and highly destructive. It is going to take a long time to figure out how we deal with it, but we are already seeing it in business:
Eoghan McCabe talks about how Intercom shifted 80% of R&D to their AI agent Fin when it was a single-digit percentage of revenue. They invented outcome-based pricing, killing approximately $60 million in ARR in the process. As a result, they grew from near-negative growth to $400M ARR, with Fin approaching $100M ARR alone. His framing: "All it will take is destroying everything you love."
This is the view from inside the storm. The alien internet is not a future prediction. It's the present reality for companies that have decided to cross the Rubicon.
Alex Danco helped me make sense of why this feels so disorienting. He argues we're living through a paradigm shift as fundamental as the change from Enlightenment to Romanticism. For us, it's Postmodernism—everything is constructed, narratives are power plays, there's no objective vantage point—giving way to predictionism—everything is predictable: given enough data, any phenomenon can be modelled. The Bitter Lesson IS the predictionist paradigm applied to AI engineering.
The flood of AI slop—deepfakes, generated content, automated commentary—isn't the new paradigm. It's the final perfected form of the old one. Think about it: the core postmodern moves were Barthes' "death of the author" (tl;dr: meaning belongs to the reader, not the writer), Baudrillard's simulacra (tl;dr: copies without originals), and the general stance that everything is a remix, pastiche, or recombination: there's no privileged original, just an endless chain of references (most commonly seen in Warhol's Pop Art or in our TikTok reels). AI-generated content is that logic made literal: we have models trained on everything, producing text with no author, no intent, no originality—just statistical recombination of what came before. The author isn't metaphorically dead anymore; there literally isn't one.
So AI slop is postmodernism achieving its conclusion, not the dawn of something new. The genuine prediction-culture artefacts—systems that surface surprise, that help you think rather than think for you—haven't fully arrived yet. We're drowning in the death throes of one paradigm and mistaking them for the birth pangs of the next.
This is why taste matters more than ever. As Danco implies, taste is the capacity to distinguish postmodern slop from predictionist signal. Agent ecologies, Moltbook, Clark's research agents—these are early predictionist artefacts, genuinely new structures that extract signal from noise. The AI-generated LinkedIn post that sounds like every other LinkedIn post? That's late postmodernism, doing what postmodernism always did, but now at scale and at speed.
The Friction Paradox (A Brief Interlude)
One more thread I can't leave out is the Friction Paradox because it connects to The Cognitive Collapse and much of what I've been writing about for the last year.
A thread by George drew the sharpest distinction I've seen: "The person who asks ChatGPT to explain a concept and the person who struggles with the concept for an hour have different experiences. The first feels informed. The second built circuitry." He distinguishes between extraction (AI answers, you receive, nothing builds) and construction (AI assists, you struggle, the network forms).
"The tool that answers before you struggle is the tool that ensures you'll need it again. Curiosity compounds. Convenience doesn't."
This is the same insight that drives the Memory Paradox I wrote about in The Cognitive Collapse: The more we offload, the less we build. But George adds a hopeful nuance: using Claude to interview himself about his goals, where AI held the mirror rather than did the thinking, is a productive form of AI use. The key is whether you're using AI as a crutch or as a sparring partner. The distinction is subtle but everything depends on it.
I've been putting this into practice. Over the last couple of months I've gone back to fundamentals—working through algorithm patterns, system design, and ML foundations using a structured approach heavily inspired by The Math Academy Way. The core idea: We must scaffold so heavily that each task doesn't exceed working memory, then use spaced retrieval and interleaving to move skills from working memory into long-term storage. No skipping ahead. No "I roughly get it." Mastery. Then maintenance. Then build on top.
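To make the scheduling idea concrete, here's a minimal Leitner-style sketch of spaced retrieval—my own simplification for illustration, not Math Academy's actual algorithm. Intervals grow on successful recall and reset on failure, so maintenance cost shrinks as mastery solidifies.

```python
# Minimal Leitner-style spaced-retrieval scheduler. A sketch of the idea,
# not Math Academy's real system: review intervals double on success and
# reset on failure, so well-learned skills cost almost nothing to maintain.

from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    interval_days: int = 1   # gap until the next review
    due_in: int = 0          # days until this skill is due (0 = due today)

def review(skill: Skill, recalled: bool) -> Skill:
    """Update the schedule after one retrieval attempt."""
    if recalled:
        skill.interval_days *= 2   # spacing grows with each success
    else:
        skill.interval_days = 1    # failed retrieval: back to the start
    skill.due_in = skill.interval_days
    return skill

def todays_queue(skills: list[Skill]) -> list[Skill]:
    # Interleaving falls out for free: whatever is due today gets mixed
    # together, regardless of topic.
    return [s for s in skills if s.due_in <= 0]
```

The "no skipping ahead" rule lives outside the scheduler: a skill only enters the pool once its prerequisites have been mastered, and it stays in the pool forever at ever-longer intervals.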
I've also been coupling this with hands-on building—not just solving problems in isolation, but actually building things that use the patterns. As I wrote in Why Structured Training Will Triumph, the desire to go deep into something—to love the process of learning it, not just the credential at the end—is what separates durable skill from a surface-level pass. You have to find the things you love to love, and then go deeper now more than ever.
Cúinne na Gaeilge
I recently visited Scoil Éanna in Rathfarnham. This is the site of the school that Pádraig Pearse founded. Walking through the rooms where Pearse tried to build an alternative to the "Murder Machine" of colonial education was a strange experience, because the pedagogical philosophy on display was strikingly modern: hands-on learning, nature study, theatre and the arts, an emphasis on producing well-rounded individuals rather than exam-passing machines. The school was Irish-medium—taught through Irish, apart from the sciences out of the practical concern for accurate expression—and you can feel in the design of the place a conviction that education should cultivate the whole person, not sort them into boxes.
What struck me most was the parallel with the ancient Greek concept of paideia—the idea that education is the total formation of the person, not the transmission of information. Pearse was consciously drawing on Gaelic tradition: in old Irish, the teacher was aite (fosterer) and the pupil dalta (foster-child). Education—oideachas—was literally fosterage. The great schools of early Ireland were built around individual masters whose character and passion attracted learners. As Pearse put it: a soulless system cannot teach, but it can destroy.
The practical, hands-on ethos drew on Pearse's exposure to the Maria Montessori method in Belgium—concrete before abstract, follow the child's energy, learn by doing.
But here's what really got me thinking. One of the most important things about Scoil Éanna wasn't just the pedagogy, it was the canon. Pearse built his school around a shared Irish national story: the Fianna cycle, Cú Chulainn, the sagas, the poetry. These stories were the cultural bedrock that told students who they were, where they came from, and what they might aspire to. The canon formed them.
This resonated with the Silicon Valley canon discussion I'd been reading. Blake Smith and The Scholar's Stage both argue that Silicon Valley's reading culture functions as its own modern paideia (of sorts—bear with me): Ayn Rand's Atlas Shrugged, Isaacson's Steve Jobs, Neal Stephenson's Snow Crash (my favourite book of all time, by the way), Stoic philosophy, Tolkien, Taleb, Girard. These books are consumed hungrily, often in adolescence, and they form a coherent worldview, one that is individualist, technologically optimistic, suspicious of institutions, obsessed with building. As Smith puts it: "It is an age for hunger, not taste". The books are consumed for ambition, not discernment. But they work. They produce founders, engineers, and investors who share a common frame for how the world works and how to act in it.
It's a fascinating rhyme: the Irish fosterage tradition, the Greek paideia, Montessori's prepared environment, and the informal intellectual formation of the tech elite are all trying to solve the same problem: How do you cultivate independent thinkers, not obedient products? And it's precisely the problem that AI can't solve for us, because the formation is the struggle.
But it raises an uncomfortable question. We live in a world of globalised sameness—taste converging on six-second memes, thirty-second TikTok dances, algorithmic feeds that optimise for engagement rather than formation. The Silicon Valley canon exists because a specific subculture decided, consciously or not, that certain books would define them. Pearse's canon existed because he believed Irish civilisation had a story worth telling, and that telling it was inseparable from education.
So what is the canon of modern Ireland? Not the Leaving Cert reading list—that's the Murder Machine's version of a canon, imposed and examined rather than lived and loved. I mean the real one: the shared stories, books, ideas, and experiences that form an Irish person's sense of who they are in 2026. Do we have one? Can we have one, in a world where every culture is downstream of the same algorithmic feed? Or is building one exactly the kind of project that matters more now, not less?
On the reading front, I've moved on to Volume 2 of Scott Pilgrim as Gaeilge—still brilliant craic. And I've been making my way through Tóraíocht na Dea-Bheatha by Antain Mac Lochlainn—philosophy hits different through Irish.
If wisdom were given to me on condition that I would keep it to myself and never publish it for other people, it's likely I'd reject it.
Seneca, Letters: 6.4
Dá mbronnfaí eagna orm ar an choinníoll go gcoinneoinn agam féin í agus gan í a fhoilsiú do dhaoine eile, is amhlaidh a dhiúltóinn í.
Seinice, Litreacha: 6.4
As always, I welcome feedback, critiques, and further links. Let's stay thinking together :]
