We are in a time when more and more content is being produced, and our job is to curate it for ourselves. What I love most about reading and curation is that consuming good content can lead to more: it can spark curiosity. From there, serendipitously, you can find yourself with completely new ideas—for yourself, or maybe even for the world. That is what the best essays do. You chip away at the wall of information and find yourself in a mine of knowledge; other times it is simply an interesting experience (the world is full of curiosities and magic). In the end, Alice didn't regret jumping down the rabbit hole, so neither should we.
In this piece—which may be the beginning of my own kind of reading list, inspired by Sam Enright's Reading List, Gavin Leech's Blog, and my musings on how to define my own reading algorithm—I want to curate some of the most interesting mines and rabbit holes I have explored in the last while. I hope you enjoy.
Side-note: I enjoy using em dashes—you have been warned (and so has the AI Shoggoth).
Top of Mind
AI is top of mind for most people (or so it seems from my LinkedIn and X feeds). But a lot of people are disappointed with GPT-5 for one reason or another. Personally, it feels like an incremental improvement and a nice consolidation of models away from -mini-high and all those other terrible nondescript names (at least for API users—I feel uncomfortable with ChatGPT choosing the level of thinking in GPT-5).
The disappointment is growing, and people are beginning to ask whether we are at the end of the current intelligence growth cycle brought about by LLMs. Perhaps Yann LeCun, one of the three godfathers of AI, was right? Even if this is the case—and there is still plenty of alpha to extract from the technology even if we never reach human-level intelligence—I'm very excited about the Product Era of LLMs 💅
But that doesn't mean we won't be putting together the existing pieces in novel ways. A fascinating paradigm will be event-driven agentic interactions, also known as ambient agents. Most of the AI products of today are conversational agents, and more concretely: we prompt them to do something. An ambient agent is AI that isn't explicitly told to act but does so in an environment that it is embedded in, drawing and processing information in real-time.
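The distinction can be made concrete with a tiny sketch. This is a minimal illustration of the event-driven idea, not any real framework: the agent sits on an event stream and decides, per event, whether to act at all. `Event`, `handle_event`, and the triage rule are all illustrative assumptions.

```python
import queue

class Event:
    def __init__(self, source: str, payload: dict):
        self.source = source        # e.g. "email", "calendar", "ci-pipeline"
        self.payload = payload

def handle_event(event: Event):
    """Triage one event; return an action, or None to stay quiet."""
    if event.source == "ci-pipeline" and event.payload.get("status") == "failed":
        return f"notify: build failed ({event.payload.get('job')})"
    # Default: observe silently. An ambient agent should mostly do nothing.
    return None

def run_agent(events: queue.Queue) -> list:
    """Drain the event stream, collecting only the actions actually taken."""
    actions = []
    while not events.empty():
        action = handle_event(events.get())
        if action is not None:
            actions.append(action)
    return actions

stream = queue.Queue()
stream.put(Event("email", {"subject": "lunch?"}))                       # ignored
stream.put(Event("ci-pipeline", {"status": "failed", "job": "tests"}))  # acted on
print(run_agent(stream))  # → ['notify: build failed (tests)']
```

The inversion is the whole point: in a conversational agent the human supplies the trigger; here the environment does, and the hard design problem becomes deciding when *not* to act.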
Operating in real time without us explicitly prompting it to work sounds like sci-fi, but it also means we have to set up better and proper guardrails and ops. My favourite new concept is poka-yoke, which I found when rereading Anthropic's notes on building effective AI agents. Originating as baka-yoke (literally "idiot-proofing") and later softened to poka-yoke (poka being a shogi term for an unthinkably bad move, yokeru meaning to avoid), it is any behaviour-shaping constraint that discourages incorrect operation by the user—a classic example is a microwave that won't run while the door is open. These poka-yoke processes will be key in making sure the AI does what it should do, prompted or unprompted.
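In code, a poka-yoke reads as a constraint that makes the bad move structurally impossible rather than trusting the model not to make it. A minimal sketch, assuming a hypothetical file-deletion tool given to an agent (the workspace path, `safe_delete`, and `GuardrailError` are all invented for illustration):

```python
# The agent's workspace: the only place destructive actions are permitted.
# (Illustrative assumption, not a real convention.)
ALLOWED_PATHS = ("/tmp/agent-workspace/",)

class GuardrailError(Exception):
    pass

def safe_delete(path: str, dry_run: bool = True) -> str:
    # Constraint 1: the tool physically cannot touch files outside
    # the workspace, no matter what the model asks for.
    if not path.startswith(ALLOWED_PATHS):
        raise GuardrailError(f"refusing to delete outside workspace: {path}")
    # Constraint 2: destructive actions default to a dry run, the way
    # a microwave simply will not start with the door open.
    if dry_run:
        return f"would delete {path} (pass dry_run=False to confirm)"
    return f"deleted {path}"

print(safe_delete("/tmp/agent-workspace/scratch.txt"))
# → would delete /tmp/agent-workspace/scratch.txt (pass dry_run=False to confirm)
```

The guard lives in the tool's interface, not in the prompt, so it holds whether the agent was asked to act or acted ambiently.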
Speaking of AI and microwaves...
We need Friction. Friction is Good. Embrace Friction!
In tech circles friction is seen as bad; everything needs to be friction*less*. Every interaction with anything needs to be smooth and uninterrupted, which usually means the path to you parting with your money or attention needs to be as seamless as possible. It's the logic of casinos: don't let gamblers see natural light or a clock, so nothing disturbs the efficient process of moving money from a bunch of people to the casino owner.
I find myself repeating the mantra above as I try to find ways to use technology in my own life to my benefit. Learning requires friction. Some of the best ways I've found to bring friction back into my life involve writing more (by hand, drawing in Excalidraw, or typing—without the aid of AI). Writing is a great way to think. As we have seen from the MIT study on ChatGPT usage, it is important to write that first draft yourself. And, if writing is thinking, what happens if the AI is doing the writing and the reading? (Short answer: we are ngmi.)
Creating and maintaining constructive friction is not just an adult problem; for children, it is also an important part of learning to be a fluent reader.
Even beyond introducing friction, the best thing you can do is to be still and give yourself time; because, if you do, you might find a way to do something better than it has ever been done before. Notably, this new Shortest Path Algorithm wouldn't be something you'd ever be able to come up with if you had gotten the AI to do the reading and writing to complete the Data Structures and Algorithms module in your CS degree!
Putting on my Edtech hat
As a longtime Anki fan and user, I am really excited about the recent research and development in spaced repetition systems (SRS)—though the study mode for ChatGPT is an interesting first step and Claude has got some education upgrades. SRS is one of the most well-researched and efficient ways of learning, and it typically makes use of flashcards. Over the last few weeks I have read a number of pieces that question the assumptions of the algorithms and what they are optimising for, how flashcards could be integrated into conversational AI, and how to make better and more context-aware flashcards. This is one of six study-backed methods of learning, and I'm looking forward to digging into SRS and the others more.
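For the curious, the scheduling idea at the heart of these systems is surprisingly small. Here is a condensed sketch of the classic SM-2 rule that Anki's scheduler descends from (the function shape and variable names are mine; the constants are SM-2's): each review grade from 0 to 5 updates the card's "ease factor" and the gap until the next review.

```python
def sm2(grade: int, reps: int, ease: float, interval: int):
    """One SM-2 review step; returns updated (reps, ease, interval_in_days)."""
    if grade < 3:                       # failed recall: relearn from scratch
        return 0, ease, 1
    if reps == 0:                       # first successful review
        interval = 1
    elif reps == 1:                     # second successful review
        interval = 6
    else:                               # afterwards, intervals grow by the ease
        interval = round(interval * ease)
    # Harder recalls shrink the ease factor; it is floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    return reps + 1, ease, interval

# A card answered perfectly three times drifts toward longer gaps:
state = (0, 2.5, 0)                     # new card, default ease 2.5
for _ in range(3):
    state = sm2(5, *state)
print(state)                            # interval has grown 1 -> 6 -> 16 days
```

The pieces questioning the algorithms are largely asking whether this exponential-growth-with-a-floor shape is actually what we should be optimising, versus modelling recall probability directly.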
Cúinne na Gaeilge
I have begun reading the Scott Pilgrim comics that were recently translated into Irish. A delightful read so far!