We've built increasingly sophisticated tools to help us get things right—from autocorrect to predictive text to AI that can write entire articles. But each level of sophistication reveals a deeper problem: being technically correct isn't the same as being actually right. There's a hierarchy to getting things right, and most people stop climbing too early.
Autocorrect: Element Level Incorrect, Structure Level Incorrect
"I'll be there in a sex" (unfortunately, not a sec!)
—Everyone who had a phone in 2014
This is what happens when correction operates purely at the character level. Autocorrect is the drunk friend of text correction: it sees letter patterns and makes substitutions without any understanding of context, meaning, or intent. It fixes text at the most superficial level, a proofreader that at times seems actively hostile to human communication.
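To see how little the machinery understands, here's a minimal sketch of a character-level corrector in the spirit of Norvig-style spell checkers; it is not any phone's actual algorithm, and the frequency table is hypothetical. But the failure mode is real: candidates are generated by character edits and ranked by raw frequency, and meaning never enters the picture.

```python
# A toy character-level corrector. WORD_FREQ is a hypothetical frequency
# table; real systems use far larger models, but the ranking logic is the
# same: closest edit wins, weighted by how common a word is in the corpus.
WORD_FREQ = {"sex": 9000, "sec": 40}

def edits1(word):
    """Every string one character edit (delete, replace, insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + replaces + inserts)

def correct(word):
    """Return the most frequent known word within one edit of `word`."""
    candidates = {w for w in edits1(word) | {word} if w in WORD_FREQ}
    return max(candidates, key=WORD_FREQ.get) if candidates else word

print(correct("sec"))  # -> "sex": frequency wins; context is never consulted
```

The exact ranking rule doesn't matter. What matters is that nothing in this pipeline can tell a text to your boss from a text to your partner.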
Predictive Text: Element Level Correct (Sometimes), Structure Level Incorrect
Over time we got predictive text, but that isn't much better. Do you remember the old game of tapping the middle suggestion repeatedly? You'd start with "I think" and end up with: "I think I have a great day and I will be there at the same time I don't have a car so I can get a ride to the airport and then I will be able to make it to the meeting."
This time we've moved from simple word-level errors that mangle the meaning of a single sentence to grammatically coherent gibberish: every word makes sense, but the whole thing drifts toward the algorithmic mean of human expression.
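That drift is easy to reproduce. Below is a toy version of the "tap the middle suggestion" game: a bigram model built from a tiny hypothetical corpus, greedily picking the single most likely next word at each step. Real keyboard models use longer context and personalisation, but the gravitational pull toward the statistical average is the same.

```python
from collections import Counter, defaultdict

# A hypothetical mini-corpus standing in for everything you've ever typed.
corpus = ("i will be there in a sec . i will have a great day . "
          "i think i will be there at the meeting . "
          "i will be able to make it .").split()

# Count which word most often follows each word (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def middle_suggestion(word):
    """Greedy pick: the single most frequent continuation."""
    return bigrams[word].most_common(1)[0][0]

text = ["i", "think"]
for _ in range(12):
    text.append(middle_suggestion(text[-1]))

print(" ".join(text))
# -> "i think i will be there in a sec . i will be there"
# Each step is locally the most likely word; the chain settles into the
# corpus's most average sentence and starts repeating it.
```

Every individual choice is defensible, which is exactly the problem: local plausibility compounds into global emptiness.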
LLMs: Element Level Correct, Structure Level Plausibly Correct
Large language models can generate text that looks like writing—proper paragraph structure, smooth transitions, all the conventional markers of competent prose. But most of it is slop.
LLM-generated content follows patterns perfectly. It knows that blog posts should have introductions, that arguments need supporting evidence, that conclusions should summarise key points (I see a suspicious number of numbered lists and bulleted paragraphs with bolded header sentences). The output is structurally sound and elementally correct, but it's optimised for looking like good writing rather than actually being good writing.
And isn't this the real danger? The pernicious, insidious danger of AI slop is text that passes every surface test while saying nothing meaningful. It's grammatically perfect, stylistically appropriate, but completely hollow. It's written to satisfy algorithms and approval processes rather than to communicate genuine insight or solve real problems.
You Can't Autocorrect, Predictive-Text, or Tokenise Your Way from Zero to One
The same pattern emerges when LLMs design interfaces. Ask one to mock up a restaurant app and you'll get something that looks exactly like every other restaurant app—menu categories, search functionality, checkout flow, review system. It hits every expected element and follows every UX convention. The wireframes look professional. The user flow makes sense.
But what if people don't want another restaurant app? What if the real opportunity is helping restaurants build community, or solving food waste, or something else entirely? The AI will never suggest throwing out the conventional structure because it doesn't know that established patterns can be fundamentally wrong.
LLMs excel at producing competent iterations on existing ideas. They're like having a talented intern who has studied every design portfolio online and can execute any style flawlessly. But they can't tell you whether the brief itself is worth pursuing, or whether there's a completely different approach that would better serve what people actually need.
So what happens if the fundamental shape is wrong? No amount of autocorrecting, predicting, or pattern-matching will fix something that's misconceived from the start.
True taste operates at a level above structure. It's the ability to sense whether something is right not just grammatically or conventionally, but essentially. It's what tells you that a startup idea is pursuing the wrong problem, or that a product feature will annoy users in practice, or that a design approach misses the point entirely.
The person with taste doesn't just fix typos or improve user flows. They can look at something and say, "This is the wrong approach entirely. Start over, but start from here instead." They see the problem behind the problem.
This is why great founders pivot entire visions, why great writers throw away months of work, and why great designers scrap polished concepts to pursue something that feels more true: they're not fixing errors; they're sensing wrongness at a deeper level.
Autocorrect operates on symbols. Predictive text operates on words. LLMs operate on tokens. But taste operates on truth—the recognition that no amount of polishing will make something fundamentally misconceived into something good.