In this issue:

  • A specific moment I keep seeing in class — and what it reveals

  • Why AI is a mirror, not a machine

  • The gap between doing your work and understanding your work

  • What a philosopher from the 1980s figured out before any of us

  • Context engineering and its hidden prerequisite

  • Why the struggle itself might be the point

The Pattern

I teach a course called Applied AI for Knowledge Work. Four weeks, small cohort, hands-on. One of the pillars of the course is what I call the orchestrator mindset — the idea that working with AI is less like using a tool and more like directing a process. The mindset itself has four parts:

  1. Define the outcome

  2. Plan with AI

  3. Decompose before delegating

  4. Review and refine the output

Simple enough on a slide. In practice, this is where people stall.

I've been watching people hit a specific moment in class. It looks different each time — sometimes it's someone who's never had to break down a task for anyone before, sometimes it's a seasoned leader who's delegated hundreds of things. But the moment is the same. They sit down to direct AI through something they do every day, and they realize they can't describe how they do it. The work is in their hands, not in their words.

It's not a knowledge gap. Everyone in the room is good at their job. It's something else — a gap between doing the work and being able to articulate the work. Between executing a process and understanding it well enough to hand it off.

And the reaction is almost always the same: they try once, get mediocre output, and pull back. Not because the tool failed. Because the experience of not being able to describe your own expertise is genuinely disorienting. It's easier to conclude the AI isn't ready than to sit with that discomfort.

AI is a mirror. When the output is vague, it's reflecting vague input.

The Mirror

Here's what I've started telling people in class: AI is a mirror.

When the output is vague, it's because the input was vague. When the output misses the point, it's because the point wasn't made explicit. The model isn't failing to understand. It's reflecting, with uncomfortable accuracy, the gap between how you do your work and how well you understand your work.

That's not a criticism. It's an observation about something genuinely new.

Most of us have never needed to articulate our process. You develop expertise over years — pattern matching, intuition, shortcuts you can't name. You get good at your job by doing it, not by describing it. And that works. It works for decades. Until you sit down with an AI and realize it needs the description, not the intuition.

Think about how delegation works with people. When you tell a colleague "just handle it," you're relying on shared context built over months or years. They know the preferences, the dynamics, the unwritten rules. None of that transfers automatically to AI. And so for the first time, you have to make the implicit explicit. That's harder than it sounds. For some people, it's the hardest part.

The Reflection Gap

In the 1980s, a philosopher named Donald Schön drew a line between two modes of professional work. He called them knowing-in-action and reflection-on-action.

Knowing-in-action is what most of us do most of the time. It's the expertise that lives in the doing — the analyst who instinctively weights recent data over historical trends, the writer who feels when a paragraph needs cutting, the project manager who senses which stakeholder needs a heads-up before the meeting. You don't think about it. You just do it. The reasoning has been compressed into reflex.

Reflection-on-action is the deliberate work of examining what you did and why. Unpacking the reflex. Turning intuition into something you can articulate, question, and share.

Schön's argument was that professionals need both, but that most organizations reward only the first. You get promoted for executing, not for reflecting. The result is a workforce full of people who are excellent at their jobs and largely unable to explain how they do them.

AI didn't create that gap. It just made it visible.

When you sit down to give an AI context — not a prompt, but real context about how you think about this work — you're doing reflection-on-action, whether you know the term or not. You're examining your own process, identifying what matters, separating the essential from the habitual. And most people haven't done that since they were learning the job in the first place.

The Metacognition Problem

There's a thread in the research that takes this further than I expected.

CSIRO — Australia's national science agency — published work on what they call collaborative intelligence between humans and AI. One of their key findings: AI systems cannot engage in metacognition. They cannot think about their own thinking. That entire burden falls on the human.

This sounds obvious until you consider the implication. If the AI can't evaluate whether it's approaching a problem correctly, you have to. If the AI can't recognize when it's missing context, you have to. If the AI can't step back and ask "am I even solving the right problem?" — that's on you.

The research gets more uncomfortable from there. Studies on AI-assisted decision-making found that people without strong metacognitive awareness don't just fail to improve with AI. They get worse. The term researchers use is "metacognitive laziness" — the tendency to offload cognitive responsibility onto the AI and disengage from deeper thinking. The AI does the work, the human rubber-stamps it, and the quality of judgment degrades quietly over time.

The paradox: AI assistance frequently improves immediate task performance while simultaneously undermining the skills you need to evaluate that performance. It makes the output better and the human worse — unless the human is actively thinking about their own thinking.

That's the metacognition problem. And it doesn't get solved by learning a new tool.

The places where you freeze are the pockets of tacit knowledge that run your work.

Context Engineering and Its Prerequisite

The AI world is catching up to this idea, even if it hasn't fully connected the dots.

Andrej Karpathy — a founding member of OpenAI, now at Eureka Labs — recently popularized the term context engineering. He described it as "the delicate art and science of filling the context window with just the right information for the next step." Anthropic published a technical guide on it. LangChain started cataloging failure modes — context poisoning, context distraction, context confusion.

The framing has shifted. It's no longer about writing a better prompt. It's about architecting the right information environment before the conversation starts. What does the model need to know? What's the decision logic? What are the constraints? What's the history behind this particular problem?
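To make that shift concrete, here is a minimal sketch of what "architecting the information environment" can look like before a single prompt is written. It is illustrative only: the field names (outcome, decision_logic, constraints, history) and the assemble_context helper are hypothetical scaffolding of my own, not anyone's published API, but they map directly onto the questions in the paragraph above.

```python
from dataclasses import dataclass


@dataclass
class TaskContext:
    """Hypothetical container for the context a model needs before the conversation starts."""
    outcome: str                # what "done" looks like, stated explicitly
    decision_logic: list[str]   # the rules you normally apply by reflex
    constraints: list[str]      # hard limits the output must respect
    history: str = ""           # background on this particular problem


def assemble_context(task: TaskContext, request: str) -> str:
    """Turn the structured context into a single briefing that precedes the actual request."""
    sections = [
        f"Outcome: {task.outcome}",
        "Decision logic:\n" + "\n".join(f"- {rule}" for rule in task.decision_logic),
        "Constraints:\n" + "\n".join(f"- {c}" for c in task.constraints),
    ]
    if task.history:
        sections.append(f"Background: {task.history}")
    sections.append(f"Request: {request}")
    return "\n\n".join(sections)


# Example: a weekly status summary, with the usually-implicit rules made explicit.
ctx = TaskContext(
    outcome="A one-page status summary a busy director can skim in two minutes",
    decision_logic=[
        "Lead with risks and decisions needed, not with activity",
        "Weight the most recent week over historical detail",
    ],
    constraints=["No jargon", "Under 400 words"],
    history="Project slipped two weeks in March; leadership is sensitive to dates.",
)
print(assemble_context(ctx, "Draft this week's status summary from the notes below."))
```

The interesting part isn't the code, which is trivial. It's that filling in those four fields forces exactly the reflection-on-action described above: you can't write the decision_logic list without examining how you actually decide.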

This is real progress. But here's what I notice: the conversation about context engineering is almost entirely technical. It's about retrieval systems, chunking strategies, token management. Important work. But it skips the prerequisite.

Before you can fill the context window with the right information, you have to know what the right information is. You have to understand your own work well enough to curate it. That's not a technical skill. It's a reflective one.

The best context engineers I've seen — in my class, in my own work — aren't the most technical people. They're the ones who've spent time examining how they think. They know what they optimize for. They know which parts of their process are load-bearing and which are habit. They can describe their decision logic, not just execute it.

Context engineering has a philosophical prerequisite. And that prerequisite is self-knowledge.

The Ancient Instruction

There's something almost funny about where this lands.

Two thousand years of philosophy. Decades of management theory. Billions of dollars in AI infrastructure. And the skill that matters most — the one that separates the people getting extraordinary value from AI from the people who gave up after the third mediocre output — is the one inscribed on the Temple of Apollo at Delphi.

Know thyself.

Not in the self-help sense. Not "find your purpose" or "live your truth." In the operational sense. Know how you work. Know what you optimize for. Know which decisions you make consciously and which ones you've automated into reflex. Know the difference between what you do and why you do it.

The AI literacy gap isn't about technology. It never was. It's about self-awareness.

The people who struggle with AI aren't less intelligent or less technical. They just haven't had a reason to externalize their process until now. For their entire career, the process could stay internal — compressed, intuitive, invisible. It still worked. The work still got done.

AI changed the equation. Not by demanding new skills, but by exposing an old absence.

The Mirror Works Both Ways

Here's where I've landed, and it took me a while to get here: the thing that exposes the gap is also the thing that closes it.

That moment in class — the one where someone freezes because they can't describe their own process — isn't a failure. It's the beginning of something. The discomfort is the practice. Every time you try to give AI context and fall short, you're getting feedback. Not about the model. About yourself. About where your tacit knowledge lives, unexamined.

I've started encouraging people to stay in that moment instead of retreating from it. The output came back wrong? Good. What did it miss? What did you assume it would know that you never actually said? That delta — between what you meant and what you communicated — is a map of your own unexamined expertise.

Work the delta. Here's what that looks like in practice:

  • Stay in the discomfort. The urge to close the tab after a bad output is strong. That's the exact moment where reflection starts. The frustration isn't a signal to stop — it's a signal that you've found an edge of your own understanding.

  • Treat every mediocre output as feedback about yourself, not the tool. What did you assume it would know? What context did you carry in your head but never put into words? That gap is the most useful information in the entire interaction.

  • Iterate out loud. Don't just retry with a better prompt. Narrate what was wrong and why. The act of explaining the miss to the AI forces you to examine what "right" actually looks like — and that's often the thing you've never articulated.

  • Notice what you can't describe. The places where you freeze, where you wave your hands, where you say "you know what I mean" — those are the pockets of tacit knowledge that run your work. Finding them is the point.

The people who push through the initial frustration don't just get better at using AI. They get better at their work. Period. Because the reflective muscle this builds — examining your own process, making the invisible visible — that's a professional skill. AI just happens to be the thing that forced it into existence.

The irony is real: struggling with AI might be the most efficient path to professional self-knowledge most of us have ever had access to. Not because AI teaches you. Because it refuses to let you stay vague.

Schön would have loved this. He argued that reflection requires a "conversation with the situation" — a back-and-forth where you act, observe the result, and adjust your understanding. That's exactly what happens when you iterate with AI. The model responds to what you gave it. You see what's missing. You refine. Each cycle surfaces something about your own process that you didn't know was implicit.

The process needs to be visible now. Not for the AI's benefit. For yours.

Open Questions

  • What happens to organizations that skip this and automate on top of unexamined processes? The research on metacognitive laziness suggests a slow, quiet degradation of judgment. Hard to measure. Easy to miss.

  • How do you build this reflective practice into a team, not just an individual? The lone practitioner can iterate. But organizations run on shared process — and most of that is undocumented intuition.

  • How much of the "AI doesn't work" narrative is actually a reflection gap in disguise?

This dispatch started as a conversation about context engineering. It ended somewhere I didn't expect — circling back to a Greek temple. But the part I keep thinking about is simpler than any of that. The next time AI gives you a mediocre response, don't close the tab. Ask yourself what you didn't say. The answer is usually more interesting than the output.

That’s it, folks

Thanks for reading all the way through.
I’d love to know how today’s newsletter landed for you. Your feedback helps me make it better.
