AI is the rubber duck that pushes back

Last month I sat down with Cris Roata to record an episode of her podcast, Catching More Green Lights in Life. It went live today!
Cris asked me what I tell people who are afraid of AI, and I gave the answer I usually give: treat it like an intern, train it patiently, expect the first attempts to be useless.
Then I spontaneously said:
It’s also a very self-reflective exercise. Many times we’re not aware of our own styles, our own way of working. We only become aware of them if we’re forced to explain them to somebody else, like an AI.
I heard myself say it and thought: “That’s an interesting aspect. I hadn’t really considered it before.”

The naive picture of AI assistance is output: You ask for a thing, the model delivers a thing, you ship it or refine it. That’s the productivity story most people are familiar with.
The bigger thing happens earlier, before the AI has produced anything useful. It happens in the moment you have to explain what you want.
Most of what I know about my own work is hidden. I have a hundred preferences I have never written down: how I start a post, what cadence feels right, what I refuse to say even when the trend says to say it. The trouble with this kind of knowledge is that it stays invisible until something forces it out.
An AI assistant forces it out by being generically competent. The first draft it gives you is well-formed and almost right, which is the worst kind of wrong: You read it and go… meh. “That’s not how I’d say it. Not that word. Not that structure. Not this way.” The flinch is a signal: You are detecting the shape of your own voice by noticing how it does not sound.
Michael Leibovich captured the mechanism well in a post I shared yesterday: “Externalizing your thinking forces you to examine it in a way that makes you sharper independent of the AI.” The work is in the externalising. The AI is the forcing function.
This is rubber duck debugging, the trick programmers use where you explain your code to a yellow rubber duck on the desk and the bug pops out. The duck doesn’t help. The explaining helps. AI is the rubber duck that also pushes back: When you correct its draft, you articulate more about yourself than you consciously knew. The next iteration absorbs the correction. Over a few rounds, your hidden knowledge has become explicit text the model can hold and you can read.
There is one catch. This works in Cyborg mode, the sustained-dialogue posture where the conversation is the artifact and the draft is the byproduct. I wrote about that distinction in a longer post recently. It does not work in Centaur mode, where you write a prompt, take the output, and walk away. In Centaur mode you get the AI’s draft. In Cyborg mode you get yourself.
Here is a simple way to try it: take a piece of your work (an email, a slide, a paragraph you are proud of), open Claude or whichever assistant you trust, paste it, and ask: “What hidden assumptions can you read from how this is written? What do I seem to believe about this kind of work that I haven’t said out loud?” Read the answer carefully. Some of it will be wrong. Some of it will be embarrassingly accurate. That can be fascinating!
Ruben Hassid proposed a more elaborate version. His Substack piece I am just a text file describes a method where you let the AI interview you across seven categories: beliefs you would defend, sentences you would never start with, words you refuse to use, things that make you cringe. The output is supposed to be a portable voice file you can hand to any AI later. Claude and I used a modified form to create my own voice file to help with editing my writing. The voice file was the goal. The interview was the unexpected insight: most of the questions made me notice answers I knew but had never said out loud.
I went into the conversation with Cris expecting to talk about AI as a thinking partner. I came out remembering that the thinking it surfaces can be a mirror of your own, one that magnifies the details.
If you want the longer version of that conversation, the podcast is here.
What questions or answers have surfaced unexpected insights for you? Let me know on LinkedIn, Bluesky or using the contact page above!
