Thinking with Claude: why Cyborg writing works better than Centaur writing

In February this year, I wrote about why I stopped trying to engineer prompts and started delegating intent instead. That post argued the skill of working with AI is managerial, not technical. I still think that’s right. But over the last couple of months I’ve noticed something else: even with good delegation, the AI’s output sometimes lands close to what I want and sometimes lands feeling oddly not me. So I started working more closely with AI (I like Claude) to figure this out: to find a method that more closely matches my own voice, style, and way of writing, while also making me a better writer and improving the quality of my work. Here’s what I found:
Generic AI output is a structural problem, not a prompting problem. A better way is to engage in deeper conversation: a back-and-forth where the AI gets to absorb your reasoning, your style, and your taste over time. Ethan Mollick calls this Cyborg mode (vs. Centaur mode, where the AI just does the work). There are three elements that I found make Cyborg mode work well for me; the third, a shift in how you “edit”, is one I became aware of only recently.
If you’re more the audio-visual type, here’s an eight-minute video shot in a forest near Munich on a sunny morning: Thinking with Claude on YouTube.

And here are the details:
AI slop is breaking the reading ecosystem
Here’s an LLM-style LinkedIn post. The prompt: “Write a LinkedIn update about human-AI collaboration.” This is a typical output:
🚀 Excited to share some game-changing thoughts on human-AI collaboration!
The future of work isn’t about humans OR AI — it’s about humans AND AI working together. Not a tool. Not a replacement. Just a perfect partnership.
In my experience, the best teams treat AI as:
✨ A trusted thought partner
🎯 An always-on creative engine
🚀 A multiplier of human potential
The future belongs to those who embrace this shift.
What’s YOUR take on human-AI collaboration? 👇
#AI #FutureOfWork #Innovation #ThoughtLeadership
You’ve probably seen posts like this a hundred times this month. Possibly this week. This kind of AI slop has become so over-used, and therefore so recognizable, that most readers spot it in seconds. And they scroll past.
Three things bother me about these slop posts:
- They’re painful to read.
- They signal that their authors didn’t put much effort into writing them.
- They’re damaging the entire reading ecosystem.
When LinkedIn (or other social networks, or your blog feed) becomes dominated by AI-generated posts, readers stop reading. They scroll faster. They engage less. They walk away. Honest authors who put real work into their writing (using AI or not) are the ones who pay the price. Even when the work is genuinely theirs, the slop around them poisons the well.
I think that’s why so much good online writing feels like it’s being read less. Not because it’s worse, but because the trust budget is collapsing.
The usual response to all of this is “prompt better.” Get more specific. Use better instructions. Try this format. Use that framework. There are entire industries selling cures for AI slop that operate at this level.
It doesn’t really work. Even good prompting produces AI-shaped output. Even careful delegation can leave you with text that’s correct and useful but feels off when you read it back. The voice isn’t quite yours. The reasoning isn’t quite yours. The output is about you, but not of you.
I argued in the article about delegation that the skill of working with AI is managerial, not technical: transfer intent, don’t engineer prompts. That’s the right move for proper, delegation-driven AI work. But noticeably better Centaur output is still Centaur output, not “you”.
This article is about putting the “you” back into your work.
Centaur and Cyborg are different modes, not different skill levels
Ethan Mollick, in Co-Intelligence (2024), distinguishes two ways humans work with AI:
- Centaur mode: clean handover. You describe the task, AI does the work, you take the result. Like a horse and rider, two beings doing different things in coordinated fashion.
- Cyborg mode: integration. You and the AI work together throughout, blending decisions and steps, hard to draw a line between who did what. Like a single being with extended cognition.
Most people use AI in Centaur mode. That’s perfectly fine for many tasks. “Explain how this code works,” “summarize this PDF,” “draft an outline for a proposal” — these don’t need you in the loop while the AI works. Centaur is the right tool. Quick, clean, scales well.
The catch: Centaur mode can’t absorb you. By design. You hand the task off, and the AI doesn’t get to see how you’d think about it, where you’d push back on yourself, what you’d cut and why. It can’t learn your taste because you’re not there while it’s working. So when the task is voice-loaded, when “sounds like me” matters, Centaur mode hits a limit that no amount of prompting can break through.
Cyborg mode is for voice-loaded work. It’s slower per task, doesn’t scale the same way, and feels weird at first. But it produces something Centaur mode can’t: output that actually sounds like you, because you stay in the lead and the AI gets to participate in how you get there.
Here’s a simple heuristic to spot which mode you’re in: Watch how much you type versus how much the AI outputs. Eyeballing is fine here; just look at the overall amount. In a Centaur conversation, the ratio is usually something like 10:90, maybe 20:80 if you’re being thorough with your delegation brief. In a Cyborg conversation, you’re typing more, longer, more iteratively, with many turns. The ratio moves to something like 40:60, maybe closer to 50:50. The conversation stops feeling like request-and-response and starts feeling like an intense working session with a trusted colleague.
The ratio of you-in to AI-out is the simplest tell for which mode you’re actually in.
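If you want more than an eyeball estimate, you can compute the ratio from an exported conversation. Here’s a minimal Python sketch, assuming a hypothetical export format (a JSON list of messages, each with a “role” and a “content” field) and a hypothetical file name; adjust both to whatever your tool actually produces:

```python
import json

def io_ratio(path):
    """Rough you-in vs. AI-out percentage for one exported conversation.

    Assumes the file contains a JSON list of messages, each a dict with
    a "role" ("user" or "assistant") and a "content" string. Adjust the
    field names to match your tool's real export format.
    """
    with open(path) as f:
        messages = json.load(f)

    user_chars = sum(len(m["content"]) for m in messages if m["role"] == "user")
    ai_chars = sum(len(m["content"]) for m in messages if m["role"] == "assistant")
    total = (user_chars + ai_chars) or 1  # guard against an empty conversation

    return round(100 * user_chars / total), round(100 * ai_chars / total)

# Something like (12, 88) suggests Centaur mode; (45, 55) leans Cyborg.
print(io_ratio("conversation.json"))
```

Character counts are crude (they ignore pasted quotes, code, and thinking time), but as a rough tell of which mode you were actually in, they’re enough.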
Three elements that make Cyborg mode work
Cyborg mode isn’t a single move. It’s three elements that work together. They’re not strict steps in a strict order. You can cycle between them, blur their boundaries, double back. Each one is doing something distinct, so it’s worth naming them separately.
1. Deep dialoging (the resonance chamber)
Most people start an AI session by typing a prompt. It can be simple or elaborate, boilerplate or not, with or without delegation patterns. But still: a prompt.
In Cyborg mode, I prefer to start with a brain dump. That’s where you write down what you’re thinking about the topic, what you’re stuck on, what you’ve half-considered, what you’ve ruled out and why. You ask the AI to poke holes in your thinking. You let it suggest alternatives. You react to those alternatives. You bring in people you admire (“what would Derek Sivers say about this?”) and let the AI play their voices back to you.
This is where one important shift happens. People talk about social media as echo chambers: an algorithmic loop that amplifies your existing views back to you, narrowing rather than expanding. Done wrong, AI conversations can become echo chambers very easily, because LLMs are trained to be agreeable.
The positive side of the coin is what I’ve started calling a resonance chamber, which has the opposite effect: You deliberately shape what resonates. You bring in challengers. You ask for blind spots. You explore the opposite view. You instruct the AI to give you its honest opinion, not flatter you. You curate the dialogue so it expands rather than confirms.
The mechanic underneath all of this is simple. The more “you” you put into the conversation, the more “you” the AI has to absorb. Brain-dumps, ramblings, half-formed thoughts, side notes about why you do or don’t believe something aren’t waste. They’re the raw material the AI can use to model your reasoning. You can’t ask an AI to “sound like you” if it doesn’t know you. The dialogue is where it starts to know you.
2. Drafting, with a twist (and the wrongness reflex)
Most people, when it’s time to write something, write a draft and ask the AI to edit it. This feels natural. The AI is the assistant, you’re the author, drafts are the author’s job.
I found the reverse to work much better. After a long dialogue (Element 1), I ask Claude to write the draft. Not me.
This produces a strange and useful effect that I’ve started calling the wrongness reflex. You’ve probably heard of “blank page syndrome”, where an author sits frozen in front of a blank page. The blank page is the hardest part of writing, always has been. But when I see an imperfect draft Claude wrote based on our conversation, it’s different: I react. “Wait, that’s not quite right.” “That part needs more nuance.” “That’s actually backwards.” “This whole section can go.”
Correcting wrongness is much easier than trying to fill a page. Like an itch, each “wrongness” screams at you: “Fix me!” If the blank page is paralysis, the wrong draft is a flow of corrections, bringing the content closer and closer to the finishing line.
It’s a bit of a reverse-the-order trick. I let Claude write first, not because Claude is a better writer, but because reading and reacting is a fundamentally different cognitive task than originating. By letting it fill the blank page, I create an environment that invites a flow of writing for me.
3. Editing, with and without a text editor
Which brings us to the part that took me the longest to realize.
The traditional move once you have a draft is: open the text editor, type changes directly. Move sentences. Cut paragraphs. Rewrite the awkward bits. This is what “editing” has always meant, and it’s been the part of writing where my author voice fully takes over.
When working in Cyborg mode, I discovered a different move: editing through dialogue. Tell the AI what you want changed, and why. Let it apply the changes. Iterate.
The first time I did this consistently, I felt lazy. I love my text editor (vi, in my case). Editing has always been the part of writing where I’m most clearly the author, the moment where my voice asserts itself over whatever raw material I started with. Delegating that step felt like cheating.
Two things shifted my view.
First, I noticed the connection back to the delegation post. When you tell an AI what to change and why, you’re not actually delegating editing. You’re delegating the typing while keeping all the editorial decisions. The decisions are still yours. The keystrokes aren’t.
Second, and this is the bigger one: when you explain your edits, the AI absorbs your reasoning. Not just the change, the reason for the change. And it remembers.
The explanation is the key thing here: just as you would take time to train a new employee, take time to teach your AI how to produce the output you want. In fact, I’m editing this very article right now in Zed (vi mode, of course), but I’m also keeping a log of change explanations in Claude’s chat window, so it can take notes. When I’m done, I’ll feed the finished article back to Claude and ask it to update its voice.md file (where we keep a description of my personal writing voice and style) with my explanations and any other patterns it notices.
Note: today’s LLMs don’t “learn” the way we do; their training is fixed at release. But modern AI tools support persistent memory. Claude Cowork, for example, keeps a CLAUDE.md file with project-specific notes in its folder and can use tools to update a “memory” file across sessions. You can also formalize learning by curating specific files for the AI to reference during its work. For example, Ruben Hassid suggested letting AI interview you to generate your voice file in his “I am just a text file” article.
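To make that concrete, here’s a hypothetical excerpt of what such a voice.md could look like. The headings and entries are invented for illustration; the real file grows out of the edit explanations described above, in whatever structure you and your AI settle on:

```markdown
# voice.md (hypothetical example)

## Tone
- Conversational, first person; short sentences over long ones.
- No hype words ("game-changing", "unlock", "excited to share").

## Structure
- Open with the problem, not the solution.
- End with a question to the reader, not a recap.

## Notes from edit explanations
- Prefer plain verbs ("use", not "leverage").
- Cut anything that restates the previous paragraph.
```

The format doesn’t matter much; what matters is that the file captures the reasons behind your edits, so the next session starts from your taste instead of from zero.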
That’s the difference between the caffeinated intern most people experience with AI (eager, fast, generic, no memory of you between sessions) and a colleague who knows you well. The intern stays an intern as long as you stay in Centaur mode. The colleague emerges over time when you stay in dialogue through editing.
The blur
I called these three elements out separately so they’re distinguishable, but in practice they don’t stay separate. During a Cyborg session, they tend to blur together.
You start brain-dumping. The AI offers a thought you hadn’t considered. You react, and your reaction is more dialogue. The AI suggests a draft of one paragraph. You read it, kill half of it, type why. The AI offers the next paragraph based on what you just told it. You’re not doing Element 1, then Element 2, then Element 3. You’re doing all three at once, often inside the same exchange, often inside the same paragraph.
After a couple of such “blurs”, and reflecting on the process, Claude and I started calling this state a mind-jam. It’s when the brainstorm-draft-edit boundaries dissolve into one continuous, deep conversation. You stop noticing which mode you’re in. It has its own flow-like momentum, and you’re fully engaged in it, working. Like an artist immersed in their work, becoming one with their tools. Beyond writing, I also started to use this mind-jamming approach for taking, editing, and organizing notes and thoughts, which is a topic for its own article, someday.
What this looks like in practice
The article you’re reading is itself a product of this practice.
I started with a brief from a working session with Claude, where we’d extracted the spine of what I wanted to say. We had a transcript of a video (created using the same approach) on the same topic from the week before, so we knew the core argument.
Before drafting, Claude proposed a structure. I pushed back. We assembled a panel of three coaches in dialogue: Derek Sivers (for value and minimalism), Ethan Mollick (for accuracy on his own taxonomy), and Nancy Duarte (for narrative engagement). Each was instructed to actively suggest improvements, not just rubber-stamp.
The panel changed three things I’d missed. Sivers pointed out that the spine had two competing ideas in it, so we tightened to one. Mollick reminded us not to strawman prompting craft, since better delegation is a real and useful skill; Cyborg mode complements it rather than replaces it. Duarte said the problem section was too short to earn the solution, and pushed for lengthening it from 300 to 500 words. And that’s just a tiny bit of the dozen or so pages of dialogue that went into improving the article meaningfully before a single word was written.
The 10:90-versus-40:60 ratio heuristic came up in this same article-planning conversation. I was trying to explain to Claude how the difference between Centaur and Cyborg sessions feels to me, and the ratio thought experiment surfaced. It’s a good heuristic that I wouldn’t have produced on my own. The dialogue did.
A summary, and a question for you
Generic AI output isn’t a prompting problem. It’s a structural problem. Centaur mode is fine for certain tasks, like quick chunks of work, or when the voice doesn’t matter. For voice-loaded work, where “sounds like me” is the whole point, Cyborg mode is the way to go. And vice versa: you wouldn’t use Cyborg mode if all you want is to get a simple task done. The real question isn’t which mode is better, but how to use AI more intentionally. The three elements (dialogue, draft, edit-through-dialogue) work together rather than in sequence. They blur into a state that’s noticeably different from prompting, and they produce output that’s noticeably more you.
The simple heuristic: how much of the conversation are you typing? If it’s 10–20%, you’re probably in Centaur mode. If it’s 40% or more, you’re closer to Cyborg territory. It’s not a goal to optimize for. It’s a guideline that points at what mode you’re probably working in.
If you want to try this, pick one piece of work where the voice matters. Don’t prompt-engineer it. Brain-dump instead. Ask Claude to draft. React to what comes back, in dialogue, not in the text editor. See how the conversation feels. Overcome any feelings of laziness, and instead dive into the flow of collaboration.
Here are some questions for you to ponder:
- What’s the longest dialogue you’ve had with an AI?
- How did you feel about it afterwards?
- What was the rough ratio of you-typing to AI-output?
Drop me a line on Bluesky, LinkedIn, or via the contact link on this site. I’ll read everything you write me, as long as it is true “you”, with or without AI.
