What AI-Assisted Writing Actually Is (And Why the Debate Around It Is Missing the Point)
There’s room on Substack for every kind of writer. Including the ones using AI.
Substack is having a moment. And not the fun kind.
Over the past few months, a full-blown moral panic has erupted over AI-assisted writing on this platform. Writers are launching campaigns with downloadable anti-AI logos for their profiles. Opinion pieces have declared the platform officially “enshittified” by AI slop. And there’s a whole genre of investigative posts now documenting how top Substacks rack up thousands of likes with what one writer called “surface profundity” while hand-crafting writers struggle to break fifty.
On the other side, self-published authors are giving interviews titled “I Write With AI. Deal With It.” Nearly half of Substack writers in a 2025 survey admitted they use AI for writing assistance. And a whole lot of people are quietly using these tools every single day and saying nothing about it because the climate has become so hostile that admitting you use AI feels like confessing to a crime.
I’m not doing that. I use AI, I’m not ashamed of it, and I have some things to say about this whole circus.
The Binary That’s Breaking Everything
The debate has collapsed into two camps, and honestly, both of them are the problem. But the camps aren’t equal, and they’re not broken in the same way.
You’ve got the purist crowd who say AI writing is cheating. Full stop. If a machine touched it, it’s not real writing. They’re calling for platform-wide bans, shaming anyone who admits to using AI content creation tools, and positioning themselves as the last guardians of authentic human expression. They’ve turned “I write everything by hand” into a whole moral identity. Complete with logos you can download and put on your profile. (I wish I were joking.)
Then there’s the crowd on the opposite end who treat AI like a content vending machine. Open ChatGPT, type a prompt, copy what comes out, maybe change a word or two, hit publish. No voice. No editorial process. And some of them are getting THOUSANDS of likes for content that says absolutely nothing. That part is genuinely maddening for anyone who takes writing seriously, whether they use AI or not.
The people in the middle, the ones using AI with intention and actual standards, are getting drowned out. And that’s where I am, so let me just say what I think.
My Actual Process (Since I’m Not Interested in Being Vague)
I’ve got skin in the game on this one. I use AI every single day for content creation and I’ve been transparent about that from the start.
I have a voice sample that I created by recording myself talking naturally for about ten minutes, then transcribing it. Just me talking the way I actually talk, with the fragments and the “And” at the beginning of sentences and the way I circle back when I’m working through an idea. AI gets that sample before it writes anything for me.
Then there’s the guardrails document. Over 90 specific patterns that AI must avoid. Mirrored contrast phrases, stacked rhetorical questions, performed vulnerability, perfect parallel structures, and dozens of other tells that make content sound like a robot cosplaying as a human. (Yes, I have literally catalogued the ways AI sounds fake. Like a lunatic. For months.)
When I sit down to write, I give AI context about who I’m writing for, what I’m trying to say, and what I want the reader to walk away understanding. Then I iterate. The first draft is NEVER final. I read it out loud and notice where it doesn’t sound like something I’d actually say. I push back on specific phrases and reject entire drafts when they miss the mark. Sometimes I reject five in a row before something clicks.
Nothing publishes without my review. Nothing gets to you that I haven’t read, edited, and approved.
The idea that this process makes me less of a writer than someone who spent four hours hand-crafting a post is, frankly, rubbish. I’m still the one deciding what gets said, how it gets said, who it’s for, and whether it meets my standards. AI handles first drafts. The thinking, the editing, the quality control, and the final call are all mine.
That’s AI-assisted writing. It looks nothing like what most people picture when they hear the term.
What the Purist Crowd Isn’t Saying Out Loud
I have respect for writers who choose to write everything by hand. A legitimate choice, and I'm not here to take it from anyone.
But turning that personal choice into a moral position and then using it to gatekeep an entire platform? That’s where I check out.
The purist crowd already knows AI won’t produce better writing than they can. What actually scares them is that AI will help other people produce good writing faster, and that changes the economics of content creation in ways that feel deeply unfair to someone who spent years building their skill the traditional way.
I get that. For real. Change is disorienting when it feels like the rules you played by suddenly don’t apply anymore. But gatekeeping Substack like it’s some kind of literary institution with admission requirements is not the answer. Substack is a platform. The whole point of a platform is that anyone can show up and build something. Readers are smart enough to sort the good from the bad, and they always have been. Trust them.
The Slop Problem Is Real and I’m Not Going to Pretend Otherwise
I’ve seen the posts. The ones that sound polished and say absolutely nothing. Content with perfect structure and zero substance that could have been written about any topic by anyone because there’s no human fingerprint anywhere on it.
These are the posts where someone opened ChatGPT, typed “write me a Substack post about productivity,” and published whatever came back without reading it twice. No voice training, no guardrails, no editing, and no thought about whether it was even worth saying.
And yes, some of those posts are outperforming real writers. The algorithm doesn’t care about substance. It rewards engagement metrics, and polished-sounding content can generate engagement even when there’s nothing real underneath it.
This is the crowd giving AI-assisted writing a bad name. When someone hears “AI writing” and pictures soulless, generic, forgettable content, THIS is what they’re picturing. And they’re not wrong to be frustrated.
Banning AI from Substack won’t fix that problem. Standards will. So will transparency, and so will readers making informed choices about what they actually subscribe to.
What Would Actually Move This Forward
Content creation tools have always evolved. Every tool that made writing faster also made it easier to produce bad writing at scale. That’s always been the trade-off. And nobody fixed it by banning the tool. The bar got raised. Readers figured out what was worth their time.
What would actually help is transparency. If you use AI in your process, say so. Not with shame, and not buried in fine print. Just say it plainly and let your readers know how you work. Some of them won’t care AT ALL because they’re there for the ideas, the perspective, and the value. Others will care a lot, and they’ll seek out the hand-crafted writers they prefer. That’s their right, and the marketplace is big enough for everyone.
What I can’t get behind is one group of writers trying to delegitimize another group’s process while standing on moral high ground they constructed for themselves.
The Part That Gets Missed
Good content has always required clear thinking. And I don’t mean that in a theoretical way. I mean that when I sit down with fuzzy thinking and expect AI to sort it out for me, the output is rubbish. Every single time. AI reflects back whatever you bring to it. On days when I do the thinking first, the drafts are close. On days when I haven’t? I get the same slop everyone’s complaining about. I’ve experienced both more times than I’d like to admit, sometimes in the same week.
The people producing great AI-assisted content did the thinking first. That’s the actual problem worth talking about. Whether someone brought anything real to the table before they started matters infinitely more than which tools they used to get it written.
Where I Stand
I use AI. Every day. I have a system with guardrails, voice training, and editorial review that I’ve been building and refining for a long time now. Every piece of content that goes out under my name has been through my hands, my judgment, and my standards before it reaches you.
I’m not apologizing for that. And I’m not going to be the person who stays quiet while people who’ve never examined their own process tell me mine isn’t legitimate.
I’m also not going to pretend the slop problem doesn’t exist. It does. People publishing raw AI output without editing or thinking, people with no investment in what they’re putting into the world, are degrading content quality everywhere, not just on Substack.
But the answer is better standards, more transparency, and trusting readers to be the intelligent adults they are. Not bans. Not shame campaigns with downloadable logos.
Substack is big enough for hand-crafted essayists, AI-assisted creators, and everyone in between. The only thing it shouldn’t have room for is the idea that one group of writers gets to decide who belongs.
Write well. Be honest about how you do it.
The work will speak for itself.
If you want to see what a rigorous AI-assisted writing process actually looks like in practice, the AI Writing Guardrails system is the exact framework behind everything I publish. It includes over 80 specific patterns to avoid and use, a voice sample template with instructions, example prompts that work with the system, and a video walkthrough of exactly how I set this up.
Get the AI Writing Guardrails here
And if you’re ready to go deeper into how AI is reshaping content creation, online business, and personal branding, the AI Revolution Secrets training walks you through the full picture. Free, practical, and built by someone who’s been doing this for years.
P.S. I asked Jack if he knew what my job was the other day and he said “making AI Videos to make money for the family and talking to robots” and honestly, that’s the most accurate job description I’ve ever been given.