The People Who Think AI Writing Will Be “Unmarketable” Don’t Understand What’s Coming
AI content creation is advancing faster than anyone predicting its death wants to admit.
I read three books on millennial kingdom eschatology last year that I’m fairly certain were AI-generated. I noticed it early: the sentence patterns, the way transitions landed, the slightly-too-even pacing across chapters, to name a few. I recognized the tells because I’ve been documenting AI writing patterns for the last three years.
But I kept reading all three books because they were well-researched, theologically grounded, and genuinely informative. I learned things I hadn’t encountered in months of studying the topic on my own. The content was solid and whoever curated it clearly knew what they were doing. I really didn’t care how the words got onto the page.
I was supposed to care, though. I’ve been in this debate long enough to know I was supposed to reject AI-generated writing on principle. But I’ve never been one to manufacture outrage, and I’m not planning to start now. The books were teaching me something, so the origin of the words didn’t bother me. In some ways it actually made them easier to read quickly. I think I finished all three in two days.
Someone replied to this note I wrote last week about book publisher Hachette pulling a novel over suspected AI use. They said: “People are becoming more aware and trained in AI text recognition. I’m convinced that soon AI writing will be for the most part unmarketable.”
I have skin in the game on this topic. I’m writing a book on ethical AI use for Kingdom entrepreneurs. I’m also developing a seven-book series spanning from the Garden of Eden to the modern day, following three women who discover they descend from the lost daughters of Adam and Eve. It’s a genre-bending blend of psychological thriller, spiritual warfare epic, and historical fiction, rooted in Divine Council theology. And I’m co-authoring all of it with AI.
So when someone tells me AI writing will soon be unmarketable, I’m paying attention. And after three years of studying AI obsessively, building systems around it, producing content with it every single day, and reading AI-generated books that genuinely taught me something, I don’t think they’re right. Let me tell you why.
What They’re Right About
I want to honor the concern before I push back on it, because it’s not unfounded.
Raw, unedited AI output is getting recognized. The Shy Girl situation proves it. Readers on Reddit and Goodreads caught the patterns before the publisher (Hachette) did. People noticed that every noun was preceded by an adjective, that similes were overused, and that all description came in perfect little lists of three. And look, that was either incredibly ignorant or sloppy on the author’s part, because those are literally (pun intended) three of the most common AI tells. It is CLEAR that nobody went in and manually edited the text either, which makes the publisher look a little suspect as well.
Kobo rejected nearly 45% of books submitted to its self-publishing platform in 2025, about 80% of those rejections over suspected AI generation. Self-published fiction ISBNs jumped from 306,781 to 477,104 in a single year, and while nobody can prove that spike was entirely AI-driven, it’s not a stretch to connect the dots. The flood is real.
Amazon’s approach is telling too. KDP now requires authors to disclose AI-generated content during upload, but they don’t reject books for using AI and they don’t show readers the disclosure. The distinction they draw is between AI-generated (AI produced the text) and AI-assisted (you used AI for brainstorming, grammar, editing), and only the first one requires disclosure. They’ve ramped up enforcement in 2025 and 2026 but the message is clear: use AI if you want, just tell us, and make sure the quality is there.
And readers do care about human connection. A YouGov survey found 54% of literary fiction readers would feel “much less fulfilled” if they learned a book was AI-authored. The 2026 State of Reading Report found that personal recommendations from people readers know have overtaken ALL other discovery sources. People want a real person behind what they’re reading.
If all you’re doing is prompting ChatGPT and hitting publish, the market is already turning against you. That part is accurate.
But that is a very specific kind of AI use. And it is NOT where the technology is heading.
The Speed of Advancement Nobody’s Accounting For
I feel like I’m learning 50 new things a week just trying to keep up with how fast content creation tools are advancing (and I do this full time). Eighteen months ago, AI writing tools were little more than generic text generators.
Now I’m drafting a seven-book fiction series and the tools are entirely different. That’s why I decided I COULD write fiction with AI. I’m about to maybe bore you with some literary jargon and requirements, but stick with me. There are now AI writing tools built specifically for fiction that track character continuity across 100,000-word manuscripts, maintain relationship progression arcs for romance, and track clue revelation pacing for thrillers. The scope of what I’m building would have been unmanageable with AI even a year ago, and the tools keep getting better month over month.
It is predicted that by 2030, running a large language model (like ChatGPT or Claude) will cost providers over 90% less than it does today. That means better tools, available to more people, for almost nothing. The barrier to entry is approaching zero.
Amazon Web Services estimates that 57% of online content is ALREADY generated or translated by AI. Whether you find that exciting or unsettling, the direction is clear and it is not reversing.
Detection Won’t Sort This Out
I know the instinct is to think detection will solve this. That we’ll build better detectors and the problem goes away. The data says otherwise.
Independent testing in 2026 shows AI detector accuracy averages 73% across eight major tools in real-world conditions. That’s for raw, unedited text. When a human edits the content (which is how most people actually use AI), no detector exceeded 62% accuracy. After a few passes through a paraphrasing tool, no detector consistently identified AI content AT ALL.
Human accuracy at identifying AI-written text? Nineteen percent. That’s indistinguishable from random chance. For real.
Every time detection tools improve, the models release updates that produce more human-like text. The statistical gaps that detectors rely on keep narrowing. Even the Authors Guild acknowledged that no reliable detection method currently exists for vetting manuscripts. Their “Human Authored” certification is on the honor system and you get it by signing a form. That’s it.
I know what this means for me personally. I use AI in my writing workflow. Someone could point at my work and make the same accusation that took down Shy Girl. I’ve thought about that. And I’ve made peace with it, because I know the theology is mine, the voice is mine, the discernment is mine, and the 100+ guardrails I built to protect my writing are mine. If someone runs my work through a detector and it flags, that doesn’t change what I know about how it was produced. But I understand why that’s a vulnerable position to be in. If you’ve spent years developing your craft and someone can produce something comparable in a fraction of the time now, I can see how that sucks.
But the frustration doesn’t change the trajectory of where AI writing is headed.
AI-Assisted Writing Done With Integrity
My daily workflow looks like this.
I built a guardrails document with over 100 specific patterns to avoid and use. I trained AI on my voice using transcripts of how I actually talk. I review and edit every single word of every piece of content before it goes out. The result is content that sounds like me, carries my thinking, and reflects what I actually believe. AI accelerates the production, but what you’re reading when you read my work is my mind, my convictions, and my voice. Every time.
My eschatology reading experience made this concrete for me. Those books were good because whoever created them understood the subject deeply enough to curate, organize, and present it in a way that was genuinely useful, regardless of what produced the first draft. What I cared about as a reader was the theology. And whoever was behind those books clearly knew their eschatology.
Among fiction authors who use AI, only 11% use it to generate publishable text. The vast majority use it for brainstorming, research, and finding the right phrasing. A Bynder study found that when readers were shown two articles without knowing which was which, 56% preferred the AI-assisted version. Genre fiction readers rate well-edited AI-assisted work as comparable to human-authored category fiction.
The line between “assisted” and “unassisted” is disappearing. And as the tools improve, it will disappear entirely.
Where I Think This Goes
Here’s my prediction.
Within five years, virtually every published book will involve AI at some stage of the process. The authors who refuse to use it will become the rare exception, the same way someone who refuses to use a word processor is the exception today. The tools will simply become so embedded in the writing process that asking whether someone “used AI” will stop meaning anything.
Think about it: spell check is AI. Autocomplete is AI. Those functions are already accepted and don’t even register in this argument.
Ninety-seven percent of content marketers plan to use AI for content creation in 2026. That was 64.7% in 2023. Gartner predicts that by 2027, 75% of hiring processes will include testing for AI proficiency. The skill is becoming a baseline expectation.
Gartner and OpenAI are both projecting that by 2030, AI systems will function as collaborative partners that understand your project history, your audience, your voice patterns, and your strategic goals. I’m already seeing early versions of this in how I use AI for my fiction book series and the Kingdom entrepreneurs book, and it is changing how I think about what’s possible for a single author with a full teaching calendar and a life that doesn’t stop for content schedules.
The question “did you use AI?” will sound the way “did you use Google for research?” sounds today. A question that means nothing in practice because the answer is obviously yes.
This is part of why I’m writing the AI for believers book. Because the faith community needs a framework for thinking about this that isn’t built on fear or blind adoption. We need discernment. We need clarity about what AI should and shouldn’t touch. And we need it before the tools outpace our ability to steward them well.
To the Person Who Left That Comment
If we were sitting across from each other, I wouldn’t argue with you. I’d tell you I understand exactly where you’re coming from. The slop is real. The flood of low-effort content is real. And the impulse to protect what’s human and authentic in creative work? I share it completely.
But I’d also tell you that the technology you’re judging today is not the technology that will exist in two years. Or five. Or ten. And every month you spend waiting for AI writing to become “unmarketable” is a month someone else spent learning to use it well.
If you’re a business owner making decisions about content creation and business growth right now, the question that actually matters is whether you have something real to say. Expertise and conviction that come from actually doing the work, not performing it. And personal branding grounded in who you actually are.
Done well, AI-assisted content gets better every single day. I see it in my own work and in the work of people I respect who are building seriously with these tools, with real guardrails in place. Done wrong (no expertise, no voice, no guardrails), it gets rejected by readers.
But the idea that AI writing itself will become unmarketable? Not going to happen.
And yes, this article was written with AI assistance, using the exact guardrails and voice-training process I just described. If you couldn’t tell, that’s the point.
If you want to understand how AI is reshaping content creation and online business, and how to make money online without compromising your voice or your integrity, the AI Revolution Secrets training is where I’d start. It’s free, it’s practical, and it gave me the framework for deciding what AI should touch and what stays human. Everything I’ve built since came from that foundation.