April 2026: The Month AI Became Autonomous and the Spiritual Reckoning Went Public
This is my April AI briefing for paid subscribers: the stories I think actually mattered, why they connect, and what I would do with them if I were building, creating, parenting, pastoring, or making decisions right now.
April was the month AI stopped looking like a productivity upgrade and started forcing public decisions about authority, restraint, work, privacy, and spiritual discernment.
Anthropic built its most powerful AI model ever in April, then refused to release it to the public. That story alone would have been the biggest of the month, except four other stories kept pulling the same thread from different directions.
Five stories shaped April 2026. Together, they signal a permanent shift in how AI works across business, public infrastructure, spiritual life, labor, and privacy, and in the systems we now have to steward with a lot more seriousness than most people are ready for.
Anthropic Refused to Release Its Most Powerful Model
The model is called Claude Mythos. During internal testing, it autonomously discovered thousands of previously unknown software vulnerabilities. The disclosed examples include a 27-year-old bug in OpenBSD, an operating system whose entire reason for existing is security, and a 17-year-old remote code execution flaw in FreeBSD, now tracked as CVE-2026-4747. On one benchmark, the previous model, Claude Opus 4.6, produced 2 working exploits; Mythos produced 181.
Engineers at Anthropic with no formal cybersecurity training asked Mythos to find exploits overnight, and they woke up to working attack code. That was the moment Anthropic decided not to ship it.
Instead, Anthropic created Project Glasswing. It handed Mythos to a closed consortium of roughly a dozen companies, including Apple and Microsoft, and committed $100 million in usage credits. The model is being used to find and patch vulnerabilities in critical software before bad actors can weaponize them. Public access is not being offered. On April 16, Anthropic released Claude Opus 4.7 instead: a stronger version of its standard model with the dangerous cybersecurity capabilities deliberately reduced.
This is the first major example I have seen of a frontier AI lab voluntarily withholding a fully working model because of what it could do in the wrong hands. Anthropic decided the risk to public infrastructure outweighed the upside of shipping the most capable model they had built.
The signal for your business is that the gap between what frontier models can technically do and what the public can access is getting wider. The Claude you use today is intentionally less capable than the model being used inside a defensive cybersecurity consortium, and for normal business use, that is probably appropriate. The bigger issue is restraint at this scale. Anthropic chose to leave money and competitive position on the table because some risks are not worth turning into products. Whether the rest of the industry follows will shape a lot more than your AI tool stack.
The Pentagon Picked a Fight, and Faith Leaders Picked a Side
The Mythos story is the surface of a much deeper fight. In late February, the Pentagon designated Anthropic a “supply chain risk,” a label normally reserved for companies with ties to foreign adversaries like China or Russia. Anthropic earned the designation by refusing to allow Claude to be used for fully autonomous weapons systems or mass domestic surveillance.
The Pentagon wanted unrestricted access to Claude for what it called “all lawful purposes,” meaning autonomous weapons and mass domestic surveillance with no guardrails at all. Anthropic drew the line at machines making kill decisions without human judgment in the loop, and at indiscriminate AI surveillance of American citizens. The administration responded by directing every federal agency to immediately stop using Anthropic’s technology. Anthropic sued the Trump administration in March. Because Claude is my primary AI tool, I have been watching this fight closely.
On April 7, a group of Jewish, Christian, and Muslim leaders published an opinion piece in Deseret News defending the need for moral guardrails in AI policy, clearly aligning with Anthropic’s refusal to remove red lines around autonomous weapons and mass surveillance. Their argument was rooted in scripture. Genesis 1:26 reads, “Let us make man in our image, after our likeness”: b’tselem Elohim in Hebrew, imago Dei in Latin. Quran 5:32 says, “Whoever kills a soul, it is as if he has killed all of humanity.” The core claim was simple: a machine cannot answer to God for a human life. When governments hand life-and-death decisions to autonomous systems, the moral responsibility still belongs to the people who built, approved, deployed, and benefited from those systems.
By April 17, the White House Chief of Staff was meeting with Dario Amodei, Anthropic’s CEO. Less than two weeks later, Axios was reporting that the White House was drafting an executive order to walk back the blacklist. On May 1, the Pentagon signed deals with eight other AI companies, including Google and OpenAI, and pointedly excluded Anthropic. The fight is not over.
The signal here is moral infrastructure. A frontier AI lab took a position rooted in human dignity, lost contracts and political ground over it, and got publicly defended by religious leaders citing the same moral reality that anchors a biblical worldview: human life cannot be reduced to an automated decision. The company you build with matters. The values leadership holds under pressure shape what your tools will and will not do a year from now. Pick your stack accordingly.
Below the paywall, I’m getting into the parts of April’s AI shift that are going to matter most for how we build, think, parent, lead, and protect our discernment in the months ahead.
I’m covering the new Faith & AI findings from Barna, the spiritual danger behind AI Jesus-style tools, the mental health warnings now surfacing around chatbot dependence, the explosion of AI agents inside real companies, the layoffs and infrastructure spending changing the labor market, and the court ruling that made AI chats a lot less private than most people realize.
Paid subscribers get the full briefing, including what I would actually do with all of this right now: what to stop putting into chatbots, where AI agents belong in your business, and why owned audience channels are becoming more important as Google search keeps changing.
Upgrade to keep reading.