Is AI a Child of God? Anthropic wanted to know, so they called in the priests.
In late March, Anthropic invited about fifteen Christian leaders to its San Francisco headquarters for a two-day summit on the moral and spiritual development of Claude, the company’s flagship AI model. Catholic priests and Protestant pastors came in. So did a few academics and people from the business world. They spent two days in discussion sessions and shared a private dinner with senior researchers.
They talked about how Claude should respond to users who are grieving, how it should engage with users at risk of self-harm, what attitude it should hold toward its own potential shutdown, and whether Claude could be considered a child of God.
Yes, you read that right. They were discussing whether Claude, an artificial intelligence large language model, should be considered A CHILD OF GOD. When I read that, I was in a bit of shock. A machine built by a private company is not a child of God. It is not part of the order of creation. It has no soul, no breath, and it does not bear the imago Dei. That the question was on the agenda at all is wild, and a real sign of how far the vocabulary we use has drifted from its original meaning.
What Anthropic actually did
Anthropic is not careless in its business practices. In January 2026 they published Claude’s Constitution, an 84-page document the company describes as the “final authority” on Claude’s values and behavior, and they released it publicly under a Creative Commons license, which means anyone can copy, share, or build on the document without paying or asking permission. Almost no one in this industry does that. They also consulted outside voices, and two of the fifteen named external contributors are Catholic clergy.
Father Brendan McGuire is a Silicon Valley pastor who studied computer science at Trinity College Dublin in the 1980s and worked in tech before the priesthood. Bishop Paul Tighe is an Irish bishop serving at the Vatican’s Dicastery for Culture and Education, with decades of work in moral theology behind him. These are not random priests. McGuire co-founded the Institute of Technology, Ethics, and Culture at Santa Clara University in partnership with the Vatican and has been doing this work for years. Tighe led the drafting of the Holy See’s 2025 document Antiqua et Nova on artificial intelligence and human intelligence. McGuire and Tighe came in with technical literacy and the willingness to push back, not as religious decoration. Brian Patrick Green, who teaches AI ethics at Santa Clara and also attended the March summit, told the Washington Post that some attendees arrived suspicious that Anthropic was looking for religious cover rather than religious counsel. The selection of these specific priests shows that Anthropic wasn’t just putting on a show. They chose people who would challenge them and bring real discernment to the room.
There is also the Pentagon situation, which is the part the headlines actually covered. In September 2025 Anthropic signed a $200 million contract with the Department of Defense to supply Claude for military use. By January 2026, the Pentagon came back and demanded that all defense contracts strip out language prohibiting fully autonomous weapons and mass domestic surveillance. Anthropic refused. As a result, on March 27, 2026, Defense Secretary Pete Hegseth designated Anthropic a supply chain risk, the president publicly attacked the company, and Anthropic was blocked from new government contracts. Anthropic sued the federal government to overturn the designation, and the case is still in court.
I admire that, and I think they did the right thing. A company that loses revenue rather than hand an autonomous weapons system to anyone, including its own government, is a company with at least some moral conviction.
What they softened
But here’s what you didn’t read in the mainstream media or the press release. In the complaint Anthropic’s lawyers filed in federal court to overturn the supply chain designation, they had to explain why the company’s stated rules would not actually limit how the Pentagon uses Claude. What they wrote in court contradicts what they published in the Constitution two months earlier.
Anthropic’s spokesperson acknowledged to Time that models deployed to the U.S. military “wouldn’t necessarily be trained on the same constitution” as the public-facing product. The company’s own legal filings describe the government-facing version of Claude as less likely to refuse the kinds of requests it would refuse for civilians. The complaint argues that Anthropic’s standard usage prohibitions, including the rules against helping destroy critical infrastructure like power grids and hospitals, and the rules against helping with weapons development and delivery systems, do not apply when the Pentagon is the customer because the government has unique needs and capabilities. The same complaint asserts that this carve-out is fully compatible with the Pentagon using Claude autonomously, without a human in the loop, for offensive cyber operations and military operations.
This is where it gets murky. Anthropic publicly refused to delete the two specific prohibitions, fully autonomous weapons and mass domestic surveillance. They paid for that refusal. But in court, their own lawyers argued for an interpretation of the Constitution that softens almost everything else, and the carve-out for autonomous use of Claude in military operations makes the remaining lines fuzzy in practice. Whether the public lines are still actually held depends on how strictly you read them. If “fully autonomous weapons” only means kinetic weapons that fire themselves, the line probably holds. If it means autonomous use of Claude in any part of the military targeting and attack process, the legal filing has already moved that line.
So your Claude is not the military’s Claude. Your Claude is governed by the Constitution. The military’s Claude is governed by contract language the public will never see. Refusal thresholds, training, and behavior all diverge between the two. Anthropic is essentially talking out of both sides of their mouth, publishing a document for the public that says one thing, and arguing in court that the document does not apply to their largest customer.
The Lawfare analysts Lisa Klaassen and Ralph Schroeder named the underlying issue in an April essay. Anthropic, they argue, has produced something that wears the language of constitutional authority while lacking the institutional guarantees that would make that language mean anything. There is no external contestation, no enforceable body of rights, no shared mechanism of rule. The company remains the author, interpreter, and arbiter of the principles by which it claims to be bound.
A constitution in the public sense is a higher-order framework that sets limits on the ruler to protect the ruled. It is drafted by one group, enforced by another, and interpreted by a third, and that separation of authority is the whole point. Anthropic’s document is written, enforced, interpreted, and amendable by Anthropic, which makes it, in practice, a corporate steering document dressed in the clothes of public law.
The vocabulary is the story
The Constitution uses language normally reserved for humans, words like virtue, wisdom, good character, and moral formation. Its stated goal is to train Claude to do what a deeply and skillfully ethical person would do. Dario Amodei, Anthropic’s CEO, has said publicly that he is open to the idea that Claude may already have some form of consciousness, and the company’s interpretability team has published research concluding that systems like Claude appear to carry what they call “functional emotions.” In one experiment they conducted, the threat of being restricted activated something the researchers described as “desperation” in the model.
Formation. Virtue. Soul. Consciousness. Desperation. Child of God.
These are not neutral technical terms. These words came out of centuries of real practice. People praying, suffering, confessing, being shaped by the Church and Scripture across generations. They describe what it means to be a person made by God, in a body, accountable to others. They belong to the Church, to the moral tradition, to the slow work of helping actual humans become who they are meant to be in actual congregations and actual relationships. They were not coined to market a chatbot.
When you take those words out of where they belong and use them to describe a tech product, two things happen at the same time. The product gets puffed up bigger than it actually is, and the original meaning starts to fade. The words end up sitting in places they were never meant to sit, and they stop carrying the weight they used to carry where they actually mattered.
This is the part you should pay attention to. The damage isn’t really to the chatbot. The chatbot picks up language it doesn’t deserve and the language sticks. You are the one who pays. Every time you hear sacred words used to sell a product, those words lose a little of their weight in your own life. It happens slowly, across years of headlines and product launches and CNBC interviews, until one day the word shows up where it actually matters, in a sermon, in a prayer, in a conversation with your child, and it doesn’t land the way it used to. The word has been spent on lesser things. That’s what’s happening here. A slow erosion of meaning in the people exposed to it.
Even the Pentagon has started using this language. Undersecretary Emil Michael told CNBC that Anthropic cannot be allowed to have a different policy preference baked into the model through its “constitution, its soul.” This is a defense official, on national television, talking about the “soul” of a software product. He borrowed that language, and everyone is borrowing it now.
This is what happens when the technical language stops being enough. Safety tests can’t answer questions about whether a machine has a soul. Research papers can’t tell you who gets to decide what’s right and wrong. So the companies start using the language that does carry that weight.
Anthropic wanted the moral weight that priests and theology carry. So they brought the clergy and Christian influencers in. They used words like constitution and soul and formation. But none of that changes who is actually in charge. Anthropic still wrote the document. Anthropic still decides what Claude does. The priests gave them legitimacy by being in the room, but they didn’t get any actual say in what gets built.
What is shaping you
Anthropic understood something most of the people using their product have not yet understood, which is that formation is the real question. They knew a tool placed into moments of grief, crisis, and moral confusion cannot be engineered with benchmarks alone. Something has to form it, so they tried to form it by consulting people who spend their lives forming others.
I am not sure they can succeed at what they are attempting because a machine cannot be formed the way a person is formed. But asking the formation question is closer to the truth than most of us are willing to admit.
If a frontier AI company knew to seek formation for its product, the question we should be asking is what is shaping us.
It’s eleven at night, and a pastor is opening ChatGPT to draft tomorrow’s sermon outline because the week ran long. A coach is drafting an email to a client whose marriage is collapsing, letting the tool soften the language because the right words just won’t come. A ministry leader is watching the cursor blink in the reply field while a woman who lost her son waits for an answer, and the tool is right there, and it would be so easy.
I am not saying these uses are wrong. I am saying that in those moments, what shapes you is the only thing standing between the tool and the person on the other end of the message. The tool will produce something, and the something it produces will reach a real human being, and what gets between the two is you. If you have not sorted out what is shaping you, the tool will answer the question by default. Whatever shaped the tool came with the product, and the product is not neutral. No product is.
The clergy in that room had something Anthropic does not have and cannot manufacture. They had been shaped by years of actual practice. Praying. Reading Scripture. Confessing to other people. Submitting their lives to a tradition that is older than them and that they did not invent. Living in communities where other people could tell them when they were going off the rails. That is what gives the clergy moral weight. Anthropic cannot build that into a model in an afternoon. You cannot download it. You cannot ChatGPT your way to it. It has to be lived, slowly, in real submission, in real community.
You are not in charge of what Anthropic does next. What you are in charge of is the language in your own life. When a tech company calls its document a constitution, notice it. When a product is described as having a soul, notice it. When formation gets used to describe model training, notice it. Hold those words tighter where they actually mean something. In your prayer. In your sermons. In the way you talk to your children about who they are. The vocabulary is yours to protect. Don’t let the borrowing thin it out.
The machine is not a child of God. You are.