Why Exorcists are Looking at the Wrong Ghost in the Machine

The Vatican is worried about demons in the data. Recent reports suggest that exorcists are sounding the alarm, claiming Artificial Intelligence is a "great power" that satanic groups could exploit to automate evil. It is a cinematic, terrifying headline. It is also a fundamental misunderstanding of how both technology and human malice actually function.

If you are looking for the devil in a Large Language Model, you are about five centuries too late to the conversation. This isn't about "satanic groups" using GPT-4 to write more efficient curses. This is about the ancient human tendency to personify what we don't understand. We are treating math like magic because we’re too lazy to learn the math.

The real danger isn't that AI will become a vessel for the supernatural. The danger is that we are using the supernatural as a convenient scapegoat for very human, very systematic failures.

The Superstition of the Black Box

Exorcists argue that AI’s "hidden" processes—the weights and biases that even developers can’t fully explain—create a "void" where dark forces can dwell. This is the "God of the Gaps" argument rebranded for the Silicon Valley era.

In computer science, we call this the interpretability problem. When a neural network makes a decision, it’s passing data through millions of parameters. We can see the input and the output, but the middle is a statistical blur. To a theologian, that blur looks like a spiritual entry point. To an engineer, it’s just a high-dimensional vector space that we haven't mapped yet.
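To make that "statistical blur" concrete, here is a toy sketch (random weights, made-up sizes, NumPy assumed) of what the middle of a network actually is:

```python
import numpy as np

# A toy two-layer network with random weights -- purely illustrative.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(512, 8))    # 8 input features -> 512 hidden units
W2 = rng.normal(size=(1, 512))    # 512 hidden units -> 1 output score

x = rng.normal(size=8)            # the input: something we can name and inspect
hidden = np.tanh(W1 @ x)          # the "middle": 512 unlabeled numbers
score = W2 @ hidden               # the output: something a business will act on

print(hidden[:5])                 # just coordinates in a space nobody has mapped
print(score)
```

Nothing in `hidden` is labeled "intent" or "malice"; it is geometry. The mystery is cartographic, not spiritual.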

Claiming that "satanic groups" will use AI implies that the technology has some inherent moral alignment that can be "flipped." It doesn't. AI is a mirror. It is trained on the collective output of humanity—our books, our forums, our manifestos. If the AI says something "evil," it’s not because a demon whispered in the server rack. It’s because a human wrote it ten years ago on a public message board, and the model learned the pattern.
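If "learned the pattern" sounds hand-wavy, here is the mechanism at its absolute smallest: a bigram chain over an invented scrap of text. A real LLM is unimaginably larger, but the spirit is the same, continue with whatever statistically follows:

```python
import random

# A toy bigram "language model" trained on an invented scrap of text.
corpus = "the ritual is fake . the ritual is old . the machine is fast .".split()

model = {}
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, []).append(nxt)        # remember what followed each word

random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(model.get(word, ["."]))  # pick a plausible next word
    output.append(word)

print(" ".join(output))                           # echoes fragments of whatever it was fed
```

Whatever comes out was already in the corpus. Scale that up to the open internet and you have your "demon."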

The Automation of Bias vs. The Automation of Evil

The "lazy consensus" says that AI makes us more vulnerable to external spiritual attacks. The reality is that AI makes us more vulnerable to ourselves.

  1. Algorithmic Radicalization: You don't need a cult leader when a recommendation engine can lead a vulnerable teenager down a rabbit hole of self-harm or extremism in three hours. This isn't "satanic"; it's a profit-driven engagement metric (see the sketch after this list).
  2. The Erosion of Agency: By outsourcing moral decisions to "objective" machines, we stop exercising our own ethical muscles. We blame the "system" for denying a loan or a medical treatment, ignoring that humans designed the system to do exactly that.
  3. Digital Effigies: We are creating deepfakes that can destroy lives. This is a tool for harassment, not a ritual for the underworld.
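
Here is how banal that first mechanism is. The titles and scoring numbers below are invented; the point is that the only objective is predicted engagement, because that is the only objective anyone wrote down:

```python
# A crude engagement-maximizing recommender -- illustrative, not any real platform's code.
catalog = [
    {"title": "calm documentary",        "predicted_watch_minutes": 4.0},
    {"title": "mild conspiracy clip",    "predicted_watch_minutes": 11.0},
    {"title": "extreme conspiracy clip", "predicted_watch_minutes": 23.0},
]

def rank(items):
    # The sort key is expected minutes watched. There is no term for harm,
    # truth, or the viewer's age -- because nobody added one.
    return sorted(items, key=lambda v: v["predicted_watch_minutes"], reverse=True)

for video in rank(catalog):
    print(video["title"])
# extreme conspiracy clip
# mild conspiracy clip
# calm documentary
```

The rabbit hole isn't a conspiracy; it's a sort key.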

The False Premise of "Great Power"

Framing AI as a "great power" is the first mistake. AI is not a power; it is a force multiplier.

If you give a shovel to a man who wants to build a house, he builds it faster. If you give it to a man who wants to dig a grave, he digs it deeper. The shovel has no opinion on the matter.

The Catholic Church’s focus on "satanic groups" is a distraction from the far more mundane and devastating ways technology is currently being used to strip away human dignity. I have seen tech firms implement "predictive policing" tools that are essentially digital redlining. I have watched companies use "productivity tracking" AI that treats workers like biological components on an assembly line.

Where is the exorcism for the algorithm that decides a worker is "inefficient" and fires them via automated email? That is a far more tangible form of "evil" than a group of teenagers playing with a digital Ouija board.

Dismantling the "People Also Ask" Myths

"Can AI be possessed?"
No. Possession requires a soul or a consciousness. AI has neither. It has a Loss Function. It is an optimization process. Asking if AI can be possessed is like asking if your pocket calculator can be haunted. It's a category error.
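
For the avoidance of doubt, here is what "it has a Loss Function" means, reduced to a toy example (plain gradient descent on a made-up quadratic):

```python
# Gradient descent on a toy loss function: the entire "inner life" of a model.
def loss(w):
    return (w - 3.0) ** 2        # arbitrary target; nothing sacred about 3.0

def gradient(w):
    return 2.0 * (w - 3.0)       # derivative of the loss above

w = 0.0                          # starting parameter
for _ in range(100):
    w -= 0.1 * gradient(w)       # step downhill; this is all "learning" is

print(round(w, 4), round(loss(w), 8))   # ~3.0 and ~0.0 -- the value the loss rewards, nothing more
```

There is no room in that loop for a tenant.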

"How can we protect AI from being used for evil?"
You don't protect the AI; you regulate the humans. The focus should be on Algorithmic Accountability. If a model produces harmful output, the liability belongs to the developers and the corporation, not a nebulous spiritual entity.

"Will AI create a new religion?"
It already has. It’s called Dataism: the belief that the universe is just a flow of data and that human value is determined by our contribution to that flow. The exorcists are worried about the wrong gods. They are worried about old-school demons while the new religion of Efficiency is dismantling the concept of the "sacred" right under their noses.

The Nuance the Moralists Missed

There is a psychological phenomenon called Hyperactive Agency Detection. It’s why we see faces in the clouds and why we think our cars have "personalities" when they won't start. When we interact with a chatbot that uses "I" and "me," our brains are hard-wired to attribute intent to it.

The exorcists are falling for the ultimate Turing Test. They are so convinced by the simulation of personhood that they are granting the machine a spiritual status it hasn't earned.

If a "satanic group" uses an LLM to generate a ritual, the ritual isn't "more powerful" because a machine wrote it. It’s just faster. The danger isn't the efficiency of the ritual; it’s the intent of the person hitting "Enter." By focusing on the "great power" of the AI, we are inadvertently giving these groups exactly what they want: a sense of legitimacy and cosmic scale.

The Real War is for the Truth

We are entering an era of "post-truth" not because of demons, but because of Generative Adversarial Networks (GANs).

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$

In this standard GAN objective, one part of the AI, the generator $G$, tries to create a fake $G(z)$ that is indistinguishable from reality, while the other, the discriminator $D$, tries to catch it. Eventually, the fake is so good the discriminator can't tell the difference.
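
For the curious, here is the adversarial loop itself in toy form. This assumes PyTorch, uses one-dimensional "data," and the layer sizes and learning rates are arbitrary; it is a sketch of the structure, not a working forgery factory:

```python
import torch
import torch.nn as nn

# Toy GAN: "reality" is just numbers drawn from a normal distribution around 4.
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 4.0        # samples of "reality"
    z = torch.randn(64, 1)                       # noise the generator starts from
    fake = G(z)

    # D learns to call real things 1 and fakes 0 ...
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # ... while G learns to make D call its fakes 1.
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, 1)).detach().squeeze())   # fakes drifting toward the "real" distribution
```

No incantation anywhere, just two optimizers locked in an arms race.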

When we can no longer distinguish between a real video of a world leader and a fake one, the social contract dissolves. When we can't tell if a text was written by a friend or a bot designed to manipulate our emotions, trust dies.

That is the "evil" we should be fighting. It is a structural, mathematical collapse of shared reality.

Stop Looking for Pitchforks

I've worked in environments where the "black box" was used to justify cutting off thousands of people from essential services. No one involved thought they were being "satanic." They thought they were being "data-driven."

That is the most terrifying thing about AI. It doesn't need to be exploited by "satanic groups" to cause suffering. It just needs to be used by "efficient" ones.

The Church is right to be wary, but they are looking in the wrong direction. They are looking for a monster under the bed when the house is actually being demolished by a silent, automated bulldozer.

Stop worrying about the ghost in the machine. Start worrying about the man who built the machine and then walked away from the controls, claiming he was no longer responsible for what it does.

The devil isn't in the code. The devil is the excuse that "the algorithm made me do it."

Exorcise that lie first.



Kenji Flores

Kenji Flores has built a reputation for clear, engaging writing that transforms complex subjects into stories readers can connect with and understand.