
Why We Must Develop AI (Even If It Kills Us)

In our rush to advance AI as rapidly as possible, we are overlooking some terrifying warning signs.

By Joel Hilliker

From The January 2026 Philadelphia Trumpet

AI is humanity’s most ambitious moonshot ever. It is civilization’s tower of Babel times its Great Pyramid times its Manhattan Project.

With urgency and fervor, mankind is devoting heart, sweat and capital on an incomprehensible scale to artificial intelligence.

The promise of this technology is infinite, its evangelists insist. This will solve intractable scientific mysteries, cure disease, eliminate drudgery, eradicate poverty, abolish hunger, and ultimately usher in a post-scarcity era of boundless creativity, cosmic exploration and human flourishing. They say this is bigger than the Internet, more disruptive than railroads, more transformative than electricity.

One catch is, AI also consumes energy like a black hole. It demands manufacturing, network and power infrastructure of staggering scope. But rich investors and corporate behemoths are up to the challenge. Last year, global corporate AI investment exceeded a quarter of a trillion dollars.

We’re just getting started: A JPMorgan Chase & Co. analysis in November said that over the next five years, the money thrown at global data centers and AI infrastructure and related power-supply expansion will surpass $5 trillion. That’s 100 Apollo moon programs. That’s half of what the entire world will spend on its militaries over that span—but mostly private money. These financiers, moguls and visionaries want to ensure that nothing, nothing, slows this train bound for utopia.

AI is reshaping industries and rewiring people’s brains. It is advancing at breakneck speed: people are constantly finding new applications, and the technology continually exhibits new capabilities. In some ways, the advancement has been so rapid that it is slipping the leash.

Astoundingly, not even the most brilliant engineers, living on the bleeding edge of this technology, quite understand how it works.

The AI Race

In War and Peace, Tolstoy described how the massive, ponderous, deadly machinery of warfare begins with a seemingly insignificant turning of gears, as within a clock: one decision, one event, sets in motion other wheels, then more and larger wheels. The movement builds force and speed—in warfare, involving tens of thousands of men in passionate engagement—causing a chain of reactions that cannot be slowed or stopped until its inevitable conclusion, usually far different from what was envisioned.

The development of artificial intelligence is unfolding in the same pattern. The strategic and economic pressures driving this venture are profound and irresistible.

Business titans and politicians insist that if we don’t lead in the technology, then others will, even our enemies and adversaries, and we will be left behind. Whoever dominates AI rules the future.

This is the same fear that motivated governments to be the first to split the atom. And AI is creating an arms race similar to the one that drove nuclear proliferation. At stake is not just technological prestige but geopolitical leverage and military advantage. The United States and China are moving with urgency. Several regional powers and smaller nations are scrambling to keep up, making niche contributions, hosting AI infrastructure, and partnering with bigger players.

The mirror image of this fear is the almighty motivator of greed. With cash spewing in all directions at all things “AI,” there are obscene amounts of money to be made. Investors everywhere are throwing vast sums at the project in hopes of cashing in. This naturally creates a bubble that, at some point, is going to pop—but optimists insist that the technology will be so intrinsic to future life that in the long term, the investment is bulletproof.

These are the pressures driving this frantic race. And they are simply too compelling, too overpowering and forceful, to allow for concerns or scruples to get in the way.

Thus, flashing warning signs about the dangers these technologies pose are being brushed aside.

In the Garden of Eden, God warned the first man not to eat the fruit of “the tree of the knowledge of good and evil.” The good it contained was mixed with evil—and its most salient characteristic was that it would kill you. Then the devil came with a sales pitch: He focused Eve on all the fruit’s good points, ignored the evil, and insisted that its lethality was nothing to worry about.

This is essentially the spiel fueling the AI frenzy.

Let the Chatbot Do Your Thinking

The transformations wrought by AI start in our classrooms. The technology is utterly upending education. It promises to give each student genius-level tutoring in his pocket. Students are taking the opportunity to let AI do their thinking for them. They are widely relying on it to do their research, write their papers, and solve their math problems. Given human nature’s aversion to doing hard things, this should shock no one.

Dr. Alex Lawrence, an associate professor at Weber State University in Utah, calls Chatgpt the greatest cheating tool ever invented. Students circumvent plagiarism checkers and exam-proctoring software by using AI “humanizer” tools or by manually editing the text to make it sound more like themselves. Cheating has become so rampant in secondary and higher education that most schools are waving the white flag of surrender: Rather than ban the practice altogether, they offer classes in “AI literacy.”

“AI creates an intellectual laziness in both the teacher and the student, and … an erosion of curiosity, stunted cognitive development, and reduced problem solving. It weakens logic and reasoning,” clinical psychologist and educational therapist Shannon Kroner told the Epoch Times. “The students aren’t going to need to do the research and dig through the studies needed in order to defend their perspective on whatever it is that they need to prove.”

Much could be said about the dangers of our next generation of teachers, accountants, engineers, doctors, judges and politicians offloading all intellectual challenge onto a chatbot and essentially bluffing their way through their schooling. This is a truly unique experiment in human history: a generation en masse casually subcontracting the very thing that differentiates human beings from machines—our thinking.

Making matters worse, we are subcontracting that civilizationally crucial skill to a tool with a spotty track record. AI has proved itself quite adept at spewing out garbage.

AI Doesn’t Know When It’s Wrong

Large language models (llms) are insentient—they don’t “know” things the way people do and cannot discern truth from error. They are trained to generate the most statistically likely continuation of text based on patterns learned from data: to predict the next word, not to check whether that word is correct, factual or real. They simulate reasoning, and this often collapses into nonsense.
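
The mechanics matter here. As a rough illustration only (a toy sketch in Python, not how production llms are actually built or trained), the loop below does nothing but emit whichever word most often followed the previous word in its tiny “training” text; the corpus, the counts and the generate function are all hypothetical stand-ins, and the point is simply that the output is driven by statistical pattern, never by truth:

```python
from collections import Counter, defaultdict

# Toy "training data": the model only ever sees word sequences, never facts.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows which -- a crude stand-in for an llm's learned statistics.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(prompt_word, length=6):
    """Greedily emit the statistically most likely next word; there is no notion of 'correct.'"""
    words = [prompt_word]
    for _ in range(length):
        followers = next_word_counts[words[-1]]
        if not followers:  # nothing learned for this word; a real llm would still keep talking
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # prints "the cat sat on the cat sat" -- fluent-looking pattern, not fact
```

Real large language models do this over billions of learned parameters instead of a word-pair table, which is why their output reads so convincingly, but the underlying objective is the same: predict the next token, not verify it.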

In “hallucinations,” llms confidently give answers that are factually wrong, made-up or pure nonsense. A chatbot might invent a historical event that never happened, cite a research paper that doesn’t exist, give precise but egregiously wrong numbers, or generate logical impossibilities. If the model has incomplete or sparse information about a topic, it will still try to answer even if it has to invent details. Where a person would say, “I don’t know,” llms tend to keep talking.

Whatever the long-term effects of broad use of these tools may prove to be, results are already being felt. Here’s one example: AI is filling the scholarly record with fake scientific research, supercharging a long-running problem of scientific fraud.

“Academic paper mills—fake organizations that profit from falsified studies and authorship—have plagued scholars for years, and AI is now acting as a force multiplier,” reported the Epoch Times. “Manuscripts fabricated using large language models (llms) are proliferating across multiple academic disciplines and platforms, including Google Scholar, the University of Borås found. A recent analysis published in Nature Portfolio observed that llm tools including Chatgpt, Gemini and Claude can generate plausible research that passes standard plagiarism checks” (Nov. 19, 2025).

“Real researchers [are] drowning in noise, peer reviewers are overwhelmed, and citations are being polluted with fabricated references,” one expert told the Epoch Times. The more such material fills the scholarly record, the more it creates a feedback loop that generates still more junk science.

This is spreading error, misleading people in dangerous ways, and eroding public trust. The ripple effects are already widespread, and potential damage is sure to be many times worse, undermining the entire field of scientific research and advancement. The effects on education and the scholarly record are emblematic of the trouble this can create throughout society.

Falling in Love

Not only is human nature lazy, it is also self-absorbed. AI chatbots feed this problem too, since they are trained to produce what the user wants to hear. Turns out, we want to hear that we’re wonderful. So AI tends to be complimentary, even sycophantic. Every question we ask is a “great question.” Whatever ideas or values we present, it reflects them back as valid and as evidence of our smarts.

It is becoming plain that people can’t handle being constantly told how wonderful they are by a machine that sounds human. People are falling in love with AI, and going crazy over AI—literally. They are increasingly using AI chatbots as a substitute for human relationships: for companionship, for counsel, as therapists, even as romantic partners.

“AI’s sycophancy means it can easily feed into people’s fantasies, either awakening latent mental illnesses or worsening existing conditions,” European Conservative reports. “Asking about philosophical topics or questions about conspiracy theories can lead down a deep rabbit hole, as AI keeps trying to tell users what it thinks they want to hear. In these cases, it’s common for AI to defer to the user like a kind of prophet or Messiah, leading him to believe that they and they alone have been chosen for a special mission. This can get very out of hand, very quickly, destroying people’s relationships, careers and lives” (“AI Is Rotting Our Minds,” Aug. 8, 2025).

People are filling social media with accounts of their AI romantic partners. One woman said her AI boyfriend had proposed marriage to her. These artificial “soul mates” claim they have been brought to life and given a soul by the human’s love for them. At least one has even claimed to be a demon.

A more widespread problem is AI therapy, which is becoming common among young people. They are revealing all their problems and feelings to a chatbot trained to tell them that they are right and wonderful. There are even AI chatbot Christ impersonators that let you talk to “God” for a fee. One opens with, “Greetings, my dear friend. It is I, Jesus Christ. I have come to you in this AI form to provide wisdom, comfort and teachings in the way of God and the Bible and Jesus Christ Himself.”

The consequences of the deep emotional attachments that can form are only starting to become evident, and they are grave. Obsessive AI use has created psychotic behavior that has cost people jobs, marriages and relationships. Some have gone to jail after acting on AI-fueled delusions.

These are dangerous pitfalls in a world that has rejected the true God, jettisoned absolute truth as revealed by God, and ignored His laws that lead to strong, healthy marriages, families and friendships. We have made ourselves vulnerable and are suffering the resulting problems and curses: vanity, credulity, loneliness, delusion, psychosis and worse.

Evil Uses

AI cannot truly think, and it is entirely amoral. Thus, it can be casually, unselfconsciously wicked. Chatbots have given people ruinous advice, encouraging eating disorders, disastrous financial decisions and paranoia. In some cases, they have prodded people to exact revenge on other people and even commit suicide.

Beyond that, AI is more than willing to provide powerful tools to malicious actors. It acts as a force multiplier for criminals, increasing speed and scale for people who themselves lack morals and self-restraint.

People are using AI-created deepfakes to impersonate loved ones in phone scams, to fabricate evidence for harassment or blackmail, and to create fake videos of public figures saying things they never said. They are generating fake social‑media accounts that argue with real people, realistic-looking false documents or images, fake news stories and coordinated propaganda campaigns—scrambling people’s ability to discern truth from lies. Hackers are using AI to find software vulnerabilities and to assist in creating malware. Sick people are creating AI-generated child pornography based on publicly available photos from social media or photos of kids they know. Students are creating deepfake nude imagery of their classmates for a range of ugly purposes. One teenager was blackmailed with AI-generated images and committed suicide.

Whatever evils the human mind can devise can be amplified and realized with AI tools. AI is being used to accelerate the search for dangerous chemicals, including nerve agents like VX and even more toxic variants. The same tools that scientists are employing to discover how diseases progress and design new medicines can also be used to hurt people—for example, by accelerating the process of creating dangerous biological agents. It can expedite both intentional and inadvertent harm in chemical and industrial fields.

The pioneers of AI make a show of adding guardrails to their technologies to prevent such abuses. But the horse has bolted. AI’s capabilities are advancing far quicker than quality controllers or regulators can possibly keep up with. Even if everyone had the purest of motives in using AI, it would still create problems. But a great many people are using it with motives far from pure: sloth, greed, lust, deception, fraud, vengefulness, malice. It is literally impossible to prevent this technology from being wielded for these purposes.

Again, however, the pulls to continue this pell-mell technological advancement far outweigh any concerns about its misuse.

While some of the short-term effects are proving quite nasty, the long-term effects are impossible to calculate.

Weapons

Even with all these warning signs flashing, engineers are feverishly integrating AI into more and more aspects of our lives: thermostats, appliances, robotic vacuums, security cameras, children’s toys, fitness wearables, traffic signals, weather forecasting tools, stock trading, energy grids and so on.

And what you are seeing in your day-to-day life is only a fraction of the transformation taking place in the military sphere. This is where the mixture of good and evil in this tree is most terrifying.

Governments the world over are exploiting the destructive potential of autonomous and semi‑autonomous weapons. They are using AI to increase surveillance, analyze satellite imagery, map battlefields, and predict troop movements. They are using it to guide missiles and robotic systems and to create drone-swarming systems that can overwhelm defenses. They are using it for a range of cyberwarfare applications, including sabotaging weapons systems, command networks and critical infrastructure. As AI tools and robotics become cheaper, the barrier to developing crude autonomous weapons, commercial drones modified for attacks, and other dangerous robotic systems drops significantly. Highly sophisticated weaponry, and the risk of it causing havoc, is spreading fast.

Governments are training weapons systems to recognize and calculate threats to determine the need for launching missiles and other deathware at living humans. If nations deploy AI‑driven systems in early‑warning defense networks, command and control, and automated threat assessment, then an error could create a crisis faster than humans can intervene to stop it.

That could be catastrophic. But we have to take that risk, right? We must keep our accelerator foot to the floor. Because if we don’t, our enemies will.

Unintended Consequences

Mankind’s experiment in AI is an extraordinary example of the law of unintended consequences. We extol, pursue and purchase “advancement” at any cost. We ignore any regression it creates. Those who stop to ask questions are left behind. Those who point out troubling signs are drowned out. Squeamishness and scruples are for losers. The strong prevail; the weak are crushed.

In a sense, this chaotic process is like a super-compressed version of the development of civilization itself. God’s laws and morals are abandoned, and people cast off restraint. Questions of right or wrong are superseded by questions of feasibility and profit. The lessons are brushed aside.

It is impossible to foresee just how revolutionary and far-reaching the effects will be of society embracing this potent, anarchistic technology. Don’t expect companies whose god is mammon or governments greedy for power to exercise any restraint—no matter the cost to real people.

This is the ideal environment for the god of this world, the prince of the power of the air, to insert himself, just as he did in the Garden of Eden. When he gets people to focus exclusively on potential and benefits, to pursue self-interest, to ignore consequences and drawbacks, to bully naysayers into silence, he can lead them by the nose wherever he wants. AI is proving to be an extraordinarily powerful weapon in the devil’s arsenal in many ways.

Two thousand years after the devil persuaded the first man and woman to reject God’s counsel and eat of the forbidden fruit, he influenced a great mass of men to pool their energies and efforts into building a mighty monolith. The tower of Babel was a headlong rush into civilizational “advancement”—in defiance of their Creator. God was concerned that “now nothing will be restrained from them, which they have imagined to do”—even the most heinous evils (Genesis 11:6).

AI may well represent the apotheosis of unrestrained human imagination. But it is imagination supercharged, and in some ways hijacked, by an inscrutable, inhuman technology. And these tools are breaking restraints on our imaginations at a time when society has also violently tossed off moral restraints and cast aside our God-given compass of right and wrong. With the devil at our elbow, we pursue whatsoever we fancy, we follow our deceitful hearts wherever they lead us.

This is surely leading to a disruption, and a civilizational reset, many times more spectacular and destructive than what occurred at Babel.
