“Venturing into the Unknown: Artificial Intelligence in the Context of Faith”
written by Sarah Keller, class of 2025

The ocean used to be our mysterious unknown. Explorers crossed the sea, searching for new land and people, not knowing what to expect. Some brave souls never made it home, but other voyages were successful, bringing back tales of wealth, natives, and lands of plenty. America was discovered by explorers risking a venture into the unknown. After the ocean, space became the new frontier. Countries spent vast resources in an effort to thoroughly explore the heavens, creating new programs and racing to make their mark on history. Though some astronauts lost their lives in these missions, space exploration also gave rise to tremendous accomplishments, such as landing a man on the moon in 1969. Great risks yielded great rewards. Now, however, we face a new unknown that is markedly distinct from those of the past. Containing the capacity for both immense destruction and enhancement of our lives, this unknown has serious repercussions for all of humanity. While seemingly endless possibilities fill the horizon, dangers also lurk beneath the surface. Undoubtedly, this unknown will alter the future in ways we cannot comprehend. Like explorers of the past, it’s time to weigh the risks, take courage, and navigate the unknown. The venture is underway: AI is here, and the Age of Artificial Intelligence has begun.

This seemingly new wave of technological exploration has actually been developing in the background for decades. Since the term “Artificial Intelligence” was first coined in the 1950s, computer scientists have been exploring ways to create software that simulates human thinking. Even before this, British mathematician Alan Turing pondered a test that would demonstrate a machine had attained human intelligence. In this “imitation game,” the machine would successfully trick a human into thinking it, too, was human. This “Turing Test” has remained the crowning experiment for determining how far AI has progressed (“The History”). At the Dartmouth College Conference in 1956, John McCarthy, a mathematics professor, first proposed the possibility of thinking machines, or “artificial intelligences” as he called them. Excited about the idea, researchers worked to develop this technology in the decades that followed. The first chatbot, ELIZA, was created at the Massachusetts Institute of Technology in 1966 and was programmed to formulate questions from users’ inputs in a way that simulated a therapy session. Soon after, Shakey the Robot, released by the Stanford Research Institute, was developed to help scientists understand more about mobile robots. After AI research received sharp criticism in 1974, an “AI winter” ensued as the technology fell short of scientists’ expectations (“The History”).

However, a breakthrough came in 1996 when International Business Machines’ computer program Deep Blue won a chess game against then-world chess champion Garry Kasparov (and went on to win a full match in a 1997 rematch), demonstrating again the possible “intelligence” of machines. Following this triumph for the industry, AI continued to advance rapidly in the early 2000s. Social robot Kismet, developed by MIT, was outfitted with sensors, a microphone, and programming that allowed it to imitate numerous human emotions. Later, NASA launched two rovers, Spirit and Opportunity, that were equipped with AI to help the machines navigate the difficult Martian terrain. In 2011, Apple introduced the virtual assistant Siri on its iPhone, while three years later, Amazon released Alexa with voice recognition. OpenAI, a company in which Microsoft has invested over $13 billion, introduced the large language model GPT-3 in 2020, launching a sequence of rapid innovation (Novet). DALL-E, another OpenAI model, was released in 2021 and introduced text-to-image capabilities (“The History”). Most recently, AI assistants have been implemented in the newest iPhones, ChatGPT programs have been expanding, and scientists have been exploring medical uses for AI. Additionally, Tesla is currently developing the extremely precise “Optimus” robot that functions as a home assistant, and the company hopes to build ten thousand of them by the end of 2025 (Steiner).

The field of Artificial Intelligence is progressing and expanding — exponentially. So what? Why does this matter? Most advertisements for AI products display the technology as a helpful assistant that increases human efficiency. But is this aim alone a noble goal? In movies, AI often takes the form of a steadfast companion who serves the protagonist without complaint, such as Baymax in Big Hero 6. But there are also movies like Mission: Impossible where AI takes the form of an evil algorithm that threatens humanity. Which will it be in our AI-enabled future? ChatGPT is becoming an increasingly divisive issue in education. Though its question-answering capabilities are intriguing, AI platforms can also assist students in taking all manner of academic shortcuts. With all of the hype surrounding AI and the many questions still left unanswered about this new technology, it is paramount for all people, and Christians in particular, to understand AI. Since Artificial Intelligence will have a lasting impact on humanity as it progresses, Christians must grapple with the moral and ethical considerations it raises. By weighing both the dangers and the benefits through the lens of biblical discernment, believers can reach the understanding that using AI technologies demands caution and boundaries and must be continuously reevaluated as Artificial Intelligence evolves.

AI is such a layered term that it does not have a commonly shared definition. On a broad scale, Artificial Intelligence is the “concept of creating computer programs or machines capable of behavior we would regard as intelligent if exhibited by humans” (Kaplan 1). If a machine can accomplish feats normally considered human-like, we tend to describe it as Artificial Intelligence. AI engineers seek to produce machines that imitate human intelligence; however, people often forget that Artificial Intelligence does not actually have any original intelligence of its own. AI technology is trained on human conversations and writings, so its responses and interactions unsurprisingly seem strikingly human. AI integrates our biases, political leanings, and patterns of speech into its data, and then, when asked a question, as in the case of ChatGPT, can give a reasonable response, “producing [a] result one word at a time” (Peterson). However, it’s important to note that the answer is not always correct; the program can “hallucinate,” simply trying to satisfy the user’s question with a confident, yet sometimes inaccurate, response (Mollick 53).

AIs are built on a foundation of training data in order to decrease the number of incorrect responses. Through a method called “machine learning,” Artificial Intelligence technologies are trained to “adapt to a wide range of inputs, including large sets of historical data, synthesized data, or human inputs” (“What is AI”). This approach centers on training an AI so that it can generalize to other tasks without specific instructions. A subset of machine learning is “deep learning,” in which AIs specialize in recognizing patterns (“What is AI”). This method is modeled after the human brain and requires huge amounts of data. One application of deep learning involves tailoring content for viewers based on their preferences.

Interestingly, the training data used in these methods does not represent all possible information but rather the data that the AI corporations chose to include in the AI’s training. So, only the elites at the top of the AI engineering corporations, who tend to be male, English-speaking, and American, decide what data is “important” for the AIs’ training. As researcher and professor Ethan Mollick explains, “The result gives AIs a skewed picture of the world, as its training data is far from representing the diversity of the population of the internet, let alone the planet” (35). These biases show up in AI responses and, in turn, influence those interacting with AI technologies. For example, Google’s launch of Gemini revealed inherent biases within the system, displaying America’s Founding Fathers as black, women as popes, and sometimes even refusing to show a white person altogether (“Google’s Gemini AI”). While this instance is extreme, subtler biases are harder to spot; with excessive interaction, AI could shift a person’s perspective, political views, and opinions without his even realizing it. This highlights just one reason for exercising caution when working with these technologies.

Right now, AI has a “Jagged Frontier,” as Mollick describes it. What this means is that its capabilities and boundaries are largely unknown, just like previous exploratory endeavors. Some actions that we would consider easy for AI to accomplish lie beyond its ability, while others that we didn’t believe were possible are achievable. However, to truly understand AI’s boundaries, we must test the technology, which requires seeking to understand AI and its implications for humanity, rather than running away in fear.

While AI is a new field with many unknowns, by examining the history of innovations, several truths become evident that can be applied to Artificial Intelligence technologies. As a whole, technology has been advantageous to humanity. After all, we no longer spend our days simply trying to survive; we now have time for leisure thanks to farm equipment, factories, and everything else introduced over the course of the Industrial Revolution. Along with other researchers, Mollick emphasizes, “[T]echnology has tended to make us stronger” (51). In the past, technology has been an overall positive force for humanity, and AI may well follow this pattern. Nevertheless, transitions into new technologies are often difficult for people, especially where jobs are concerned. Consider the Luddites of the 19th century: this group of English textile workers rebelled against the new machinery being introduced in factories, seeking to destroy it for fear of losing their jobs. Some people lose their jobs to new technology even when it is a net positive, and this remains a concern with AI.

Additionally, the skills humans may lose by embracing new technologies present another troubling risk. GPS systems are commonplace now, helpfully directing our comings and goings. Yet many people today struggle to maintain a mental map of their own area, let alone the country as a whole. The internet has further eroded geographical awareness: why memorize information you can simply look up? In the name of finding information faster, people sacrifice conversation and research skills, disregarding the value of these dying arts and capabilities.

Technology is not inherently good or evil; it all comes down to how we use it. We are broken, sinful people, and this human nature is at play in the way we use technology. As authors John Wyatt and Stephen Williams state, “Few technologies are devised with malicious intent, but most technologies can be turned to malicious use” (2). Technology is a tool that has been entrusted to us. And, as with any tool, correct usage demands an understanding of the motivations behind it (Thacker). Past experience demonstrates “that most technological advances are likely to have both an upside and a downside,” as Christian professor John Lennox notes. “A knife can be used for surgery or as a murder weapon; a car can be used to take you to work or as a getaway vehicle after a crime” (Lennox 54). In this way, AI is no different.

With this background in mind, let’s unpack the risks and benefits of AI and consider how the Bible can inform our perspective on this modern phenomenon. We’ll start by focusing first on the potential dangers of Artificial Intelligence.

One of the biggest concerns is that AI is a modern manifestation of transhumanist ideology. This worldview presents a real moral dilemma for Christians. Transhumanism “desire[s] to move beyond humanity” and even to transcend “the physical limitations of our imperfect bodies” (Shapiro, “Unraveling The Mysteries”; Walsh 28). Through this lens, “being human isn’t the important part,” editor J. Douglas Johnson states in a 2023 edition of Touchstone (4). In fact, some engineers seek a future where machines rule over humanity and life is simply “an artificial universe that our computers can solve” (Walsh 14). This is far from a Christian’s calling “to lead a quiet life … mind[ing] your own business and work[ing] with your hands … so that your daily life may win the respect of outsiders and so that you will not be dependent on anybody” (1 Thess. 4:11-12). As novelist Paul Kingsnorth quips, “These [AI engineers] are doing more than trying to steal fire from the gods. They are trying to steal the gods themselves — or build their own versions” (33). This sounds eerily like idol worship, a practice Christians are specifically directed to eschew. The major problem with AI is that it embodies twisted hopes for machines to take over and for human lives to be rendered inconsequential, a view Christians should strongly oppose. Consider the words of warning against godlessness and wickedness in Romans 1:25: “They exchanged the truth about God for a lie, and worshiped and served created things rather than the Creator—who is forever praised” (NIV). A situation where machines are worshiped and humans are considered worthless could never be a positive outcome of AI.

The tendency of humans to anthropomorphize technology is accentuated by Artificial Intelligence. We treat lifeless objects as if they have human qualities, forgetting that the Echo Dot we’re “talking to” or the technology that “hates us” doesn’t actually have a brain of its own (Edgar). It is a soulless, lifeless piece of machinery: a tool to use for our benefit — or destruction. But what happens when the robots start to look more like humans? When they can act, converse, and seemingly experience emotion? It becomes harder and harder to remember that these machines are truly lifeless. People may start forming friendships with their AI companions, treating the technology as more than it truly is. Real human interaction may seem unnecessary when AI access is instantaneous. Yet we were designed to develop real relationships, not those emulated by a machine. AI will likely only accelerate our inclination to interact with technology as though it were human.

If ethical considerations were the only problem with AI, they would still warrant significant concern. As John Lennox describes, “[P]eople are carried away with the ‘if it can be done, it should be done’ mentality without thinking carefully through the potential ethical problems” (24). It’s impossible to enumerate all of the ethical nuances of AI, so we will consider just a sampling of issues: biased training, unpredictable parameters, and the human toll required to sanitize content. First, these technologies require significant training and test runs before they are introduced to the public. As stated before, elites regulate the information allowed in AI training, intentionally or inadvertently introducing biases into the programs’ responses. AI replies could also incorporate political leanings and propaganda while silencing other views under a banner of “preventing disinformation” (Shapiro, “ChatGPT Prefers”). Additionally, since AI is such a new field, engineers often set parameters on the machines without knowing the extent of the consequences. For example, philosopher Nick Bostrom hypothesized a scenario in which an AI machine was placed in a paper clip factory and instructed to produce as many paper clips as possible. This machine might “decide” that the best way to accomplish its mission would be to strip the earth of iron and dispose of all humans, who stand in the way of its goal (Mollick 28). Though this may at first seem hyperbolic, AI engineers really don’t know how machines will carry out the parameters they set. Under pressure to monetize their technology, they release platforms prematurely, accepting the associated risks in an effort to get ahead. Lastly, people rarely realize that AIs do not become “PG-rated” without a significant human toll, and even then, the information is not all family-friendly. Low-paid workers have to read and rate the AI responses, exposing themselves to a steady stream of graphic content that the big AI companies do not want the public to see. Horrific picture after horrific picture is seared onto their brains as they train the AIs to show only socially acceptable images and content. As Ethan Mollick puts it, “In trying to get AIs to act ethically, these companies pushed the ethical boundaries with their own contract workers” (38). Nobody should have to be exposed to such lewd content, even for AI training purposes. The production of a useful AI platform exacts a horrendous human toll. The ethical challenges surrounding AI pose real problems for Christians to consider when deciding their stance on the new technology.

The invention of Artificial Intelligence underlines one of the biggest truths about human society: when given a choice between the easy way and the hard way, we choose the former. AI puts this tendency on steroids. Using AI without boundaries results in mental, physical, and creative laziness. Why do the work of coming up with new ideas when AI can “think” for you? Why work to revise a paper or spend time researching an interesting topic when AI can do these tasks in a matter of seconds? People miss the value of learning to write and think critically when AI is at their beck and call to accomplish these tasks; “easy and fast” ends up replacing “thoughtful and worthwhile.” We become less human with every skipped opportunity to do the hard things. With time, we may lose the ability to think for ourselves, and while some may argue, “So what?” this is a very important part of retaining our humanity. As Romans 12:2 reminds us, “Do not conform to the pattern of this world, but be transformed by the renewing of your mind. Then you will be able to test and approve what God’s will is—his good, pleasing and perfect will.” Our minds were designed by God for a reason, and it is dishonoring to Him to avoid exercising our mental faculties.

At its core, AI takes something scarce, or hard to develop (critical thinking), and makes it plentiful. Tasks that once required years of cultivated expertise can now be completed to near perfection in a matter of minutes. What are the repercussions of such skill displacement? What happens when a good writer no longer has a competitive advantage in the job market? When anyone can use AI to accomplish tasks they couldn’t before, does the concept of the bell curve disappear? Is everyone an expert now? People will no longer value skills traditionally considered a mark of talent. AI may remove the idea of scarcity, creating a dystopia where being skillful in a particular pursuit is just an art of the past. Furthermore, if no one takes the time to learn a skill, what happens when the AI systems get hacked or become unavailable? In that scenario, who can diagnose and treat patients without relying on AI-enabled medical technology? Who can write and clearly communicate without ChatGPT’s influence? Without skilled people, the world would be thrown into chaos. We already get frazzled when the power goes out; in a future with ubiquitous AI, this disorder would be multiplied exponentially. With the click of a button, Artificial Intelligence allows everyone to perform at a high level in areas previously requiring intensive investment. At some point, these skills may be forgotten entirely, never to be regained.

The lack of regulation on these technologies creates a playground for charlatans and risk-takers. Currently, there isn’t really anyone policing AI development. An AI ethics board was established by Google but then shut down due to public protest over who sat on the committee (Walsh 55). Thus, there remains a lack of supervision over these AI programs, despite Google’s promise to consider how to better police its use of Artificial Intelligence. Other companies and committees have established principles for AI development, but these amount to little more than platitudes and leave many questions unanswered. For example, fake content, including scams, deepfakes, and intentional disinformation, is easily created with AI technologies, but who is supposed to filter through these deceptions? Though the technology is progressing at a rapid rate, this essential question has not been resolved. Computer scientist Kai-Fu Lee observes, “This means our future is one where everything digital can be forged…” (61). Without some regulation, we may reach a point where it becomes nearly impossible to know what is true and what is AI-created. John Wyatt notes, “There is a deep Christian tradition, reinforced by widespread human intuitions, that values authenticity and regards all forms of simulation, however well-intentioned, as a form of dissembling, a way of being that is fundamentally deceptive and evil” (231). Christians value truth; it is a key characteristic of God’s nature. When it becomes impossible to distinguish truth from falsehood, serious concerns arise for Christians. If we value the virtue of truth, can we justify promoting what could increasingly be used for deception? When AI is left unregulated, truth becomes unrecognizable.

When considering where the responsibility for regulation lies, the government is the likely answer, but of course, governments do not always hold the same moral views as the people they govern. And this introduces the possibility of surveillance capitalism, in which everything is monitored by the government to the point that privacy becomes a distant dream (Wyatt and Williams 218-221). AI offenders could be hunted down and tracked with the very technology they misused, giving more and more control to political offices. This could quickly deteriorate into an authoritarian regime. The lack of supervision on AI technologies is a real problem, and we must find a way to introduce regulation while retaining our core value of freedom.

Additionally, moral questions arising from the widespread deployment of AI remain unanswered, including the physical, reputational, and financial harm potentially traceable to these technologies. If an AI-enabled vehicle were to kill someone in an accident, who is to blame? The “driver”? The vehicle? The manufacturer? The company holding the AI license? When Artificial Intelligence causes physical harm, what is the protocol for determining culpability? These concerns have not been sufficiently addressed, and there are no clear laws in place for when this happens. How do we deal with reputational harm, as mentioned previously? What is the compensation for the victim in a deepfake scenario? Again, there is no specific legislation to protect the public. Finally, financial harm may result from stolen intellectual property. AI often doesn’t directly copy other people’s writings, but it does draw from their ideas and patterns to create its content, which is still a form of plagiarism. Furthermore, with AI art platforms like DALL-E, “masterpieces” can be generated in an instant by instructing the machine to work in the style of the masters. Artists’ work is devalued when their lifelong expertise can be imitated in mere seconds. So, the question remains: who will stand up for these creators and protect their intellectual property?

The integration of AI into society has immediate and lasting consequences for individuals, one of which is job displacement. Experts agree that many jobs composed of routine tasks that don’t require significant human skill will be replaced by AI (Lee). These include jobs that are primarily completed on computers and require review and analysis of large amounts of data. However, as many AI theorists like to point out, technology generally creates more jobs than it destroys, many of which we cannot yet imagine. Still, in the short term, the job losses may feel very significant. Other researchers wonder whether even these new jobs could be taken over by AI (Suleyman). Additionally, as conservative political commentator Ben Shapiro emphasizes, there is a concerning possibility that AI may wipe out “the pipeline for experts” (“Will AI Kill”). When AI takes over all of the entry-level jobs, he argues, there won’t be a ladder for people to climb toward jobs requiring more education. This is already illustrated by the decline in the number of software developers documented in a 2024 research article (Nezai). Once the current experts pass away, who will replace them? Will AI? Job displacement by AI is not just a future possibility but is already happening in certain sectors of the economy.

The field of education is also grappling with the explosion of AI tools. There is significant concern that students are using AI to accomplish their schoolwork rather than thinking for themselves and producing original work. As a countermeasure, teachers are using AI checkers to verify the originality of their students’ writing. This cycle prompts better AI technologies that may eventually fool even the AI checkers. When AI is doing all of our “learning” and students do not put forth the effort to think and write for themselves, will there come a point when people simply cannot write, read, or think critically? And while some may argue that we’ve lost other skills over the course of history without great harm, the abilities to write and think are different. The technology of the printing press originally popularized the ability to read and write; could the rise of AI now cause these skills to diminish? Sacrificing real learning for “learning faster” has serious repercussions (“Will AI Kill”). Part of education is the struggle to master the basics as a bridge to more challenging academics. When Alexa answers all of our questions and ChatGPT writes all of our papers, though it may be easier in the moment, the long-term cost is incalculable. Ben Shapiro compares this to an animal in captivity: it can live a long time, but it doesn’t know what to do when faced with hardship, and it is helpless when put back into the wild. People need to make academic investments that they may not enjoy in order to ultimately enjoy a better quality of life. Sacrificing education because AI can “do it faster” means losing everything for which education stands. And that is a price we should be unwilling to pay.

We’ve considered general societal concerns revolving around AI, including the transhumanist ideology, anthropomorphism, ethical issues, mental laziness, skill displacement, lack of regulation, surveillance capitalism, and moral considerations. Additionally, we’ve examined specific repercussions such as job displacement and educational degradation. Now, let’s turn our attention to the potential benefits of AI.

The idea of AI taking over repetitive tasks is actually very promising for humans. If AI can outperform humans in accuracy and efficiency in these areas, it frees people to work on other, more creative tasks. In this way, it is not unlike previous tools that brought exponential efficiencies, including the plow, the automobile, and the sewing machine. Currently, AI systems can assist humans by supervising traffic flow, efficiently managing resources within electrical grids, detecting fraud, generating computer code, and diagnosing rare diseases, along with a host of other jobs (Suleyman 61-62). AI is part of many of the applications we presently use and is found in “shops, schools, hospitals, offices, courts, and homes,” according to AI engineer Mustafa Suleyman (62). Concurring with many researchers, Suleyman further argues that AI will “make experiences more efficient, faster, more useful and frictionless” (62). This widely touted view is grounded in reality; we are experiencing the productivity boost even now. Nevertheless, this benefit is so broadly felt that it can blind people to the concerning issues surrounding the technology. Because AI is enjoying such success, people are often willing to disregard potential dangers in favor of the immediate, visible benefits. Efficiency is an undoubted benefit of AI, but it should not override all other concerns.

Technology should augment human capabilities rather than replace them. In this scenario, humans would be freed to work less overall and could use that free time to engage with their family and church or to strengthen their relationship with God. Interestingly, though, the time people save through technology is often spent on technology. One survey from 2023 reveals that, apart from personal activities like sleep, Americans spend the majority of their time on leisure activities, one of the most common of which is watching TV (VanMaldeghem). Even without the statistics, it’s clear that humans already spend significant time on screens; just consider the prevalence of smartphones in American society. The time AI frees up through its efficiency will likely exacerbate this trend. Nevertheless, AI has the potential to extend human capability rather than replace it, which is a positive aspect of the technology.

There are far-ranging applications of this new technology in many disciplines, including medicine. AI could assist in developing a cure for cancer and other chronic diseases through its data analysis capabilities. It could replace the job of a radiologist, reading and interpreting images more accurately. By recognizing patterns and comparing them to the patient’s history, AI could create more individualized treatment plans, fulfilling some of a doctor’s tasks. On this note, it could, and in some places already does, assist doctors in charting information from visits. AI could lead to further breakthroughs in bioengineering, biotechnology, and biomedicine, revealing patterns in these areas of study that lie beyond the capabilities of the human mind. These examples represent just a few of the varied uses of AI applied to the field of medicine.

Artificial Intelligence also has the potential for many economic upsides. The same AI technology that eliminates jobs could also generate new occupations, many of which we have not yet imagined. One newly emerging occupation is that of the prompt engineer, a role focused on formulating and revising prompts for AI to achieve the best results. There are high hopes that the versatility of AI will generate many opportunities for employment. Many scientists predict a utopian future where all people live in wealth and abundance. They believe that AI could help level the playing field, promoting equality and narrowing the gaps in knowledge between expert and entry-level positions. When coupled with current advances and predicted breakthroughs in biotechnology and biomedicine, their predictions do not seem entirely unattainable. AI engineers place enormous faith in Artificial Intelligence, believing that the benefits of the technology will be symmetrical, raising all people to the same level. However, as history demonstrates, technology rarely benefits all people equally; whoever creates the most popular platform usually ends up making the most money. As Christians, we know that the only utopian future will be realized in Heaven, but nevertheless, many of the developments predicted by scientists seem to have great potential for different sectors of the economy.

Now that we’ve considered both the challenges and benefits of AI, including the potential for efficiency, augmentation of human capabilities, medical feats, and economic upsides, it’s paramount to consider the biblical implications of Artificial Intelligence.

Let’s begin by examining what makes us human. In the beginning, God created humans distinct from all other creation. We were given a soul, free will, and the ability to be in relationship with our Creator. Humans were created in the likeness of God and were therefore imprinted with dignity (Gen 1:27). For many, AI seems to threaten our very humanity: the machines seem to have more knowledge than we do. However, people forget that being “smarter” is not at the center of what makes us human (Shapiro, “Unraveling The Mysteries”). Instead, theological professor Marius Dorobantu describes two ways of viewing the Imago Dei in light of seemingly intelligent machines. He argues, “Human distinctiveness does not reside in any uniquely human intellectual faculty but in our unparalleled agency in the world, which we are called to care for and even co-create with God…or in the relationality that is so central to what it means to be human, and in which we mirror a Trinitarian God…” (Dorobantu). Machines cannot render humans’ dignity obsolete because they do not have the divine purpose that humans do: to care for the world and develop intentional relationships. As Dorobantu continues, “Humans are someone, while machines are something.” That simple fact about technology pinpoints the truth that sets humans apart from machines and gives Christians an anchor for a future filled with AI.

But we must investigate God’s designed purpose for humanity more deeply. Previously, definitions of humanness may have included the ability to analyze, create, and respond coherently to questions. AI has begun to pressure these longstanding beliefs. Nevertheless, as author John Wyatt states, “It seems obvious that, however sophisticated its design may be, a machine can never understand, perceive, decide, think, feel, trust, love and believe as we do” (229). Machines do not have a soul and thus are not human; they cannot and will never be able to authentically do all the things that come with being human. Renowned pastor John Piper also argues that though machines may be considered “intelligent,” they are far from having true human intelligence. As Piper says, “God is most glorified in us when we are most satisfied in him…Those God-glorifying affections, spilling over in outward acts of love, are the reason God created the universe. Which means, for ChatGPT, that it is quadruply cut off from God-intended purposes for intelligence.” Piper goes on to explain that AI programs 1) produce “intelligence” rather than affections; 2) stem from the machine rather than a heart; 3) were created naturally rather than supernaturally; and 4) having been created by man, can rise no higher than the heart of man. Just as we are broken, so too are the machines we have created. For these reasons, AI programs cannot replace human intelligence no matter what information is embedded in their algorithms. Piper concludes,

Intelligence, as God gave it at first, was designed not only to perceive natural, external reality — and then to assemble it — but also to see in it, to see through it, the reality of the glory of God: the greatness, the beauty, the worth of the infinite Person who created us. When intelligence cannot do this — cannot spiritually discern, see, and feel that glory — it fails in the most important reason that intelligence exists.

As Christians, there is no need to wonder about our place in the world with these new technologies. Their “intelligence” is so different from ours and can never compare with our ability to engage in authentic relationships with God and others.

Several biblical stories provide insight into the AI issue. First, consider the Old Testament story of the Tower of Babel. People believed they were so capable and powerful that they could build a tower reaching all the way to God and Heaven. Parallel to that idea, AI engineers believe they can create a superintelligence, or something smarter than humans. They believe they can once again reach godhood on earth. However, people forget that “Superintelligence and godhood are not the end products of the trajectory of the history of human ingenuity,” as apologist John Lennox argues. “[A] superintelligence, God himself, has always existed. He is not the End Product. He is the Producer” (157). Humans cannot create the Creator. Is it possible that humanity is once again overreaching itself with Artificial Intelligence? As God disrupted human effort at Babel, so He may again disrupt our present efforts.

One of the strongest biblical connections to Artificial Intelligence is found in the story of Israel’s construction of the golden calf. Though we ridicule the Israelites for worshipping a golden cow made of their own earrings, how is Artificial Intelligence any different? Some people are amazed at the accomplishments of Artificial Intelligence and how it seemingly has superhuman capabilities. But, as editor J. Douglas Johnson reminds readers, “[E]ven after we melt it down and pour it all into the mold, so to speak, the aluminum and silicon remain as lifeless as ever without a trace of actual intelligence, let alone anything god-like” (3). The computers that can respond in paragraph form to our questions are just as un-godlike as the pieces of metal and code that make them up. Yet people still insist that it’s different this time around — that Artificial Intelligence is not simply another tool worshipped as an idol. Christians must take care to not fall into modern idolatry.

A final biblical cautionary insight regarding AI is found in the book of Revelation. One sign of Christ’s coming return is the appearance and teachings of the Antichrist. As Russian saint Ignatius Brianchaninov wrote, the Antichrist “will reveal before mankind by means of cunning artifice, as in theatre, a show of astonishing miracles, unexplainable by contemporary science. He will instill fear by the storm and wonderment of his miracles, and will satisfy the [worldly wise, and] confound human learning” (qtd. in “AI Demonic,” 35). Performing seemingly unexplainable feats? Instilling widespread fear? These descriptions of the Antichrist carry undertones of ideas now associated with AI. Additionally, Revelation 13:8 states that “All inhabitants of the earth will worship the beast — all whose names have not been written in the Lamb’s book of life…” The end times seem to be unfolding with the help of AI. After all, how is AI portrayed in all of the commercials? As a novel assistant that can help you with any and all of your tasks. As the technology continues to spread, it won’t be long before all the earth is raving about intelligent machines. So, will some form of AI be a key factor in the end times and the rise of the Antichrist? Nobody knows for sure. But it certainly seems to be a possibility.

By examining both the dangers and benefits of AI and seeking biblical guidance, Christians can perceive that the implications of AI are not one-sided. An active tension exists between the undeniable benefits and grave dangers. However, Christians are uniquely equipped to navigate the developing AI juggernaut.

With this in mind, Christians must decide here in the early stages of AI how they will interact with the technology. As author John Wyatt reiterates, the future of AI is largely unknown. “We cannot here and now confidently map AI and robots [into the future]. We can only pray for wisdom to discriminate right from wrong on this earth, and for the strength to act in accordance with what we discern” (233). And, as we are reminded in John 16:33, we will have trouble in this world. Nevertheless, we can take heart knowing that Jesus is our King, and He has already conquered the world. So, how can a Christian go forward in confidence despite the unknowns regarding AI?

Christians must stay informed. We cannot understand technologies when we don’t make an effort to keep up with their development. Hiding from technology out of fear does not stop it from evolving. On this note, Christians should not be afraid to experiment with AI platforms to better understand how they work. There will likely be areas in which AI proves extremely helpful, streamlining tasks and expanding human capabilities.

On the other hand, Christians must also stay wary. AI has potential for evil just as it does for good. AI will likely bring some seriously concerning developments, as it is a creation of broken, sinful humans. When advancements present themselves, and nearly everyone rushes to get the newest technological creation, Christians must stop and consider the implications for humanity. What are the dangers of this advancement? What are the benefits? What does the Bible say on this matter? As Jesus taught his disciples, we must strive to “be as shrewd as snakes and as innocent as doves” (Matt. 10:16).

Finally, Christians must stand out from the world around them. This is part of our spiritual calling. In John 15:19, Jesus reminds us that sometimes this is very difficult: “If you belonged to the world it would love you as its own. As it is, you do not belong to the world, but I have chosen you out of the world. That is why the world hates you.” At times, the world may scorn us for our different behavior. Novelist Paul Kingsnorth describes two ways of living in the world of AI as a Christian, each a form of self-discipline. The first chooses boundaries and sticks to them. Kingsnorth emphasizes, “The lines have to be updated all the time” (39). Often, this means that the boundaries you set in place will be difficult to uphold, but that is what Jesus meant when he prayed for the Lord to keep his disciples from evil (John 17:15). A practical implication of Jesus’ teaching is that at some point there may be jobs you cannot hold or groups you cannot join because of your beliefs on AI. However, Kingsnorth reminds Christians that “such a refusal can enrich rather than impoverish you.” He continues, “[Y]ou must be prepared, at some stage, for life to get seriously inconvenient, or worse. But in exchange you get to keep your soul” (39). Standing out is no easy task, and doing so while continuing to use some forms of AI may be even harder.

The second option Kingsnorth describes is withdrawing from all technology, following in the Luddites’ footsteps. He explains, “At some point the lines you’ve drawn may not only be crossed, but rendered obsolete” (40). So, instead of being absorbed by AI, those following the second way of self-discipline choose to “make real things with [their] hands; [and] pursue nature and truth and beauty” (40). Kingsnorth warns readers, “If things go as fast as they might, it could be that many of us currently [choosing the first option] will end up with a binary choice: [withdraw], or be absorbed into the technium wholesale” (40). Though avoiding the technology entirely may seem impossible, those choosing this self-discipline face the challenge head on, taking to heart the charge to “lead a quiet life” (1 Thes. 4:11-12).

Technology will continue to advance, just as it has in the past. As Christians, though, we are called to live differently from the world around us. Though some predict a utopia stemming from AI, Christians know that this world is not our eternal home, and nothing can be perfect on this earth. We must venture into this new unknown with caution, keeping our eyes fixed not on the horizon of possibilities and dangers, but on what truly matters: our Heavenly Father and His Word. Like any other tool, Artificial Intelligence can be used for good or evil, and because we are a broken people, our technology is flawed. As the technology continues to evolve, we must always ask ourselves whether the next application of AI requires us to surrender a part of our souls.

Like the explorers of the past, Christians must now venture into the frontier of AI, equipped with God’s Word and grounded in the knowledge of their awaiting eternal home. As J. Douglas Johnson wrote, “We can’t stop what’s coming because it is already here, but let’s see to it that we count ourselves among those who can still see it for what it is” (4).

---------------------------------------------------------------------------------------------------------------------------------

Bibliography

“2024 Election Candidate Pros Cons.” ChatGPT, 4.0, OpenAI, October 13, 2024.

Checketts, Levi. “AI and the image of God.” Interview by John Potter. Living Lutheran, February 28, 2024, www.livinglutheran.org/2024/02/ai-and-the-image-of-god/.

Dorobantu, Marius. “Imago Dei in the Age of Artificial Intelligence: Challenges and Opportunities for a Science-Engaged Theology.” ResearchGate, vol. 1, 2022, pp. 175-196, doi.org/10.58913/KWUU3009. Accessed 12 Oct. 2024.

Edgar, Brian. “God, persons and machines: theological reflections.” Christian Perspectives on Science and Technology: ISCAST Online Journal, 2010, iscast.org/wp-content/uploads/attachments/Edgar_B_2010-05_God_and_Persons.pdf.

“Google’s Gemini AI Sparks Outrage: Bias in AI?” YouTube, uploaded by TWiT Tech Podcast Network, February 29, 2024, www.youtube.com/watch?v=BHlapPX6Av4.

“The History of AI: A Timeline of Artificial Intelligence.” Coursera, May 16, 2024, www.coursera.org/articles/history-of-ai.

The Holy Bible. New International Version, Zondervan Bible Publishers, 1978.

Johnson, J. Douglas. “Idol Thinking: Facing the Gnostic Revival.” Editorial. Touchstone: A Journal of Mere Christianity, vol. 36, no. 6, 2023, pp. 3-4.

Kaplan, Jerry. Artificial Intelligence: What Everyone Needs to Know. Oxford University Press, 2016.

Kingsnorth, Paul. “AI Demonic: A Spiritual Exploration of AI.” Touchstone: A Journal of Mere Christianity, vol. 36, no. 6, 2023, pp. 29-40.

Klavan, Andrew, and Wendell Wallach. “What Does the Future of Artificial Intelligence Look Like?” The Andrew Klavan Show. September 6, 2023, Podcast, 33:04, www.dailywire.com/episode/ep-1764-will-ai-kill-or-save-us-all-member-exclusive.

Lee, Kai-fu, and Chen Qiufan. AI 2041: Ten Visions for our Future. Currency, 2021.

Lennox, John. 2084: Artificial Intelligence and the Future of Humanity. Zondervan, 2020.

Mollick, Ethan. Co-Intelligence: Living And Working with AI. Penguin Random House, 2024.

Nezai, Jeff. “The rise—and fall—of the software developer.” ADP Research, June 17, 2024, www.adpresearch.com/the-rise-and-fall-of-the-software-developer/.

Novet, Jordan. “Microsoft’s $13 billion bet on OpenAI carries huge potential along with plenty of uncertainty.” CNBC, April 8, 2023, cnbc.com/2023/04/08/microsofts-complex-bet-on-openai-brings-potential-and-uncertainty.html.

Peterson, Jordan B., and Brian Roemmele. “ChatGPT and the Dawn of Computerized Hyper-Intelligence.” The Jordan B. Peterson Podcast, May 15, 2023, Podcast, 2:44:02, www.dailywire.com/episode/kdotdo.

Piper, John. “John Piper on ChatGPT.” Ask Pastor John. October 16, 2023. Podcast, 14:02, www.desiringgod.org/interviews/john-piper-on-chatgpt.

PragerU. “Will A.I. Ruin the World?” Unapologetic with Amala. April 21, 2023, Podcast, 1:04:46, www.dailywire.com/episode/will-a-i-ruin-the-world.

Rui, Martins. “Is AI Our ‘Mark of the Beast?’ The Shadows of Tomorrow.” LinkedIn, www.linkedin.com/pulse/ai-our-mark-beast-shadows-tomorrow-rui-martins-dryic.

Shapiro, Ben. “ChatGPT Prefers Nuclear Apocalypse to the N-Word.” The Ben Shapiro Show, February 7, 2023, Podcast, 1:12:21, www.dailywire.com/episode/ep-1663-chat-gpt-prefers-nuclear-apocalypse-to-the-n-word-2.

---. “Will AI Kill — Or Save — Us All?” The Ben Shapiro Show. July 12, 2023, Podcast, 1:06:05, www.dailywire.com/episode/ep-1764-will-ai-kill-or-save-us-all-member-exclusive.

Shapiro, Ben, and Spencer Klavan. “Unraveling The Mysteries of The Ancient World.” The Ben Shapiro Show. August 24, 2024, Podcast, 57:24, www.dailywire.com/episode/unraveling-the-mysteries-of-the-ancient-world-spencer-klavan-member-exclusive.

Steiner, Hallie. “Elon Musk reveals massive plans for Tesla and Optimus — ‘Things are really going to go ballistic next year.’” Fortune, January 30, 2025, fortune.com/2025/01/30/elon-musk-reveals-massive-plans-tesla-optimus-self-driving-cars-humanoid-robots/.

Stonestreet, John, and Timothy D. Padgett. “Teachers Use AI, Too.” Breakpoint Colson Center, September 10, 2024, Podcast, 6:12, breakpoint.org/teachers-use-ai-too/.

Suleyman, Mustafa, and Michael Bhaskar. The Coming Wave. Crown, 2023.

Thacker, Jason. “What Does the Bible Say About Artificial Intelligence?” Zondervan, www.zondervan.com/what-does-the-bible-say-about-artificial-intelligence/.

VanMaldeghem, Canaan, and Hailey Barrus. “Survey Reveals How Americans Spend Their Day.” Michigan Center for Data and Analytics, www.michigan.gov/mcda/labor-market-information/michigans-labor-market-news/2024/08/05/american-time-use-23.

Walsh, Toby. Machines Behaving Badly: The Morality of AI. La Trobe University Press, 2022.

“What is AI (Artificial Intelligence)?” McKinsey & Company, April 2023, www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-ai.

“World War II Lesson Plan.” ChatGPT, 4.0, OpenAI, October 13, 2024.

Wyatt, John, and Stephen N. Williams, editors. The Robot Will See You Now. Great Britain, 2021.