Should AI Be Shut Down?

John Stonestreet and Kasey Leander | BreakPoint, Breakpoint | Published: Apr 24, 2023
Should AI Be Shut Down?

Should AI Be Shut Down?

BreakPoint.org

Recently, a number of prominent tech executives, including Elon Musk, signed an open letter urging a 6-month pause on all AI research. That was not enough for AI theorist Eliezer Yudkowsky. In an opinion piece for TIME magazine, he argued that “We Need to Shut It All Down,” and he didn’t mince his words:

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI… is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”

Using a tone dripping with panic, Yudkowsky even suggested that countries like the U.S. should be willing to run the risk of nuclear war “if that’s what it takes to reduce the risk of large AI training runs.”

Many experts suggest that the current state of artificial intelligence is more akin to harvesting the power of the atom for the first time than upgrading to the latest iPhone. Whereas computers of yesteryear simply categorized data, the latest versions of AI have the ability to understand the context of words as millions of people use them and thus are able to solve problems, predict future outcomes, expand knowledge, and potentially even take action.

The possibilities, these critics suggest, are not limited to AI somehow “waking up” and achieving consciousness. A well-known thought experiment, the so-called “Paper Clip Maximizer,” explains a scenario in which a powerful AI is given the simple task to “create as many paper clips as possible” without any ethical guardrails. The AI could decide, in order to proceed in the most efficient way, to lock us out of the internet, assume control of entire industries, and dedicate Earth’s resources towards that singular goal. If it didn’t immediately know how to do these things, it could learn how, executing its goal of paperclip maximization to the detriment of all life on Earth. It’s a scenario that seems both frightening and possible in an age in which the internet is everywhere, entire industries are automated, and companies are racing to develop artificial intelligence that is more and more powerful.

The real danger posed by AI is not its potential. It is the lack of ethics. When our science and technologies are guided by a “if we can do something we should” kind of moral reasoning, bigger and faster is not better. Years ago, the theologian and ethicist Peter Kreeft pointed out the reality of technology outpacing our ethics: “Exactly when our toys have grown up with us from bows and arrows to thermonuclear bombs, we have become moral infants.”

Questions of right and wrong and what it means to be human are integral to the ethics of AI. Designed with malice or carelessness, the destructive capacity of our technology is not evidence of its fallenness, but of ours. As Dr. Kreeft wrote, technologies like thermonuclear weapons achieve something “all the moralists, preachers, prophets, saints, and sages in history could not do: they have made the practice of virtue a necessity for survival.”

At the same time, Christians should never fall into fatalism. For atheists like Eliezer Yudkowsky, the threat of extinction by a superior race of sentient beings is somewhere between possible and inevitable. If the story of reality is the survival of the strongest and fittest, as atheistic Naturalism declares, then AI seems perfectly cast to take humanity’s place at the top of the heap. Absent a better definition for “humanity” than brute intelligence and ability, AI is a new and potentially violent Übermensch, destined to replace humanity.

Christians know that this is not how the story ends. Though we are capable of great evil, Someone greater than us is at the helm of history. Christianity can ground a vision of technology both ethically and teleologically. AI is neither an aberration to be abandoned, nor a utopian dream to be pursued at all costs. Rather, like all technology, it is a powerful tool that must be controlled by a shared ethical framework and accurate vision of human value, dignity, and exceptionalism.

Of course, that vision is only true if we were created by Someone with superior intelligence, love, and wisdom.

This Breakpoint was co-authored by Kasey Leander. For more resources to live like a Christian in this cultural moment, go to colsoncenter.org.

Publication date: April 24, 2023

Photo courtesy: ©Getty Images/BlackJack3D

The views expressed in this commentary do not necessarily reflect those of Christian Headlines.


BreakPoint is a program of the Colson Center for Christian Worldview. BreakPoint commentaries offer incisive content people can't find anywhere else; content that cuts through the fog of relativism and the news cycle with truth and compassion. Founded by Chuck Colson (1931 – 2012) in 1991 as a daily radio broadcast, BreakPoint provides a Christian perspective on today's news and trends. Today, you can get it in written and a variety of audio formats: on the web, the radio, or your favorite podcast app on the go.

John Stonestreet is President of the Colson Center for Christian Worldview, and radio host of BreakPoint, a daily national radio program providing thought-provoking commentaries on current events and life issues from a biblical worldview. John holds degrees from Trinity Evangelical Divinity School (IL) and Bryan College (TN), and is the co-author of Making Sense of Your World: A Biblical Worldview.



Should AI Be Shut Down?