A question often raised about artificial intelligence (AI) is whether machines will one day become human, but this way of framing the question assumes we have settled on what it means to be human. It reminds me of another empty proposition: the assertion that advancements in medical technology dangerously allow us to play God, as if we knew what “playing God” could possibly mean. Questions about AI too often begin with highly problematic premises that lead to the vaguest kinds of ethical discussions. In fact, part of what makes AI so remarkable is that it presses the question of humanness from another direction, particularly as AI technology seeks to learn from, and sometimes improve upon, human modes of intelligence.

When computer scientists and engineers talk about AI, they are usually referring to a type of machine learning built on neural networks, or simply “neural nets.”1 Neural nets are patterned on human physiological systems, in which signals pass between neurons across the synapses that connect them. Neurologists estimate that each human brain contains tens of billions of neurons (current estimates put the figure near eighty-six billion), and each of those neurons allows information to be shared, multiplied, and processed across the vast stretches of the brain’s neural network.2

Neural net–based AI is a kind of reverse engineering of human brain systems, so it is artificially intelligent in the most basic sense: it is a machine replication of human intelligence. Thus, like a brain neuron, each neuron in a neural net processes a tiny bit of information, usually of the simplest kind (often a yes/no signal, the equivalent of a simple up/down vote). This form of intelligence is called machine learning because, much as in human learning, the dynamic interplay between machine neurons and their synapse-like connections allows the machine to store and process information across the whole network.
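To make that analogy concrete, here is a minimal sketch of a single artificial neuron, written in Python. The weights and threshold are arbitrary numbers chosen purely for illustration; nothing here reflects any particular AI system.

```python
# A minimal sketch of one artificial neuron: it weighs its inputs,
# sums them, and emits a simple yes/no ("fire" / "don't fire") signal.
# The weights and threshold are arbitrary, for illustration only.

def neuron(inputs, weights, threshold=0.5):
    """Return 1 (fire) if the weighted sum of inputs crosses the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# Two incoming signals, each connection with its own "synaptic" strength.
signals = [0.9, 0.2]
strengths = [0.6, 0.4]

print(neuron(signals, strengths))  # prints 1: this neuron fires
```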

Initially, neural nets consisted of only a few neurons and their synapse-like connections, and these early neural nets had limited functionality. But advancements in computer technologies—specifically the shift from the general-purpose central processing units (CPUs) that power our multitasking laptops to the graphics processing units (GPUs) commonly found on the graphics cards of gaming PCs—have allowed for much greater and more dedicated processing power.3 This has made possible massively parallel systems of neurons, multiplying exponentially the number of neurons and the synapse-like connections between them, and that explosive complexity makes it difficult for scientists to fully understand how neural nets work. Deep-learning neural nets are thus, like the human brain, quite profound and perhaps, for that reason, a little scary.
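Why graphics hardware matters here is easier to see in code: the work of an entire layer of neurons can be written as a single matrix operation, exactly the kind of bulk arithmetic GPUs are built to parallelize. The sketch below uses NumPy with made-up numbers; the random values stand in for weights a real network would have learned.

```python
import numpy as np

# One "layer" of 1,000 neurons, each connected to 784 inputs (say, the
# pixels of a small image). The random numbers are placeholders for
# weights that a real network would have learned.
rng = np.random.default_rng(0)
inputs = rng.random(784)            # one input signal per pixel
weights = rng.random((1000, 784))   # one row of weights per neuron

# Every neuron's weighted sum is computed in a single matrix operation,
# precisely the kind of bulk arithmetic GPUs parallelize.
activations = weights @ inputs
fired = activations > activations.mean()  # a crude yes/no vote per neuron

print(fired.sum(), "of", fired.size, "neurons fired")
```

On a GPU, that same multiplication runs across thousands of hardware cores at once, which is what makes networks with millions of such neurons practical.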

The other advancement that has pushed machine learning to new frontiers is the massive growth in information now available to feed these neural nets. Just like humans, machines can only process—that is, learn from—the information they are fed. The rapid growth of social networking enabled by the internet now allows computers to access vast amounts of data. For example, in order to teach a neural net how to spot cats, you simply need to feed it images of cats—indeed, cats are apparently the favorite example of both philosophers and computer scientists. If you were limited by the number of photos stored on your camera, your neural net would be equally limited (even cat lovers have only so many cat pictures). However, imagine if you could connect your neural net to social media networks like Instagram and Facebook and their vast collections of cat images. Then you would have huge data sets to work with, which would continue to grow as people (not just one cat lover but cat lovers everywhere!) added more and more photos via social media. The growth of data sets via these kinds of platforms has created a practically endless supply of data for neural nets to learn from.

But what does it mean for neural nets to learn? What is meant by AI? From an ethical point of view, this is where things get really interesting. Back in the olden days of code writing, computer programs were simply told what to do, and they could do nothing beyond what they were designed to do. Everything a program did, it did because someone told it to do so. For example, whenever I mention my favorite basketball coach, my word-processing program will underline the word Krzyzewski in red because the word is not recognizable to the program’s coding. Someone coded Microsoft Word’s spell-checker to tell me that I’d misspelled something. Now, I know Krzyzewski is not misspelled, but that is only because I have learned the word, whereas the program, which does not learn, does not recognize it. I can, of course, intervene and instruct Word to add Krzyzewski to its dictionary, and that will remove the red underlining for that word. But I would have to tell Word to do this.
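A toy illustration of that older, rule-bound kind of programming, with a made-up word list (this is not how Word is actually implemented, just a picture of code that does only what it is told):

```python
# A toy, rule-based "spell-checker": it knows only the words someone
# explicitly put in its dictionary. An illustration of pre-AI
# programming, not how Word actually works.

dictionary = {"my", "favorite", "coach", "is"}

def misspelled(sentence):
    """Return every word the program was never told about."""
    return [w for w in sentence.lower().split() if w not in dictionary]

print(misspelled("my favorite coach is Krzyzewski"))  # ['krzyzewski']

dictionary.add("krzyzewski")   # the human intervenes, "teaching" by decree
print(misspelled("my favorite coach is Krzyzewski"))  # []
```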

In any case, this inability to learn makes pre-AI computers, as smart as they are, pretty dumb. But neural nets are altogether different. Rather than telling computers what to do, neural-net engineers simply feed the networks examples drawn from those vast and growing data sets I just described, and the systems learn to make their own connections from those examples. This often happens, as I mentioned, in deep ways we don’t yet fully understand.

Recall for a moment that one of Aristotle’s great insights was that education largely consists of formation by way of examples. Later on, Ludwig Wittgenstein would remind us that children learn languages not primarily by being told what particular terms mean but rather by being shown examples of words in use.4 Thus, rather than telling a computer, “This is a cat,” which would require providing the computer near-infinite amounts of code to account for every possible feline anatomical contingency (e.g., short hair versus long hair, flat versus long faces), the engineer connects the neural net to an enormous data set of cat photos, and each neuron then uses its own algorithm to process a different part of the image, so that together the neurons determine that what the network is looking at is indeed a cat.
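In code, the shift from rules to examples looks roughly like the following sketch: a single artificial neuron of the kind shown earlier adjusts its own weights from a handful of labeled examples. The two numbers per example are invented stand-ins (imagine a furriness score and an ear-shape score), not real image data, and a genuine deep net would involve many layers of such neurons.

```python
# A bare-bones example of learning from examples: a single artificial
# neuron adjusts its weights from labeled data instead of being handed
# rules. Features and labels are invented for illustration.

examples = [
    # (features, label) where label 1 = cat, 0 = not a cat
    ([0.9, 0.8], 1),
    ([0.8, 0.9], 1),
    ([0.2, 0.1], 0),
    ([0.1, 0.3], 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    s = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if s > 0 else 0

# The perceptron learning rule: nudge the weights whenever a prediction
# is wrong. Nobody tells the program what a cat is; the weights settle
# into a boundary that separates the examples it has seen.
for _ in range(20):
    for features, label in examples:
        error = label - predict(features)
        weights = [w + 0.1 * error * x for w, x in zip(weights, features)]
        bias += 0.1 * error

print([predict(f) for f, _ in examples])  # [1, 1, 0, 0] once it has learned
```

The point is that no one writes a rule for whiskers; the weights settle into place because the examples push them there.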

What would a practical application of this learning look like? Consider the DIY handyman who rigs a computerized camera system to activate the sprinklers every time neighborhood cats try to use his garden as a litter box. That handyman teaches his homemade neural net to distinguish cats from all other critters hanging around the garden (for example, the handyman’s dog). Similar technologies are now being used to help driverless cars identify not only cats crossing streets but also children and pedestrians. Wall Street investors are using AI to identify patterns in global systems so that they can strategically trade securities to maximize returns. And yes, in case you’re wondering, Microsoft Word uses AI (but not cats, not yet, anyway) to supplement its programming. There are, currently, AI systems all around you—you’re likely benefiting from AI as you read this piece. Now’s the time to cue the scary Foucauldian music: we might think we’re free, but AI, it seems, controls our lives.
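Reduced to a sketch, the handyman’s rig might look something like this. The classifier here is faked with canned labels so the example runs on its own; in the real rig it would be his trained neural net looking at actual camera frames, and the camera and sprinkler pieces are hypothetical placeholders.

```python
# A hypothetical sketch of the garden rig. The classifier is faked with
# canned labels so the sketch runs; in the real setup it would be the
# handyman's trained neural net examining camera frames.

def classify(frame):
    """Stand-in for the trained net: returns a label for a camera frame."""
    return frame["label"]          # a real net would compute this from pixels

def run_sprinkler(seconds):
    print(f"Sprinkler on for {seconds} seconds!")

# Simulated camera frames (in reality, images pulled from the camera feed).
frames = [{"label": "dog"}, {"label": "cat"}, {"label": "squirrel"}]

for frame in frames:
    # The learned classifier, not a hand-written rule, decides what the
    # camera is seeing; the surrounding control logic is ordinary code.
    if classify(frame) == "cat":
        run_sprinkler(seconds=5)
```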

The next step in the development of neural networks will be the networking of different neural nets. Right now, neural nets do one thing at a time (e.g., identify cats), even if they do that thing really well. Connecting many networks together—something many researchers are currently trying to achieve—would allow for even greater advancements. Of course, there are plenty of people who fear this interconnection of neural nets. One of the most famous, billionaire inventor Elon Musk, put the dangers this way: “Let’s say you create a self-improving AI to pick strawberries and it gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever.”5 Given that we don’t yet fully understand how deep learning works (we know that it does and that, mathematically, it should), we should be extremely hesitant to plug AI systems into things that can do real damage, including other deep-learning systems.

Unfortunately, though unsurprisingly, the same problems that have plagued human learning will also plague a system modeled on human learning. Although our data sets may be huge and growing, neural nets depend on humans to identify the content—the particular cat database, for example—that they are fed. This kind of “supervised learning” means that we control what a neural net thinks (e.g., that a cat is a cat and not a dog) and that our prejudices will also be fed into the neural net. Microsoft recently learned this the hard way when Tay, its AI Twitter bot, turned out to be—after less than a day of public training—as crude, snarky, and offensive as . . . well, the public that trained it. (Interestingly, the version released in China ended up a bit more congenial.)
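The point can be made concrete by rerunning the earlier training sketch with the labels flipped, as if a careless or prejudiced curator had marked the data. The features and labels are invented, as before; what matters is that the same learning procedure absorbs whatever the labels assert.

```python
# The same "cat" examples as in the earlier sketch, but re-labeled by an
# unreliable curator. The identical training loop obligingly learns the
# opposite lesson: the net believes its labels, not the world.

examples = [
    ([0.9, 0.8], 0),   # a cat, but labeled "not a cat"
    ([0.8, 0.9], 0),
    ([0.2, 0.1], 1),   # not a cat, but labeled "cat"
    ([0.1, 0.3], 1),
]

weights, bias = [0.0, 0.0], 0.0

def predict(x):
    return 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0

for _ in range(20):
    for x, label in examples:
        error = label - predict(x)
        weights = [w + 0.1 * error * v for w, v in zip(weights, x)]
        bias += 0.1 * error

print([predict(x) for x, _ in examples])  # [0, 0, 1, 1]: it learned the labels
```

The network never sees cats, only labels, so in the end it learns the labeler.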

Or take a far more serious example than cats, bad spelling, and Tay: criminality. If a neural net tasked with identifying possible criminal perpetrators were trained on a set of photographs of individuals currently incarcerated in US prisons, that neural net would be as prejudiced as the people who created, and continue to perpetuate, the racism of the US incarceration system. One would see a replication and a further perpetuation of the systemic injustice that allows our society to so grossly and disproportionately imprison persons of color. The neural net would do this simply because it would know no better.

The larger ethical and theological question at hand, then, is not the vague one about whether AI will become human or even whether AI will be disastrous for humans (e.g., if, after learning that humans could turn it off, some networked neural net preemptively eliminated humans as a threat, and other scary science fiction scenarios). Rather, I find myself perplexed by the moral question of what neural net–based AI, modeled as it is on human intelligence, tells us about what it means to be human.

We tend to think that humans, while materially biological, are special. We think we are made in the image of God and composed of qualities that lend themselves to eternal significance, if not also eternal life. That’s how Christians, at least, have tended to think of these things. But neural nets raise the possibility that rather than being anything special, rather than bearing eternal significance, being eternally destined, or even being primarily biological, we are products of physics—so much matter arranged in so many specific ways. In my (admittedly lay) reading of AI, the most interesting version of this question arises implicitly from the work of Max Tegmark, a physicist who makes a clear and persuasive argument that one can understand AI largely as a pattern of coordinated physical matter.6 Part of what makes Tegmark and others like him so interesting is not their claim that machines can, through AI, someday become human but rather their notion that humans and machines can be imagined from the same baseline. Each is a distinctive organization of matter, and the evolution of life and intelligence can be envisioned as successive generations of material organization.

For materialists like Tegmark, there is no need to make ontological arguments about resemblances between humans and machines. For these engineers of our futures, it simply is the case that each of us is a pile of physical properties. Given the remarkable success of AI’s reverse engineering, going from humans to machines, might the facts be proving them right? Rather than the scary scenarios and the empty propositions, I think the big problem posed by AI is this question it raises about humanness. AI prompts us to ask questions not about machines or robots but about ourselves. Robots might one day start asking themselves questions about their ontological status; maybe they will even be wise enough to consider what asking such questions amounts to. Perhaps we should start first.


  1. There are, of course, other kinds of AI, but neural nets are currently the most widespread and promising.
  2. For a helpful primer on neural nets, consider Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies (Oxford, UK: Oxford University Press, 2016).
  3. In fact, artificial neural nets prove superior to human neural networks in a number of ways. Take, for example, humanity’s tendency to repeat mistakes. These tendencies get trapped in paved neurological pathways, in literal grooves. Machine learning, by contrast, can be corrected through a process known as backpropagation, or “backprop.”
  4. See Wittgenstein, Philosophical Investigations, 2nd ed., trans. G. E. M. Anscombe (Oxford, UK: Basil Blackwell, 1958). In the forty-third remark, Wittgenstein claims, “For a large class of cases—though not for all—in which we employ the word ‘meaning’ it can be defined thus: the meaning of a word is its use in the language” (20). See also Stanley Cavell’s insightful reflections on the complexities involved in delineating use and meaning in The Claim of Reason (Oxford, UK: Oxford University Press, 1999).
  5. Consider Maureen Dowd, “Elon Musk’s Billion-Dollar Crusade to Stop the AI Apocalypse,” Vanity Fair, March 26, 2017, https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x.
  6. See Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (New York, NY: Knopf, 2017).