I read an article the other week in which the author argued that within about 50 years, we will have an artificial intelligence as president. This is simply an extreme form of an increasingly common way of thinking. It goes like this:
These types of decisions will eventually be made much better by an AI than by humans, because AI is developing at such a fast rate that just about every task that requires human intuition and intelligence will be solvable by AI….
So if you have AI that is better at, say, economic planning than any human, which would you sooner have in charge of your country’s economy: a human or a superior AI?
Aside from the fact that in a free democracy it is emphatically not the task of the president to plan the economy (which gives us one additional reason not to trust those who say AI can do everything better: often they don’t even understand basic principles of freedom, economics, and philosophy), there are at least three major flaws in this line of thinking. There are more, but I’ll keep this post to three.
1. Not All Problems Have One Best Answer
First, this is classic “one best way” thinking. For some decisions, there is indeed only one best way. But that is only one kind of decision. In many cases, there are multiple good paths to the destination (and multiple choices of destination). These are called design problems (as opposed to engineering problems). In design problems, there is more than one legitimate path. It is up to us to use our judgment, intuition, and preferences to determine which path we want to create.
The notion that AI will take over all jobs (including that of president) because of its superiority assumes this “one best way” thinking. It assumes that for almost every decision there is one optimal approach, and that since computers have such immense processing power, they will soon be able to figure out that approach better than we can.
But what if, for many decisions, there is not just one best course? This brings us into the realm of art, emotion, beauty, and freedom — some of the greatest things about work and the world. If in most cases there is not just one best decision to make, then there will always be a definitive place for human beings, no matter how powerful computers become. The question is not “What is the one ‘best’ way to do this?” but rather “What do we want to do? What seems great and most interesting, and reflects our values and style in the best way? What do we care most about? What do we believe?”
2. Human Participation Is Part of the End Goal
Which leads to the second point: the thinking that if AI is more efficient and smarter, it should therefore do everything fails to understand one of God’s ultimate purposes in creation — namely, human participation. Consider: God himself is smarter than any human or any computer that ever will or could be. Yet he does not make all decisions for us. He doesn’t say “just sit back and watch — I can do this better.” Instead, he gives us a role — that is part of his very purpose in creation (Genesis 1:28).
Why does he do this? Because his goal is to have a people like Christ — which means a people who are wise, capable of making their own decisions, and able to play a part in charting their course in life and human society. God cares about the development of the individual. He’s not just after “the right” decisions (though sometimes, of course, there is a right decision and it does matter). He is after mature individuals who are capable of working with him and playing a part in shaping their own destiny. If computers end up doing everything because they can “do it better,” then we are missing one of the key purposes of life altogether: namely, that we play a part in things rather than outsource our decision-making.
A world where humans have a part in shaping their work, their lives, and society is better than a world where all of those decisions are made for us, because part of the end goal itself is our act of making those decisions. In other words, the act of decision-making is meaningful in itself, and not merely a means to a destination that could be reached some other way.
In contrast, a society where AI makes all the decisions is a society where humans have, by definition, become slaves. We would no longer be a free people, but a people ruled by another entity — justified, as such rule always has been, by the notion that this other entity can “do it better,” all the while failing to realize that doing things ourselves, even with mistakes, ought to be an essential part of what we mean by “better” in the first place.
3. The Logic of AI Supremacy Leads to Nihilism
Also consider: if we were to follow the logic all the way — that computers should always take over a task they are better at (to do this we have to forget point one, of course, but bear with me) — then what’s left for people to do? Just watch. Don’t be a painter — computers can do it better. Just go to the museum and look at the paintings robots created. Don’t direct a movie — robots can do it better. Just go watch the movies that robots create. Don’t be a teacher — just let a computer load Wikipedia into its memory and teach students for you. Oh, wait, don’t be a student either — computers can do that better also.
This notion misses the fact that creating things is itself part of the fun. The point is not to create perfect movies, or perfect art, or perfect classes, or perfect investment decisions. The point is to have a part to play in the running of the world and the doing of these things, which is ultimately how God glorifies himself in the world. If all that were left for us to do was watch and follow in a society led by computers, with computers doing all of the work, we would become diminished, atrophied human beings. In that case, could we really say that the computers running everything are making the best decisions? Perhaps they forgot to make a decision about the most important question of all: who makes the decisions.
Even more, if all that were left for us to do is watch, why not outsource that as well? Can’t AI do that better, too? The notion that “AI does it better, so it should do it” ends up undermining all of human life. In other words, it ends in pure nihilism.
*Note: Some readers might wonder how I can say that God has given us a part to play in shaping our destiny when I believe in the absolute sovereignty of God over all things. The answer is the historic Christian doctrine of compatibilism: God does indeed determine all things, and at the same time humans make real decisions and are responsible for their actions. And in making our decisions, we don’t try to find out what God has decreed, but use our judgment in alignment with Scripture. God does not whisper the answer to us, but expects us to use wisdom.
**A funny side note: As additional proof of our sometimes inflated evaluation of AI, autocorrect changed “compatibilism” in the above paragraph to “compatibility” without my permission. Come on, autocorrect. There is no theological doctrine called “compatibility.” We’ve had enough of the vandalism you bring to our sentences in the name of knowing the English language better than real people do.