The Myth of Technology’s Neutrality

In a recent op-ed in the New York Times, tech billionaire Peter Thiel said this about artificial intelligence (AI):

The first users of the machine learning tools being created today will be generals…AI is a military technology.

Thiel’s opinion has created a backlash among some of AI’s proponents who don’t want their baby to be cast in this light. There is no doubt, however, that AI will be used for military purposes. It’s already happening. But that’s not the topic I want to address in this blog post. Instead, I want to warn against a dangerous belief that is prevalent among many digital technologists. In response to Thiel’s op-ed, Dawn Song, a computer science professor at the University of California, Berkeley, who works in the Berkeley Artificial Intelligence Research Lab, told Business Insider,

I don’t think we can say AI is a military technology. AI, machine learning technology is just like any other technologies. Technology itself is neutral.

According to Business Insider, Song went on to say that, just like nuclear or security encryption technologies, AI can be used in either good ways or bad. Basing the claim that technology is neutral on the fact that it can be used in both good and bad ways is a logical error: almost anything can be used in both good and bad ways, but that doesn’t make everything neutral. Digital technologies in particular are never neutral, because they are created by people, and people are never neutral. Digital technologies consist of programs written by people whose assumptions, interests, biases, beliefs, perspectives, and agendas become embedded in those programs.

Not only are digital technologies not neutral, they shouldn’t be. They should be designed to do good and to prevent harm. If the potential for harm exists, technologies should be designed to prevent it. If the potential for harm is great and cannot be prevented, the technology should not be developed. That’s right: creating something that does great harm is immoral. This is especially true of AI, whose potential for harm is enormous. If general AI, a truly sentient machine, were ever developed, that machine would not only exhibit the non-neutral objectives that were programmed into it; it would soon develop its own interests and objectives, which might be quite different from those of humanity. At that point, we would be faced with a silicon-based competitor that could work at speeds that would leave us in the dust. Our puny interests probably wouldn’t count for much. Could we create a superintelligent AI that would respect our interests? At this point, we don’t know.

Fortunately, some of the people positioned at the forefront of AI research recognize its great potential for harm and are working fervently and thoughtfully to prevent it. They are painfully aware that this might not be possible. Unfortunately, however, there are probably even more people working in AI who exhibit the same naivete as Dawn Song. Believing that AI is neutral is a convenient way of relinquishing responsibility for the results of their work. Look at the many ways that digital technologies are being used for harm today and ask yourself: was this the result of neutrality? No, those behaviors were either intentionally designed into the products and services or were the result of negligence. There is also a great risk that harmful behaviors will develop within AI that are neither anticipated nor intended.

The claim that digital technologies in general and AI in particular are neutral should concern us. Technologies are human creations, and we must take responsibility for them. The cost of not taking responsibility is too high. Sometimes this means that we must prevent particular technologies from ever being developed. Whether or not this is true of general AI has yet to be determined. While the jury is still out, I’d like the jury to be composed of people who are working hard to understand the costs and to take them seriously, not people who naively believe in technology’s neutrality.
