The Danger of Technological Arrogance

Technologists have never been particularly good at considering the ramifications of their work beyond the outcomes that are intended and beneficial. Only the rare technologist is inclined to step back from the work and think deeply about unintended and harmful consequences. Technologists want to feel good about their work, and they want to rock the world with their achievements as quickly as possible. Taking time to consider the larger implications of that work is annoyingly inconvenient. This myopic perspective is dangerous.

This is especially true of AI research and development. Assuming that general intelligence in a machine can be achieved, the risk of a superintelligent AI departing from the interests of humanity to pursue interests of its own is all too real. No one who is informed about AI can responsibly ignore these risks or claim that they don’t exist.

With this in mind, I grew concerned while reading a thoughtful article by Cade Metz, published in the New York Times on June 9, 2018, titled “Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots.” At an exclusive conference organized by Jeff Bezos in March of this year, Rodney Brooks, an MIT roboticist, debated the potential dangers of AI with the neuroscientist and philosopher Sam Harris. When Harris warned that, because the world was in an arms race toward AI, researchers might not have the time needed to ensure that superintelligence is built in a safe way, Brooks responded, “This is something you have made up,” implying that Harris’s argument was not based on evidence and reason. Spurred on by Brooks, Oren Etzioni, who leads the Allen Institute for Artificial Intelligence, laid into Harris as well. He argued:

Today’s AI systems are so limited, spending so much time worrying about superintelligence just doesn’t make sense. The people who take Mr. Musk’s side are philosophers, social scientists, writers — not the researchers who are working on AI.

Etzioni further claimed that worrying about superintelligence “is very much a fringe argument.” Anyone familiar with this debate, however, knows that concerns about AI do not reside only on the fringes and are certainly not “made up.” Etzioni cannot actually believe this without a great deal of self-imposed, head-in-the-sand delusion.

What especially concerned me about Etzioni’s position was the insinuation that only AI researchers are qualified to have informed opinions on the matter. This is the height of technological arrogance. As a longtime information technologist myself, I have always been annoyed by my colleagues’ tendency to see themselves as the smartest guys in the room. The intelligence of technologists is no greater on average than that of philosophers, social scientists, and writers. In fact, it’s quite possible that a comparison of intelligence, if such a thing were feasible, would relegate technologists to an inferior position. In my experience, philosophers, social scientists, and writers tend to think more broadly and with greater nuance than most technologists. Ethical considerations are more central to their training and therefore to their thinking. They are much more likely than technologists themselves to consider the potential threats of technologies.

We cannot leave decisions about the future of AI solely in the hands of AI researchers and developers any more than we can leave decisions about the protection and use of our personal data in the hands of social media executives and their employees. Recent findings regarding Facebook have underscored this fact. Despite their well-publicized rhetoric about “making the world a better place,” people who run technology companies are motivated by personal interests that bias their perspectives. They want to believe in the inherent good of their creations. The technologies that we create and the ways that we design them must be subject to thoughtful discussion that extends well beyond technologists. Technologies affect us all, and not always for good.