“Colossus: The Forbin Project”, like so many other tales of cautionary science fiction, is the story of man’s good intentions for technology gone awry. In the film, Dr. Forbin’s supercomputer Colossus is installed as a weapons system for the entire United States. Soon after, humans lose control of the machine and inevitably become enslaved to it.
This type of technological Frankenstein story is repeated constantly in science fiction. It seems that as humans we are simultaneously terrified and in awe of artificial intelligence. We are unable to decide whether computers are our benevolent friends or dangerous enemies. But as Jaron Lanier argues in his article “The First Church of Robotics”, perhaps we should see them as neither. The problem is not the sentient nature of artificial intelligence, but rather our obsession with the personification of technology. As Lanier says, “Humans are social creatures, so if a machine is presented in a social way, people will adapt to it.” Herein lies the dilemma: we feel the need to personify, and yet upon doing so we often find we fear what we have created, or at the very least the prospect of what it may become. Yet what is the purpose of artificial intelligence but to personify an otherwise inanimate object?
Of course, it may be that, just as the President does with Colossus, we contradict ourselves by passing responsibility on to our machines while still claiming control over them. Lanier writes, “…artificial intelligence gives us the cover to avoid accountability by pretending that machines can take on more human responsibility.” It is humans who program computers with algorithms, which are then designed to make decisions for us. Therefore, humans are ultimately responsible for any conclusions these algorithms reach. This is not to say that computers and technology are not useful (of course we can all agree that they are), but rather that machines are simply tools and not beings. Technology is at our service, and we must use it for good.