There is an amusing and slightly acerbic acronym that has stuck with me from my days working at a computer helpdesk for an international oil firm: PICNIC. Short for “problem in chair, not in computer”, my colleagues used it as code whenever an employee rocked up at our helpdesk with a complaint or problem that was due to human clumsiness rather than malfunctioning hardware. “Did you check that the printer was plugged into the power socket?”
Nevertheless, says Artificial Intelligence (AI) researcher Robert Elliott Smith, our blind faith in computers and the algorithms that run them is misguided. Based on his 30 years' experience working with AI, the aptly titled Rage Inside the Machine takes the reader on a historical tour of computing to show how today's technology is both less amoral and more prejudiced than we give it credit for.
At their silicon hearts, computers are just big number crunchers. This has led to the tacit assumption that, unlike humans, computers are rational machines that cannot possibly be biased. But this, says Smith, is a mistake. The theories and findings that gave rise to today's algorithms go back several centuries and are products of their times, a historical context that is often ignored in contemporary discussions. A large part of Rage Inside the Machine, therefore, is a trip down memory lane.
The first historical vignette goes back all the way to 1290, when Christian scholar Ramon Llull tried to make a mechanical device that would give irrefutable proof that Christianity was the one true faith. Goofy as this may now sound, it did lead him to write about the mathematical subdiscipline of combinatorics, which in turn influenced scholars and philosophers centuries down the line. Combinatorics is quite simply the study of how many possible combinations you can make with a given number of component parts. What it reveals about reality is that extremely complex problems – ones that are easy to describe but hard to solve – are surprisingly common. The travelling salesman problem is probably the best-known example of a problem in which the number of possible solutions rapidly balloons (see In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation). Thus, to provide us with answers, algorithms simplify real-world processes and are by their very nature reductionist.
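Smith keeps the book largely equation-free, but the ballooning he describes is easy to see for yourself. A few lines of Python (the standard (n−1)!/2 count of distinct round trips between n cities, not a formula from the book) show how quickly exhaustive search becomes hopeless:

```python
import math

def tsp_routes(n_cities: int) -> int:
    """Number of distinct round-trip routes through n cities.

    Fix the starting city and ignore the direction of travel:
    (n - 1)! / 2 possible tours remain.
    """
    return math.factorial(n_cities - 1) // 2

for n in (5, 10, 15, 20):
    print(f"{n} cities: {tsp_routes(n):,} routes")
# 5 cities give a modest 12 routes; 20 cities already give
# over 60 quadrillion, far too many to check one by one.
```

Five extra cities at each step multiply the search space by many thousands, which is why practical algorithms settle for good-enough approximations rather than the guaranteed best route.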
With that firmly in mind, Smith proceeds to look at the historical antecedents of various facets underpinning AI, with the occasional foray into equations and Venn diagrams. He thus discusses the history of probability theory, the mathematical framework for reasoning about uncertainty that underpins modern statistical analysis of complex data. And he shows how Darwin's theory of evolution by natural selection was rapidly appropriated and applied to social contexts. When combined with Carl Friedrich Gauss's concept of the bell curve (a graph that shows the dispersion of data either side of an average value), it was used as justification for eugenic practices aimed at the betterment of the human race by eliminating statistical outliers.
“[…] to provide us with answers, algorithms simplify real-world processes and are by their very nature reductionist.”
One of the most notorious tools that came out of this form of social Darwinism, which is still with us today, is the intelligence quotient (IQ) test. It has been used to prop up racism and sexism for decades (see also my review of Superior: The Return of Race Science). Less well known is that other statistical tools had similarly insalubrious origins, with links to both eugenics and mental asylums (the name Karl Pearson might ring a bell from your statistics classes, see also my review of Genetics in the Madhouse: The Unknown History of Human Heredity).
The current fears that AI will soon make large swathes of humanity unemployable (see The Future of the Professions: How Technology Will Transform the Work of Human Experts and CGP Grey's excellent video Humans Need Not Apply) are an echo of what happened when the Industrial Revolution replaced the cottage industry with factories (see also my review of The Technology Trap: Capital, Labor, and Power in the Age of Automation). In turn, the notion that you can increase efficiency by dividing labour influenced how humans did complex computations before technology could help out – it led to groups of skilled people in computing factories breaking down the task into bite-sized chunks (see e.g. my review of The Weather Machine: How We See Into the Future).
“The current fears that AI will soon make large swathes of humanity unemployable […] are an echo of what happened when the Industrial Revolution replaced the cottage industry with factories”
Other important figures in Smith's story are Alan Turing, Claude Shannon, and Noam Chomsky. Turing, and the test named after him, gave rise to the idea that the brain is just a computer (see Turing: Pioneer of the Information Age). Shannon's information theory underlies all electronic communication today. And Chomsky studied human language, particularly its syntax, and his contributions remain relevant to the ongoing struggle of algorithms to really understand human language with all its subtleties. Ironically, much discussion of AI is muddled by the language we use to describe what algorithms are doing, resulting in wishful mnemonics: the naming of computational phenomena with words denoting human characteristics and capabilities. “Does Google's AlphaGo programme really intuitively decide on its next move when playing Go?”, asks Smith.
Throughout his book, Smith links the historical material back to current concerns around AI. One of the take-away messages that he repeatedly hammers home is that the assumptions and simplifications we have built into our algorithms are wedded to historical prejudices and baggage. And by their very nature as relentless optimisers, algorithms will reinforce these and feed them back to us, as examples of racist and misogynist AI bloopers show.
Still, as the book progresses, I increasingly felt Smith went a bit off-script. I think the book's subtitle initially put me on a wrong footing and led me to expect more social commentary and less history. There were certain points in the book where I wondered “what does this have to do with current concerns about social networks?” One example is when he writes of his work for aerospace company McDonnell Douglas. Here he trained genetic algorithms, which mimic evolution to find better solutions, to learn the fighter jet manoeuvres of top-gun pilots. Though, to his credit, those same genetic algorithms can help us understand how social network architecture leads to the self-reinforcing filter bubbles that have become a grave concern (see The Filter Bubble: What The Internet Is Hiding From You, but also the critique Are Filter Bubbles Real?).
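For readers who have never met a genetic algorithm, the core loop of selection, crossover, and mutation is surprisingly small. The following is a toy sketch of that loop on a trivially simple objective (maximising the number of 1-bits in a genome), not Smith's actual fighter-jet work; every name and parameter here is my own illustration:

```python
import random

def fitness(genome):
    """Toy objective ("one-max"): count the 1-bits in the genome."""
    return sum(genome)

def evolve(length=20, pop_size=30, generations=200, mutation_rate=0.02, seed=1):
    rng = random.Random(seed)
    # Start from a random population of bit-string genomes.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:            # perfect genome found
            break
        parents = pop[: pop_size // 2]           # selection: keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            # Mutation: flip each bit with a small probability.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The "relentless optimiser" character Smith warns about is visible even here: the loop keeps only whatever scores highest, so any bias baked into the fitness function is amplified generation after generation.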
“While Silicon Valley is awash in dreams of the coming Singularity […] Smith sees a far more immediate problem in the unholy trinity of scientism, computation, and commercialism.”
Together with books such as The AI Delusion and Rebooting AI: Building Artificial Intelligence We Can Trust, Smith clearly falls in Camp Cautious. While Silicon Valley is awash in dreams of the coming Singularity, when AI will eclipse human intelligence, Smith argues that beyond computers having become more numerous, powerful, and connected, not much has changed. I would add that the maxim “garbage in, garbage out” still stands firmly. Rather than the future existential threat of AI that some fear (see Human Compatible: Artificial Intelligence and the Problem of Control and Superintelligence: Paths, Dangers, Strategies), Smith sees a far more immediate problem in what he calls the unholy trinity of scientism, computation, and commercialism. We obliviously trust the powerful algorithms employed by large firms such as Facebook that have penetrated every nook and cranny of our everyday lives. And it is easy to forget they have but one objective: maximize profit. And that, argues Smith, is far more dangerous to humankind than nightmarish visions of the robot apocalypse.
So, how can we stop the internet making bigots of us all? Smith is not outspokenly prescriptive, though his work on evolutionary algorithms suggests we can create a different kind of beast, a breed of diversity-preserving algorithms rather than the relentless optimizers underlying current online social networks. Instead, the goal of this book is foremost to educate readers, to arm them with a better understanding of how algorithms work by simplifying reality, and to raise awareness of how their inner workings betray the past prejudices that are still baked into them. To that end, Smith presents a very pleasant and accessible mix of revealing history, personal anecdotes, and sharp observations.
Disclosure: The publisher provided a review copy of this book. The opinion expressed here is my own, however.