This weekend marks the 100th anniversary of British mathematician Alan Turing’s birth. In celebration of his enormous contributions to the fields of mathematics, computational science, cryptology, and artificial intelligence, the scientific community has dubbed 2012 the “Alan Turing Year”, commemorating the occasion with numerous conferences, museum exhibitions, a series of articles on his life in the Guardian and on the BBC, a Google doodle, and even a functional model of his famous Turing Machine built from Lego. By his mid-20s, Turing had developed his theory of the “Universal Machine”, ushering in the age of modern computer science. A decade later, he turned his cryptological expertise toward cracking the German naval Enigma code. The machines he developed, known as “bombes”, decrypted the messages the Nazis relayed to their U-boats, and the intelligence they yielded reshaped World War II. Historians have argued that cracking the Nazi code shortened the war by two years and saved millions of lives.
Such accolades, coming 58 years after his death, attest not only to his importance as a historical figure, but also to how his ideas continue to influence contemporary research and debate on computer science in our increasingly digitized society. Dubbed the “Father of Artificial Intelligence”, Turing foresaw in his 1950 article “Computing Machinery and Intelligence” how rapid advances in information science would produce a future in which the line between human intelligence and artificial intelligence would become blurred. Asking, “Can machines think?”, Turing postulated that the true mark of artificial intelligence would ultimately be whether or not one could tell the difference between communicating with a human and communicating with a machine. Turing’s standard for evaluating artificial intelligence has not only framed the scholarly and ethical debate in the scientific community for the past six decades; it has also proven to be a prophecy of daily life in the 21st century. Living amongst automated phone banks, internet chatbots, GPS navigators, and Apple’s Siri app, we find that everyday life has become a series of Turing tests as we increasingly rely upon forms of artificial intelligence and speak to them as if they were real.
Yet less emphasis has been placed on the tragedy of his untimely death. In 1952, Turing was arrested and convicted of gross indecency for a consensual sexual relationship with another man, under the same 1885 statute that had sent Oscar Wilde to prison more than half a century earlier. Instead of serving prison time, Turing chose to undergo an experimental hormonal treatment prescribed by the British government. While this chemical castration via a synthetic oestrogen hormone curbed his sex drive, it had dire side effects: Turing began to grow breasts and developed a deep depression. His conviction also cost him his security clearance, barring him from continuing to work with the British intelligence agencies. The man who did as much from inside a laboratory to defeat the Nazis as any general did on the battlefield was now considered a threat to national security solely by virtue of his sexuality. Two years later, on June 7th, 1954, Turing took a few bites from a cyanide-laced apple, an elaborate end designed to let his mother believe that his suicide was actually an accident caused by careless storage of laboratory chemicals. In 2009, British Prime Minister Gordon Brown issued an official apology for Turing’s “appalling” treatment, but a 2011 petition to pardon Turing’s conviction was officially denied by the British Government.