‘Phrenology’ has an old-fashioned ring to it. It sounds like it belongs in a history book, filed somewhere between bloodletting and velocipedes. We’d like to think that judging people’s worth based on the size and shape of their skull is a practice that’s well behind us. However, phrenology is once again rearing its lumpy head.
In recent years, machine-learning algorithms have promised governments and private companies the power to glean all sorts of information from people’s appearance. Several startups now claim to be able to use artificial intelligence (AI) to help employers detect the personality traits of job candidates based on their facial expressions. In China, the government has pioneered the use of surveillance cameras that identify and track ethnic minorities. Meanwhile, reports have emerged of schools installing camera systems that automatically sanction children for not paying attention, based on facial movements and microexpressions such as eyebrow twitches.
Perhaps most notoriously, a few years ago, AI researchers Xiaolin Wu and Xi Zhang claimed to have trained an algorithm to identify criminals based on the shape of their faces, with an accuracy of 89.5 per cent. They didn’t go so far as to endorse some of the ideas about physiognomy and character that circulated in the 19th century, notably from the work of the Italian criminologist Cesare Lombroso: that criminals are underevolved, subhuman beasts, recognisable from their sloping foreheads and hawk-like noses. However, the recent study’s seemingly high-tech attempt to pick out facial features associated with criminality borrows directly from the ‘photographic composite method’ developed by the Victorian jack-of-all-trades Francis Galton – which involved overlaying the faces of multiple people in a certain category to find the features indicative of qualities like health, disease, beauty and criminality.
Technology commentators have panned these facial-recognition technologies as ‘literal phrenology’; they’ve also linked them to eugenics, the pseudoscience of improving the human race by encouraging people deemed the fittest to reproduce. (Galton himself coined the term ‘eugenics’, describing it in 1883 as ‘all influences that tend in however remote a degree to give to the more suitable races or strains of blood a better chance of prevailing speedily over the less suitable than they otherwise would have had’.)
In some cases, the explicit goal of these technologies is to deny opportunities to those deemed unfit; in others, it might not be the goal, but it’s a predictable result. Yet when we dismiss algorithms by labelling them as phrenology, what exactly is the problem we’re trying to point out? Are we saying that these methods are scientifically flawed and that they don’t really work – or are we saying that it’s morally wrong to use them regardless?