Starting with Cathy O'Neil's Weapons of Math Destruction, there's been an onslaught of books sounding the alarm about the use of algorithms in daily life. My Amazon list that collects these together is even called 'Woke CS'. These are all excellent books, calling out the racial, gender, and class inequalities that algorithmic decision-making can and does exacerbate, and the role of Silicon Valley in perpetuating these biases.
Hannah Fry's new book "Hello World" is not in this category. Not exactly, anyway. Her take is informative as well as cautionary. Her book is as much an explainer of how algorithms get used in contexts ranging from justice to medicine to art as it is a reflection on what this algorithmically enabled world will look like from a human perspective.
And in that sense it's a far more optimistic take on our current moment than I've read in a long time. In a way it's a relief: I've been mired for so long in the trenches of bias and discrimination, looking at the depressing and horrific ways in which algorithms are used as tools of oppression, that it can be hard to remember that I'm a computer scientist for a reason: I actually do marvel at and love the idea of computation as a metaphor, as a tool, and ultimately as a way to (dare I say it) do good in the world.
The book is structured around concepts (power, data) and domains (justice, medicine, cars, crime and art). After an initial explainer on how algorithms function (and also how models are trained using machine learning), and how data is used to fuel these algorithms, she very quickly gets into specific case studies of both the good and the bad in algorithmically mediated decision making. Many of the case studies are from the UK and were unknown to me before this book. I quite liked that: it's easy to focus solely on examples in the US, but the use (and misuse) of algorithms is global (Vidushi Marda's article on AI policy in India has similar locally-sourced examples).
If you're a layman looking to get a general sense of how algorithms tend to show up in decision-making systems, how they hold out hope for a better way of solving problems, and where they might go wrong, this is a great book. It uses a minimum of jargon, while still being willing to wade into the muck of false positives and false negatives in a very nice illustrative example in the section on recidivism prediction and COMPAS, and also attempting to welcome the reader into the "Church of Bayes".
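(As a quick illustration of the kind of base-rate arithmetic the book walks the reader through, here's a toy calculation of my own devising; the numbers are invented for the sketch, not the book's or COMPAS's actual statistics. The point is only that a tool can have respectable-looking error rates and still flag a lot of people who would never have reoffended.)

# A minimal sketch (my own toy numbers, not the book's COMPAS example) of why
# false positives and base rates matter in recidivism-style prediction.
base_rate = 0.30             # assumed fraction of defendants who would reoffend
sensitivity = 0.70           # P(flagged high risk | would reoffend)
false_positive_rate = 0.20   # P(flagged high risk | would not reoffend)

# Bayes' rule: P(reoffend | flagged) = P(flagged | reoffend) * P(reoffend) / P(flagged)
p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_reoffend_given_flagged = sensitivity * base_rate / p_flagged

print(f"P(flagged high risk) = {p_flagged:.2f}")
print(f"P(reoffend | flagged high risk) = {p_reoffend_given_flagged:.2f}")
# With these numbers: 0.21 / 0.35 = 0.60, i.e. roughly 4 in 10 people flagged
# "high risk" would not have reoffended, despite a seemingly decent tool.

(Drop the assumed base rate to 10% and the same tool's "high risk" flag is wrong more often than it is right; that, in one line, is the sermon from the Church of Bayes.)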
If you're a researcher in algorithmic fairness, like me, you start seeing the deeper references as well. Dr. Fry alludes to many of the larger governance issues around algorithmic decision making that we're wrestling with now in the FAT* community. Are there better ways to integrate automated and human decision-making that take advantage of what each is good at? What happens when the systems we build start to change the world around them? Who gets to decide (and how) what level of error in a system is tolerable, and who might be affected by it? As a researcher, I wish she had called out these issues a little more explicitly, and there are places where issues she raises in the book have actually been addressed (and in some cases, answered) by researchers.
While the book covers a number of different areas where algorithms might be taking hold, it takes very different perspectives on the appropriateness of algorithmic decision-making in these domains. Dr. Fry is very clear (and rightly so) that criminal justice is one place where we need very strong checks and balances before we can countenance the use of any kind of algorithmic decision-making. But I feel that maybe she's letting the medical profession off a little easy in the chapter on medicine. While I agree that biology is complex enough that ML assistance might lead us to amazing new discoveries, I think some caution is needed, especially since there's ample evidence that the benefits of AI in medicine might only accrue to the (mostly white) populations that dominate clinical trials.
Similarly, the discussion of creativity in art and what it means for an algorithm to be creative is fascinating. The argument Dr. Fry arrives at is that art is fundamentally human in how it exists in transmission -- from artist to audience -- and that art cannot be arrived at "by accident" via data science. It's a bold claim, and of a piece with many claims about the essential humanness of certain activities that have since been pulverized by advances in AI. Nevertheless, I find it very appealing to posit that art is, by definition, an essentially human endeavour.
But why not extend the same courtesy to the understanding of human behavior or biology? Algorithms in criminal justice are predicated on the belief that we can predict human behavior and how our interventions might change it. We expect that algorithms can pierce the mysterious veil of biology, revealing secrets about how our bodies work. And yet the book argues not that these systems are fundamentally flawed, but that precisely because of their effectiveness they need governance. I for one am a lot more skeptical about the basic premise that algorithms can predict behavior to any useful degree beyond the aggregate (and perhaps Hari Seldon might agree with me).
Separately, I found it not a little ironic, in a time when Facebook is constantly being yanked before the US Congress, Cambridge Analytica might have swayed US elections and Brexit votes, and YouTube is a dumpster fire of extreme recommendations, that I'd read a line like "Similarity works perfectly well for recommendation engines" in the context of computer-generated art.
The book arrives at a conclusion that I feel is JUST RIGHT. To wit, algorithms are not authorities, and we should be skeptical of how they work. And even when they might work, the issues of governance around them are formidable. But we should not run away from the potential of algorithms to truly help us, and we should be trying to frame the problem away from the binary of "algorithms good, humans bad" or "humans good, algorithms bad" and towards a deeper investigation of how human and machine can work together. I cannot read

"Imagine that, rather than exclusively focusing our attention on designing our algorithms to adhere to some impossible standard of perfect fairness, we instead designed them to facilitate redress when they inevitably erred; that we put as much time and effort into ensuring that automatic systems were as easy to challenge as they are to implement."

without wanting to stand up and shout "HUZZAH!!!". (To be honest, I could quote the entire conclusions chapter here and I'd still be shouting "HUZZAH".)
It's a good book. Go out and buy it - you won't regret it.
This review refers to an advance copy of the book, not the released hardcover. The advance copy had a glitch where a fragment of LaTeX math remained uncompiled. This only made me happier to read it.