Tuesday, February 19, 2019

OpenAI, AI threats, and norm-building for responsible (data) science

All of twitter is .... atwitter?... over the OpenAI announcement and partial non-release of code/documentation for a language model that purports to generate realistic-sounding text from simple prompts. The system actually addresses many NLP tasks, but the one that's drawing the most attention is the deepfakes-like generation of plausible news copy (here's one sample).

Most consternation is over the rapid PR buzz around the announcement, including somewhat breathless headlines (that OpenAI is not responsible for) like

OpenAI built a text generator so good, it’s considered too dangerous to release
or
Researchers, scared by their own work, hold back “deepfakes for text” AI

There are concerns that OpenAI is overhyping solid but incremental work, that they're disingenuously allowing for overhyped coverage in the way they released the information, or, worse, that they're deliberately controlling the hype as a publicity stunt.

I have nothing useful to add to the discussion above: indeed, see posts by Anima Anandkumar, Rob Munro, Zachary Lipton, and Ryan Lowe for a comprehensive discussion of the issues relating to OpenAI. Jack Clark from OpenAI has been engaging in a lot of twitter discussion on this as well.

But what I do want to talk about are the larger issues around responsible science that this kerfuffle brings up, with the caveat that Margaret Mitchell lays out in this searing thread.

To understand the kind of "norm-building" that needs to happen here, let's look at two related domains.

In computer security, there's a fairly well-established model for finding weaknesses in systems. An exploit is discovered, the vulnerable entity is given a chance to fix it, and then the exploit is revealed, often simultaneously with patches that rectify it. Sometimes the vulnerability isn't easily fixed (see Meltdown and Spectre). But it's still announced.

A defining characteristic of security exploits is that they are targeted, specific and usually suggest a direct patch. The harms might be theoretical, but are still considered with as much seriousness as the exploit warrants.

Let's switch to a different domain: biology. Starting from the sequencing of the human genome through the million-person precision medicine project to CRISPR and cloning babies, genetic manipulation has provided both invaluable technology for curing disease as well as grave ethical concerns about misuse of the technology. And professional organizations as well as the NIH have (sometimes slowly) risen to the challenge of articulating norms around the use and misuse of such technology.

Here, the harms are often more diffuse and harder to separate from the benefits. But the harm articulation is often focused on the individual patient, especially given the shadow of abuse that darkens the history of medicine.

The harms with various forms of AI/ML technology are myriad and diffuse. They can cause structural damage to society - in the concerns over bias, the ways in which automation affects labor, the way in which fake news can erode trust and a common frame of truth, and so many others - and they can cause direct harm to individuals. And the scale at which these harms can happen is immense.

So where are the professional groups, the experts in thinking about the risks of democratization of ML, and all the folks concerned about the harms associated with AI tech? Why don't we have the equivalent of the Asilomar conference on recombinant DNA?

I appreciate that OpenAI has at least raised the issue of thinking through the ethical ramifications of releasing technology. But as the furore over their decision has shown, no single imperfect actor can really claim to be setting the guidelines for ethical technology release, and "starting the conversation" doesn't count when (again as Margaret Mitchell points out) these kinds of discussions have been going on in different settings for many years already.

Ryan Lowe suggests workshops at major machine learning conferences. That's not a bad idea. But it will attract the people who go to machine learning conferences. It won't bring in the journalists, the people getting SWAT'd (and in one case killed) by fake news, the women being harassed by trolls online with deep-fake porn images.

News is driven by news cycles. Maybe OpenAI's announcement will lead to us thinking more about issues of responsible data science. But let's not pretend these are new, or haven't been studied for a long time, or need to have a discussion "started".


Monday, January 28, 2019

FAT* Session 2: Systems and Measurement.

Building systems that have fairness properties and monitoring systems that do A/B testing on us.

Session 2 of FAT*: my opinionated summary.

Sunday, January 27, 2019

FAT* blogging

I'll be blogging about each session of papers from the FAT* Conference. So as not to clutter your feed, the posts will be housed at the fairness blog that I co-write along with Sorelle Friedler and Carlos Scheidegger.

The first post is on Session 1: Framing and Abstraction.

Thursday, December 20, 2018

The theoryCS blog aggregator REBORN

(will all those absent today please email me)

(if you can't hear me in the back, raise your hand)

The theoryCS blog aggregator is back up and running at its new location -- cstheory-feed.org -- which of course you can't know unless you're subscribed to the new feed, which....

More seriously, we've announced this on the cstheory twitter feed as well, so feel free to repost this and spread the word, so that all the theorists living in caves plotting their ICML, COLT and ICALP submissions hear about it.

Who's this royal "we"? Arnab Bhattacharyya and myself (well mostly Arnab :)). 

For anyone interested in the arcana of how the sausage (SoCG?) gets made, read on: 

Arvind Narayanan had set up an aggregator based on the Planet Venus software for feed aggregation (itself based on Python packages for parsing feeds). The two-step process for publishing the aggregator works as follows:
  1. Run the software to generate the list of feed items and associated pages from a configuration file containing the list of blogs
  2. Push all the generated content to the hosting server. 
Right now, both Arnab and I have git access to the software and config files, and either of us can edit the config to add or update blogs. The generator is run once an hour and the results are pushed to the new server.
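
For anyone curious what that hourly run can look like in practice, here's a minimal sketch in Python. The paths, config filename, and server address are placeholders made up for illustration, not the actual setup; the real invocation is whatever Arvind's original scripts specify.

    #!/usr/bin/env python
    # A rough sketch of the hourly publish step: run Planet Venus, then push
    # the generated pages to the hosting server. Paths, config filename, and
    # server address below are hypothetical placeholders.
    import subprocess

    VENUS_DIR = "/home/aggregator/venus"          # checkout of the Planet Venus software
    CONFIG = "cstheory.ini"                       # config file listing the blog feeds
    OUTPUT_DIR = "/home/aggregator/venus/output"  # where Venus writes the feed and pages
    REMOTE = "user@cstheory-feed.org:/var/www/cstheory-feed.org/"

    def publish():
        # Step 1: regenerate the list of feed items and associated pages from the config.
        subprocess.check_call(["python", "planet.py", CONFIG], cwd=VENUS_DIR)
        # Step 2: push the generated content to the hosting server.
        subprocess.check_call(["rsync", "-az", "--delete", OUTPUT_DIR + "/", REMOTE])

    if __name__ == "__main__":
        publish()

Something along these lines, run from an hourly cron entry, covers both steps; the only state it needs is the git checkout containing the config file.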

So if you have updates or additions, either of us can make the changes and they should be reflected fairly soon on the main page. The easiest way to verify this is to wait a few hours, reload the page and see if your changes have appeared. 

The code is run off a server that Arnab controls and both of us have access to the domain registry. I say this in the interest of transparency (PLUG!!) but also so that if things go wonky as they did earlier, the community knows who to reach. 

Separately, I've been pleasantly surprised at the level of concern and anxiety over the feed -- mainly because it shows what a valuable community resource the feed is, and I'm glad to be one of its curators.

If you've read this far, then you really are interested in the nitty gritty, and so if you'd like to volunteer to help out, let us know. It would be useful, for example, to have a volunteer in Europe so that we have different time zones covered when things break. And maybe our central Politburo (err. I mean the committee to advance TCS) might also have some thoughts, especially in regard to their mission item #3:
To promote TCS to and increase dialog with other research communities, including facilitating and coordinating the development of materials that educate the general scientific community and general public about TCS.

Thursday, December 06, 2018

The theoryCS aggregator

As you all might know, the cstheory blog aggregator is currently down. Many people have been wondering what's going on and when it will be back up, so here's a short summary.

The aggregator has been thus far maintained by Arvind Narayanan, who deserves a HUGE thanks for setting it up, writing lots of custom code, and running the linked twitter account. Arvind has been planning to hand it over, and the domain going down was a good motivator for him to do that.

Currently I have all the code that is used to generate the feed, as well as control over the twitter feed. Arnab Bhattacharyya has kindly volunteered to be the co-manager of the aggregator. What remains to be done now is

  • set up a new location to run the aggregator code from
  • set up hosting for the website
  • link this to the twitter account. 
None of these seem too difficult, and the main bottleneck is merely having Arnab and me put together a few hours of work to get this all organized (we have a domain registered already). We hope to have it done fairly soon so you can all get back to reading papers and blogs again.

Saturday, November 24, 2018

Should credit scores be used for determining residency?

It's both exhilarating and frustrating when you see the warnings in papers you write play out in practice. Case in point, the proposal by DHS to use credit scores to ascertain whether someone should be granted legal residence.

Josh Lauer at Slate does a nice analysis of the proposal and I'll extract some relevant bits for commentary. First up: what does the proposal call for? (emphasis mine)
The new rule, contained in a proposal signed by DHS Secretary Kirstjen Nielsen, is designed to help immigration officers identify applicants likely to become a “public charge”—that is, a person primarily dependent on government assistance for food, housing, or medical care. According to the proposal, credit scores and other financial records (including credit reports, the comprehensive individual files from which credit scores are generated) would be reviewed to predict an applicant’s chances of “self-sufficiency.”
So what's the problem with this? What we're seeing is an example of the portability trap (from our upcoming FAT* paper). Specifically, scores designed in a different context (for deciding who to give loans to) are being used in this context (to determine self-sufficiency). Why is this a problem?
Unfortunately, this is not what traditional credit scores measure. They are specialized algorithms designed for one purpose: to predict future bill-paying delinquencies, for any reason. This includes late payments or defaults caused by insurmountable medical debts, job loss, and divorce—three leading causes of personal bankruptcy—as well as overspending and poor money management.
That is, the reason the portability trap is a problem is that you're using one predictor to train another system. And if you're trying to make any estimate of the validity of the resulting process, then you have to know whether the thing you're observing (in this case the credit score) has any relation to the thing you're trying to observe (the construct of "self-sufficiency"). And this is something we harp on a lot in our paper on axiomatic considerations of fairness (and ML in general).
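
To make the mismatch concrete, here's a toy simulation (entirely synthetic numbers, not from our paper and not from any real credit data): a score that tracks delinquency well by construction still tells you very little about a "self-sufficiency" label it was never built to measure.

    # Toy illustration of the portability trap, with made-up numbers:
    # a score built to predict delinquency is repurposed as a
    # self-sufficiency test, even though the two are unrelated here.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # The construct the new context cares about (never observed by the score).
    self_sufficient = rng.random(n) < 0.8

    # Delinquency is driven largely by shocks (medical debt, job loss, divorce)
    # that, in this toy model, hit people independently of self-sufficiency.
    shock = rng.random(n) < 0.2
    delinquent = shock | (rng.random(n) < 0.05)

    # A "credit score" that tracks delinquency well, by construction.
    score = 700 - 150 * delinquent + rng.normal(0, 20, n)

    # Repurposing the score as a self-sufficiency test.
    decision = score >= 650
    print(f"decision disagrees with self-sufficiency {np.mean(decision != self_sufficient):.1%} of the time")

In this toy run the thresholded score disagrees with the self-sufficiency label roughly a third of the time, simply because the two constructs were generated independently of each other.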

And in this case there's a clear disconnect:
Credit scores do not predict whether an individual will become a public charge. And they do not predict financial self-sufficiency. They are only useful in this context if one believes credit scores reveal something about a person’s character. In other words, if one believes that people with low credit scores are moochers and malingerers. Given the Trump administration’s hostility toward (brown-skinned) immigrants, this conflation of credit scores and morality is not surprising.
And this is a core defining principle of our work: that beliefs about the world control how we choose our representations and learning procedures: the procedures cannot be justified except in the context of the beliefs that underpin them. 

I think that if you read anything I've written, it will be clear where I stand on the normative question of whether this is a good idea (tl;dr: NOT). But as a researcher, it's important to lay out a principled reason for why, and this sadly merely confirms that our work is on the right track.

