How can an industry that, unlike other business sectors, persistently promotes itself as doing good learn to do so in reality? Do you want to avoid doing harm, or do you want to do good? These are two totally different things.
And how do you put an official ethical system in place without it seeming like you're telling everyone how to behave? Who gets to decide those rules anyway, setting a moral path for the industry and, given tech companies' enormous power, the world?
There are things that puzzle me about this entire discussion of ethics and tech. On its face, it seems like a good idea for tech companies to incorporate ethical thinking into their operations, and those of us who work in this space are clamoring for more ethics education for budding technologists.
There is of course the cynical view that this is merely window dressing, a way to make it look like Big Tech (is that a phrase now?) cares without actually having to change its practices.
But let's put that aside for a minute. Suppose we assume that tech companies are indeed concerned (in some shape or form) about the effects of technology on society, and that their leaders do want to do something about it.
What I really don't understand is the idea that we should teach Silicon Valley to be ethical. This plays into the overarching narrative that tech companies are trying to do good in the world and slip up only because they aren't adults yet, a problem that can be resolved by education that will turn them into good "citizens" with upstanding moral values.
This seems rather ridiculous. When chemical companies were dumping pesticides on the land by the ton and Rachel Carson wrote Silent Spring, we didn't shake our heads sorrowfully at those companies and send them moral philosophers. We founded the EPA!
When the milk we drank was being adulterated with borax, formaldehyde, and all kinds of other horrific additives that Deborah Blum documents so chillingly in her new book "The Poison Squad", we didn't shake our heads sorrowfully at food vendors and ask them to grow up. We passed a law that eventually led to the formation of the FDA.
Tech companies are companies. They are not moral agents, or even immoral agents. They are amoral profit-maximizing vehicles for their shareholders (and this is not a criticism: companies are supposed to make money, and to do it well). Facebook's stock price didn't slip when it was revealed how its systems had been manipulated for propaganda. It slipped when the company proposed changes to its news feed ranking mechanisms to address those issues.
It makes no sense to rely on tech companies to police themselves, and to his credit, Brad Smith of Microsoft made exactly this point in a recent post on face recognition systems. Regulation, policing, and whatever else we might imagine have to come from the outside. I don't claim that regulatory mechanisms all work as currently conceived, but the very idea of checks and balances seems more robust than merely hoping that tech companies will get their act together on their own.
Don't get me wrong: it's not even clear what has to be regulated here. Unlike with poisoned food or toxic chemicals, there's no obvious way to handle poisonous speech or toxic propaganda. And that's a real discussion we need to have.
But let's not buy into Silicon Valley's internal hype about "doing good". Even Google has dropped its "Don't be evil" credo.