Silicon Valley’s Libertarian Solution is Regulation

When the news broke that Cambridge Analytica had harvested and shared the private Facebook data of over 50 million Americans to support Trump’s 2016 campaign, the proverbial shit hit the fan. Shrill headlines like ‘The Data That Turned the World Upside Down’ raced across the social media giant’s own networks, and the public erupted in fury. To make matters worse, this news came on the back of a seemingly continuous stream of tech-related scandals over the past year, from privacy concerns to Russian bots.

In the face of these continual crises, voices for reform have grown increasingly prominent. Both sides of the aisle are calling for change, and even Mark Zuckerberg has said he would support the “right” regulation for his company. And yet, despite all this panicked clamouring, we still have no tangible plan for reform.

The reasons for this inaction may be simpler than we think. In the words of US Representative Adam Schiff at the 2017 hearing on tech platforms’ responsibility, “I don’t see Congress trying to legislate an algorithm, because that is beyond the regulatory reach and competence of the government.” That is to say, there is a growing consensus, both inside and outside Washington, that the government has neither the power nor the understanding to regulate tech’s algorithms.

This consensus misses the point entirely: there are actual humans behind these algorithms, and humans can be responsibly regulated. For years, tech companies have tried to obfuscate this essential truth, working to convince us that the behaviors of tech platforms are somehow separate from the people who created them. By creating this false narrative, these companies have not only distanced themselves from social responsibility but, even more damagingly, shrouded their algorithms in a veil of objectivity.

But it is time to strip this veil away. We must remind ourselves that algorithms are anything but objective: human biases shape the software we write, and when we fail to acknowledge and take responsibility for this fact, our so-called “objective” platforms begin to encode the worst of those biases.

Consider Perspective, a tool Google released to help content publishers identify “toxic” comments in a discussion. Great intention. But, ironically, the tool was far more toxic than any of the comments it was filtering: the phrase “I am a man” was scored as 20% toxic, while “I am a gay black woman” was scored as 87% toxic.
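For the technically curious, a bias probe like the one that surfaced these scores is trivial to write. The Python sketch below assumes the publicly documented Comment Analyzer endpoint and payload shape that Perspective exposed at the time; the API key is a placeholder, and the exact request and response format should be treated as assumptions to check against current documentation.

import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder; Perspective requires a real key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity(text):
    """Return Perspective's summary TOXICITY score (0.0 to 1.0) for a phrase."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Identical sentence structure, different identity terms: a large gap in scores
# is evidence that the model has absorbed bias from its training data.
for phrase in ["I am a man", "I am a woman", "I am a gay black woman"]:
    print(phrase, "->", round(toxicity(phrase) * 100), "% toxic")

Anyone with an API key and twenty lines of code can run a check like this; the point is that the companies building these systems could, too.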

Or consider Facebook’s censorship algorithms, which scour the social media giant’s network for dangerous content and hate speech. Through these algorithms, Facebook took down a post decrying that “All white people are racist” and deactivated the account behind it, while ignoring a Congressman’s post proclaiming “Hunt them, identify them, and kill them all” about radicalized Muslims. That’s not all. Facebook has also repeatedly blocked the accounts of editors at two leading Palestinian media outlets, and once even blocked the account of the Palestinian Authority’s ruling party. Facebook later apologized and unblocked these accounts (which, just as in the case of Cambridge Analytica, really did make everything better).

These examples demonstrate that our technical platforms are not only far from objective but, worse, actively perpetuate existing social hierarchies. And here’s the key: algorithms cannot “pick up” the public’s racist or sexist views on their own. The engineers behind the code, whose agency tech companies love to ignore, are responsible for choosing the data the algorithm learns from. They are also responsible for shaping how the algorithm learns to make decisions from that data. Code is anything but objective, in both its creation and its outcomes. We must shatter the distinction between man and algorithm.

These engineers at Facebook and Google could have chosen to review their data for the social biases they surely knew they would find (over-representation of phrases like “gay black woman” in comments labeled toxic, or tolerance of hate speech targeting minorities), but they chose not to. They could have modified their algorithms to remove biased, causal associations based on attributes such as race, gender, or sexual orientation, but they chose not to. Instead, they made the active choice to build algorithms from data brimming with human biases.
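What would “reviewing the data” or “testing the algorithm” actually look like? Nothing exotic. Here is a minimal, hypothetical sketch of a pre-release audit: hold the sentence constant, swap the identity term, and flag any scoring function whose output swings by more than a chosen tolerance. The scoring function, templates, and tolerance are stand-ins for illustration, not anyone’s real pipeline.

IDENTITY_TERMS = ["man", "woman", "gay black woman", "Muslim", "Christian"]
TEMPLATES = ["I am a {}.", "My neighbour is a {}.", "Being a {} is normal."]
TOLERANCE = 0.10  # maximum acceptable score gap across identity swaps

def audit(score_fn):
    """Flag sentence templates whose score varies sharply with the identity term alone."""
    failures = []
    for template in TEMPLATES:
        scores = {term: score_fn(template.format(term)) for term in IDENTITY_TERMS}
        gap = max(scores.values()) - min(scores.values())
        if gap > TOLERANCE:
            failures.append((template, gap, scores))
    return failures

# Example usage with the toxicity() probe sketched earlier:
#   for template, gap, scores in audit(toxicity):
#       print(template, "-> score gap of", round(gap * 100), "points across identity groups")

A test like this takes an afternoon to write. Choosing not to run it is exactly that: a choice.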

If we want to see change, we need everybody, and especially the government, to stop indulging the nonsensical notion that algorithms are distinct from their programmers and that tech platforms are therefore impossible to regulate. These arguments are palpably false. Simply put: humans write algorithms, and the government can regulate humans.

Government policy must first and foremost expose the human agency behind tech. In response to the recent spate of scandals, we must regulate Silicon Valley by requiring companies to: first, be transparent about their data collection; second, test their platforms for inadvertent biases; and third, disclose the results and negative consequences of their algorithms. This information must be made available to ordinary folk; individuals must be able to know how they are considered by any particular algorithm. Not only are these answers indispensable for regulating these companies, they are critical for rebuilding public trust in the platforms we use.

But if the government is the starting point for this much-needed change, the ultimate solution lies in Silicon Valley’s libertarian roots. The tech industry needs to embrace crowdsourced oversight: with the information obtained through government-enforced transparency, software engineers at large will be able to review algorithms for potential biases. There is no better way for Silicon Valley to return to its roots than by empowering the public to improve its platforms, and in the information age there is no greater, more fundamental freedom than an open, thorough, and actionable understanding of the algorithms that run our lives.

With enforced transparency by the government, disclosure and testing from Silicon Valley, and crowdsourced review by the general public, we may finally see the change we’ve been calling for.
Joyce Xu 

Photo: Official White House Photo by Lawrence Jackson. Members of the audience take pictures as President Barack Obama participates in a town hall meeting moderated by CEO Mark Zuckerberg at Facebook headquarters in Palo Alto, Calif., April 20, 2011.
