Public Digital Infrastructure: Who Pays?

Glen Canyon Bridge & Dam, Page, Arizona, by flickr user Thaddeus Roan under CC-BY 2.0

Every day, we risk our personal security and privacy by relying on lines of code written by a bunch of under-funded non-profits and unpaid volunteers. These essential pieces of infrastructure go unnoticed and under-resourced; that is, until they fail.

Take OpenSSL, one of the most common tools for encrypting internet traffic. It ensures that things like confidential messages and credit card details aren’t transferred as plain text. It probably saves you from identity fraud, theft, stalking, blackmail, and general inconvenience dozens of times a day. Yet when a critical security flaw (known as ‘Heartbleed’) was discovered in OpenSSL’s code last April, there was just one person paid to work full-time on the project; the rest was run largely by volunteers.
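To make that dependence a little more concrete, here is a minimal sketch of the TLS connection that happens under the hood of every secure request, using Python’s standard-library ssl module (which is built on OpenSSL on most platforms); the host is just an illustrative example:

```python
import socket
import ssl

# Standard-library TLS; on most platforms this is OpenSSL under the hood.
context = ssl.create_default_context()  # sensible defaults: certificate verification on

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # Print the negotiated cipher suite, e.g. ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128)
        print(tls.cipher())
```

Every HTTPS request your browser makes does something equivalent, which is why a bug in OpenSSL ripples out to almost everyone.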

What about the Network Time Protocol? It keeps most of the world’s computers’ clocks synchronised so that everything is, you know, on time. NTP has been developed and maintained over the last 20 years by one university professor and a team of volunteers.
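For a sense of how this works on the wire, here is a rough sketch of an SNTP query (the simplified client side of NTP, RFC 4330) in Python; pool.ntp.org is the volunteer-run public server pool, and the timeout is an arbitrary choice:

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

# 48-byte request: first byte 0x1b = leap indicator 0, version 3, mode 3 (client)
packet = b"\x1b" + 47 * b"\x00"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(packet, ("pool.ntp.org", 123))
    response, _ = sock.recvfrom(48)

# The server's transmit timestamp (seconds field) lives at bytes 40-43
ntp_seconds = struct.unpack("!I", response[40:44])[0]
print(time.ctime(ntp_seconds - NTP_EPOCH_OFFSET))
```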

Then there is OpenSSH, which provides secure logins to remote computers across a network. Systems administrators use it every day to keep IT systems, servers, and websites working whilst keeping out intruders. It is maintained by another under-funded team, who recently started a fundraising drive because they could barely afford to keep the lights on in their office.
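In day-to-day use that looks something like the following sketch, which shells out to the OpenSSH client to run a command on a remote machine (the user and hostname are placeholders, and the ssh binary is assumed to be installed):

```python
import subprocess

# Run a single command on a remote server over an encrypted SSH channel.
# 'admin@server.example.com' is a placeholder, not a real system.
result = subprocess.run(
    ["ssh", "admin@server.example.com", "uptime"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```

Multiply that by the thousands of servers a typical organisation administers, and the scale of the dependence becomes clear.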

Projects like these are essential pieces of public digital infrastructure; they are the fire brigade of the internet, the ambulance service for our digital lives, the giant dam holding back a flood of digital sewage. But our daily dependence on them is largely invisible and unquantified, so it’s easy to ignore their importance. There is no equivalent to pictures of people being rescued from burning buildings. The image of a programmer auditing some code is not quite as visceral.

So these projects survive on small handouts, and occasionally large ones from big technology companies. Whilst it’s great that commercial players want to help secure the open source code they use in their products, this alone is not an ideal solution. Imagine if the ambulance service were funded by ad-hoc injections of cash from various private hospitals, which had no obligation to maintain their contributions. Or if firefighters only got new trucks and equipment when some automobile manufacturer thought it would be good PR.

There’s a good reason to make this kind of critical public infrastructure open source. Proprietary code can only be audited behind closed doors, which means everyone who relies on it has to trust the provider to discover its flaws, fix them, and be honest when they fail. Open source code, on the other hand, can be audited by anyone. The idea is that ‘many eyes make all bugs shallow’: if everyone can go looking for them, bugs are much more likely to be found.

But just because anyone can, that doesn’t mean that someone will. It’s a little like the story of four people named Everybody, Somebody, Anybody, and Nobody:

There was an important job to be done and Everybody was sure that Somebody would do it. Anybody could have done it, but Nobody did it. Somebody got angry about that because it was Everybody’s job. Everybody thought that Anybody could do it, but Nobody realized that Everybody wouldn’t do it. It ended up that Everybody blamed Somebody when Nobody did what Anybody could have done.

Everybody would benefit if Somebody audited and improved OpenSSL/NTP/OpenSSH/etc, but Nobody has sufficient incentive to do so. Neither proprietary software nor the open source world is delivering the quality of critical public digital infrastructure we need.

One solution to this kind of market failure is to treat critical infrastructure as a public good, deserving of public funding. Public goods are traditionally defined as ‘non-rival’, meaning that one person’s use of the good does not reduce its availability to others, and ‘non-excludable’, meaning that it is not possible to exclude certain people from using it. The projects above certainly meet these criteria: code is infinitely reproducible at nearly zero marginal cost, and its use, absent any patents or copyrights, is impossible to constrain.

The costs of creating and sustaining a global, secure, open and free-as-in-freedom digital infrastructure are tiny in comparison to the benefits. But direct, ongoing public funding for those who maintain this infrastructure is rare. Meanwhile, we find that billions have been spent on intelligence agencies whose goal is to make security tools less secure. Rather than undermining such infrastructure, governments should be pooling their resources to improve it.


Related: The Linux Foundation has an initiative to address this situation, with the admirable backing of some industry heavyweights: http://www.linuxfoundation.org/programs/core-infrastructure-initiative/
While any attempt to list all the critical projects of the internet is likely to be incomplete and to lead to disagreement, Jonathan Wilkes and volunteers have nevertheless begun one: https://wiki.pch.net/doku.php?id=pch:public:critical-internet-software

‘Surprise Minimisation’

A little while ago I wrote a short post for the IAPP on the notion of ‘surprise minimisation’. In summary, I’m not that keen on it:

I’m left struggling to see the point of introducing yet another term in an already jargon-filled debate. Taken at face value, recommending surprise minimisation seems no better than simply saying “don’t use data in ways people might not like”—if anything, it’s worse because it unhelpfully equates surprise with objection, and vice-versa. The available elaborations of the concept don’t add much either, as they seem to boil down to an ill-defined mixture of existing principles.

Why Surprise Minimisation is a Misguided Principle