What is the economic risk of Heartbleed?

April 15, 2014 at 8:06 am in Economics, Uncategorized, Web/Tech | Permalink

From a report from today’s WaPo:

No examples have surfaced of anyone actually exploiting the vulnerability.

Of course that is no longer true, as the Royal Canadian Mounted Police are investigating the cases of 900 Canadian identity theft victims.  And there are likely further undetected cases.  Still, when I hear this crisis described as “On the scale of 1 to 10, this is an 11,” I conclude that economists think about risk differently than do most people, including tech consultants.  (To flip this coin onto its other side, I am not especially reassured about the web sites judged as “safe” — should we now start trusting such judgments so strongly?)

What can the deadweight loss be of a previously unnoticed crisis?  And if that is an 11, what does a 12 look like?  How many Canadian victims would be needed to get us up to 13?

dan1111 April 15, 2014 at 8:18 am

Why can’t they just make 10 more severe?

zippo April 15, 2014 at 8:29 am

This crisis goes to 11, dan1111.

Z April 15, 2014 at 9:08 am

Perhaps instead of midgets dancing around a miniature Stonehenge they will have the little mushroom people of Nova Scotia dance around the bowl of pudding, as is tradition. That should calm Canadian fears.

anon April 15, 2014 at 8:37 am

A problem made less problematic if you use different and random passwords for every site you use. Use a password manager (Google it or see Krebs and Schneier).

(Possibly unrelated: Yahoo email spam seemed to be way up a few months ago. Change your passwords people.)

Rusty Synapses April 15, 2014 at 9:18 am

I use a password manager (RoboForm) and note it was unaffected only because it was using an old version of OpenSSL – that’s not exactly reassuring. It also didn’t make much information available about its own security before this, which makes it hard to compare password managers on this front, although LastPass seems to be the best from what I can tell now (or at least discloses the most detailed information about its own security).

If your password manager gets compromised, that would be REALLY bad (all eggs in one basket – and in this case, it’s very hard to tell whether “you’re” watching that basket closely). Note that for financial websites, I don’t put my login info in the password manager. People laugh about the Post-it by the computer, but I think the risk of that being compromised in my home office (desk drawer), especially without my knowledge, is really low. Still, it’s hard for even someone like me (I have an admittedly ancient MS in CS from Stanford) to really evaluate the security of a password manager, or even an encrypted file on my own computer, against increasingly clever hackers.

Ed April 15, 2014 at 9:29 am

I’ve come to agree that the paper list of passwords by your physical home computer may turn out to be the only way to go. As for the significance of this, one or two of these incidents is not that significant, but they add up, and it may be that password protection on the internet is just not feasible – and that is a big deal.

Guest April 15, 2014 at 9:34 am

I also started using paper to keep tabs of my user names and passwords. But I won’t write the actual passwords – I write them in code. For example, if the password is Mets86 the paper would say “baseball year” for the password. I keep a paper with the passwords at home and I keep a paper locked in a cabinet at work. I also use a RSA token for my bank and two factor authentication for other sites (when possible). I also set up challenge questions with my financial institutions – when I call, I have to answer a question. And I subscribe to LifeLock. Identity theft is scary.

dan1111 April 15, 2014 at 9:41 am

Then you still have to remember all the passwords, which means you have to use weak passwords like “Mets86”.

An alternative is to have each password start with the same short, memorable word. Then add different random gibberish to the end of that password for each site, and write down the gibberish only. This protects you against electronic password theft, physical password theft, and brute force attacks.

Guest April 15, 2014 at 12:44 pm

Mets86 was only an example – the real passwords are better. But I like your idea. I’ve read variations on it – e.g. use 6 gibberish characters followed by the first three letters of the website you are accessing.

Steve April 15, 2014 at 6:09 pm

You’re right that a paper list is probably the most secure as long as you keep it safe; the problem to watch out for is the temptation to use fewer passwords. Having to re-type something like K_.<_@i_\+A[gef29AQjt):tu]yKnPWe over and over can be a real pain.

Zachary Mayer April 15, 2014 at 9:42 am

If your password manager encrypts data locally, you shouldn’t have an issue even if openSSL is compromised. Even better if they use perfect forward secrecy (e.g. lastpass).

Moebius Street April 15, 2014 at 12:48 pm

And indeed, this is the case for at least Roboform. It’s encrypted with, I think, AES-256 locally, so even if the bad guys get the data from the server, it *should* be safe (assuming that your master password is well-chosen)

brad April 15, 2014 at 2:44 pm

If they get the private key – and code to extract it was released this weekend – they can man-in-the-middle future connections. It doesn’t matter what your password is then.

Slocum April 15, 2014 at 8:49 am

This comment seems worth citing (‘CA’ stands for ‘Certificate Authority’):

It is not enough to issue new certificates. All of the old certificates could now be used for man-in-the-middle attacks! Two-thirds of the Internet’s certificates potentially need to be blacklisted! This is a MAJOR disaster.

It is infeasible to blacklist such a large number of certificates, as every device requires a list of all blacklisted certificates. This means all of the major CAs are going to have to blacklist their intermediate certificate authorities and start issuing all new certificates under new CAs. This means even people who weren’t affected will probably have to have their certificates blacklisted.

In short, EVERY existing CA used on the internet may have to be blacklisted, and every single SSL certificate re-issued.

Also this one:

One thing that is not made clear by your post: Even if organizations like the NSA only found out about this yesterday they can still decrypt any data that they captured previously as long as they steal your private key before the vulnerability is patched (unless you use perfect forward secrecy).

My guess is that they are currently sitting on huge amounts of data that they can now decrypt that they couldn’t before.

Rahul April 15, 2014 at 9:12 am

Bloomberg had an article saying NSA knew & was exploiting this for more than a year. Not sure if that’s reliable.

Marian Kechlibar April 15, 2014 at 9:56 am

Plausible enough to be worrying. Add the fact that previous use of the bug can’t be determined from normal logs …

prior_approval April 15, 2014 at 10:44 am

Especially worrying to the extent that the NSA is also responsible for protecting America’s communication infrastructure.

Chris D. April 15, 2014 at 12:51 pm

It beggars belief to think that the NSA, perhaps the strongest candidate for the World’s Biggest Threat to Digital Security, didn’t know about this. Assuming they didn’t would contradict every bit of evidence about their demonstrated competence in the field.

Remember that diagram that said “SSL removed (and re-added) here =)”? This is the most obvious way to do it.

David April 15, 2014 at 9:00 am

My questions aren’t about the potential economic disasters that come from Heartbleed or any other contrived hack bomb, but rather a totally different subject. My worry is: what in the world will happen morally, spiritually and to society as a whole when transhumanism is set in place? I talk a lot about the potential effects of transhumanism on my blog below. Take a look if you would like to understand more.


crandall April 15, 2014 at 9:02 am

The economic impact will be on the scale of the recent Target retailer hacking. Exploiting Heartbleed across the internet seems much more difficult than originally thought.

It’s surprising how quickly the initial panic diminished; it was difficult this week to find much new or scary about Heartbleed in the media.
Even the referenced WashPost article was mild, routine news filler.

F. Lynx Pardinus April 15, 2014 at 9:08 am

It’s irrelevant what’s being reported in the mass media. There’s a lot of scrambling going on behind the scenes involving certificates–see Slocum’s post above.

derek April 15, 2014 at 9:34 am

I was talking last night to an IT guy for the municipality; he said that the nasty characteristic of this bug is that you have no idea who was where and who got what. Most other hacks and security holes leave some traces of access.

In other words, no one knows.

Isaac April 15, 2014 at 11:13 am

Actually, it turns out to be easier than previously thought, which is a bit scary. Cloudflare’s challenge was cracked in 3 hours, from scratch, and the private keys extracted.

ummm April 15, 2014 at 9:07 am

every week something new to be scared about

Anti-ummm April 15, 2014 at 9:35 am

For once we are in agreement

David K April 15, 2014 at 9:09 am

Imagine that one day it were announced that every lock on every home and business in the country had a bad spring and could be popped open undetected with simple tools that every burglar has. While, yes, there might be some concern that some of the burglaries that happened in the past might have been aided by this trick, I think that the law-and-order concern for the immediate aftermath of the announcement would be far worse.

Marian Kechlibar April 15, 2014 at 9:22 am

This kind of analogy with the real world usually fails. Too many subtle differences.

Unlike physical locks, computer systems can be upgraded really quickly and there is no shortage of ersatz locks.

Unlike physical locks, computer systems can be picked by the hackers from another continent.

Add the fact that the Google engineers were probably not the first ones to spot the problem, only the first ones to publish it.

David K April 15, 2014 at 9:50 am

Of course there are differences; my point, however, is that the concern at an 11 is one of *keeping order now that publication has been made* and *getting all the locks changed quickly so there isn’t a massive hacking rampage*, not prior damage.

Marian Kechlibar April 15, 2014 at 9:53 am

The Heartbleed fix situation illustrated all the common problems with software development and deployment very well.

Strongly staffed teams such as Debian rolled the patch out in very short time. Some others, such as CentOS, were slower, even though they had some corporate backing.

The Android update process will be messy, as expected. Too many variations in custom OS bundles. And some older devices will probably never be fixed.

Tom Noir April 15, 2014 at 9:17 am

My two cents is that Heartbleed and similar exploits represent growing pains. Hackers have gotten more sophisticated in the last few years and they have gained state sponsorship. The old ad-hoc approaches to security no longer suffice. Over the next couple of years we are going to have to seriously upgrade the net’s security infrastructure. We will eventually reach a stable point, but expect more revelations like Heartbleed between now and then.

Marian Kechlibar April 15, 2014 at 9:20 am

This is my field (I actually develop crypto software for a living).

Yes, the level is 11 out of 10, because problems like this:

a) sow incredible distrust in the IT community, which is already paranoid enough;

b) have un-obvious effects further downstream. For example, which SSL connections are you now going to trust? Which Certificate Authorities? Add the fact that most users aren’t especially knowledgeable about the whole thing and take “https” as a sign of security per se;

c) as with almost any IT security breach, make it almost impossible to quantify the extent of the damage, especially with regard to state actors, because most of the exploits will be “silent” (is the NSA or the FSB going to trumpet to the world how many “adversaries” they hacked using this exploit? No);

d) carry an opportunity cost that is hard to assess. Which Internet-based services will never be built (or never succeed) because of the resulting paranoia?

Rahul April 15, 2014 at 9:29 am

Wikipedia says 66% of web-servers were using OpenSSL. I’m wondering what were the others using?

Perhaps one lesson is to be wary of ecosystems where any one technology has a massive market share? Though 66% isn’t terribly overwhelming.

Marian Kechlibar April 15, 2014 at 9:39 am

That does not mean that the remaining 34% of web servers and other infrastructure are fine. (On the other hand, the vulnerability was only present in the 1.0.1 branch, and plenty of servers using OpenSSL were actually running either 0.9.8 or 1.0.0.)

People re-use passwords across different sites and services. If user passwords leaked from one site, the hacking professionals now add those passwords to the extant password dictionaries and try them against other systems.

The vulnerability of a monocultural ecosystem is clear. At the same time, the ability of human administrators to cope with a wild garden of mixed software configurations is limited. No one is really willing to learn to configure 10 different types of software just for the sake of diversity (diversity per se is easy to talk about but much harder to actually cope with).

Adrian Ratnapala April 15, 2014 at 9:41 am

The flipside is that the larger the market share of the product you use, the more likely it is that bugs will be found and fixed. For any individual, or individual organisation, the way to diversify against these things is defense in depth: SSL, and properly encrypted passwords, and not having too much on the net in the first place, and whatever else you can think of. Each of these hopefully relies on a separate set of technologies and has uncorrelated bugs.

Marian Kechlibar April 15, 2014 at 9:46 am

It also illustrates some weak points of the open source model – namely, the general unwillingness of users to pay for development. The number of OpenSSL installations on contemporary servers probably runs into the tens of millions. If the owner of every such server paid EUR 1 per year for some extra “eyeballs” to review the code (you need qualified people to spot this type of error), OpenSSL would probably be much safer than it historically was.

Nevertheless, everyone loves to use open source, but precious few people want to support it financially.

Rahul April 15, 2014 at 9:55 am

Very true! Apparently the OpenSSL project had a yearly budget of less than $1 million. Makes me almost feel that the affected companies deserved this.

Wil Wade April 15, 2014 at 10:00 am


Many people are paid to look at and test OpenSSL, both through the foundation that controls OpenSSL and through companies that use it. That they are not obvious is due to the nature of OSS development. OSS places a premium on which individual is making the code commit. This means that the people hired by companies to contribute to OSS projects are listed, not the companies they work for. Also, since anyone who can read code can read the code, everyone who uses it could look at it. I am sure many of the large companies that use OpenSSL do have people checking it. The largest example of this is the Linux kernel; you can see all the companies that contribute here: http://www.linuxfoundation.org/publications/linux-foundation/who-writes-linux-2013

Marian Kechlibar April 15, 2014 at 10:07 am

Wil, yes, much of open source is funded by large corporations, and yet it seems that the level of funding is not nearly high enough. Diversified funding from smaller users would definitely make some difference.

As with many software projects, developers love to write new code, but they do not love writing, say, unit tests nearly as much. The testing/QA team of many OSS projects consists of someone doing it part time. I am not sure how many people actually do test OpenSSL and similar bundles rigorously. Worse, the code is often written without regard to, say, static code analysis tools. (See, for example: http://blog.regehr.org/archives/1125).

dan1111 April 15, 2014 at 10:14 am

@Marian, still, the truly astounding thing about open source is how well it works for so many things. Yes, you can complain about the bugs. But much high-profile paid software is just as buggy, or worse.

Marian Kechlibar April 15, 2014 at 10:21 am

dan, I do agree. I am not a hater of OSS, just a (critical, but hopefully realistic) user thereof.

Adrian Ratnapala April 16, 2014 at 12:20 am

Marian, I think you are right that the funding for open source is erratic. But are the alternatives better, or even possible?

Hundreds of millions of users are not going to individually pay $0.10 for a piece of software directly. They will (and do) have solutions bundled for them by the likes of Microsoft and Apple. But would or should anyone trust those solutions any more than the open source ones?

At a guess I think only Microsoft could produce something roughly as trustworthy as the open source solutions while also having the right incentives. And even they do not have the right incentives when the NSA comes knocking.

Dan Kaminsky points out (http://dankaminsky.com/2014/04/10/heartbleed/) that the code review in the Linux kernel is much more ferocious than in OpenSSL. I would like to see a social-science explanation of the difference.

Marian Kechlibar April 15, 2014 at 10:13 am

One of the core reasons why we keep getting failures like this is the ubiquitous use of C. C is very unsafe with regard to memory access.

The authors of C, though very smart guys, were primarily concerned with speed (it was the early 1970s), and less with security issues, which they probably could not even fully grasp at the time.

As a result, we’re stuck with some fairly dangerous IT infrastructure, and huge potential costs for its upgrade. Rewriting the extant C code in, say, Rust would be extremely expensive. The question is whether the world can afford the alternative.

dan1111 April 15, 2014 at 10:26 am

I am not convinced that C’s memory management is the security problem we should all be worried about. Yes, it is a potentially exploitable feature of that language, but other languages have their own exploitable features.

C is a small, efficient language allowing for simpler programs, and this limits exposure to other kinds of attacks.

Marian Kechlibar April 15, 2014 at 10:32 am

I would say that the memory un-safeness of C-style buffers has caused more security bugs than any other feature of any other programming language in history. Of course removing it is no panacea, but it would still help tremendously. Buffer overflow protection is the seatbelt+airbag equivalent in the IT industry.

Moreover, things like that are at least principally feasible and scientifically trivial. It is much harder (and perhaps impossible) to design a new asymmetric crypto algorithm.

Rahul April 15, 2014 at 10:33 am


I don’t understand enough about the intricacies, but how come someone like Linus Torvalds (and the whole kernel team) sticks with C? And not just for legacy reasons.

That makes me skeptical whether C is indeed as unsafe and broken as you make it sound.

Marian Kechlibar April 15, 2014 at 10:41 am

Rahul, I would say that the reasons are:

1. There is no clear candidate to substitute for C.
2. Porting tens of millions of lines from one language to another is a huge project. As in “5 years of work”.

Lots of people think that C is nasty, not least because of the varying bit widths of elementary types, etc., but it is here to stay. There are some successful attempts to mitigate the worst problems at the HW or OS level, such as protecting stacks against smashing, or protecting the memory of one process from another. In-process problems are harder to deal with.

Rahul April 15, 2014 at 10:48 am


I agree with your #1, but that’s precisely why C still gets used so much.

About your #2, I’m not so sure. Something like Git was an independent, new project, yet they (he?) used C. You could argue it was for idiosyncratic reasons, but porting sure wasn’t a factor there.

Michael B Sullivan April 15, 2014 at 10:54 am

Of course, the OpenSSL folks wrote their own wrapper around malloc that defeated some guards against malloc misuse. This is more on their heads than on C’s. They prioritized “fast” over “safe.”

Rahul April 15, 2014 at 2:37 pm

Yeah, that malloc wrapper bit ought to earn them a nomination for next year’s Ig Nobel awards.

Adrian Ratnapala April 16, 2014 at 12:28 am

@Rahul: The reason there are no good alternatives to C for the kernel is that C is a very simple language that gives pretty straightforward access to the hardware. Big parts of OS kernels depend on this; for the rest of the kernel it’s just not worth mixing languages. That is not true in user space, where different libraries could be in different languages.

The really annoying thing isn’t that C is prevalent. The problem is that the popular new languages (Java, Python, C#, even Go) live in parallel universes called “runtimes” and are no use for writing ubiquitous libraries. And C++ has the same problems as C.

Dude April 15, 2014 at 10:11 am

Seems there might be a long term benefit of more people using tools such as 1Password.

I also wonder if the open source community will utilize more crowd funding to pay people to find exploits in open source code in the future. That could be a benefit.

The CA issue is a bit difficult to wrap one’s head around in terms of complexity and risk. It’s big.

brad April 15, 2014 at 11:54 am

> Wikipedia says 66% of web-servers were using OpenSSL. I’m wondering what were the others using?

The largest remaining chunk is servers running Microsoft IIS (SChannel), plus Java application servers (JSSE). There are a number of even less frequently used libraries like GnuTLS and PolarSSL. Finally, there are hardware TLS termination devices; some embed OpenSSL, but some have their own proprietary TLS implementations.

It should be noted that none of these implementations has a sterling reputation, and even if they did, the TLS protocol itself is a bit of a nightmare, so there would still be vulnerabilities even with a well-written library.

The good news to come out of all this is that the OpenBSD team (heretofore unrelated) is taking a hard look at cleaning up OpenSSL. They have quite a strong reputation for best security practices.

Tyler Cowen April 15, 2014 at 10:00 am

How much has GDP gone down because of this? Bueller?

Michael B Sullivan April 15, 2014 at 11:02 am

The actual answer is: “we don’t know yet.”

Heartbleed doesn’t seem to have been broadly known about prior to its public disclosure. When it did go into wide release, my expectation is that black-hats went into gold-rush mode, wrote heartbleed attacks, and sent them out into broad release, just fishing for whatever and logging the results, trying to get as much data as they could before systems closed them down.

I expect that it’s only this week that they’ve even started to browse the logs of what they found and started to try to put any useful data to work for them. At that point, it’s going to depend on how many passwords, how many CC numbers, and how many private keys they find in the data — we know it’s possible to get all of those, but we don’t really have a strong sense of how much you’ll get from a widespread fishing expedition.

After this initial goldrush phase, the long-term economic harm is going to depend on:

1. How many people never, or only very belatedly, fix their vulnerability.
2. Perhaps more dangerously, how many people code-patch Heartbleed, but do not rotate their certificates and turn out to have an exposed private key.

I would strongly expect there to be significant numbers of systems still vulnerable to Heartbleed in two months’ time. Probably not major systems, but since probably the majority of all internet users reuse their passwords between “J. Random bulletin board they log into” and Gmail, the economic harm can still be significant just through password exposure.

(In general, I think it’s under-commented on how much password exposure has become a pollution-like negative externality. Every time a password is breached and makes it into the giant password dictionaries, that makes everyone and everything just a tiny bit more vulnerable to theft, and increases transaction costs.)

ummm April 15, 2014 at 11:11 am

I suspect windows 8 has done more to hurt GDP than this from the decline in PC sales and lost productivity

A Definite Beta Guy April 15, 2014 at 11:22 am

How much has GDP gone down due to the development of nuclear warheads? Black Swan risk.

Chris S April 15, 2014 at 12:56 pm

Given that I have now spent about a dozen hours over the last few days on conference calls with my financial-institution customers, explaining carefully and repeatedly that we were not at risk because, ahem, we failed to stay current with our encryption software, there is some deadweight loss. Also judging by the tinge of shit-rolling-downhill panic in my contacts’ voices, the executives are also highly distracted by this.

That said, I still get paid the same no matter what I work on, so I guess GDP was unaffected.

anonymous coward April 15, 2014 at 2:09 pm

Does crime necessarily lower GDP? Should we be concerned about privacy only to the extent of profit and loss?
If Heartbleed doesn’t matter, can I have your email password?

Insight April 17, 2014 at 3:58 am

Crime lowers GDP growth, and therefore GDP, because it diverts resources from investment to defense. Heartbleed is likely similar, and the costs over time may be quite large, but indirect and hard to measure.

Adam Huttler April 15, 2014 at 2:43 pm

Risk and damage are not synonymous. Risk reflects some combination of magnitude and likelihood of damage. The risk could have been an 11 and we just got lucky. But yeah, Michael B. Sullivan is right – we don’t yet have a clue how much damage was done.

Curt F. April 15, 2014 at 10:05 am

I want to know how it came to be called “heartbleed”, rather than something like “Google Evaluation Team Critical Vulnerability Report #3429”.

Also, why did anyone waste time to make the stupid heartbleed logo? See e.g. http://heartbleed.com/

My interpretation of the name and the logo is that someone (the bug discoverers?) is attempting to overhype and “market” the bug, possibly to make their own discovery seem more critical and important. I’m reasonably sure that the bug is real and is a source of major problems for many sites and many users, but to me the nomenclature is very counterproductive.

Tony April 15, 2014 at 10:20 am

The protocol exploited was called Heartbeat. The actual bug was a missing out-of-bounds memory check. Essentially, you could coax a server into “bleeding” memory. The name is actually quite fitting.

Curt F. April 15, 2014 at 10:32 am

I still think they should have gone with MEGABUG 6000!! It has a nice ring to it. Very fitting.

Patrick Guldan April 15, 2014 at 10:36 am

Many threats get a catchy name and as Tony said, the name makes sense. These also get a Common Vulnerabilities and Exposures name. In this case it’s CVE-2014-0160 (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160). There are thousands cataloged every year and referring to them by CVE can get a bit tricky. Most of my casual communications will refer to the common name. Formal emails to management and control documents will reference the CVE.

Of note, I’m betting some security shops are thinking of how to brand the next bug: http://www.fastcodesign.com/3028982/why-the-security-bug-heartbleed-has-a-catchy-logo

Rahul April 15, 2014 at 10:07 am

I was reading a list of affected sites(*) based on self-reported assessments by major companies, and one interesting bit is that none of the listed banks/trading sites are affected. Citigroup, Chase, CapitalOne, USBank, Scottrade, etc. all report that they were not using the vulnerable code.

Is this just a coincidence? Or are banks just all using a different technology? Or some other systematic reason? Just luck?

* http://mashable.com/2014/04/09/heartbleed-bug-websites-affected/

Marian Kechlibar April 15, 2014 at 10:20 am

1. Many enterprise systems use Windows Server, whose implementation of the SSL/TLS protocol is independent.
2. Some of them may be running on Java, thus using Java SSL/TLS, another independent implementation.
3. Finally, those on Linux/OpenBSD might have been running older OpenSSL versions. For example, Debian Squeeze still lives with 0.9.8. Major upgrades of those servers are rare.

Rahul April 15, 2014 at 10:36 am

Another general lesson here might be to upgrade very defensively: stay a couple of versions behind the cutting edge, except for critical patches, of course. Not really novel, but this might be a good time to reiterate it.

Michael B Sullivan April 15, 2014 at 11:04 am

The big banks tended to have created their network infrastructure at a time when big enterprises used the Microsoft stack, not the Linux/open source one.

Chris S April 15, 2014 at 1:00 pm

Also, many very large sites use hardware appliances such as F5 Networks’. Even small/medium companies may contract with a data center that uses this class of hardware.

Remember, the vulnerability was in OpenSSL, an open source implementation of the SSL standard. These appliances have their own proprietary implementations of the standard.

They are trumpeting this as a “win”, but more likely they just got lucky. Every web transaction relies on millions (billions?) of lines of code, and each one is a potential defect.

Ian Tindale April 15, 2014 at 10:21 am

My suspicion is that many if not most people will initially be scared because they don’t understand the technology and they read scare stories all over the place on non-technical sites for dummies; then after a while they’ll just get fatigued and give up trying to be secure and stick to the same password they’ve always had and consider it to be the infrastructure’s problem if anything goes wrong (or maybe that’s a specifically British trait — maybe foreigners like Americans and Australians are more individualistic and therefore accept, accommodate and encompass more personal responsibility, but we British can always just throw up our hands and sigh, safe in the knowledge that the Council or the Government or the BBC or BT or some other similarly wide body will just make it all better in the end and catch us as we fall). As I say, people will slip into security scare fatigue and rebel against chasing around changing passwords any more often than they already don’t.

Corvus April 15, 2014 at 10:48 am

Two points not made in previous comments:

1. While this bug could have been exploited historically, security experts consider it unlikely, as one of the primary methods of detecting a data theft is watching the markets where thieves sell data. The experts say they aren’t seeing anything that looks like it was harvested using this technique. Of course, this method lacks some precision, but it has been used quite successfully for several years now. If I’m not mistaken, the recent Target affair was first detected by seeing numbers for sale on the market.
2. The rumor that the NSA has already been exploiting this bug – or could exploit it “in arrears” – does not, to me, seem very threatening as a practical matter. Outside of considerations of political freedom, which I do consider significant, I would think the vast majority of internet users would not be inconvenienced by the NSA being able to know their bank balance, or what they’ve purchased online. I don’t think the NSA would be into identity theft. [Identity theft IS extremely scary and troubling.] If the NSA wanted an identity, they would likely pick one that is fictional, dead, or not EVER on the internet. Unlike an identity thief, the NSA has no game in stealing a person’s identity or picking their plastic “pocket”. Quite the opposite, I’d think, since that would have the potential to leave the NSA’s actions open to public scrutiny.

Chris S April 15, 2014 at 1:01 pm

The RCMP seems to have found some actual victims.


Sebastian H April 15, 2014 at 11:13 am

You’re getting two things confused. As a security flaw this was an 11. It was everywhere, it compromised things that should have been safer and it was super easy to exploit.

As an actually exploited security issue we don’t know where it ranks, though it appears not to have been widely exploited, so it probably wasn’t more than a two or so there.

The additional suggestion that the NSA knew about it and left it open is an 11 if true. Leaving something like that exploitable is a clear betrayal of one half of their mission.

Hook April 15, 2014 at 1:52 pm

Agreed. Security flaws don’t get much worse, but security flaws themselves don’t cause any damage. It’s the exploits that will have the real cost and those will take some time to see.

I’m somewhat glad this sort of thing is happening now. It will hopefully teach the industry something about good update practices and we can avoid the scenario where some similar bug in 2028 leads to a situation where all of our smart appliances that are past the three year manufacturer support window are permanently vulnerable to exploitation.

Rahul April 15, 2014 at 11:25 am

Naive questions: Why did the SSL heartbeat protocol need to allow variable length payloads? And also, why does payload size need to be separately specified? Why cannot the server just count?

Chris S April 15, 2014 at 1:04 pm

Probably no good reason, other than that programmers like to make things configurable and future-proof as a general rule, sometimes to excess.

Sigivald April 15, 2014 at 2:57 pm

Variable lengths? Not sure why.

Size specification? Because otherwise it’s very hard to tell that you’ve gotten it all, and aren’t just waiting on the next TCP packet to arrive with the rest of the data.

The format is specified here; in the standard TCP-ish Unix-y way, it’s roughly “length of stuff I’m sending you”, then “stuff I’m sending you”.

Knowing the length in advance makes it trivially easy to know you’ve gotten it all, and lets you pre-build a buffer to hold it.

The bug here was not “letting it be specified”, but “just using a pointer to the text that could trivially be overrun”, without either checking for overruns and prohibiting them, or making a buffer for the return value that was the same size as the requested amount. (i.e. either “pad the return to length with empty and let the caller handle that” or “notice they gave you less than they claimed, eventually, and just return what was actually sent”.)

A variant of the classic buffer overflow exploit, endemic to lazy or forgetful network code written in C.

Rahul April 15, 2014 at 3:09 pm

Thanks! I totally forgot the bit about the heartbeat message splitting over several TCP packets. The size makes sense now. Though variable size seems really silly and overkill for a heartbeat message.

Sigivald April 15, 2014 at 2:51 pm

Schneier – for all his tendency to hyperbole – is describing the bug as catastrophic in potential, not its actual effect on the real world (which nobody yet knows).

Rahul April 15, 2014 at 3:19 pm

One risk I wonder about: how realistic is it for an NSA plant, or just a smart but crooked hacker, to start working on an open-source project and then innocuously insert a legitimate-looking (but not quite) piece of code? If it gets committed and released into the wild, great: he then exploits it. If someone spots it before a commit, he has plausible deniability.

david April 15, 2014 at 4:14 pm
david April 15, 2014 at 4:20 pm

Although, on reflection, the underhanded C contest is more appropriate.

Rahul April 16, 2014 at 12:22 am

Indeed! How evil.

Adrian Ratnapala April 16, 2014 at 12:41 am

The winners of those contests rarely seem to exploit buffer overflows and other well-known C problems. The things they exploit are often the kind of thing that would be more common in PHP.

Rahul April 16, 2014 at 3:16 am

The more I think about it, the more I’m puzzled by the variable payload size in the Heartbeat saga. It just doesn’t make sense as a design decision. If you merely want to track your heartbeats or ensure uniqueness, a fixed payload length provides ample sequence combinations.

I hate to be a conspiracy theorist, but this variable payload decision has all the markings of an underhanded coding exploit.

Was the variable payload size a part of Seggelmann’s implementation or a part of the original SSL spec?

Mark April 16, 2014 at 2:59 am

Too bad there aren’t more details on how the Canadian authorities detected the alleged breach. From everything I have read, one of the nasty things about Heartbleed is that breaches leave no trace; there’s no way to tell whether a breach has occurred or not. So how do the Canadians know? Further, the Canadian government web sites, including the one affected, can be accessed by citizens using a federated identity system that allows people to log in with their existing bank credentials. Using this system, no actual user names or passwords are ever presented to the Canadian government web sites, only anonymous tokens.

Andreas Moser April 16, 2014 at 6:53 am

I tricked this Heartbleed by pretending that I changed all my passwords, but I simply re-entered my old ones. Clever!

Comments on this entry are closed.
