The 24th February 2017 was an interesting day for security in technology, from Google researchers demonstrating a practical SHA-1 collision, rendering SHA-1 insecure, to Google’s accidental global logout following a Play services update.

The most worrying of all was Cloudflare’s “Cloudbleed” memory leak (its name a nod to the 2014 Heartbleed bug), which had been affecting as many as 120,000 pages per day for the previous week.

Cloudflare is a platform that sits in front of your websites to speed up requests while keeping spambots and attackers at bay, providing accurate analytics and a secure connection to your users through Cloudflare’s globally distributed network.

What Happened

Chief technology officer John Graham-Cumming said “it was likely that in the last week, around 120,000 web pages per day may have contained some unencrypted private data, along with other junk text, along the bottom.” A field day for attackers and bots scraping data from search engine caches.

Around 1 in every 3,300,000 HTTP requests passing through Cloudflare could have been affected by the memory leak, meaning roughly 0.00003% of requests were potentially compromised.

The problem arose from Cloudflare’s new cf-html parser interacting with the older Ragel-generated parser built into their NGINX builds. The CTO clarified: “For the avoidance of doubt: the bug is not in Ragel itself. It is in Cloudflare’s use of Ragel. This is our bug and not the fault of Ragel.”

Cloudflare were very clear on their blog and made no excuses: they mitigated as quickly as possible, fixed the problem and documented it in full. The incident may prompt other services to take a harder look at auto-generated code.

The leakage included HTTP headers, chunks of POST data (perhaps containing passwords), JSON for API calls, URI parameters, cookies and other sensitive information used for authentication (such as API keys and OAuth tokens).

Cloudflare pinpointed the bug and provided us with a scarily small piece of code. “The root cause of the bug was that reaching the end of a buffer was checked using the equality operator and a pointer was able to step past the end of the buffer. This is known as a buffer overrun. Had the check been done using >= instead of == jumping over the buffer end would have been caught.”

Those two operators were behind one of the largest security incidents of the year.

/* generated code */
if ( ++p == pe )
    goto _test_eof;

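To see why that single comparison matters, here is a minimal standalone sketch; it is invented for illustration and is not Cloudflare’s or Ragel’s code. If the parser can ever advance the pointer by more than one position at a time, an == test against the end of the buffer can be jumped over entirely, while a >= test still catches the overrun.

/* Illustrative sketch only, not generated parser code */
#include <stdio.h>

int main(void) {
    char buf[16] = "<script type=\"";
    char *p  = buf;
    char *pe = buf + 8;   /* pretend the buffer ends after 8 bytes */

    p += 7;               /* the parser consumes a 7-byte token...           */
    p += 3;               /* ...then a 3-byte one, stepping straight past pe */

    if (p == pe)
        printf("== check: overrun caught\n");
    else
        printf("== check: missed, p is %ld byte(s) past the end\n", (long)(p - pe));

    if (p >= pe)
        printf(">= check: overrun caught\n");

    return 0;
}

Run it and the == check misses the overrun while the >= check flags it, which is exactly the difference described in the quote above.
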
Graham-Cumming explained that “introducing cf-html subtly changed the buffering which enabled the leakage even though there were no problems in cf-html itself,” so Cloudflare switched off the features that relied on the parser, stopping the leak while they fixed the bug.

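What does a leak like this look like in practice? The sketch below is again only an illustration; the buffer layout, sizes and “secret” value are invented, not taken from Cloudflare’s code. Once the pointer has stepped past the end of its buffer, the bytes the parser copies into the generated page are simply whatever happens to sit next in memory.

/* Illustrative sketch only: layout, names and the "secret" are invented */
#include <stdio.h>

int main(void) {
    /* One contiguous block of memory: the HTML being parsed, followed by
       whatever happens to live next door on the heap. */
    char memory[] =
        "<img height=\"50px\" src=\""                        /* page ends mid-attribute  */
        "Cookie: session=SECRET-TOKEN-FROM-ANOTHER-REQUEST"; /* adjacent, unrelated data */

    char *p  = memory;
    char *pe = memory + 24;   /* the parser was only given the first 24 bytes */
    char out[128] = {0};
    size_t n = 0;

    p += 25;                  /* stand-in for the state machine overshooting pe by one */

    /* Buggy copy loop: the attribute value is never closed and p is already
       past pe, so the p != pe test never stops it; the bytes copied into the
       output come from the adjacent memory.  (The *p test only exists to keep
       this demo inside its own array.) */
    while (*p && *p != '"' && p != pe && n < sizeof(out) - 1)
        out[n++] = *p++;

    printf("emitted into the page: %s\n", out);
    return 0;
}

The output here is a made-up session cookie that happened to sit beyond the buffer, which mirrors how cookies, POST bodies and tokens from unrelated requests ended up appended to the bottom of cached pages in the real incident.
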
Not mentioned in the blog, but calculated by a commenter named Mark:

“1 in every 3,300,000 HTTP requests through Cloudflare…
…we found 770 unique URIs that had been cached and which contained leaked memory
770 * 3,300,000 = 2,541,000,000 (2.5 billion)
At a minimum. There were probably more that weren’t cached or had expired from cache.”

That is a serious amount of data that bots could have acquired.

Time to Fix – Thank goodness for security experts

The fix took “7 hours with an initial mitigation in 47 minutes.”

The bug was found and reported by Tavis Ormandy, a vulnerability researcher at Google, and understandably Cloudflare were “grateful that it was found by one of the world’s top security research teams and reported to us.”

I think this kind of event really wakes developers up to the risks of relying on third-party software such as parsers and compilers. A single character can turn a secure HTTP request into a mass leak of random memory.

Let’s hope nobody with bad intentions caught on before it was fixed. This also shines a light on search provider caching; do we really want one mistake forever fossilised for others to take advantage of?
