Posted:
Today we are publishing details of a vulnerability in the design of SSL version 3.0. This vulnerability allows the plaintext of secure connections to be calculated by a network attacker. I discovered this issue in collaboration with Thai Duong and Krzysztof Kotowicz (also Googlers).

SSL 3.0 is nearly 18 years old, but support for it remains widespread. Most importantly, nearly all browsers support it and, in order to work around bugs in HTTPS servers, browsers will retry failed connections with older protocol versions, including SSL 3.0. Because a network attacker can cause connection failures, they can trigger the use of SSL 3.0 and then exploit this issue.

Disabling SSL 3.0 support, or CBC-mode ciphers with SSL 3.0, is sufficient to mitigate this issue, but presents significant compatibility problems, even today. Therefore our recommended response is to support TLS_FALLBACK_SCSV. This is a mechanism that solves the problems caused by retrying failed connections and thus prevents attackers from inducing browsers to use SSL 3.0. It also prevents downgrades from TLS 1.2 to 1.1 or 1.0 and so may help prevent future attacks.
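
As a rough illustration of the first mitigation, a client built on Python’s ssl module could refuse to speak SSL 3.0 at all, as in the minimal sketch below. The hostname is a placeholder, and TLS_FALLBACK_SCSV support itself comes from the underlying TLS library and the application’s retry logic, not from these flags.

```python
import ssl
import urllib.request

# Sketch: build a client context that negotiates the best mutually supported
# protocol version but never falls back to SSL 2.0 or SSL 3.0, so a network
# attacker cannot force a downgrade to the vulnerable protocol.
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3

# Placeholder URL; any request made with this context refuses SSL 3.0.
urllib.request.urlopen("https://www.example.com/", context=ctx)
```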

Google Chrome and our servers have supported TLS_FALLBACK_SCSV since February and thus we have good evidence that it can be used without compatibility problems. Additionally, Google Chrome will begin testing changes today that disable the fallback to SSL 3.0. This change will break some sites and those sites will need to be updated quickly.

In the coming months, we hope to remove support for SSL 3.0 completely from our client products.

Thank you to all the people who helped review and discuss responses to this issue.

Posted by Bodo Möller, Google Security Team

[Updated Oct 15 to note that SSL 3.0 is nearly 18 years old, not nearly 15 years old.]

Posted:
It’s been a year since we launched our Patch Reward program, a novel effort designed to recognize and reward proactive contributions to the security of key open-source projects that make the Internet tick. Our goal is to provide financial incentives for improvements that go beyond merely fixing a known security bug.

We started with a modest scope and reward amounts, but have gradually expanded the program over the past few months. We’ve seen some great work so far—and to help guide future submissions, we wanted to share some of our favorites:
  • Addition of Curve25519 and several other primitives in OpenSSH to strengthen its cryptographic foundations and improve performance.
  • A set of patches to reduce the likelihood of ASLR info leaks in Linux to make certain types of memory corruption bugs more difficult to exploit.
  • And, of course, the recent attack-surface-reducing function prefix patch in bash that helped mitigate a flurry of “Shellshock”-related bugs.

We hope that this list inspires even more contributions in the year to come. Of course, before participating, be sure to read the rules page. When done, simply send your nominations to security-patches@google.com. And keep up the great work!

Posted by Michal Zalewski, Google Security Team

Posted:
Security and privacy are top priorities for Google. We’ve invested a lot in making our products secure, including strong SSL encryption by default for Search, Gmail and Drive. We’re working to further extend encryption across all our services, ensuring that your connection to Google is private.

For some time, we’ve offered network administrators the ability to require the use of SafeSearch by their users, which filters out explicit content from search results; this is especially important for schools. However, using this functionality has meant that searches were sent over an unencrypted connection to Google. Unfortunately, this has been the target of abuse by other groups looking to snoop on people’s searches, so we will be removing the unencrypted option as of early December.

Going forward, organizations can require SafeSearch on their networks while at the same time ensuring that their users’ connections to Google remain encrypted. (This is in addition to existing functionality that allows SafeSearch to be set on individual browsers and to be enabled by policy on managed devices like Chromebooks.) Network administrators can read more about how to enable this new feature here.

Posted by Brian Fitzpatrick, Engineering Director

Posted:
Cross-posted on the Chromium Blog

We work hard to keep you safe online. In Chrome, for instance, we warn users against malware and phishing and offer rewards for finding security bugs. Due in part to our collaboration with the research community, we’ve squashed more than 700 Chrome security bugs and have rewarded more than $1.25 million through our bug reward program. But as Chrome has become more secure, it’s gotten even harder to find and exploit security bugs.

This is a good problem to have! In recognition of the extra effort it takes to uncover vulnerabilities in Chrome, we’re increasing our reward levels. We’re also making some changes to be more transparent with researchers reporting a bug.

First, we’re increasing our usual reward pricing range to $500-$15,000 per bug, up from a previous published maximum of $5,000. This is accompanied by a clear breakdown of likely reward amounts by bug type. As always, we reserve the right to reward above these levels for particularly great reports. (For example, last month we awarded $30,000 for a very impressive report.)

Second, we’ll pay at the higher end of the range when researchers can provide an exploit to demonstrate a specific attack path against our users. Researchers now have an option to submit the vulnerability first and follow up with an exploit later. We believe that this is a win-win situation for security and researchers: we get to patch bugs earlier and our contributors get to lay claim to the bugs sooner, lowering the chances of submitting a duplicate report.

Third, Chrome reward recipients will be listed in the Google Hall of Fame, so you’ve got something to print out and hang on the fridge.

As a special treat, we’re going to back-pay valid submissions from July 1, 2014 at the increased reward levels we’re announcing today. Good times.

We’ve also answered some new FAQs on our rules page, including questions about our new Trusted Researcher program and a bit about our philosophy and alternative markets for zero-day bugs.

Happy bug hunting!

Posted by Tim Willis, Hacker Philanthropist, Chrome Security Team

Posted:
Cross-posted on the Open Source Blog

A recent Pew study found that 86% of people surveyed had taken steps to protect their security online. This is great—more security is always good. However, if people are indeed working to protect themselves, why are we still seeing incidents, breaches, and confusion? In many cases these problems recur because the technology that allows people to secure their communications, content and online activity is too hard to use. 

In other words, the tools for the job exist. But while many of these tools work technically, they don’t always work in ways that users expect. They introduce extra steps or are simply confusing and cumbersome. (“Is this a software bug, or am I doing something wrong?”) However elegant and intelligent the underlying technology (and much of it is truly miraculous), the results are in: if people can’t use it easily, many of them won’t. 

We believe that people shouldn’t have to make a trade-off between security and ease of use. This is why we’re happy to support Simply Secure, a new organization dedicated to improving the usability and safety of open-source tools that help people secure their online lives. 

Over the coming months, Simply Secure will be collaborating with open-source developers, designers, researchers, and others to take what’s there—groundbreaking work from efforts like Open Whisper Systems, The Guardian Project, Off-the-Record Messaging, and more—and work to make them easier to understand and use. 

We’re excited for a future where people won’t have to choose between ease and security, and where tools that allow people to secure their communications, content, and online activity are as easy as choosing to use them.

Posted by Meredith Whittaker, Open Source Research Lead and Ben Laurie, Senior Staff Security Engineer

Posted:
One of the unfortunate realities of the Internet today is a phenomenon known in security circles as “credential dumps”—the posting of lists of usernames and passwords on the web. We’re always monitoring for these dumps so we can respond quickly to protect our users. This week, we identified several lists claiming to contain Google and other Internet providers’ credentials.

We found that less than 2% of the username and password combinations might have worked, and our automated anti-hijacking systems would have blocked many of those login attempts. We’ve protected the affected accounts and have required those users to reset their passwords.

It’s important to note that in this case and in others, the leaked usernames and passwords were not the result of a breach of Google systems. Often, these credentials are obtained through a combination of other sources. 

For instance, if you reuse the same username and password across websites, and one of those websites gets hacked, your credentials could be used to log into the others. Or attackers can use malware or phishing schemes to capture login credentials.

We’re constantly working to keep your accounts secure from phishing, malware and spam. For instance, if we see unusual account activity, we’ll stop sign-in attempts from unfamiliar locations and devices. You can review this activity and confirm whether or not you actually took the action.

A few final tips: Make sure you’re using a strong password unique to Google. Update your recovery options so we can reach you by phone or email if you get locked out of your account. And consider 2-step verification, which adds an extra layer of security to your account. You can visit g.co/accountcheckup where you’ll see a list of many of the security controls at your disposal.

Posted by Borbala Benko, Elie Bursztein, Tadek Pietraszek and Mark Risher, Google Spam & Abuse Team

Posted:
Cross-posted on the Chromium Blog

The SHA-1 cryptographic hash algorithm has been known to be considerably weaker than it was designed to be since at least 2005 — 9 years ago. Collision attacks against SHA-1 are too affordable for us to consider it safe for the public web PKI. We can only expect that attacks will get cheaper.

That’s why Chrome will start the process of sunsetting SHA-1 (as used in certificate signatures for HTTPS) with Chrome 39 in November. HTTPS sites whose certificate chains use SHA-1 and are valid past 1 January 2017 will no longer appear to be fully trustworthy in Chrome’s user interface.

SHA-1's use on the Internet has been deprecated since 2011, when the CA/Browser Forum, an industry group of leading web browsers and certificate authorities (CAs) working together to establish basic security requirements for SSL certificates, published their Baseline Requirements for SSL. These Requirements recommended that all CAs transition away from SHA-1 as soon as possible, and followed similar events in other industries and sectors, such as NIST deprecating SHA-1 for government use in 2010.

We have seen this type of weakness turn into a practical attack before, with the MD5 hash algorithm. We need to ensure that by the time an attack against SHA-1 is demonstrated publicly, the web has already moved away from it. Unfortunately, this can be quite challenging. For example, when Chrome disabled MD5, a number of enterprises, schools, and small businesses were affected when their proxy software — from leading vendors — continued to use the insecure algorithms, and were left scrambling for updates. Users who used personal firewall software were also affected.

We plan to surface, in the HTTPS security indicator in Chrome, the fact that SHA-1 does not meet its design guarantee. We are taking a measured approach, gradually ratcheting down the security indicator and gradually moving the timetable up (keep in mind that we release stable versions of Chrome about 6-8 weeks after their branch point):

Chrome 39 (Branch point 26 September 2014)
Sites with end-entity (“leaf”) certificates that expire on or after 1 January 2017, and which include a SHA-1-based signature as part of the certificate chain, will be treated as “secure, but with minor errors”.

The current visual display for “secure, but with minor errors” is a lock with a yellow triangle, and is used to highlight other deprecated and insecure practices, such as passive mixed content.

Chrome 40 (Branch point 7 November 2014; Stable after holiday season)
Sites with end-entity certificates that expire between 1 June 2016 and 31 December 2016 (inclusive), and which include a SHA-1-based signature as part of the certificate chain, will be treated as “secure, but with minor errors”.

Sites with end-entity certificates that expire on or after 1 January 2017, and which include a SHA-1-based signature as part of the certificate chain, will be treated as “neutral, lacking security”.

The current visual display for “neutral, lacking security” is a blank page icon, and is used in other situations, such as HTTP.

Chrome 41 (Branch point in Q1 2015)
Sites with end-entity certificates that expire between 1 January 2016 and 31 December 2016 (inclusive), and which include a SHA-1-based signature as part of the certificate chain, will be treated as “secure, but with minor errors”. 

Sites with end-entity certificates that expire on or after 1 January 2017, and which include a SHA-1-based signature as part of the certificate chain, will be treated as “affirmatively insecure”. Subresources from such domains will be treated as “active mixed content”.

The current visual display for “affirmatively insecure” is a lock with a red X, and a red strike-through text treatment in the URL scheme.

Note: SHA-1-based signatures for trusted root certificates are not a problem because TLS clients trust them by their identity, rather than by the signature of their hash.
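
To make the schedule above concrete, here is a small sketch that maps a Chrome version and an end-entity certificate’s expiry date to the UI treatment described above. It is a hypothetical helper summarizing this post, not Chrome’s actual implementation.

```python
from datetime import date

def sha1_ui_treatment(chrome_version, expiry, chain_uses_sha1):
    """Condensed view of the rollout schedule above (hypothetical helper).

    expiry: the end-entity certificate's notAfter date.
    chain_uses_sha1: True if any non-root certificate in the chain carries a
    SHA-1-based signature.
    """
    if not chain_uses_sha1:
        return "secure"
    if chrome_version >= 41:
        if expiry >= date(2017, 1, 1):
            return "affirmatively insecure"
        if expiry >= date(2016, 1, 1):
            return "secure, but with minor errors"
    elif chrome_version == 40:
        if expiry >= date(2017, 1, 1):
            return "neutral, lacking security"
        if expiry >= date(2016, 6, 1):
            return "secure, but with minor errors"
    elif chrome_version == 39:
        if expiry >= date(2017, 1, 1):
            return "secure, but with minor errors"
    return "secure"

print(sha1_ui_treatment(40, date(2017, 3, 1), True))  # neutral, lacking security
```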

Posted by Chris Palmer, Secure Socket Lover and Ryan Sleevi, Transport Layer Securer

Posted:
Cross-posted on the Chrome Blog

You should be able to use the web safely, without fear that malware could take control of your computer, or that you could be tricked into giving up personal information in a phishing scam.

That’s why we’ve invested so much in tools that protect you online. Our Safe Browsing service protects you from malicious websites and warns you about malicious downloads in Chrome. We’re currently showing more than three million download warnings per week—and because we make this technology available for other browsers to use, we can help keep 1.1 billion people safe.

Starting next week, we’ll be expanding Safe Browsing protection against additional kinds of deceptive software: programs disguised as a helpful download that actually make unexpected changes to your computer—for instance, switching your homepage or other browser settings to ones you don’t want.

We’ll show a warning in Chrome whenever an attempt is made to trick you into downloading and installing such software. (If you still wish to proceed despite the warning, you can access it from your Downloads list.)

As always, be careful and make sure you trust the source when downloading software. Check out these tips to learn how you can stay safe on the web.

Posted by Moheeb Abu Rajab, Staff Engineer, Google Security

Posted:
Last week we announced support for non-Latin characters in Gmail—think δοκιμή and 测试 and みんな—as a first step towards more global email. We’re really excited about these new capabilities. We also want to ensure they aren’t abused by spammers or scammers trying to send misleading or harmful messages.

Scammers can exploit the fact that ဝ, ૦, and ο look nearly identical to the letter o, and by mixing and matching them, they can hoodwink unsuspecting victims.* Can you imagine the risk of clicking “ShဝppingSite” vs. “ShoppingSite” or “MyBank” vs. “MyBɑnk”?

To stay one step ahead of spammers, the Unicode community has identified suspicious combinations of letters that could be misleading, and Gmail will now begin rejecting email with such combinations. We’re using an open standard—the Unicode Consortium’s “Highly Restricted” specification—which we believe strikes a healthy balance between legitimate uses of these new domains and those likely to be abused.
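
The core idea is easy to demonstrate. The rough sketch below flags labels that mix scripts, using the first word of each character’s Unicode name as a crude stand-in for its script property. The real “Highly Restricted” check in UTS #39 uses actual script data and allows a few specific combinations, so treat this only as an illustration.

```python
import unicodedata

def char_scripts(label):
    # First word of the Unicode character name is a crude stand-in for the
    # script property: "LATIN", "GREEK", "MYANMAR", "GUJARATI", ...
    return {unicodedata.name(ch, "UNKNOWN").split()[0] for ch in label}

def mixes_scripts(label):
    # ASCII digits and hyphens are allowed alongside any script.
    return len(char_scripts(label) - {"DIGIT", "HYPHEN-MINUS"}) > 1

print(mixes_scripts("ShoppingSite"))   # False: all Latin
print(mixes_scripts("ShοppingSite"))   # True: Greek omicron mixed into Latin
```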

We’re rolling out the changes today, and hope that others across the industry will follow suit. Together, we can help ensure that international domains continue to flourish, allowing both users and businesses to have a tête-à-tête in the language of their choosing.

Posted by Mark Risher, Spam & Abuse Team

*For those playing at home, that's a Myanmar letter Wa (U+101D), a Gujarati digit zero (U+0AE6) and a Greek small letter omicron (U+03BF), followed by the ASCII letter 'o'.

Posted:
Cross-posted from the Webmaster Central Blog

Security is a top priority for Google. We invest a lot in making sure that our services use industry-leading security, like strong HTTPS encryption by default. That means that people using Search, Gmail and Drive, for example, automatically have a secure connection to Google. 

Beyond our own stuff, we’re also working to make the Internet safer more broadly. A big part of that is making sure that websites people access from Google are secure. For instance, we have created resources to help webmasters prevent and fix security breaches on their sites. 

We want to go even further. At Google I/O a few months ago, we called for “HTTPS everywhere” on the web. 

We’ve also seen more and more webmasters adopting HTTPS (also known as HTTP over TLS, or Transport Layer Security) on their websites, which is encouraging.

For these reasons, over the past few months we’ve been running tests taking into account whether sites use secure, encrypted connections as a signal in our search ranking algorithms. We’ve seen positive results, so we’re starting to use HTTPS as a ranking signal. For now it's only a very lightweight signal—affecting fewer than 1% of global queries, and carrying less weight than other signals such as high-quality content—while we give webmasters time to switch to HTTPS. But over time, we may decide to strengthen it, because we’d like to encourage all website owners to switch from HTTP to HTTPS to keep everyone safe on the web.

In the coming weeks, we’ll publish detailed best practices (we’ll add a link to it from here) to make TLS adoption easier, and to avoid common mistakes. Here are some basic tips to get started:
  • Decide the kind of certificate you need: single, multi-domain, or wildcard certificate
  • Use 2048-bit key certificates
  • Use relative URLs for resources that reside on the same secure domain
  • Use protocol-relative URLs for all other domains
  • Check out our Site move article for more guidelines on how to change your website’s address
  • Don’t block your HTTPS site from crawling using robots.txt
  • Allow indexing of your pages by search engines where possible. Avoid the noindex robots meta tag
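
If you’d like a quick, scriptable sanity check for the last two crawling-related tips, something along these lines will do. It is a rough sketch with a placeholder hostname: it ignores per-user-agent robots.txt sections and only inspects the homepage.

```python
import re
import urllib.request

SITE = "https://www.example.com"   # placeholder for your HTTPS site

# Tip: don't block your HTTPS site from crawling via robots.txt.
robots = urllib.request.urlopen(SITE + "/robots.txt").read().decode("utf-8", "replace")
if re.search(r"(?im)^Disallow:\s*/\s*$", robots):
    print("robots.txt appears to disallow crawling of the whole site")

# Tip: avoid the noindex robots meta tag.
homepage = urllib.request.urlopen(SITE + "/").read().decode("utf-8", "replace")
if re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', homepage, re.I):
    print("homepage carries a noindex robots meta tag")
```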

If your website is already serving on HTTPS, you can test its security level and configuration with the Qualys Lab tool. If you are concerned about TLS and your site’s performance, have a look at Is TLS fast yet?. And of course, if you have any questions or concerns, please feel free to post in our Webmaster Help Forums.
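
As a quick local complement to the Qualys tool, you can also print the protocol version, cipher, and certificate expiry your site actually negotiates. This is a minimal sketch; replace the placeholder hostname with your own.

```python
import socket
import ssl

def quick_tls_report(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("Negotiated protocol:", tls.version())    # e.g. "TLSv1.2"
            print("Cipher suite:", tls.cipher()[0])
            print("Certificate expires:", cert["notAfter"])

quick_tls_report("www.example.com")   # placeholder hostname
```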

We hope to see more websites using HTTPS in the future. Let’s all make the web more secure!

Posted by Zineb Ait Bahajji and Gary Illyes, Webmaster Trends Analysts