I haven’t had a substantial post for quite a long time, so it’s time for something useful and interesting. Although not Java-specific, this post might still be interesting to some of you. A brief warning before reading: this is a very lengthy post, but – believe it or not – it is just the brief summary of an even longer paper. I’ll try to keep the attack descriptions as short as possible – sorry in advance if they get too detailed. This post is only meant as a “short” intro to SSL/TLS attacks. If you want to read more, please have a look at the paper (you’ll find the link at the bottom).
Those of you who are not familiar with SSL/TLS in detail (e.g., if you are just using SSL/TLS) probably need a kickstart introduction to SSL/TLS. For a quick and dirty introduction you may want to read the Wikipedia article on TLS. For a complete, detailed understanding you will still have to struggle through the RFCs (RFC 2246 and RFC 5246 should be sufficient to clearly understand what’s going on). The attacks in each category are in chronological order:
Attacks on the Handshake Protocol
Cipher suite rollback
The cipher-suite rollback attack presented by Wagner and Schneier aims at limiting the cipher-suite list offered by the client to weaker ones or NULL ciphers. A man-in-the-middle (MitM) attacker may alter the ClientHello message sent by the initiator of the connection, strip off the undesirable cipher suites or completely replace the cipher-suite list with a weak one, and pass the manipulated message on to the desired recipient.
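To illustrate the idea, here is a minimal Python sketch (toy message representation, not real TLS structures) of a rollback MitM, together with the transcript-hash idea that later SSL/TLS versions use in their Finished messages to detect such tampering:

```python
import hashlib

STRONG = ["TLS_RSA_WITH_AES_256_CBC_SHA", "TLS_RSA_WITH_3DES_EDE_CBC_SHA"]
EXPORT = ["SSL_RSA_EXPORT_WITH_RC4_40_MD5"]

def client_hello():
    # the client offers strong suites first, weak export suites last
    return {"msg": "ClientHello", "suites": STRONG + EXPORT}

def mitm_rollback(hello):
    # the attacker strips everything except the weak export suites
    tampered = dict(hello)
    tampered["suites"] = [s for s in hello["suites"] if "EXPORT" in s]
    return tampered

def transcript_hash(messages):
    # countermeasure sketch: the Finished messages authenticate a hash
    # over all handshake messages as actually sent and received
    return hashlib.sha256(repr(messages).encode()).hexdigest()

sent = client_hello()
received = mitm_rollback(sent)
assert received["suites"] == EXPORT                            # server only sees weak suites
assert transcript_hash([sent]) != transcript_hash([received])  # tampering is detectable
```

Because client and server hash different transcripts, the handshake fails when the Finished messages are verified – which is exactly why this attack no longer works against SSL 3.0 and later.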
ChangeCipherSpec message drop
This simple but effective attack described by Wagner and Schneier was feasible in SSL 2.0 only. During the handshake phase the cryptographic primitives and algorithms are determined. For activation of the new state it is necessary for both parties to send a ChangeCipherSpec message. This message informs the other party that the following communication will be secured by the previously agreed parameters. The pending state is activated immediately after the ChangeCipherSpec message is received.
A MitM attacker could simply drop the ChangeCipherSpec messages, causing both parties to never activate the pending states.
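A toy simulation (the Endpoint class is a hypothetical stand-in, not a real SSL stack) of why the missing protection of the ChangeCipherSpec message matters:

```python
class Endpoint:
    """Toy SSL 2.0 endpoint: the pending cipher state is activated only
    once a ChangeCipherSpec message has been processed."""
    def __init__(self):
        self.encrypting = False

    def receive(self, msg):
        if msg == "ChangeCipherSpec":
            self.encrypting = True  # activate the pending cipher state

    def send_record(self, data):
        # without the activated state, application data leaves in the clear
        return ("encrypted", data) if self.encrypting else ("plaintext", data)

honest = Endpoint()
honest.receive("ChangeCipherSpec")
assert honest.send_record("secret") == ("encrypted", "secret")

victim = Endpoint()
# a MitM silently drops the ChangeCipherSpec message ...
assert victim.send_record("secret") == ("plaintext", "secret")
```

SSL 3.0 fixed this by requiring the Finished message to be the first message protected under the new state, so a dropped ChangeCipherSpec makes the handshake fail.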
Key exchange algorithm confusion
Another flaw pointed out by Wagner and Schneier is related to a feature concerning temporary key material. SSL 3.0 supports the use of temporary key material during the handshake phase (RSA public keys or DH public parameters) signed with a long-term key. A problem arises from the missing type definition of the transferred material: there is no information on the type of the encoded key material, so each party implicitly decides, based on the context, which key material is expected and decodes accordingly. This creates a surface for a type confusion attack.
Version rollback
Wagner and Schneier also described an attack where a ClientHello message of SSL 3.0 is modified to look like a ClientHello message of SSL 2.0. This would force a server to fall back to the more vulnerable SSL 2.0.
Bleichenbacher Attack on PKCS#1
In 1998 Daniel Bleichenbacher presented an attack on RSA-based SSL cipher suites. Bleichenbacher utilized the strict structure of the PKCS#1 v1.5 format and showed that it is possible to decrypt the PreMasterSecret in a reasonable amount of time. The PreMasterSecret in an RSA-based cipher suite is a random value generated by the client and sent (encrypted and PKCS#1 formatted) within the ClientKeyExchange message. An attacker eavesdropping on this (encrypted) message can decrypt it later by abusing the server as a decryption oracle.
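The core of the oracle can be sketched with toy RSA parameters (utterly insecure and purely illustrative; real PKCS#1 v1.5 requires far more structure than the two leading bytes checked here):

```python
# Toy RSA with tiny parameters (insecure, illustration only)
p, q, e = 65521, 65537, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))
k = 4                      # byte length of n
B = 256 ** (k - 2)         # PKCS#1 v1.5 boundary: conforming m lies in [2B, 3B)

def oracle(c):
    """Server-side conformance check: does the decrypted value start with
    0x00 0x02?  A distinguishable error message turns this into an oracle."""
    m = pow(c, d, n).to_bytes(k, "big")
    return m[0] == 0x00 and m[1] == 0x02

m = int.from_bytes(b"\x00\x02\x12\x34", "big")   # a conforming "PreMasterSecret"
c = pow(m, e, n)
assert oracle(c)

# RSA is multiplicative: c * s^e encrypts s*m mod n.  Each oracle answer
# therefore tells the attacker whether s*m mod n lies in [2B, 3B) -- the
# interval narrowing that Bleichenbacher iterates roughly a million times.
for s in (2, 3, 7, 1000):
    assert oracle((c * pow(s, e, n)) % n) == (2 * B <= (s * m) % n < 3 * B)
```

The full attack repeatedly picks multipliers s, queries the oracle, and shrinks the set of candidate plaintexts until only the PreMasterSecret remains.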
Timing based attacks
Brumley and Boneh outlined a timing attack on RSA-based SSL/TLS. The attack extracts the private key from a target server by observing the timing differences between sending a specially crafted ClientKeyExchange message and receiving an Alert message indicating an invalidly formatted PreMasterSecret. Even a relatively small difference in time allows conclusions to be drawn about the RSA parameters in use. Brumley and Boneh’s attack is only applicable to RSA-based cipher suites. Additionally, the attack requires the presence of a high-resolution clock on the attacker’s side.
Improvements on Bleichenbacher’s attack
The researchers Klíma, Pokorny and Rosa not only improved Bleichenbacher’s attack, but were able to defeat a countermeasure against Bleichenbacher’s attack.
Breaking the countermeasure
A countermeasure against Bleichenbacher’s attack is to generate a random PreMasterSecret on any kind of failure and continue with the handshake until the verification and decryption of the Finished message fails due to different key material (the PreMasterSecret differs on the client and server side). Additionally, implementations are encouraged to send no distinguishable error messages. This countermeasure is regarded as best practice. Moreover, because of a different countermeasure concerning version rollback attacks, the encrypted data includes not only the PreMasterSecret, but also the major and minor version number of the negotiated SSL/TLS version.
Implementations should check the sent and negotiated protocol versions for equality. But in case of a version mismatch, some implementations again returned distinguishable error messages to the sender (e.g., decode_error). It is obvious that an attacker can build a new (bad version) oracle from this. With this new decryption oracle Klíma, Pokorny and Rosa were able to mount Bleichenbacher’s attack despite the recommended countermeasures being in place.
Improving Bleichenbacher’s attack on PKCS#1
In addition to the resurrection of Bleichenbacher’s attack, the authors improved the algorithm for better performance. These optimizations included a redefinition of the interval boundaries for possible PKCS-conforming plaintexts.
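The recommended countermeasure can be sketched roughly as follows (hypothetical function and parameter names; a real implementation additionally has to make all failure paths constant-time):

```python
import os

def process_premaster(decrypted, client_hello_version):
    """Sketch of the countermeasure: every failure path silently
    substitutes a random PreMasterSecret, so the handshake only fails
    later at the Finished message, leaving the attacker no usable oracle."""
    fallback = os.urandom(48)
    if decrypted is None or len(decrypted) != 48:
        return fallback                     # bad PKCS#1 structure: no alert
    if decrypted[:2] != bytes(client_hello_version):
        return fallback                     # version mismatch: same silent path
    return decrypted

good = bytes([3, 1]) + os.urandom(46)
assert process_premaster(good, (3, 1)) == good
assert len(process_premaster(None, (3, 1))) == 48   # no distinguishable error
assert len(process_premaster(good, (3, 2))) == 48   # bad version: no error either
```

The flaw Klíma, Pokorny and Rosa exploited was precisely that some implementations raised a distinguishable alert in the version-mismatch branch instead of taking the silent fallback path.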
ECC based timing attacks
Brumley and Tuveri presented an attack on ECDSA-based TLS connections. The problem arose from the implementation of an algorithm for speeding up scalar multiplication, an operation that ECC heavily relies on (e.g., for point multiplication). This algorithm could be misused as a timing side channel revealing information about the multiplier in use.
More improvements on Bleichenbacher’s attack
Bardou, Focardi, Kawamoto, Simionato, Steel and Tsay significantly improved Bleichenbacher’s attack far beyond the hitherto known improvements. They fine-tuned the algorithm to perform faster and with fewer oracle queries, and combined their results with the previous improvements to significantly speed up Bleichenbacher’s algorithm.
ECC-based key exchange algorithm confusion attack
Mavrogiannopoulos, Vercauteren, Velichkov and Preneel showed that the key exchange algorithm confusion attack can also be applied to ECDH. According to the authors, their attack is not feasible yet due to computational limitations. But, as already seen with other theory-only attacks, it may just be a matter of time until the attack is enhanced to be practical or the available computational resources increase.
Attacks on the Record and Application Data Protocols
MAC does not cover padding length
Wagner and Schneier pointed out that SSL 2.0 contained a major weakness concerning the Message Authentication Code (MAC). The MAC applied by SSL 2.0 only covered the data and padding, but left the padding-length field unprotected. This may lead to a compromise of message integrity.
Weaknesses through CBC usage
Serge Vaudenay introduced a new attack class, padding attacks, and forced the security community to rethink padding usage in encryption schemes. The attacks described by Vaudenay rely on the fact that block encryption schemes operate on blocks of fixed length, but in practice most plaintexts have to be padded to fit the requested length (a multiple of the block length). After padding, the input data is passed to the encryption function, where each plaintext block is processed and chained according to the Cipher Block Chaining (CBC) mode. CBC mode chains consecutive blocks, so that each block is influenced by the output of its predecessor. This allows an attacker to directly influence the decryption process: altering one ciphertext block changes the decryption of the subsequent block in a controlled way.
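A self-contained sketch of such a padding oracle attack, using a deliberately trivial stand-in for the block cipher (cryptographically worthless, but the CBC chaining and the oracle algebra are identical with a real cipher):

```python
import hashlib

BLOCK = 16
KEY = hashlib.sha256(b"server key").digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy "block cipher": XOR with a fixed key block (illustration only).
def E(b): return xor(b, KEY)
def D(b): return xor(b, KEY)

def pad(m):
    nb = BLOCK - len(m) % BLOCK
    return m + bytes([nb]) * nb

def cbc_encrypt(iv, m):
    out, prev = b"", iv
    for i in range(0, len(m), BLOCK):
        prev = E(xor(m[i:i + BLOCK], prev))
        out += prev
    return out

def padding_oracle(iv, ct):
    """Server side: reveals only whether the padding was valid."""
    out, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        out += xor(D(ct[i:i + BLOCK]), prev)
        prev = ct[i:i + BLOCK]
    nb = out[-1]
    return 1 <= nb <= BLOCK and out.endswith(bytes([nb]) * nb)

def recover_last_byte(prev, block):
    """Attacker: recover the last plaintext byte of `block` by tweaking
    the last byte of the preceding ciphertext block `prev`."""
    for guess in range(256):
        delta = guess ^ 0x01                  # forces last byte to 0x01 iff correct
        mod = prev[:-1] + bytes([prev[-1] ^ delta])
        if padding_oracle(mod, block):
            # rule out accidental longer valid paddings (e.g. 0x02 0x02)
            mod2 = mod[:-2] + bytes([mod[-2] ^ 0xFF, mod[-1]])
            if padding_oracle(mod2, block):
                return guess

iv = bytes(BLOCK)
ct = cbc_encrypt(iv, pad(b"sixteen byte msg"))   # padding adds a full 0x10 block
assert recover_last_byte(ct[:BLOCK], ct[BLOCK:]) == 0x10
```

Repeating the same trick byte by byte (and block by block) decrypts the whole message – which is why implementations must not reveal whether a padding check failed.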
Information leakage by the use of compression
Kelsey described an information leak enabled by a side channel based on compression. This stands in absolute contrast to what the author calls “folk wisdom”: that applying compression leads to a more secure system. Kelsey showed that compression adds little security or, in the worst case, even reduces it, because compression reveals information about the plaintext. Cryptosystems aim at encrypting plaintexts in a way that the resulting ciphertext reveals little to no information about the plaintext. Kelsey observed that the use of compression opens a new side channel which can be used to gain hints about the plaintext. For this he correlates the output bytes of the compression with the input bytes and makes use of the fact that compression algorithms, when applied to the plaintext, reduce the size of the input data.
Interception of protected traffic
Canvel, Hiltgen, Vaudenay and Vuagnoux extended the weaknesses presented by Vaudenay to decrypt a password from an SSL/TLS-secured IMAP session. Canvel et al. suggested three additional attack types based on Vaudenay’s observations:
Timing attacks
The authors observed that a successful MAC verification needs significantly more time than a premature abort caused by invalid padding. This relies on the fact that performing a padding check is less complex than performing the cryptographic operations necessary to verify a MAC.
Multi-session attacks
The basic idea of this attack type requires a critical plaintext (such as a password) to be present in each TLS session and the corresponding ciphertext to be known to the attacker. Following security best practice, the corresponding ciphertexts look different in every session, since the key material for MAC and encryption changes with each session. Therefore, it is advantageous to check whether a given ciphertext ends with a specific byte sequence (which should be identical in all sessions) instead of trying to guess the whole plaintext.
Dictionary attacks
Building on the previous attack type, which checks for a specific byte sequence of the plaintext, this attack checks for byte sequences included in a dictionary.
Gregory Bard and Bodo Möller independently observed an interesting detail concerning the Initialization Vectors (IVs) of SSL messages. Every en- and decryption in CBC mode depends on an IV, and every new plaintext (consisting of multiple blocks) should get its own fresh and independent IV. The problem with SSL is that, according to the SSL specification, only the IV of the first plaintext is chosen randomly; all subsequent IVs are simply the last block of the previously encrypted plaintext. This is in absolute contrast to cryptographic best practice. Bard observed that an attacker who is in possession of an eavesdropped ciphertext and wants to verify a guess that a particular plaintext block has a specific value can easily check that guess.
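The guess check can be demonstrated with a toy CBC implementation (the hash-based stand-in for the block cipher is an assumption of this sketch; only the encryption direction is needed for the check):

```python
import hashlib

BLOCK = 16
KEY = hashlib.sha256(b"session key").digest()

def E(block):
    # toy stand-in for a block cipher (encryption direction only)
    return hashlib.sha256(KEY + block).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(iv, blocks):
    out, prev = [], iv
    for p in blocks:
        prev = E(xor(p, prev))
        out.append(prev)
    return out

# Record 1: the victim sends a secret block; the attacker sees iv and c1.
iv = bytes(range(BLOCK))
secret = b"top secret block"
c1, = cbc_encrypt(iv, [secret])

# Per the SSL spec, the next record's IV is the last ciphertext block (c1).
# To test a guess g, the attacker injects p' = iv XOR c1 XOR g, so that
# E(p' XOR c1) = E(iv XOR g) -- which equals c1 = E(secret XOR iv)
# exactly when g == secret.
def check_guess(g):
    injected = xor(xor(iv, c1), g)
    c_new, = cbc_encrypt(c1, [injected])
    return c_new == c1

assert check_guess(b"top secret block")
assert not check_guess(b"wrong guess 1234")
```

The check only confirms or refutes one full-block guess, but combined with attacker-controlled block boundaries this is exactly what later made B.E.A.S.T. practical.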
More chosen-plaintext attacks
Bard revisited his attack from above in 2006. Overall, Bard addressed the same topics as before, but provided a sketch of how to exploit the problem: a scenario in which an attacker uses a Java applet, executed on the victim’s machine, to mount the attack described before.
George Danezis highlighted in an unpublished manuscript how an attacker may exploit the fact that a minimal amount of information remains unencrypted, despite the connection being TLS protected, to analyze and track traffic. In particular, Danezis used the unencrypted fields of the TLS Record Header, which are part of every TLS message, for analysis. Danezis identified several information leaks introduced by these unencrypted fields:
- Requests to different URLs may differ in length, which results in differently sized TLS records.
- Responses to requests may also differ in size, which again yields differently sized TLS records.
- Differently structured documents may lead to predictable behavior of the client’s application. For example, a browser normally fetches all images of a website, causing different requests and different responses.
- Content on public sites is visible to everyone, so an attacker may link observations (e.g., sizes) to specific site content.
Moreover, an attacker could also actively influence the victim’s behavior and gain information about what she is doing (without knowledge of the encrypted content) by providing specially crafted documents with particular and distinguishable content lengths, structures, URLs or external resources.
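A hypothetical sketch of such length-based fingerprinting (the page names, sizes and fuzz margin are made up for illustration):

```python
# Fingerprint table: response sizes of public pages, precomputed by an
# attacker who can simply fetch the public site herself.
page_sizes = {"/index.html": 18342, "/login": 2210, "/admin": 4871}

def identify(observed_record_len, fuzz=32):
    """Match an observed TLS record length against known response sizes;
    the fuzz accounts for MAC and padding overhead."""
    return [p for p, s in page_sizes.items() if abs(s - observed_record_len) <= fuzz]

assert identify(2231) == ["/login"]   # 2210 plus some MAC/padding overhead
```

Despite its simplicity, this is the essence of the traffic-analysis leaks Danezis described: the record length is visible to everyone on the path.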
IV chaining vulnerability
Rizzo and Duong presented a tool called B.E.A.S.T. that is able to decrypt HTTPS traffic (e.g., cookies). The authors implemented and extended ideas of Bard, Möller and Dai. The combination of CBC mode applied to block ciphers and predictable IVs enabled an attacker to guess plaintext blocks and verify these guesses.
Short message collisions and busting the length hiding feature
Paterson, Ristenpart and Shrimpton outlined an attack related to the MAC-then-PAD-then-Encrypt scheme in combination with short messages. In particular, their attack is applicable if all parts of a message (message, padding, MAC) fit into a single block of the cipher. Under special preconditions the authors described the creation of multiple ciphertexts that lead to the same plaintext message.
Distinguishing encrypted messages
Paterson et al. extended the attack described above, enabling an attacker to distinguish between two messages. The authors sketched how to distinguish whether an encrypted message contains YES or NO. The attack is based on clever modification of the eavesdropped ciphertext so that it either passes processing or leads to an error message. Based on the outcome (error/no error) it is possible to determine which content was sent.
Attacking DTLS
AlFardan and Paterson applied Vaudenay’s attack to DTLS. DTLS is a slightly modified version of regular TLS adjusted to unreliable transport protocols such as UDP. These adjustments are advantageous and disadvantageous at the same time: Vaudenay’s attack may work on DTLS since bad messages do not cause session invalidation, but due to the lack of error messages the oracles introduced by Vaudenay cannot be used without adjustment – the attacker gets no feedback on whether the modified messages contained valid padding or not. The authors adjusted Vaudenay’s algorithms by using a timing oracle arising from different processing branches with unequal time consumption.
Compression based attack
In September 2012 Juliano Rizzo and Thai Duong presented the C.R.I.M.E. attack tool. C.R.I.M.E. targets HTTPS and is able to decrypt traffic, enabling cookie stealing and session take-over. It exploits a known vulnerability caused by the use of message compression, discovered by Kelsey in 2002.
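The underlying length side channel is easy to reproduce with zlib (the cookie value and request layout here are hypothetical):

```python
import zlib

SECRET_COOKIE = "sessionid=d8f3k1"           # unknown to the attacker

def compressed_len(injected):
    # attacker-controlled data and the secret cookie share one
    # compression context, as with SPDY/TLS-level compression
    request = ("GET /search?q=" + injected + " HTTP/1.1\r\n"
               "Cookie: " + SECRET_COOKIE + "\r\n")
    return len(zlib.compress(request.encode()))

# A guess matching the secret is absorbed by a back-reference and
# compresses noticeably smaller; C.R.I.M.E. refines this byte by byte
# to recover the cookie one character at a time.
assert compressed_len("sessionid=d8f3k1") < compressed_len("sessionid=qwerty")
```

The attacker never sees any plaintext – observing the ciphertext length of each probe is enough, which is why TLS-level compression was subsequently disabled almost everywhere.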
Attacks on the PKI
Lenstra, Wang and de Weger described in 2005 how an attacker can create two valid certificates with equal hash values by computing MD5 collisions. With colliding hash values it is possible to impersonate clients or servers – attacks of this kind enable MitM attacks that are very hard to detect. The practicality of the attack was demonstrated in 2008 by Sotirov et al., who were able – through clever interaction between certificate requests to a legitimate CA and a massively parallel search for MD5 collisions – to create a valid CA certificate for TLS. With the help of this certificate they could have issued TLS server certificates for any domain name, which would have been accepted by any user agent.
X.509 constraint checking weaknesses
In 2008, Moxie Marlinspike published a vulnerability report concerning the certificate basic-constraint validation of Microsoft’s Internet Explorer. Internet Explorer did not check whether certificates were allowed to sign sub-certificates (to be more technical, whether the certificate carries the CA:TRUE flag). Thus any valid certificate, signed by a valid CA, was allowed to issue sub-certificates for any domain.
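The missing check boils down to something like the following sketch (toy dict-based certificates; real validation also involves signatures, expiry, name constraints and more):

```python
def chain_valid(chain):
    """chain[0] is the leaf; chain[i+1] issued chain[i].  Every issuing
    certificate must carry the CA:TRUE basic constraint."""
    return all(issuer.get("ca") for issuer in chain[1:])

root = {"cn": "Trusted Root", "ca": True}
leaf = {"cn": "www.example.com", "ca": False}       # an ordinary server cert
forged = {"cn": "www.bank.example", "ca": False}    # issued by the leaf!

assert chain_valid([leaf, root])
# the reported bug: accepting a chain in which an ordinary end-entity
# certificate acts as an issuer
assert not chain_valid([forged, leaf, root])
```

Skipping this one check means that anyone holding any valid certificate can mint certificates for arbitrary domains.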
Attacking Certificate Issuer Application Logic
Attacks on the PKI by exploiting implementation bugs on the CA side were demonstrated by Moxie Marlinspike, who was able to trick the CAs’ issuance logic by using specially crafted domain strings. Marlinspike gained valid certificates for arbitrary domains, issued by trusted CAs.
Attacking the PKI directly
Marlinspike also described an attack that aims at interfering with the infrastructure used to revoke certificates. By the use of the Online Certificate Status Protocol (OCSP) a client application can check the revocation status of a certificate. OCSP responds to a query with a responseStatus. The response structure contains a major design flaw: not all fields are authenticated by a digital signature. An attacker acting as MitM could respond to every query with tryLater. Due to the lack of a signature, the client has no chance to detect the spoofed response. Thereby, a victim is unable to query the revocation status of a certificate.
Wildcard certificate validation weakness
Moore and Ward published a security advisory concerning wildcard (*) usage when IP addresses are used as the CN in X.509 certificates. According to RFC 2818, wildcards are not allowed for IP addresses. The authors found multiple browsers treating certificates with IP addresses including wildcard characters as CN as valid and matching.
The authors could fool browsers into accepting certificates with CN="*.168.3.48". Such a certificate was treated as valid for any server address with a ".168.3.48" postfix. (Update: thanks to Richard Moore.)
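A sketch of the correct matching behavior (simplified; real hostname matching involves additional rules beyond what is shown here):

```python
import ipaddress

def cn_matches(cn, host):
    """Sketch of CN matching per RFC 2818: wildcards are only valid for
    DNS names, never for IP addresses."""
    try:
        ipaddress.ip_address(host)
    except ValueError:
        pass                                # a DNS name: wildcards allowed
    else:
        return cn == host                   # an IP address: exact match only
    if cn.startswith("*."):
        return host.split(".", 1)[1:] == [cn[2:]]   # leftmost label only
    return cn == host

assert not cn_matches("*.168.3.48", "192.168.3.48")   # the advisory's case
assert cn_matches("*.example.com", "www.example.com")
assert not cn_matches("*.example.com", "a.b.example.com")
```

The vulnerable browsers effectively skipped the IP-address branch and applied the wildcard logic to everything.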
Owning a CA
On March 15th, 2011 a major Certification Authority (CA) was successfully compromised. An attacker used a reseller account to issue 9 certificates for popular domains.
Owning another CA
Soon after the attack above, a Dutch Certification Authority was completely compromised. In contrast to the previous incident, the attacker was able to gain control over the CA’s entire infrastructure.
Attacking certificate validation
Georgiev et al. uncovered that widespread, commonly used SSL/TLS libraries suffer from vulnerable certificate validation implementations. The authors revealed weaknesses in the source code of major SSL/TLS libraries and in applications built on top of these products, examined the root causes of the bugs, and were able to exploit most of the vulnerabilities. As major causes for these problems they identified bad and misleading API specifications, a lack of interest in security concerns (even by banking applications!) and the absence of essential validation routines.
Prediction of random numbers
In January 1996, Goldberg and Wagner published an article on the quality of the random numbers used for SSL connections by the Netscape browser. The authors gained access to the application’s source code by decompiling it and identified striking weaknesses in the algorithm responsible for random number generation.
In 2008 Luciano Bello observed during code review that the PRNG of Debian-specific OpenSSL was predictable starting from version 0.9.8c-1, Sep 17 2006 until 0.9.8c-4, May 13 2008, due to an implementation bug. A Debian-specific patch removed two very important lines in the libssl source code responsible for providing adequate entropy.
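The consequence can be illustrated with a stand-in PRNG (random.Random here merely models the crippled generator; in the real bug the process ID was essentially the only remaining entropy source, limiting the key space to at most 32768 values):

```python
import random

def debian_keygen(pid):
    """Stand-in for the crippled PRNG: the process ID is the only seed."""
    rng = random.Random(pid)
    return rng.getrandbits(128)

victim_key = debian_keygen(4242)   # server key; the PID is unknown to the attacker

# the attacker simply enumerates every possible PID
recovered = next(pid for pid in range(32768) if debian_keygen(pid) == victim_key)
assert debian_keygen(recovered) == victim_key
```

This is exactly why all keys generated on affected Debian systems in that period had to be considered compromised and regenerated.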
Exception based DoS
Zhao et al. provided an attack on the TLS handshake which leads to an immediate connection shutdown and can thus be used for a Denial of Service (DoS) attack. The authors exploited two previously discussed weaknesses to mount successful attacks.
- The first attack targets the Alert protocol of TLS and makes use of the fact that, because the cryptographic primitives have not yet been completely negotiated during the handshake phase, all Alert messages remain strictly unauthenticated and thus spoofable. This enables an obvious but effective attack: spoofing Fatal Alert messages, which cause immediate connection shutdowns.
- The second attack simply confuses a communication partner by sending misleading or replayed messages, or by responding with messages that do not match the expected handshake flow.
Abusing the renegotiation feature
Ray and Dispensa discovered a serious flaw induced by the renegotiation feature of TLS. The flaw enables an attacker to inject data into a running connection without destroying the session. A server would accept the data, believing it originated from the client. This could lead to abuse of established sessions – e.g., an attacker could impersonate a legitimate victim currently logged in to a web application.
Stripping the secure channel
In February 2009, Moxie Marlinspike released the sslstrip tool, which disables SSL/TLS at a higher layer. As a precondition, the attacker has to act as MitM. To disable the security layer, the tool sends HTTP 301 (permanent redirect) responses and replaces every occurrence of https:// with http:// (notice the missing s). This causes the client to follow the redirect and communicate unencrypted and unauthenticated (when the stripping succeeds and the client does not notice that she is being fooled). Finally, the attacker opens a fresh session to the requested server and passes through or alters any client and server data.
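The rewriting step is almost trivially simple, which is part of why the attack is so effective (hypothetical helper names, greatly simplified compared to the real tool):

```python
def strip_tls(document):
    # the core rewriting step: downgrade every https reference
    return document.replace("https://", "http://")

def fake_redirect(requested_url):
    """The MitM answers the victim's first request with a 301 pointing to
    the plain-http variant, then proxies traffic to the real server."""
    return ("HTTP/1.1 301 Moved Permanently\r\n"
            "Location: " + strip_tls(requested_url) + "\r\n\r\n")

assert strip_tls('<a href="https://bank.example/login">') == '<a href="http://bank.example/login">'
assert "https://" not in fake_redirect("https://bank.example/")
```

Since the victim's browser never attempts a TLS connection at all, no certificate warning is ever triggered – countermeasures such as HSTS were introduced precisely against this class of attack.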
Computational DoS
In 2011, the German hacker group The Hackers Choice released a tool called THC-SSL-DoS, which creates huge load on servers by overwhelming the target with SSL/TLS handshake requests. The load is boosted by establishing new connections or by using renegotiation. Assuming that the majority of the computation during a handshake is done by the server, the attack creates more system load on the server than on the attacker’s own device – leading to a DoS. The server is forced to continuously recompute random numbers and keys.
Hopefully you’re still on board after this exhaustingly long post. If you are aware of additional attacks on SSL/TLS, please let me know – I will add them!
This post is based on a recent paper written together with Jörg Schwenk. All of you who are interested in more details, attack figures, countermeasures and literature references are invited to read the full paper, which also summarizes the “lessons learned” from each attack. As promised, here is the link to the paper on eprint: Lessons Learned From Previous SSL/TLS Attacks – A Brief Chronology Of Attacks And Weaknesses