Simple Security Rules

Wow!
Citi really messed up their online security. They included account information as part of the URL. You could alter the URL and access someone else’s account information. Yikes-o-rama, that’s a bad design.

I’ve seen a fair number of bad security designs in my time, and I’ve come up with a list of simple security rules:

  1. Security by obscurity never works. Assume the attacker has your source code. If you are doing some super cool obscuring of the data (like storing the account number in the URL in some obscured manner like the Citi folks apparently did), someone can and will break your algorithm and breach your system.
  2. If any part of a system can read data, all parts of the system can. For example, if you’re writing an iOS app and are encrypting the data in the local database, the fact that you can decrypt it to use it in your app means that someone else can also decrypt it.
  3. A corollary to the above is that once data escapes your server, the bad guys can get the data, so let as little of the data out as possible.
  4. Also, never trust the data on the wire. Any HTTP/HTTPS request can be forged and tampered with. This means that if you send a primary key in a hidden field as part of an HTML form, ensure that when the form is submitted, the primary key is the same one you originally sent. But you say, “How can I verify it’s the one I sent… I wouldn’t have sent the primary key if I could keep the state on the server side and somehow correlate the form submission to the DB record that the form was submitted against.” Yeah, well Lift and Seaside and WebObjects and others have solved that problem (a minimal sketch of the idea follows this list).
  5. Know your types as you’re parsing the request and composing a response. Use an ORM that correctly escapes String parameters. Never “shell out”. Rails and Django have markers on Strings that indicate whether they are “trusted” or require HTML encoding. This addresses substantially all of the cross-site scripting issues. Lift carries the DOM around as part of the page composition, so it always knows what should be HTML encoded. Any framework that composes a response simply by writing Strings to a response is de facto insecure. (See the escaping sketch after this list.)
  6. Use random numbers for everything. SSL uses random numbers for keys. Lift uses random numbers for field names (except in test mode, where stable field names are necessary for automated testing). Use session-duration random numbers as opaque identifiers so that data doesn’t leak from the server to the client (this is sketched after the list). Where you can’t use random numbers, encrypt any identifiers with a session-specific encryption key and make sure you have some salt in the thing being encrypted so the key cannot be rainbow-tabled.
  7. Test. Security testing is just testing and should be done at the unit and integration level. Security tests should be a normal part of your unit test suite as well as any integration testing that you do. Your QA people should understand common vulnerabilities (e.g., XSS) just like they understand common programming errors (e.g., NPE) and should test for them. (A test sketch follows this list.)
  8. Make the OWASP Top 10 a normal part of your check-in and code review process. This means that every material feature should have a list of the OWASP Top 10 associated with it, along with a one-sentence description of the exposure to each vulnerability and anything done in the code to defend against it. Once developers do this regularly, it’ll take 5 minutes to fill out the list, but more importantly, it will create a culture of awareness.
  9. Never assume that your systems are secure. Always assume there are vulnerabilities… just like it’s good to assume there will always be bugs in software. It’s our job to identify the vulnerabilities, assess the risks of penetration, and prioritize remediation. Also, the only way to keep data out of the hands of the bad guys is to toss the hard drives containing the data into an active volcano (entropy is your friend). If you can access the data, the bad guys can. The only issue is how much effort they are willing to put into getting to the data. If it’s not worth their time or there are easier targets, those easier targets will be attacked instead.
  10. Think of security as a series of obstacles rather than a single insurmountable wall. In order for the bad guys to get to the pot of gold, they have to evade many, many obstacles. This makes it hard for them and increases the chances you’ll observe them trying to overcome one.
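
Rules 4 and 6 describe the same server-side pattern: keep the real database keys on the server and hand the client only session-scoped random tokens. Here is a minimal sketch in Scala, using nothing beyond the JDK; `OpaqueIdMap` and its method names are hypothetical, chosen for illustration, and are not Lift’s actual mechanism (Lift does this for you).

```scala
import java.security.SecureRandom
import java.util.Base64
import scala.collection.mutable

// Hypothetical sketch (not Lift's actual API): map real database primary keys
// to per-session random tokens so raw identifiers never reach the client.
class OpaqueIdMap {
  private val random    = new SecureRandom()
  private val tokenToId = mutable.Map.empty[String, Long]
  private val idToToken = mutable.Map.empty[Long, String]

  // Issue (or reuse) a random, URL-safe token for a database primary key.
  def tokenFor(id: Long): String =
    idToToken.getOrElseUpdate(id, {
      val bytes = new Array[Byte](16)
      random.nextBytes(bytes)
      val token = Base64.getUrlEncoder.withoutPadding.encodeToString(bytes)
      tokenToId(token) = id
      token
    })

  // Resolve a token back to the primary key; None means the client sent a
  // value we never issued (tampering, or a stale/foreign session).
  def idFor(token: String): Option[Long] = tokenToId.get(token)
}
```

Keep one such map per session, render `tokenFor(id)` into URLs and hidden fields, and treat a failed `idFor(token)` lookup as a hostile request.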
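
For rule 5, the two habits that matter most are parameterized queries and context-aware output encoding. The sketch below uses only the JDK plus a hand-rolled escaper for illustration; in practice your ORM and your template engine (Lift’s DOM, Rails/Django safe strings) should be doing this work for you.

```scala
import java.sql.{Connection, PreparedStatement, ResultSet}

object SafeOutput {
  // Minimal HTML escaping, shown only to illustrate the idea; a real
  // framework tracks what is already encoded and does this automatically.
  def escapeHtml(s: String): String =
    s.flatMap {
      case '<'  => "&lt;"
      case '>'  => "&gt;"
      case '&'  => "&amp;"
      case '"'  => "&quot;"
      case '\'' => "&#39;"
      case c    => c.toString
    }

  // Parameterized query: the driver escapes `email`, so a value like
  // "x' OR '1'='1" stays data instead of becoming SQL.
  def findUser(conn: Connection, email: String): Option[String] = {
    val stmt: PreparedStatement =
      conn.prepareStatement("SELECT name FROM users WHERE email = ?")
    stmt.setString(1, email)
    val rs: ResultSet = stmt.executeQuery()
    if (rs.next()) Some(rs.getString("name")) else None
  }
}
```

The concatenated-string alternative (gluing the raw email into the SQL or the HTML) is exactly the “writing Strings to a response” habit the rule calls de facto insecure.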
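
And for rule 7, a security test is just a test. This sketch uses plain assertions (swap in whatever test framework you already run) to check that the escaping helper above neutralizes a classic XSS payload:

```scala
// A sketch of a security-focused unit test exercising the escaping helper
// from the previous sketch; plain assertions, no test framework assumed.
object XssEscapingTest extends App {
  val payload = """<script>alert("pwned")</script>"""
  val escaped = SafeOutput.escapeHtml(payload)

  // The escaped output must not contain anything the browser would execute.
  assert(!escaped.contains("<script>"), "script tag survived escaping")
  assert(escaped == "&lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;")

  println("XSS escaping checks passed")
}
```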

I’ll wind up with some thoughts on the whole RSA/Lockheed break-in. This is a perfect example of a pot of gold being very valuable (control information for drones, aircraft design plans, etc.) and an attack that was long-ranging and very methodical. The attackers probed the weaknesses of individuals within RSA (could this or this have been part of the probe?) and sent targeted documents containing zero-day exploits to a small number of vulnerable individuals. Once the attackers gained control of those individuals’ machines, they were able to probe the network and escalate privileges in such a way that they actually accessed the RSA key database.

This was the time RSA should have voided all the RSA keys and re-issued new ones. Failure to do that should be a company-ending event for RSA… but I editorialize.

I’m just postulating here, but I’m guessing that the attackers used rogue certs to do man-in-the-middle attacks to get Lockheed RSA key/username/password combinations. Because the CA issuing the certs was trusted and there are enough rogue CAs floating around, it’s no longer out of the realm of possibility to mount man-in-the-middle attacks on SSL (re-route traffic and use a rogue cert).
If you know the current value of an RSA key, you can narrow down the device that is associated with an account (and if you do the same attack 3 or 4 times, you can figure out exactly which device it is), so that, with the device’s seed number, you can determine the current-time value of the correct RSA key for a given account.

Next, you waltz into the VPN or whatever it is that’s being secured and do whatever trivial privilege escalation you need to do to get to the right file servers.

Anyway, for the kind of systems most of us are building, sticking to my security outline above should yield good results, but if the target is valuable enough and the attacker is skilled and persistent enough, they can break almost any system.

Reference: Simple Security Rules from our JCG partner David Pollak at the Good Stuff blog.
