Thursday, September 13, 2012

How to Mitigate CRIME attack in Apache

Perhaps you've seen the new CRIME attack on Compression in TLS connections.

The exploit uses a side channel: a piece of JavaScript running on the victim's machine repeatedly sends requests to a server while the attacker watches the encrypted packets on the wire (man-in-the-middle). By comparing the sizes of the compressed, encrypted requests, the attacker can eventually learn the contents of an HTTP cookie. Proofs of concept have been shown against GitHub, Dropbox and Stripe.

If you're running Apache you'll probably want to mitigate the attack, so here's how.

Mitigation in Apache 2.4.3+

Just add the line 'SSLCompression off' to your SSL configuration and restart Apache.
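
For example, inside the SSL virtual host (the vhost wrapper here is just illustrative context; the only line that matters is the last one):

    <VirtualHost *:443>
        SSLEngine on
        SSLCompression off
    </VirtualHost>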

Mitigation in Apache 2.2 and Apache <= 2.4.2

Unfortunately, older versions of Apache don't have an option to disable SSL compression (a backport is still in the works as of this writing). You have three options: one is guaranteed to work, the other two may not.

The first option is to recompile OpenSSL without zlib support; this will prevent the DEFLATE compression method from being used by the SSL module. This is a pain, but is guaranteed. You should still be able to use mod_deflate to compress HTML, however.
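
A rough sketch of the OpenSSL side (the flags are for the 1.0.x-era ./config; depending on how mod_ssl is linked you may also need to rebuild it against the new libraries):

    # build OpenSSL with zlib support removed
    ./config shared no-zlib no-zlib-dynamic
    make && make test
    make install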

The second option is to patch Apache 2.2.22 (and possibly earlier versions) to add an SSLCompression option like the one in Apache 2.4.3. I just created this patch based on the 2.4.3 change, but have not tested it. The code is nearly identical, however, so it should work. The patch is here.

The third option may not actually prevent the attack, but it's an idea I had. The configuration looks like this:

    SSLOptions +StrictRequire
    <Location />
        SSLRequire (  %{SSL_COMPRESS_METHOD} ne "DEFLATE" )
    </Location>

This will force any request inside "<Location />" to fail if the connection used SSL compression. The browser may still send the cookie over an SSL-compressed connection, however, so a side-channel attack vector similar to the original exploit may remain. Use with caution and verify for yourself whether the exploit is still viable.


Resources:


https://httpd.apache.org/docs/2.4/mod/mod_ssl.html#sslcompression
https://issues.apache.org/bugzilla/show_bug.cgi?id=53219

Monday, August 20, 2012

How Not to Install Software

Before I had written a single C program I knew what ld and gcc and autoconf were. I knew the difference between a library and an application, and quickly learned what 'core' software was. All of this I learned because I wanted to use Linux.



When I was a Windows user, there wasn't much I needed to know to install a new program. First you'd check the system requirements of the program ('Windows 98 SE? Got it!'). Next you'd download the program. The following week, when the download was finished, you'd start the installer. Then go make a sandwich. When you came back, you'd get to configure the software using a handy wizard. Finally it would be done, and you'd click on an icon and run your program.

Linux was a different animal. If you wanted a piece of software that didn't come with your distribution, you had to check if someone had made a package. Since they usually didn't, it meant downloading the program and compiling it. After the first couple attempts you'd figure out what the cryptic commands were ('./configure', 'make', 'make install'). Ah, fresh brewed software!

But wait - what's this error? Searching the web you might find some other person who'd had a similar bug, and they needed some extra software installed. Reading more you finally figure out what you need to get, and try to download and install that. Rinse and repeat for each new error.

Once you had struggled for a day or two with getting this thing installed you finally get to use it. It's great. You delete the source files and forget about all the stuff you installed, until one day you need to install something else.

It wants a different version of a library. OK, i'll just download that version, compile it, and try to install...

Shit, it's not working. Both libraries are installed and now other programs aren't working! I can't even use Gnome now! What's going on?! Finally in desperation I delete everything I installed in /usr/local/* and re-install all the packages from my distribution. The old stuff works again, and none of my custom software exists anymore. I'm back to square one.



From years of fighting software and many, many mistakes, I learned how software likes to be handled. Applications like to pretend they live in their own universe. Libraries like to pretend they are the only version that will ever exist. And operating systems like to pretend 3rd party software does not exist.

Distribution-provided software is very sensitive. If you want to keep using the distribution's tools, you have to keep all their software pure and pristine. The royal family of software, as it were. If there is an official package, you must use it, and never overwrite it with non-packaged software.

Third-party software likes to do its own thing: install where it wants to install, modify what it wants to modify, run how it wants to run. There are no guarantees here unless you start mucking with the code or the installer. Making matters worse, it usually expects you to install everything as root, making it much more likely you will screw something up by accident.



Why does this system perpetuate itself? Because users of Unix-like operating systems are expected to take on the task of becoming an expert. There is no expectation that software 'just work'. It's a near miracle if you can install packaged software without having to track down its dependencies, and impossible to install something from source without intoning the secret language of the command-line.

This shouldn't be the case. Windows and Mac have had single-file installers with no dependency requirements for decades. Even popular third-party software for Linux and BSD comes with all its dependencies, either in source or as binaries, to work around the dread art of installing software. Though this has consequences in a few corner cases, it works much better than the sea of wild technical expertise required to manage what should be a point-and-click operation.

There doesn't seem to be an end in sight. Linux distributions and software developers continue to work in silos in spite of the users who are the audience for the collective product. We sit and dream of one day using a free computer system that our grandmothers could use without needing to call us for support. But this train can't be built by one person alone. Only a collective compromise and the backs of all invested parties will build the track for us to run our trains on. In the meantime i'll keep moving product by hand.

Monday, July 30, 2012

Authentic Steam Watches

Why don't more steampunk novels involve the mechanics and engineering of Victorian-era devices? Watches and gears are everywhere in steampunk culture, but it seems there's never any technical discussion of them in, or relating to, the plot.

I got a pocket watch recently, and as is natural for me I find myself gravitating to the technical aspects. (Wikipedia) For instance, railroads demanded very specific standards for their timepieces, lest a train derail from being off by a couple of minutes. So there are specific watches engineered to be very robust and to keep time much more accurately.

There are even technical words i'd never heard before, such as 'isochronism': keeping the same rate even as parts of the whole start to change (in a watch, the balance taking the same time per swing regardless of how big the swing is). Apparently the term is even used in some modern technical documents; USB has an isochronous transfer mode.

One of the things that stood out was how temperature changes the operation of the device. Extreme cold or heat will contract or expand the steel balance, causing the watch to run fast or slow. The engineered solution was to mate the steel to a brass rim and put two cuts in the balance, though that only compensates accurately at the hot and cold extremes it was adjusted for. Special alloys ended up fixing the problem for good, though that was essentially post-Victorian.

You could even extend some knowledge of watchmaking to other parts of a story. Jewels are often used as a hard, durable, low-friction mounting point for the moving pieces of a pocket watch. You could include in your story a plot device where one particular jewel (though normally valueless) unlocks some key to some device made thousands of years ago, as part of some detective novel revolving around ancient devices with a modern spin.

There's a plethora of technical jargon specific to pocket watches which might be nice to include in the story. If you want to write for geeks/nerds, including technical details like this can't hurt.


Some neat facts:
  • Wrist watches ('wristlets') were considered feminine and unmanly until they were introduced by the military and finally made standard issue in the 1940s.
  • The vest-pocket in a three piece suit is intended for a pocket watch. Since vests fell out of fashion, the only place to put a pocket for a watch was in trousers. Hence that little pocket you tend to put change in or try to cram a cellphone into.
  • A four-minute delay in one watch caused a train wreck, hence Railroad chronometers will (among other things) keep time to within 30 seconds in a week.

Tuesday, July 24, 2012

HSTS makes CAs obsolete

I was in the toilet, where most of my brilliant ideas come from, and I was thinking about HTTP: how it's a bit crufty and old (13 years), how it could use significant upgrades to enhance delivery of content. I thought about SPDY and how i'm wary of the 'features' it mandates, like SSL. I don't like SSL in general (it's a pain in the ass), and I like being forced to pay to serve my own content even less.

Then I thought about HSTS and how it makes it easier to connect to a site securely. Sure, it has nothing to do with transport or encryption directly, but the aim is to keep the connection secure by preventing an attacker from exploiting the browser's willingness to fall back to plain HTTP. And I remembered that browsers like Chrome ship with a built-in list of sites that should have HSTS enabled by default. And then it hit me.
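
(For reference, HSTS itself is just a response header; in Apache with mod_headers it looks something like this, with an illustrative max-age:)

    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"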

With an HSTS-enabled flag for a website in your browser, if you also shipped a certificate fingerprint, it basically bypasses the need for a Certificate Authority.

Think of SSH. What's the one time your connection is in peril?

The authenticity of host 'syw4e.info (97.107.132.9)' can't be established.
RSA key fingerprint is 57:f9:cf:53:3a:fb:a4:af:e0:96:3c:20:99:30:82:8e.
Are you sure you want to continue connecting (yes/no)?

That question is all that stands between you logging into a real server and a fake server, and establishing a secure connection or not. If you had that key fingerprint already you would know if it was authentic, and you could go on with your life without answering stupid questions.

This is exactly what happens when you visit a site with a self-signed certificate, only it's much more complicated than it needs to be. If your browser simply had a list of those fingerprints (similar to the list of HSTS-enabled websites Chrome has) it could connect securely, automatically, without having to verify against a 3rd-party Certificate Authority.
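
(You can fetch a certificate's fingerprint yourself with openssl; the hostname here is illustrative:)

    openssl s_client -connect example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha1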

Though this would make the browser's connections about as secure as with SSH, this isn't practical. There are lots of websites out there. We can't possibly keep a list of all of them in the browser. But if you believe in the idea that HSTS makes us more secure than without it, simply accepting and keeping the first certificate you got would be just as secure, right? Well, there's a problem there.

Websites are constantly changing their certificates. They add and remove hardware regularly, and sometimes revoke certificates if there's suspicion their private key has been compromised. Certificates also get replaced before they expire. So even if we had a list of the initial certificates' fingerprints, what happens when they change?

In order to support a dynamic network of secured systems, there would need to be an extension to the encryption protocol that allowed downloading updates for future certificates. In addition, sites could publish a list of 'trusted' hosts (perhaps even on different domains) which can also update the fingerprint, so that if host A.B.C is down, you can still get an update for its certificate fingerprint from B.B.C, or even D.C. In this way, a very simple peer-to-peer network of whitelisted hosts could share updates about the security of the network, without being explicitly tied to a few for-profit corporations (SSL CAs).

So then you may ask, what about internal servers? How can we ensure a user going to a closed, internet-less site knows they're connecting securely? The answer there is, of course, hypocrisy.

If you want your client to be connected securely, you have to exchange some secrets; it's mandatory for encryption to be secure. Maybe it's an HMAC'd challenge-response pair, or a shared key or certificate. You need to share something ahead of time.

With SSL, it's been the chain of certificates from Certificate Authorities that live in your computer and in your browser - up to 650 or more of them! So we can keep that system alive, and keep paying CAs for the 'extra assurance' we need for things like online banking, and offline encrypted connections. But we can also share our own secrets.

Take the case of the 'pay-for-wifi' connection abroad. You connect to the wifi and try to check your e-mail, and are redirected to a page asking you to put in your credit card details. Well wait just a minute! Is that the real page, or did some hacker put that up? With CAs you would be assured, because you have their chain of trust in your browser. But if you don't want to use CAs, you could input the certificate fingerprint yourself, perhaps if it was printed on the wall next to the access point's SSID and WPA-PSK passphrase.

tl;dr


But I digress. The main thing to take away from this post is how HSTS has dramatically changed our perception of 'secure web'. Instead of demanding that all connections are secure, we accept that on the very first visit, we might be getting a 'real' HSTS response from a website, or there might be an attacker lying in wait.

Of course i'm not as smart as I seem. Somebody else has already thought of all this and created it, and is trying to get the browsers and big internet players to buy in. It's not going so well. But since HSTS is already implemented in extremely popular browsers, they must have already accepted the idea of the no-assured-trust-on-first-visit model of security - for the HTTP protocol, anyway. If they accept that, then they're only a step away from the same security that SSH depends upon.

Considering all this, it seems that there's a disconnect in the reality of browser security. The browser and big-internet-guys already assume their connection might be compromised on the first visit. Yet they won't accept this new model that avoids the need for Certificate Authorities. Once implemented, SPDY adoption might actually skyrocket, because the protocol wouldn't be beholden to paying third-parties and depending on all 650 of them for security. I just hope progress and the pursuit of better security wins out over commercial interests.

Friday, July 13, 2012

How to Scale in the Real World

Every so often i'll see a blog post about "scaling lessons I learned when launching my startup." It's painful to read. Lessons like "use monitoring" and "you can use metrics for your application!" and "sharding is good".

Then they go into the hacks. "Add artificial load to your server so when it breaks, you can remove the extra load, and you have more capacity!" "Take a server offline every once in a while." "Automatically kill anything which has too much memory or CPU or takes too long." "Memcache memcache memcache." "Giving developers root makes their lives easier, and mine too!"

Yes, we were all beginners once. These little nuggets are a window into years ago, when I too thought it was a good idea to make untested changes and restart services in the middle of the day, or to up the limit on max connections until the server fell over and then scramble to add another server. We learn from our mistakes.

The thing is, no book or blog post you read about scaling will work for everyone. Everyone experiences it differently, because both the technology and what you're using it for are different. So you need to be flexible while relying on a little bit of old folks' wisdom. And yes, I just called myself 'old folks' at 28. Jesus i'm arrogant.

Anyway. Here's how I think of scaling in the real world. Keep in mind i'm only talking about "scaling" and not "keeping a fully-redundant high-performance site operating at peak optimization", because that's five different things and way more complex than a single blog post.

Step 1. BE AFRAID


A good mindset of fear and paranoia will help you plan and execute everything you do to scale your site. You should be aware of everything you do and what its consequences could be: fear of the site going down, fear of what happens when I push this commit out, fear of bottlenecking i/o, fear of accidental ddos'ing, fear of getting hacked.

Fear is a great motivator. You should also keep in mind it's just a job and calm the hell down, but in general being wary of things breaking or degrading should be high in your mind when you do anything. It will help you plan and execute your plan in ways that will minimize risk and maximize the value of changes.

Step 2. HAVE A GOAL


So your startup is going to revolutionize the way people take a bath by making a social network for rubber ducky owners. Great idea, but that's not the goal i'm talking about. Your goals towards scaling should have specific things you want to accomplish, such as a number of users on your site at the same time, or the average speed of anyone browsing any part of the site. You will execute your goals by building out your site to meet exactly these criteria.

Now you might be saying, but I want to scale infinitely! Can't you just tell me how to configure Redis so i'll never run out of capacity? The answer is of course, No. All scaling has upper limits. The point is to figure out how far you can go ahead of time, so that when you're getting nearer to the limit, you know to make a new goal and plan for that.

Imagine eBay. At some point they probably had a generic way to scale for a while, so they could keep adding servers and bandwidth and keep up with demand. But at some point, you outgrow datacenters. You outgrow coasts and continents. Will your little auction site keep churning away when it's stretched out across the globe, still using a static map file in Apache that needs to be reloaded every time you add an application server? The goals have to be re-imagined at some point. Figuring yours out will make it easier to focus on the 'now' while keeping an eye on the future.

Step 3. PLAN


A scaling plan is basically your architecture manifesto. Keep in mind, it's based on your goals, which should change as you grow, so don't be stuck on one kind of technology or way of doing something. Whatever it is you're doing, there's a different, probably better way to do it, so don't get too caught up with the details. To begin, take your goal and look at every single layer from the client to your app's guts and back.

Let's take a goal which says "I want to maintain 30,000 hits per second of traffic." Starting with the browser client, where is your traffic going? Probably to a web server. If it's going straight to your web servers, you're going to need to sustain over 30K connections, which is a problem for just one web server. If you were going to a CDN that would be much easier to deal with, and you can probably get by with one frontend caching proxy server like Varnish (though that's not redundant at all, your goal didn't include redundancy...). It will have to be a really beefy box to keep a good and fast cache, though. You'll probably also want to enforce cache headers to the CDN to make sure it's not pulling your whole site from the origin every 2 seconds.
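
A sketch of what "enforce cache headers" could look like in Apache with mod_headers (the match pattern and lifetime are illustrative, not recommendations):

    <LocationMatch "\.(css|js|png|jpg|gif)$">
        Header set Cache-Control "public, max-age=3600"
    </LocationMatch>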

So you have 30K HPS to static content. Wonderful! Oh what's that? You wanted to display a social graph of your rubber ducky empire to every user? Shit. I guess we need more stuff. MySQL for a database (because it's easy and universal), Starman for an application server (because fuck you Perl is more than good enough), Memcached for your "fast" application cache, and one of those Map/Reduce thingies for making your social graph (i'm not a real developer, I don't know how that shit works). But how do you configure them? How many do you need? What happens if you outgrow something? Calm down. And keep in mind it doesn't really matter what you pick, you'll figure out how to scale it soon enough.

First write your application for the stack you picked. It doesn't matter what your application is or how shitty it runs as that has nothing to do with scaling. Scaling happens once the piece of crap code is done. This is how scrappy start-ups can afford to write terrible on-the-fly hacks and still survive launch week. So now that your app is running, you need to gather benchmarks.

To gather benchmarks we need metrics. To get metrics you either write something yourself or grab something that's actually good, like collectd. Configure it to gather everything under the sun and send it somewhere not on the box it's collecting on. Then populate your system with fake data and start hammering all the parts of the site. This is useful later as you can keep testing functionality and capacity as your site grows.
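
A minimal collectd sketch, assuming you just want the basics gathered and shipped to another box (the server address is made up):

    LoadPlugin cpu
    LoadPlugin memory
    LoadPlugin load
    LoadPlugin interface
    LoadPlugin network
    <Plugin network>
        Server "192.0.2.10" "25826"
    </Plugin>

For the hammering itself, anything from ab or siege to a custom script will do, as long as it exercises the expensive parts of the site.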

As you test your site, see how much of each resource is used up by the meager benchmark you've made. Now scale that usage up to your goal, add about 20% on top, and you know how much capacity you'll need to hit it. Then allocate enough capacity to get there, keeping in mind disk i/o, bandwidth, cpu, database queries, connection pool numbers, cache hit percentage, etc.
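
A back-of-the-envelope example with completely made-up numbers, just to show the shape of the math:

    benchmark : 1,000 HPS eats 8% of one app server's CPU
    goal      : 30,000 HPS -> 30 x 8% = 240% -> ~2.4 servers' worth
    headroom  : 2.4 x 1.2 = ~3 servers' worth of CPU for this tier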

These numbers are not just basic information you need for capacity planning; they're also critical for monitoring your live site so you can see when you unexpectedly hit a bottleneck. All of these criteria should have monitoring alerts trigger if they get anywhere near 80%, or if they double in a less-than-manageable amount of time. (Can you double your database capacity in an hour? No? Then you should probably get alerts if any of your database metrics go up by 50% in a half-hour.)

Now that you know the basic resources you'll need to achieve your goals, tune your stack. This is where "premature optimization" is actually a great thing. For example, your resource numbers for MySQL probably look ridiculous - 50 servers just to handle 30k HPS? Apparently people forget that MySQL (like most tools) needs to be tuned to reach its peak performance. Once you tune your stack you can go back to your benchmarking tools and fine-tune the performance to get the numbers more efficient.
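
As a hedged example, "tuning MySQL" usually starts with a handful of my.cnf knobs like these (the values are illustrative, not recommendations):

    [mysqld]
    innodb_buffer_pool_size = 12G
    innodb_log_file_size    = 512M
    max_connections         = 500
    table_open_cache        = 4096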

But let's be honest: the goal is not to get the fastest performing stack, it's to get a stack that can perform. You might start to rethink your application when you find out it's just not performing very well. In general it's a mistake to redesign your app just because it looks like scaling is taking a lot more resources than it should. As a famous customer support representative once said, "The future is gonna cost more money," and your application will get slower over time. Focus on scaling and let someone else optimize the application.

With realistic numbers about how your site can perform, you can start allocating resources. Your goal was 30K HPS, but you only get 100 hits per second right now. If you have no historical data to plot the growth of traffic, just shoot for 10 times the traffic you're doing now and allocate resources for that. Before you have a launch day or big advertising push or something, check your historical data and do another 10x increase beforehand. If you're not using the cloud, make sure your provider can allocate resources at the drop of a hat for you, or that you have spares to use. If you're using the cloud, make sure you have all the steps down pat for adding your resources in real time, so if you suddenly get a million users signing up to your site you know how to throw more resources in place.

The "we just got 10,000,000 signups!" scenario is extremely rare. But for cases of unexpected, goal-smashing growth, you need to have an emergency plan as well. You can find examples of them around the web. Typically it's a combination of handicaps to your site to keep some core functionality running. The last thing you want is for everything to go down. It's better to cap the number of incoming connections and allow a slow stream of users to use the site while you rush to obtain more resources to grow the site in time. Anything can become a bottleneck - network traffic, disk i/o, memory ceiling, database connections/queries, etc. Be aware of the maximum level for each criteria by comparing the resource use from your metrics with the configuration of each software component.

The last piece, which you'll probably add once you realize you don't have the money or capacity to just keep adding resources, is caching. In short: cache everything. Cache on your frontends. Cache on your backends. Cache to disk. Cache in memory. Cache the highest-used pages. Use a bigger journal to cache in the filesystem. If you desperately need iops, using tmpfs and writing changes back occasionally with rsync is a form of caching. You can send users to the same servers to maximize cache hits at the cost of high-resource hot spots, or send them to random places for better spread-out load at the cost of cache hits and an increase in global resource use. Figure out what works best for your application.
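
The tmpfs trick, sketched out (paths and size are illustrative, and you will lose whatever hasn't been synced if the box dies):

    mount -t tmpfs -o size=2g tmpfs /var/cache/hot
    rsync -a /var/lib/hot/ /var/cache/hot/     # seed the cache from disk
    # serve from /var/cache/hot; then periodically (e.g. from cron) flush changes back:
    rsync -a /var/cache/hot/ /var/lib/hot/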


Step 4. EXECUTION


So you have your goal, you have your plan, now you need to put it into practice. Scaling is one of those things where you don't need it until you need it. So being prepared to execute your plan at a moment's notice is pretty important. Usually it involves fire drills where your site goes down or you lose capacity and you need to add more quickly. But the management of your site is important as well.

Are your changes automatic? Do you have good revision control and deployment, and can you revert your changes immediately? Is your application's use of your infrastructure abstract enough that you can change backend pieces without ever touching your code? Can you roll out new services at the push of a button? Have you been testing your changes?

It seems obvious, but many times the problem with rapid scaling is simply a lack of best practices. All those little things you ignore because you're a startup and you don't have time to implement configuration management because of your 'just ship it' mentality? Once you've shipped it, and you suddenly need to scale, you get bit in the ass by the eventuality of your apathy to best practices.

Scaling is a never-ending process of analyzing data, testing limits, and growing your infrastructure. There's no easy way to do it, but at the same time, pretty much anyone can do it. The reason scrappy kids right out of school that jump on the startup bandwagon can keep tiny sites operating at huge numbers is because the actual work of adding resources is trivial. You figure out what you're lacking and you add more of it. The key is being constantly aware of what is going on and keeping one step ahead.

Thursday, March 29, 2012

Gender disparity in tech: Free classes bucking the trend?

I recently facilitated a class on Object Oriented Programming with Java. I helped coordinate with the teacher and students to get everyone together at the right time and place for the class, and pass messages to/from the teacher. The class was free and community-organized so there was no traditional academic structure. Anyone who wanted could come.

Halfway through the class I noticed something interesting.

At least 75% of the class was made up of women. When asked, most of the students (both male and female) had practically zero programming experience and mostly worked in web development or graphic design.

I was trying to figure out why such a class would bring in so many women when all I usually hear is how women are under-represented in tech fields. Then I remembered that most of the classes put together by this organization have significant female attendance, on average. Almost all of the advertising for these classes is word of mouth, and since many of the organizers are female it makes sense that their social networks would be made up at least partly of women or women-centric user groups.

Now i'm thinking more about the dynamic men have with each other and how that relates to getting into the tech field. Would women be less likely to jump into classes if they knew they'd be the only woman in the room? Some might say no, but i've been in classes made up entirely of women and it can be a little intimidating for me, a guy. If the entire tech field were dominated by women, would there be a stigma against men getting into tech, because it might mean men are being "girly"?

Of course this was just one class, so it's impossible to take anything solid away from something like attendance numbers. But my guess is that if there were women teaching classes to women, there might be a bigger turnout than you'd expect in more typical academic settings.

Tuesday, February 28, 2012

hacking

Hacking is not programming.
Hacking is not learning.
Hacking is not making.
Hacking is not sharing.
Hacking is not networking.
Hacking is not something you do.

Hacking is how you do something.

Saturday, February 11, 2012

Captive Portal Security part 1

CAPTIVE PORTAL SECURITY, pt. 1

by Peter Willis


INTRODUCTION

This brief paper focuses on the security of networks which require payment or web-based authentication before granting access. The topics of discussion range from the common set-up of captive portals to the methods available to circumvent authorization and ways to prevent those attacks. A future paper will go over the captive portals themselves: their design and attack considerations.

TABLE OF CONTENTS

I. CAPTIVE PORTAL OVERVIEW
II. ATTACK METHODOLOGY
III. DEFENSE TACTICS
IV. SUMMARY
V. REFERENCES


I. CAPTIVE PORTAL OVERVIEW

What is a captive portal? A captive portal (or CP) is a generic term for any network that presents an interface, usually a website, which must authorize a user before granting access to the network or the internet. Typically this takes the form of a web page where one accepts an agreement not to tamper with the network. It also covers college campus log-in screens, corporate guest account access, for-pay wifi hotspots, etc.

In this paper I will mostly be discussing the "wifi hotspot" form of CP, where payment, user authorization or clicking an "Accept" button gets you access to the internet. Many attacks that circumvent these networks rely on remote dedicated servers and thus won't be suitable for bypassing all forms of CPs (some networks are only intended to grant access to internal networks).


II. ATTACK METHODOLOGY

There are many different attacks one can perform to circumvent a CP, ranging from the simplistic to the complex. Each has its benefits and weaknesses relating to performance, reliability and the possibility of detection by an IDS. I will briefly outline the methods here, with more examples and detail given throughout the paper.


1. Tunnel over DNS

This method involves tunneling network- and transport-level packets encoded in DNS records. The attack works because of the design of the DNS system: a domain or sub-domain can delegate where the answers for its records come from using an NS record, and that NS record can point at your own 3rd-party server. This is useful for bypassing CPs because most caching nameservers (like the one a CP runs) must forward requests on to your server if they want them answered properly, which means we can use the CP as a sort of DNS proxy. Set up a custom nameserver that can pack and unpack packets, plus a client to transmit them, and you have a two-way connection to a server on the internet. Make an SSH connection through it and you have a secure tunnel.
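
The delegation itself is just a couple of records in the parent zone; a sketch with made-up names and addresses:

    ; in the zone file for example.com
    t            IN  NS  tunnelhost.example.com.
    tunnelhost   IN  A   203.0.113.5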

One tool used to exploit this property of DNS is iodine[1]. This tool has enhanced security and methods to auto-detect the type of network it passes through and get the best connection possible. Unfortunately it doesn't work as well in the real world as it does on paper.
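
For reference, the basic iodine invocation looks roughly like this (the domain, password and tunnel address are all made up):

    iodined -f -P s3cret 10.99.0.1 t.example.com     # on your server
    iodine -f -P s3cret t.example.com                # on the client behind the CP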

The original implementation, OzymanDNS by Daniel Kaminsky, is (once modified) actually the most reliable working solution, even though it operates as a presentation layer instead of tunneling raw IP packets. That actually brings a couple of benefits: it can be run by an unprivileged user (on the client side) and it's modular and easy to hack on (written in Perl). I have modified a version of this code[2] to optimize it and reduce CPU use, but it's not pretty: expect an average of about 7 kilobits per second. But if you really need brief access it works just about anywhere.


2. Tunnel over ICMP

Surprisingly difficult to exploit in the real world, this protocol offers an alternative tunnel through a firewall for those that don't block it. Networks use ICMP for all kinds of things, most notably passing errors and routing information when necessary as well as the ubiquitous 'ping' messages. If you can't ping there may still be alternative methods to tunnel data through this protocol, but in general the ping is the simplest.

The attack works like this: send an ICMP message (which is not always restricted the way TCP and UDP are) with an IP or other packet encoded in its payload, similar to tunneling over DNS. The benefit of this method is that (when it works) it can provide a highly reliable connection without excessive lag (around 24 kilobits per second of bandwidth on average). The tool icmptx[3] works well.
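
The basic shape with icmptx, as I understand its usage (the server address is illustrative, and you still have to configure the tun interface on each end yourself):

    icmptx -s 203.0.113.5     # on the server, using its own public address
    icmptx -c 203.0.113.5     # on the client, pointed at the server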

A bigger problem looms with IPv6 though, as there is a newer ICMP (icmpv6) which imposes restrictions on the number of icmp packets per second - around 6 per second. This seriously hampers the performance of any application trying to tunnel data through the protocol, but luckily nobody uses IPv6 yet. In general most CPs seem to block ICMP packets.


3. Firewall pinholes

Seldom probed by anyone but the most desperate CP hackers, firewall pinholes exist because most firewalls do allow one or more TCP or UDP ports outbound to remote hosts (usually on the internet). These will let you tunnel an arbitrary protocol as long as it works over the transport protocol of the pinhole.

My favorite of these is tunneling OpenVPN over UDP port 53 - the DNS port. Some CPs allow port 53 outbound to any host on the internet without checking whether it's actually passing DNS requests, which is a big mistake. Unfortunately some CPs intercept and rewrite any traffic going over port 53 to provide their own custom DNS responses, which breaks this method.
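
The OpenVPN side of that is just a matter of which port it binds to; a minimal sketch (the hostname is made up):

    # server config
    proto udp
    port 53
    # client config
    proto udp
    remote vpn.example.com 53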

Sometimes strange high-numbered ports are open to the remote host. The simple way to tell this is to run a tcpdump or custom application on the remote host and scan every port from the CP's network to identify the one that's open. With an automated script this can be accomplished easily and once a hole is found the tunnel will provide full bandwidth to the attacker. For attacking CPs that allow access to internal networks you may be able to craft specific packets which when successfully passed through present a different response, thus enumerating open ports.
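
A rough sketch of that probe (the server address is made up):

    # on the server you control, watch for anything that gets through
    tcpdump -n -i eth0 not port 22
    # from inside the CP, sweep every TCP port against that server
    nmap -Pn -p- 203.0.113.5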


4. Transparent proxies

Oftentimes the best holes come from an incorrectly set-up network proxy. The ACLs on HTTP proxies are often only strict enough to block basic requests and don't handle the whole spectrum of possible HTTP requests. Sometimes you can use the proxy as simply as configuring the CP's gateway IP in your browser's proxy settings. Other times, a 3rd-party server will help with the extra mile of getting through the ACLs.

The simplest attack in this case is simply looking for an open proxy on some host on the network. Nmap[4] is bundled with scripts which will quickly detect open proxies. Sometimes one needs to abuse the HTTP specifications to find a hole in a particular implementation of an HTTP proxy. Some ACLs can be bypassed by merely changing CRLF to LF in your request or using a different HTTP method. Some authentication/authorization software even has rules in place that let you bypass authorization by adding a "?.jpg", "?.css", "?.xml" or other extension to your request. Sometimes header injection can be used to provide a quick jump through a backend proxy.
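
For the first pass, nmap's bundled http-open-proxy script does the job (the gateway address and ports are illustrative):

    nmap -Pn -p 80,3128,8080,8118 --script http-open-proxy 10.0.0.1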

However, this may only give you HTTP or HTTPS access through the CP. To pass arbitrary data through the proxy it must allow you to use the CONNECT method to reach a remote host. Usually this is allowed because the SSL port (tcp port 443) needs to transmit encrypted packets which are only passed through a proxy with the CONNECT method. If a CONNECT is supported we can take advantage of a 3rd-party server on the internet to tunnel arbitrary packets over HTTP.

An HTTP proxy is set up on the internet which passes connections on to an internal SSH server on port 22. A client then uses proxytunnel[5] to connect to the CP's gateway and from there issues a CONNECT to the 3rd-party proxy, usually on port 443 (but port 80 is a good alternative to have set up). The final request is made to CONNECT to that machine's ssh port. If all goes well you should be able to use this new connection with your SSH client and forward arbitrary data over it. All the CP sees is HTTP or HTTPS traffic being forwarded from one proxy to another. The end result is a high-bandwidth tunnel through the proxy.
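
Wired into SSH, the whole chain looks something like this (every host and port here is illustrative; 10.0.0.1:3128 stands in for the CP's proxy, proxy.example.com for your own server's proxy):

    ssh -o ProxyCommand="proxytunnel -p 10.0.0.1:3128 -r proxy.example.com:443 -d %h:%p" user@home.example.com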

A tool[6] included with this paper will allow you to quickly probe ports found on a CP gateway for common problems in HTTP implementations and look for a way out.


5. MAC spoofing

The last method is the only one which is present in all wireless and some wired CPs by design. Once a host is authorized by the CP, its MAC and IP address are allowed unrestricted access. All one needs to do is sniff traffic on the network, find a host that is authorized, and spoof its IP and MAC address. Spoofing a MAC is dependent on your network card and driver, but most modern network devices support it. The downside, of course, is that you have to observe someone already authenticated, but in places such as a crowded airport lobby this may be less difficult than it seems.
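
On Linux the spoof itself is only a few commands (the addresses are made up; sniff real ones off the network first):

    ip link set wlan0 down
    ip link set wlan0 address 00:11:22:33:44:55
    ip link set wlan0 up
    ip addr flush dev wlan0
    ip addr add 10.0.0.42/24 dev wlan0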


6. Miscellaneous

Of course there are many more methods to circumvent CP access, with varying degrees of success. Other methods include the use of fuzzers, creative source routing and probing, the abuse of Cisco protocols for proxy clustering, and the abuse of "convenience" features of some authentication servers.


III. DEFENSE TACTICS


I can't tell you how to perform these attacks without at least trying to tell you how to fix your own broken networks, so here's the white-hat portion of the paper.

First of all, block all unauthenticated traffic (of any layer) destined for the internet. There's no reason for any client to be able to pass any packets without being authed, so just put that rule at the top of your firewall. That'll stop firewall pinhole attacks and IP over ICMP.
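
A minimal iptables sketch of that rule ordering (the MAC address stands in for whatever your auth system records):

    iptables -N authorized
    iptables -A FORWARD -j authorized
    iptables -A FORWARD -j DROP
    # when a client authenticates, the portal adds something like:
    iptables -A authorized -m mac --mac-source 00:11:22:33:44:55 -j ACCEPT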

To stop IP over DNS, simply have all DNS requests from unauthed sources return the same record every time... The IP of your CP gateway. Not only will this work fine with your transparent HTTP proxy (your proxy should be sending a redirect to your gateway's website anyway for authentication purposes), you can also run many service-specific proxies that will handle requests and inform the client to auth first via http. SSL won't work, but screw them for trying to be secure.
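
With dnsmasq, for example, answering every query with the gateway's address is one line (the gateway IP is illustrative):

    address=/#/10.0.0.1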

To stop HTTP transparent proxy abuse, pick a very secure and well-established proxy server that's stable and has all security patches and bug fixes up to date. Make sure you craft your ACLs to explicitly stop ALL requests coming from an unauthed host; it should only return the redirect to your authentication website. If, for example, the server differentiates between requests terminated by LF and CRLF, you should probably write more rules rather than fewer and cover all the possibilities.

For other methods of abuse, make sure your CP solution has no unneeded services or ports available on the network. These can be easy targets for attack (who here has checked if snmp is enabled on their gateway? ssh?), and the fewer attack vectors there are, the less likely an attacker will get through. Audit all your systems to ensure they don't accidentally allow more access than is necessary.


IV. SUMMARY


Almost all CPs today can be bypassed in one way or another. Clearly there's a certain amount of risk vs reward where a company just doesn't care if the top 1% of users can gain access without authorization. That's good for us hackers, but can be bad for the CP provider.

Did you know that since its inception, Starbucks and McDonald's for-pay wifi had open squid proxies on every default gateway? All you needed to do was run an nmap scan to find it, set it in your browser's proxy list and surf for free. (Oh, and the payment network McDonald's uses to process credit cards uses the same network) Did you know there's a major mobile voice and data service provider which has not one but THREE holes in its service, allowing anyone to use their service for free anywhere in the USA with 3G coverage?

Even if Bradley Manning had not been able to get a CD-ROM of US embassy cables out of a secured facility, he might have been able to find a way through network firewalls using these same techniques and tunnel the data out. It's not that far-fetched: as corporations increasingly lock the internet away from their employees, industrious hackers find new, more challenging methods to circumvent the restrictions and get the access they want. With this in mind, those implementing the controls should be wary of the many methods of attack involved and the potential for abuse.

This paper has not covered the possibility of attacking the web applications which authenticate users trying to gain access to the network. I am not a web application pen tester so i'll let someone else review those holes.


V. REFERENCES

1. iodine - http://code.kryo.se/iodine/

2. dnstunnel - http://psydev.syw4e.info/new/dnstunnel/

3. icmptx - http://thomer.com/icmptx/

4. nmap - http://nmap.org/

5. proxytunnel - http://proxytunnel.sourceforge.net/

6. quickprobeportal.pl - http://psydev.syw4e.info/new/misc/quickprobeportal.pl

Thursday, January 12, 2012

Hackerne.ws DNS temporarily broken

If you're going to update DNS, use a tool that sanity-checks your configuration, and run your zones in a sandbox before deploying them. Otherwise this happens and your site goes down:

willisp@darkstar ~/ $ dig hackerne.ws

; <<>> DiG 9.4-ESV-R4 <<>> hackerne.ws
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43879
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;hackerne.ws. IN A

;; ANSWER SECTION:
hackerne.ws. 16 IN CNAME 174.132.225.106.

;; Query time: 80 msec
;; SERVER: 150.123.71.14#53(150.123.71.14)
;; WHEN: Thu Jan 12 09:57:05 2012
;; MSG SIZE rcvd: 58

They soon fixed the problem, so i'm not trying to give them too hard a time, but it's a good lesson in why even modest sites should do quality control on all production-touching changes. Unless you're really familiar with DNS, the mistake above is easy to overlook while troubleshooting: that record is a CNAME whose target just happens to look like an IP address, so resolvers will try (and fail) to look up the hostname "174.132.225.106." instead of returning an address. It should have been an A record.
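
For what it's worth, a checker like BIND's named-checkzone will catch outright syntax errors before you deploy (the zone file path here is illustrative). A record like the CNAME above is syntactically legal, though, so you still need the sandbox step and a quick dig against it.

    named-checkzone hackerne.ws /var/named/hackerne.ws.zone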