Friday, October 29, 2010

hacking corporate store fronts

local business in america is kind of a quagmire. it seems that except for a few small areas where tons of self-interested stuck-up liberals take the initiative to completely force corporate interests out of a given city, corporations run things. most storefronts you find in america are cost-cutting franchises and subsidiaries of conglomerates. it's no wonder americans easily swallow any pre-packaged product sold to them: music, food, television, movies, games... it's all made to a formula with few variations and dumbed down for everyone's generic tastes. (heh, it's kind of funny that those are the only things americans are interested in, too)

local businesses get pushed out by these bigger corporations, mostly due to extremely competitive prices. but local businesses could bring a lot of variety to consumers and, if done on a large enough scale, influence the entire culture through local means. the question is, in this capitalistic dog-eat-dog country, how do you introduce local business when the whole economy is based on cutting it off at the knees?

i think big companies could start by taking their already extremely effective cost-cutting measures and branching them out to more specific tastes. if you work more closely with all the producers of the "content" you use to make your products, you can still keep costs low and generate a wider variety of products. integrate more, produce more efficiently.

it would be pretty simple in principle: for any given "metro area" (or whatever you determine to be an area with a specific taste you could market a regional product to), create a brand. then create a line of products that are "mostly" only sold under that brand. in this way you not only create the appearance of originality and variety, you can hopefully win over the local populace and generate a kind of grassroots following for your brand.

the goal here is to *not* let people associate your stores with a chain regionally/locally the way they do with national brands. they should not be able to say "that's the mcdonalds of east texas." part of that is keeping your brand relatively small, but it's also making sure your products aren't overly cookie-cutter in nature. nothing turns people away from big business faster than the lack of a mom-and-pop appearance. you need to hire good people to help sell the brand, but your products also need a certain element of being created or finished in the store itself.

have you ever seen a national franchise which could, for example, cook an omelette made-to-order in two minutes for a customer? i don't think i have. there must be an expense associated with shipping fresh eggs, keeping them cool, allowing for a kitchen area to prepare the ingredients, etc. but sandwich/sub franchises do almost this very thing. quiznos franchises receive pre-cooked bread and ingredients and assemble them in a matter of minutes for their customers, producing what i consider to be a fairly high quality sandwich for the price/time. so why can't we ship pre-mixed eggs and the same kind of ingredients, throw them in a bowl, put it in a microwave or some other omelette-cooking machine, and give people something fresh(ish) and made-to-order/homemade?

all you'd need to do at that point is rename the store for a given region, customize the ambiance, and switch around the recipe a bit depending on the area. your storefronts gain the reputation of being a "local", original, consistent source for (hopefully) good products, and your customers gain the knowledge that they're not just buying the same old crap from a national chain, maybe even believing they are helping the local economy. (maybe you could even go so far as to put more of the reward in the hands of the local store owners/managers so they actually produce more good for the given region? but now i'm really dreaming)

Monday, October 25, 2010

Why I think Devops is stupid

http://www.jedi.be/blog/2010/02/12/what-is-this-devops-thing-anyway/

First of all, this isn't a "movement." People have been trying for years to get quality sysadmins who are also competent programmers. I still believe that, except for a few rare cases, these people do not exist. And they shouldn't: clearly, something is wrong with expecting one person to be both.

If I told you I spend all of my time both becoming the best sysadmin I can be and becoming the best programmer I can be, would you believe me? If so, I have a bridge to sell you. The fact is that when I'm a sysadmin I really don't program much at all. I spend my day at work fighting fires and performing odd jobs, and when I get home the last thing I want to do is get back on the computer. And at work, if I spent most of my time researching new development trends and writing new tools in experimental languages, how much real sysadmin work would I be doing? No, the truth is I wouldn't have enough time in the day to be both a full-time sysadmin and a full-time programmer. I can only do one job at a time.

"the Devops movement is characterized by people with a multidisciplinary skill set - people who are comfortable with infrastructure and configuration, but also happy to roll up their sleeves, write tests, debug, and ship features."

Sorry. I have a job. I don't want to have to do the developers' jobs too. I'm upgrading the Oracle cluster to RAC and being woken up at 3 AM because some bug somewhere deep in the site caused pages to load all funky, and I'm trying to figure out who committed the flaw and get them to revert it. Even if I wanted to, I'm a sysadmin; I'm not familiar with the developers' codebase, and sometimes not even the language they're writing it in. How the hell can you expect me to realistically debug it in real time? And writing tests? Really, you want me to write the developers' unit tests?

Don't get me wrong. I am fully in support of the general idea of better communication between groups and sysadmins working with developers, DBAs, QA, neteng, etc to build a better product. I think it'd be insane for any group to go about making any major changes without consulting every other group and working out any potentially negative ramifications. But this doesn't mean each group has to know how to do each other group's job. Communication is the key word here, not cross-pollination.

There are lots of technical issues that come up in building any product, and lots of different problems have to be accounted for to make it work as well as possible. The problems cited in the above post - 'fear of change,' 'risky deployments,' 'it works on my machine,' 'siloization' - all require planning and cooperation to resolve. But this is basic stuff, to me. You don't need to be a DevOp to realize your devs need the same baseline system for testing their apps as your production system (sometimes more than one). The apps have to be developed in a way that allows for a smooth upgrade in the future. And you need a competent deployment and reversion system with change approval/code review and reporting.

These issues are not solved by simply having a 'DevOp' whose responsibility is not only their own systems but apparently the total management and architecting of the whole process of developing a product and delivering it working flawlessly. To properly deal with these issues you need many things. You need really strong management to keep teams working together and to help them communicate. You need some kind of manager or architect position who can keep track of how everything works and juggle the issues before they become serious problems. You need people who are really good at their jobs, and you need them to ask for help when they're out of their depth.

Nobody's job is simple. But creating some new position that supposedly solves all these issues by being a superhuman techno-god? Even if you could get these godly Devops people into every corporation, there's no promise they could even get past the politics inherent to each group to make everything work as harmoniously as the post describes. There is no magic bullet. No movement will make everything alright. The world is harsh and complex, and a DevOp isn't going to save it.

Tuesday, October 19, 2010

utf8 terminals

UTF-8 lovin' for my terminals:
(in bash)
export LANG=en_US.UTF-8
export LC_CTYPE=en_US.UTF-8
(in irssi)
/set recode_autodetect_utf8 ON
/set term_type utf-8
/set term_charset utf-8
(for your terminal)
uxterm -sb -bg black -fg green -g 100x25
(for screen)
screen -U
(for tmux)
tmux -u
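(sanity check: none of the above matters if the locale itself isn't installed. this assumes a glibc-based system; locale-gen is the Debian/Ubuntu way of generating it, other distros differ)
locale -a | grep -i 'en_us.utf'
sudo locale-gen en_US.UTF-8
locale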

my damn fonts keep having problems with chinese and other languages if i don't use the default font and size. luckily the default is at least usable, though still pretty large. more application-specific details here.

Friday, October 15, 2010

do the legwork

in the various positions in the IT industry we all have a specific job to do, each with its own set of tasks. we don't always do them as well as we could. usually it boils down to someone doing the bare minimum for one reason or another, and something ends up breaking.

there are different reasons why things might not be done as well as possible. maybe the deadline's fast approaching and you just need something to work. maybe you don't have enough budget. maybe your bosses are just jerks and, even though you tell them what you need to get it done right, they ignore you and force you to produce sub-standard work.

the resulting fail will sit in the background for some time until a random occurrence triggers it. by chance something goes wrong, everything breaks, and you're left holding the bag. sometimes that means big hassles and wasted money. sometimes it means you get fired. so when you do have the chance, take the time and do it right.

as far as security is concerned this principle affects everything. there are lots of things you can do to secure any given system, and the more you do, the less likely it is that the one attacker you were working to stop will succeed in his or her objective. this applies to everyone in the IT field: programmers, admins, NOC, QA, analysts, managers, etc. if you do it all right the first time you won't be left holding the bag.

so for example: if you work for a large mobile internet service provider and it's your job to set up the service paywall, don't skimp on anything. make sure it's as secure and reliable as possible and don't trust anything to chance. the one person who figures out a way for everyone in the country to get free internet could bring considerable strain (financial and otherwise) on your employers, and they won't be happy with you.

or if you run the large systems which are targeted by drive-by botnets as command and control machines or injection points, do your jobs, people. apply the latest security-tightening patches. use mandatory access control. use chroots. use separate users for each service. remove the need to log in as root wherever possible. add intrusion detection. keep up with patches! do you know how much of a hassle it is to clean up and replace systems that have been owned en masse just because you allowed a simple shitty buffer overflow to execute?
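a rough sketch of a few of those basics on a typical linux box (the service name is hypothetical and the details vary a lot by distro; treat this as a starting checklist, not a recipe):

# run each service as its own unprivileged account instead of root
useradd -r -s /usr/sbin/nologin -d /var/empty svc_web
# stop logging in as root over ssh; in /etc/ssh/sshd_config:
#   PermitRootLogin no
# and actually keep up with patches (Debian/Ubuntu-style shown here)
apt-get update && apt-get upgrade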

and programmers, come on. you're never held responsible for these problems. it's always the other groups who get used as the example and who look foolish because of your crappy, insecure code. the code runs on their systems, so the perception is it's their fault they got owned. but they didn't write that shitty file-uploading php script, you did. you let the bot herders in the front door and made it that much easier for them to expand their attack into the network. congratulations, homie. yes, the admins should have tightened security around php to account for unexpected holes, but you shouldn't make it easier for the attackers either.

and firewall dudes: how hard is it to friggin download a malware watch list and block bad domains/IPs? you're responsible for both the servers AND desktops which are affected by worms/trojans/etc. you know how to tighten these boxes down and tighten up the network access, so do it already!
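one way to do the IP half of that on linux is ipset plus iptables (ipset 6.x syntax; the feed URL below is a placeholder for whatever blocklist you actually trust):

# pull the list and load it into an ipset
curl -s https://example.com/malware-ips.txt -o /tmp/badips.txt
ipset create badips hash:ip -exist
while read -r ip; do ipset add badips "$ip" -exist; done < /tmp/badips.txt
# drop anything going to or coming from those addresses
iptables -I FORWARD -m set --match-set badips dst -j DROP
iptables -I FORWARD -m set --match-set badips src -j DROP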

you're saving yourself work in the end. how many of us have been caught in a tight deadline when suddenly all work has to stop to deal with an intrusion and figure out how far it got? do you have the spare boxes and cycles to deal with that? how is it affecting your bottom line? your sleep schedule? in the end it's the executives and managers who need to be more proactive in enforcing these habits in the rest of the workforce, because if they don't force people to, nobody's going to take the extra time. create a culture of polished work and everyone should benefit.

Friday, October 8, 2010

how NOT to design software for ease of troubleshooting

psypete@pinhead:~/svn/bin/src/etherdump$ svn up
At revision 211.
Killed by signal 15.

strace gives no indication wtf is going on and there's no debugging mode to give me more information. Of course this is subversion, so instead of a simple man page to give me some help I have to read through the 'svn book' or run commands 20 times just to find out there are no debugging flags (afaict).

What is the actual problem and fix, after 15 minutes of googling? Some version bump re-introduced a bug (I didn't even know I had upgraded subversion, so perhaps something else is making the bug pop up) that causes svn to kill itself if ssh isn't playing nice. Effectively you have to pass "-q" anywhere svn calls ssh, which in my case meant changing my weird ssh tunnel setup in the subversion config.
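For reference, the change amounts to something like this in ~/.subversion/config (the tunnel name depends on what your svn+ssh URLs use; "-q" keeps ssh from printing the messages that apparently trigger the kill):

[tunnels]
ssh = ssh -q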

The tool could have spit out something like "hey, ssh is giving me shit, so I'm bailing; check out ssh" and it would have greatly decreased the time it took me to resolve the issue. Instead it just committed hara-kiri and left me a cryptic signal number. This is not how to design a tool meant for user interaction.

Monday, October 4, 2010

a brief introduction to bad internet paywall security

for some reason everybody seems to leave some hole in their internet paywall that you can go through to get free internet access. there are some obvious methods and some less obvious methods. at the end of the day, though, you should be aware of all of these when you deploy one.

ip over dns


this one is a given. if the network has a caching dns server/forwarder that will recurse for unauthorized clients, ip over dns like iodine or NSTX will usually get you somewhat-unstable-but-workable internet access. the fix is of course to just tell dnsmasq to point all lookups by an unauthorized client at your http server and provide an http redirect to the paywall site. apparently this is ridiculously hard for admins to comprehend.
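the dnsmasq side of that fix is about one line (the portal address here is made up; in practice you'd only apply it to clients that haven't authed yet):

# in dnsmasq.conf: answer every lookup with the captive portal's address
address=/#/10.0.0.1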

tunneling out through firewall pinholes


if admins set their firewalls up right, there should be no packets originating from an unauthorized wifi client that can reach a host on the internet. apparently it's much easier to just allow any wifi client to connect to udp port 53 on a remote host without even using a real dns service to pass along the query. openvpn listening on port 53 becomes highly useful here. a creative hacker could use something like a google voice-powered, SMS-controlled app to report back any SYN packets seen in a 10-minute window, and just try all 65k ports to find an open pinhole in the firewall.
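the server side of that trick is just a stock openvpn config bound to whatever port the firewall leaves open -- 53/udp here (certificates and the rest of a sane config omitted; this is only a sketch):

# server.conf
port 53
proto udp
dev tun
server 10.8.0.0 255.255.255.0
keepalive 10 60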

ip over icmp


this one isn't nearly as likely to work as the last two, but when it does, the connection is much more stable than ip over dns. examples are hans and ICMPTX. however it's usually rate limited to around 23kB/s in my experience (and it's probably much, much slower on IPv6, since the spec apparently only allows something like 4 ICMP messages per second?), so if you can use a tunnel straight to a remote host without going through another protocol and its overhead, all the better.
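before bothering with the tunnel it's worth checking whether the paywall even passes ICMP echo with a real payload (the host name is a placeholder for a box you control):

ping -c 3 -s 1000 your.tunnel-server.example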

overly permissive transparent squid proxy


so far i think i've only found one such proxy that successfully denies http requests from unauthorized users. people just don't seem to understand that even if their proxy doesn't have an IP address i can still use it. a very simple test is:

echo -en "GET http://www.google.com/ HTTP/1.1\nHost: www.google.com\n\n" | nc www.google.com 80

if this succeeds, their proxy is allowing anyone to go right through to the internets without authing. to use this in practice, download ProxyTunnel and use Dag's SSH-over-HTTP method to open an ssh tunnel with a SOCKS5 proxy, or hell, a ppp-over-ssh tunnel to get Hulu working. you should try both port 80 and 443 with this method, as sometimes they'll only allow one of them outbound through the proxy. also take note that even if the default transparent proxy is too restrictive, you should scan the default route and the rest of the network with nmap for more open proxy ports like 3128, 8080, etc (hint: AT&T's open proxy port is non-standard). for the most part some variation on this ssh config line will get you what you want:
ProxyCommand proxytunnel -p www.google.com:80 -r remotehost:public_http_port -d remotehost:internal_ssh_port -H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)\n"
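wrapped up into an ~/.ssh/config entry it looks something like this (the host alias and the SOCKS port are made up; the proxytunnel arguments are the same as above):

Host tunnelbox
    DynamicForward 1080
    ProxyCommand proxytunnel -p www.google.com:80 -r remotehost:public_http_port -d remotehost:internal_ssh_port -H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)\n"

then "ssh -N tunnelbox" and point your browser at the SOCKS5 proxy on localhost:1080.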


MAC address/IP address cloning


this is probably the easiest/most reliable method to get through a paywall. if someone else is already authed, just sniff the network, find their MAC and IP address, set them as your own, and start browsing. to be honest i don't ever use this method, but it should work in theory. if they enforce WPA encryption it should make this method difficult to impossible, though i'm really not up to speed on all the WPA attacks.
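on linux the cloning itself is only a few commands (the interface, MAC, and addresses below are placeholders for whatever you sniffed out of the air with something like tcpdump -e -n -i wlan0):

ip link set wlan0 down
ip link set wlan0 address 00:11:22:33:44:55
ip link set wlan0 up
ip addr flush dev wlan0
ip addr add 192.168.1.50/24 dev wlan0
ip route add default via 192.168.1.1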