Last Updated: 2011-09-08 03:38:14 UTC
by Rob VandenBrink (Version: 1)
I know, I know, this title sounds like heresy. The IT and infosec villagers are charging up the hill right now, pitchforks out and torches ablaze! I think I can hear them - something about "test first, then apply in a timely manner"? (Methinks they weren't born under a poet's star.) While I get their point, I think it's time to throw in the towel on this one.
On every security assessment I do for a client who's doing their best to do things the "right way", I find at least a few servers, and sometimes a barnful, with unpatched vulnerabilities (and often already compromised).
Really, look at the volume of patches we've got to deal with:
From Microsoft - once a month, but anywhere from 10-40 in one shot, every month! Since the turnaround from patch release to exploit on most MS patches is measured in hours (and is often in negative days), what exactly is "timely"?
Browsers - Oh, talk to me of browsers, do! Chrome is releasing patches so quickly now that I can't make heads or tails of the version (it was 13.0.782.220 today; yesterday it was .218 - the update just snuck in there when I wasn't looking). Firefox is debating removing the version number from Help / About entirely - they're talking about just reporting "days since your last confession ... er ... update" instead (the version will still be in the about:support url - a nifty page to take a close look at once in a while). IE reports a similarly sentence-like version number.
And this doesn't count email clients and servers, VoIP and IM apps, databases, and all the other stuff that keeps the wheels turning these days.
In short, dozens (or more) critical patches per week are in the hopper for the average IT department. I don't know about you, but I don't have a team of testers ready to leap into action, and if I had to truly, fully test 12 patches in one week, I would most likely not have time to do any actual work, or probably get any sleep either.
Where it's not already in place, it's really time to turn auto-update on for almost everything, grab patches the minute they're out of the gate, and keep the impulse engines - er - patch "velocity" at maximum. The big decision then is when to schedule reboots for the disruptive updates. This assumes we're talking about "reliable" products and companies - Microsoft, Apple, Oracle, the larger Linux distros, Apache and MySQL, for example - people who *do* have a staff dedicated to testing and QA on patches (I realize that "reliable" is a matter of opinion here). I'm NOT recommending this for independent / small-team open source projects, or for products that give you a daily feed off Subversion or whatever. And if you've got a dedicated VM with your web app pentest kit, wireless drivers and 6 versions of Python for the 30 tools all running just so, any update there could really make a mess. But these are the exceptions rather than the rule in most datacenters.
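On the Linux side, turning auto-update on can be as simple as a two-line apt configuration (a minimal sketch, assuming a Debian or Ubuntu box with the stock unattended-upgrades package installed; the file path and option names below are that package's standard ones):

```
# /etc/apt/apt.conf.d/20auto-upgrades
# Refresh the package lists and apply pending updates once a day
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

On the Windows side, the rough equivalent is pushing "auto download and schedule the install" out via WSUS or Group Policy - which also gives you a maintenance window to handle those disruptive reboots.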
Going to auto-pilot is almost the only option in most companies; management simply isn't paying anyone to test patches, they're paying folks to keep the projects rolling and the tapes running on time (or whatever other daily tasks "count" in your organization). The more you can automate, the better.
Mind you, testing large "roll-up" patch sets and Service Packs is still recommended. These updates are more likely to change the operation of underlying OS components (remember the chaos when packet signing became the default in Windows?).
There are a few risks in the "turn auto-update on and stand back" approach:
- A bad patch will absolutely sneak in once in a while, and something will break. For this, in most cases, it's better to suck it up for that one day and deal with one bad patch per year, as opposed to being owned for the other 364 days (just my opinion, mind you).
- If your update source is compromised, you are well and truly toast - look at the (very recent) kernel.org compromise ( http://isc.sans.edu/diary.html?storyid=11497 ) for instance. Now, I look at a situation like that and figure: "if they can compromise a trusted source like that, am I going to spot their hacked code by testing it?" Probably not - they're likely better coders than I am. It's not a risk I should ignore, but there isn't much I can do about it, so I try really hard to (ignore it).
What do you think? How are you dealing with the volume of patches we're faced with, and how's that workin' for ya? Please, use our comment form and let us know what you're seeing!