Last Updated: 2011-06-20 00:49:59 UTC
by Chris Mohan (Version: 1)
The media is full of security horror stories of company after company being breached by attackers, but very little information is actually forthcoming on the real details.
As an incident responder I attempt to understand what occurred and learn from these attacks, so I'm always looking for factual details of what actually happened, rather than conjecture, hearsay or pure guesswork.
Back in April Barracuda Networks, a security solution provider, was compromised and lost names and email addresses. They disclosed the breach, then took the admirable step of publishing how the breach took place, with screenshots of logs, and the lessons learnt from the attack.
I hope that those unfortunate enough to suffer future breaches are equally generous in sharing their logs and lessons learnt, so the rest of us can understand them and adapt our own systems. The attackers certainly share their tips and tricks, as anyone looking at the chat logs uploaded to public sites like pastebin can attest. We need the very smart folks looking after security at these attacked companies to step up and take the time to write up what really happened, making it accessible for the rest of us to learn from.
Seeing the events of an attack recorded in log files is a terrible, yet beautiful thing. To me it means we, as defenders, did one thing right, since detection is always a must. If the attack couldn't or wasn't blocked, then being able to replay how a system was compromised is the only way forward to stopping it from occurring again.
Log review should be an intrinsic routine performed by everyone, daily if possible, whether it's a visual, line-by-line review* or done with grep, a simple batch script, or a state-of-the-art security information and event management system that parses the logs into an easy-to-read-and-digest format even a novice IT person can review and understand. This should be part of the working day for all levels of support and security staff; drinking that morning coffee while flicking through the highlights of the systems should be part of the job description.
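As a rough illustration of the scripted end of that spectrum, here is a minimal Python sketch that boils a plain-text log down to a coffee-time digest. The log path and keywords are purely illustrative assumptions, not anything from a specific product; point it at your own logs and patterns.

#!/usr/bin/env python3
"""Minimal morning log-review sketch: count lines of interest in a text log.

Assumptions (illustrative only): the log is plain text at /var/log/auth.log
and the keywords below are example patterns, not a recommended watch list.
"""
from collections import Counter

LOG_FILE = "/var/log/auth.log"            # hypothetical path - use your own log
KEYWORDS = ["failed", "error", "denied"]  # illustrative patterns to flag

counts = Counter()
with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        lowered = line.lower()
        for word in KEYWORDS:
            if word in lowered:
                counts[word] += 1

# Print a short digest suitable for a quick scan over that morning coffee
for word, hits in counts.most_common():
    print(f"{word}: {hits} line(s)")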
Log files need to be easy to understand and to extract information from. As someone who works with huge Windows IIS log files, automation is your friend here. Jason Fossen's Search_Text_Log.vbs script is a great starting point for scripters, and for a more dynamic analysis tool, Microsoft's Log Parser is well worth taking the time to get to grips with. As an example of some of the information you can extract from IIS logs, have a read here to see how easy it is to pull pertinent data, and this blog piece has an excellent way to visually trend IIS data.
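If you don't have Log Parser to hand, even plain Python can pull pertinent data out of an IIS log. This is only a sketch, assuming the default W3C extended format with a "#Fields:" directive line; the file name is hypothetical.

#!/usr/bin/env python3
"""Sketch of pulling pertinent data from a W3C-format IIS log with plain Python.

Assumptions: the log uses the W3C extended format (lines starting with '#'
are directives, '#Fields:' names the columns); the file name is hypothetical.
"""
from collections import Counter

LOG_FILE = "u_ex110620.log"   # hypothetical IIS log file name

fields = []
status_by_uri = Counter()

with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        if line.startswith("#Fields:"):
            # Column names follow the directive, e.g. date time cs-uri-stem sc-status ...
            fields = line.split()[1:]
            continue
        if line.startswith("#") or not line.strip():
            continue
        row = dict(zip(fields, line.split()))
        key = (row.get("cs-uri-stem", "-"), row.get("sc-status", "-"))
        status_by_uri[key] += 1

# Top 10 URI/status combinations - runs of 404s and 500s often tell a story
for (uri, status), hits in status_by_uri.most_common(10):
    print(f"{status}  {hits:6d}  {uri}")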
If log analysis isn't something you do much of, then a marvellous way to get some practice in is this Honeynet.org challenge.
It's important to note that logging has to be enabled on your systems, set up properly and reviewed to produce useful information. Multiple logging sources have to use the same time source to make correlation easy, so take the time to make sure your environment is configured and logging correctly before you need to review the logs for an incident.
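To show why the shared time source matters, here is a small sketch that merges events from two hypothetical sources onto one UTC timeline. The timestamp formats and offsets are illustrative assumptions; the real fix is pointing every host at the same NTP source before an incident happens.

#!/usr/bin/env python3
"""Sketch of merging two log sources onto one UTC timeline for correlation.

Assumptions: the records, timestamp formats and UTC offsets below are made up
for illustration; real logs vary widely in format.
"""
from datetime import datetime, timedelta, timezone

# Example records: (raw timestamp, local UTC offset in hours, message)
web_events = [("2011-06-20 00:45:12", 0, "IIS: POST /login.aspx 500")]
fw_events = [("20/06/2011 10:44:58", 10, "FW: allow tcp 203.0.113.9 -> 10.0.0.5:80")]

timeline = []
for stamp, offset, msg in web_events:
    ts = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S").replace(
        tzinfo=timezone(timedelta(hours=offset)))
    timeline.append((ts.astimezone(timezone.utc), msg))
for stamp, offset, msg in fw_events:
    ts = datetime.strptime(stamp, "%d/%m/%Y %H:%M:%S").replace(
        tzinfo=timezone(timedelta(hours=offset)))
    timeline.append((ts.astimezone(timezone.utc), msg))

# Once every source shares a time base, ordering events across systems is trivial
for ts, msg in sorted(timeline):
    print(ts.isoformat(), msg)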
As always, if you have any suggestions, insights or tips please feel free to comment.
 Download log parser from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en
* For your own time management, eyesight and, frankly, sanity, try to avoid this.
Chris Mohan --- Internet Storm Center Handler on Duty