
SANS ISC: InfoSec Handlers Diary Blog - Internet Storm Center Diary 2014-05-15


APPLE-SA-2014-05-15-2 iTunes 11.2 available for download - security fixes address CVE-2014-1296: http://support.apple.com/kb/HT1222 & http://support.apple.com/kb/HT6245
APPLE-SA-2014-05-15-1 addresses multiple security issues, updates OS X Mavericks v10.9.3 - more info here: http://support.apple.com/kb/HT6207

Collecting Workstation / Software Inventory Several Ways

Published: 2014-05-15
Last Updated: 2014-05-15 22:12:06 UTC
by Rob VandenBrink (Version: 1)
6 comment(s)

One of the "prepare for a zero day" steps that I highlighted in my story last week was to inventory your network stations and know what's running on them - in short, the first two of the SANS 20 Critical Security Controls.  This can mean lots of things depending on your point of view.

Nmap can make an educated guess on the existence of hosts, their OS and active services:
nmap -p0-65535 -O -sV x.x.x.0/24
Good information, but not "take it to the bank" accuracy.  It'll also take a LONG time to run, so you might want to trim down the number of ports being evaluated (or not).  Even if you don't take this info as gospel, it's still good supplemental information for stations that are not in your domain.  You can kick this up a notch with Nessus, which will also log in to stations and enumerate software if you have credentials.

If you're running active directory, you can get a list of hosts using netdom, and a list of apps on each host using WMIC:
netdom.exe query /domain:domainname.com workstation | find /v "List of Workstations" >stations.out

(if you use "server" instead of "workstation", you'll get the server list instead)

and for each station:
wmic product list brief

But having run exactly this recently, I can tell you it can take a LONG time in a larger domain.  How can we speed this up?  In a word: PowerShell.
To inventory a domain:
import-module ActiveDirectory
Get-ADComputer -Filter * -Properties OperatingSystem,OperatingSystemServicePack,OperatingSystemVersion | Format-Table Name,OperatingSystem,OperatingSystemServicePack,OperatingSystemVersion


To inventory the software on a remote workstation:
Get-WmiObject -Class Win32_Product -ComputerName stationnamegoeshere | Select-Object -Property Name

( see here for more info: http://technet.microsoft.com/en-us/library/ee176860.aspx)

I recently collected this information first using the netdom/wmic method (hours), then using PowerShell (minutes).  Guess which way I'd recommend?

OK, now we've got what can easily be megabytes of text.  How do we find out who needs some TLC?  Who's running old or unpatched software?

As an example - who has or does NOT have EMET 4.1 installed?

To check this with WMIC:

"go.cmd" (for some reason all my parent scripts are called "go") might look like:
@echo off
for /f %%G in (stations.out) do call emetchk.cmd %%G

and emetchk.cmd might look like:
@echo off
echo %1  >> inventory.txt
wmic /node:%1 product where "name like 'EMET%%'" get name, identifyingnumber, InstallDate >> inventory.txt
echo.
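
Once inventory.txt is populated, a short script can flag the stations that came back with no EMET line.  Here's a minimal sketch in Python - the parsing assumes the layout produced by the loop above (each station name on its own line, followed by any wmic output), and the sample data is made up:

```python
# Sketch: parse the inventory.txt produced by the wmic loop above and
# report stations that have no EMET product line.  Assumes each station
# name appears alone on a line, followed by any wmic output for it.
import re

def stations_missing_emet(text):
    """Return station names whose section contains no EMET entry."""
    missing, current, has_emet = [], None, False
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        # A bare single token (no spaces) is taken to be a station name;
        # wmic header and product rows contain multiple columns.
        if re.fullmatch(r"[A-Za-z0-9_.-]+", line):
            if current is not None and not has_emet:
                missing.append(current)
            current, has_emet = line, False
        elif "EMET" in line:
            has_emet = True
    if current is not None and not has_emet:
        missing.append(current)
    return missing

sample = """\
WKS001
IdentifyingNumber  InstallDate  Name
{GUID-1}  20140101  EMET 4.1
WKS002
WKS003
{GUID-2}  20140102  EMET 4.1
"""
print(stations_missing_emet(sample))  # ['WKS002']
```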


Or with PowerShell, the domain enumeration would look like:
import-module ActiveDirectory
Get-ADComputer -Filter * | Select-Object -ExpandProperty Name > stations.out

Then, to enumerate the actual applications (for each station in stations.out), you could either use the emetchk.cmd script above, or re-do the thing in PowerShell (I haven't gotten that far yet, but if any of our readers want to add a script in the comments, I'm sure folks would love to see it!).  In this example, each station's software list is redirected to its own file:

Get-WmiObject -Class Win32_Product -computername stationname | Select-Object -Property Name > stationname.txt

Done!

If you run this periodically, you can "diff" the results between runs to see what's changed.  Diff is standard on Linux, is part of Windows these days too if you install SFU (Services for UNIX), or you can get a nice diff report in PowerShell with:

Compare-Object -ReferenceObject (Get-Content c:\path\file01.txt) -DifferenceObject (Get-Content c:\path\file02.txt)
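
If PowerShell isn't handy where you archive the runs, the same comparison takes only a few lines of anything scriptable.  A minimal sketch in Python (the product names below are made-up sample data):

```python
# Sketch: compare two inventory runs and report what software appeared
# or disappeared between them.  Entries are treated as unordered sets,
# matching what Compare-Object does with the two file contents.
def inventory_diff(old_lines, new_lines):
    """Return (added, removed) entries between two inventory runs."""
    old, new = set(old_lines), set(new_lines)
    return sorted(new - old), sorted(old - new)

run1 = ["EMET 4.1", "Java 6 Update 31", "Firefox 3.6"]
run2 = ["EMET 4.1", "Java 7 Update 55", "Firefox 3.6"]
added, removed = inventory_diff(run1, run2)
print(added)    # ['Java 7 Update 55']
print(removed)  # ['Java 6 Update 31']
```

In practice you'd feed it the saved files, e.g. `inventory_diff(open("run1.txt").read().splitlines(), open("run2.txt").read().splitlines())`.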

But what about the stations that aren't in our corporate domain?  Even if your domain inventory is solid, you should still sniff network traffic using tools like PVS (from Tenable) or p0f (open source, from http://lcamtuf.coredump.cx/p0f3/) to identify folks running old versions of Java, Flash, AIR, IE or Firefox (mostly pre-auto-update versions) that aren't in your domain and so might get missed in your "traditional" data collection.  Normally these sniffer stations monitor traffic in and out of choke points in the network, like firewalls or routers.  We covered this earlier this year here: https://isc.sans.edu/diary.html?date=2013-12-19

I hope this outlines free or close-to-free solutions to get these tasks done.  If you've found other (or better) ways to collect this info without a large cash outlay and/or a multi-week project, please share using our comment form.

===============
Rob VandenBrink
Metafore


Breaches and Attacks that are "Not in Scope"

Published: 2014-05-15
Last Updated: 2014-05-15 19:05:09 UTC
by Rob VandenBrink (Version: 1)
2 comment(s)

Last week, we saw Orange (a telecom company based in France) compromised, with the info for 1.3 million clients breached.  At this time, it does not appear that any credit card numbers or credentials were exposed in that event (http://www.reuters.com/article/2014/05/07/france-telecomunications-idUSL6N0NT2I120140507).

The interesting thing about this data breach was that it involved systems that would not be considered "primary" - the compromised site housed contact information for customers who had "opted in" to receive sales and marketing information.

I'm seeing this as a disturbing trend.  During security assessments, penetration tests and especially PCI audits, I see organizations narrow the scope to the systems that they deem "important".  But guess what: the data being protected has sprawled into other departments, and is now housed on other servers, in security zones where it should not be, and in some cases in spreadsheets on laptops or tablets, often unencrypted.  Backup images and backup servers are other components that are often not as well protected as the primary data (don't ask me why this oversight is so common).

The common quote amongst penetration testers and other security professionals for this situation is "guess what, the internet (and the real attackers) have not read or signed your scope document".

It's easy to say that we need to be better stewards of our customers' information - but really, we do.  Organisations need to characterise what their information looks like (with regexes, or dummy customer records that you can search for), then go actively hunt for it.  Be your own Google - write scripts to crawl your own servers and workstations looking for this information.  Once this process is in place, it's easy to run it periodically, or better yet, continuously.  Put this info into your Snort (or other IPS) signatures so you can see it on the wire, in emails, and in file copy or save operations.
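
As a sketch of that "be your own Google" idea, a crawler can be very simple.  The marker pattern and file handling below are made-up examples - substitute a regex or dummy record that matches your own customer data:

```python
# Sketch: crawl a directory tree for files containing records that look
# like our data.  The "ACME-CUST-nnnnnn" marker is a hypothetical dummy
# customer record seeded for exactly this kind of hunt; replace it with
# patterns matching your real records.
import os
import re

DUMMY_RECORD = re.compile(r"ACME-CUST-\d{6}")  # hypothetical marker

def hunt(root):
    """Yield (path, match) for every file containing the pattern."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    for m in DUMMY_RECORD.finditer(f.read()):
                        yield path, m.group()
            except OSError:
                continue  # skip unreadable files, keep crawling

for path, hit in hunt("."):
    print(path, hit)
```

Point it at file shares, backup mount points and workstation drives, and anything it prints is customer data living somewhere it probably shouldn't be.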

Too often the breach that happens is on a system that's out of scope and much less protected than our "crown jewels" data deserves.  If you're in the process of establishing a scope for PCI or some other regulatory framework, stop and ask yourself "wouldn't it be a good idea to put these controls on the rest of the network too?"

===============
Rob VandenBrink
Metafore
