Last Updated: 2009-11-08 16:31:39 UTC
by Kevin Liston (Version: 1)
Legacy systems have been a popular topic here recently (see http://isc.sans.org/diary.html?storyid=7528 and http://isc.sans.org/diary.html?storyid=7546). Any environment of sufficient size, complexity, or age will have its share of legacy systems. While you can work with policy and management to phase them out, in the meantime you have to deal with the fact that they're on the network and vulnerable, which makes your network vulnerable. Does it have to be that way?
Consider this simplified example: your company makes widgets, the widget-making machine is computer controlled, and the company that wrote the software is now out of business, so there is no chance of upgrades or patches in your future. A bad-case scenario: a consultant from Acme Industries comes into your facility with a laptop infected with an old worm (say Downadup), and when they connect to your network it infects your widget-making machine. Hilarity ensues.
A possible solution is to reconsider why that legacy machine needs to be on the network. Do you know why? It's probably serving a web application, or someone is VNCing into the system to manage it, or it has to send out status emails, etc. That's the first step: understand what services are required. Then, use another device (because if you could lock down the legacy system, it would already be locked down, right?) to isolate that system. Old techniques like access control lists (ACLs) and virtual LANs won't block a dedicated human attacker, but against automated malware they can be quite effective. If you have to expose a vulnerable service, limit that exposure to known and trusted systems on your network, not to everyone on your network. Also, make sure that the isolation works both ways: if something manages to get into the system, you can at least keep it from spamming the rest of your network.
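That first step, figuring out which services the legacy box actually exposes, can be done with nothing fancier than a TCP connect check. A minimal sketch in Python (the host address 10.0.5.20 and the port list are placeholders, not from any real deployment):

```python
import socket

def open_tcp_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success, an errno otherwise
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

if __name__ == "__main__":
    # Likely candidates for a legacy box: SMTP, HTTP, VNC
    print(open_tcp_ports("10.0.5.20", [25, 80, 5900]))
```

Whatever this turns up beyond the services you can justify is a candidate for blocking at the isolating device.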
This approach also works when you have to plug a vendor’s “Appliance” into your network.
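The isolation described above can be sketched as a handful of Linux iptables rules on a filtering device placed in front of the legacy host. Every address and port here is an illustrative assumption (a legacy box at 10.0.5.20, a trusted management workstation at 10.0.5.10, an internal mail relay at 10.0.5.25), not a detail from any real deployment:

```shell
# Allow only the trusted workstation to reach the legacy box's VNC service
iptables -A FORWARD -s 10.0.5.10 -d 10.0.5.20 -p tcp --dport 5900 -j ACCEPT

# Let the legacy box send its status emails, but only to the internal relay
iptables -A FORWARD -s 10.0.5.20 -d 10.0.5.25 -p tcp --dport 25 -j ACCEPT

# Isolation works both ways: drop everything else to and from the box
iptables -A FORWARD -d 10.0.5.20 -j DROP
iptables -A FORWARD -s 10.0.5.20 -j DROP
```

The last two rules are the "both ways" part: if the box is ever compromised, they keep it from becoming a launching point against the rest of the network.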