Servers exposed to the internet are typically attacked daily by automated scanners. In many cases these scanners do not check whether the targeted application is present on the server, or even whether there is a live host at the external IP address; they simply launch exploit attempts across the internet IP space. Most of this automated scanning targets common web application vulnerabilities.
Organizations with public-facing devices can expect attacks from multiple external IPs hitting their assets daily, and the volume of this activity can be overwhelming. How can organizations that receive alerts for these attacks respond more effectively? Some simply block each IP address as an alert is generated. This tends to be ineffective: the activity is likely to continue from another IP address, and the underlying issue is not one of IP-based access control. Moreover, the number of unique IPs attempting to exploit a public-facing server can grow into the hundreds within a single day. A more adequate response is to verify whether the targeted application or vulnerability exists on the host under attack. If not, future alerts for similar activity can be filtered. But as scanners continue to evolve and incorporate more exploit-specific features, this too can become tedious.
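As a rough illustration of that triage approach, checking whether the targeted application actually exists before escalating an alert, here is a minimal sketch in Python. The alert fields, hostnames, and paths are hypothetical assumptions, not from any particular IDS:

```python
# Minimal alert-triage sketch (hypothetical alert format): before
# escalating an exploit attempt against a given URL path, probe whether
# the host actually serves anything at that path. A 404 or a connection
# failure suggests the targeted application is absent, so the alert can
# be down-ranked/filtered rather than investigated.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError


def target_exists(host: str, path: str, timeout: float = 5.0) -> bool:
    """Return True if the host serves something at the targeted path."""
    url = f"http://{host}{path}"
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except HTTPError as e:
        # 4xx/5xx response: the targeted application is not present
        # (or is not responding usefully) at this path.
        return e.code < 400
    except URLError:
        # Host unreachable: nothing to exploit at this address.
        return False


def triage(alert: dict) -> str:
    """Down-rank alerts whose targeted application is absent."""
    if target_exists(alert["dst_host"], alert["uri"]):
        return "investigate"
    return "filter"
```

This only tells you the application is reachable, not that it is vulnerable, so "investigate" alerts still need human review; but it can cheaply discard the large fraction of scans aimed at software you do not run.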
Because many alerts show only the attempted exploit and give no indication of success or failure, should organizations validate each unique request from offending IPs to determine whether the attack succeeded? Again, this can be almost unbearable given the volume of alerts generated.
How can organizations more effectively determine which attacks they should be responding to in the face of constant automated scanning?
Nov 9th 2014