Last Updated: 2010-08-10 00:41:50 UTC
by Stephen Hall (Version: 1)
Seb dropped me a note today asking us to remind our readers that we are counting down to a bumper crop of patches being released by Microsoft on Tuesday.
In the Microsoft Advance Notification, they are reporting 14 bulletins: 8 critical and 6 important. Given that all of the critical bulletins are classified as remote code execution, it's time to dust off your monthly patching process and get it shipshape, ready for the fun to start.
Given that we have a few days between Seb's timely reminder and when we need to push the patch button, how good do you think your patching processes are? How do you measure their effectiveness, and how do you measure their maturity?
Maybe you would consider scoring them against a scale such as COBIT? There is a nice table on the ISACA site which explains the maturity ratings within COBIT (derived from the SEI Capability Maturity Model (CMM)), which I've reproduced below:
- Level 0: Non-existent
- Level 1: Initial/ad hoc
- Level 2: Repeatable but Intuitive
- Level 3: Defined Process
- Level 4: Managed and Measurable
- Level 5: Optimized
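If it helps to make the scale concrete, here is a minimal sketch of the levels above as a lookup you could use when tagging each process in a self-assessment. The level names come from the ISACA table quoted above; the function and variable names are my own hypothetical illustration, not anything from COBIT itself.

```python
# The COBIT/CMM maturity levels from the list above, as a simple lookup.
# Illustrative only -- names other than the level labels are made up.
COBIT_MATURITY = {
    0: "Non-existent",
    1: "Initial/ad hoc",
    2: "Repeatable but Intuitive",
    3: "Defined Process",
    4: "Managed and Measurable",
    5: "Optimized",
}

def describe_level(score: int) -> str:
    """Return the maturity label for a 0-5 self-assessment score."""
    if score not in COBIT_MATURITY:
        raise ValueError("maturity scores run from 0 to 5")
    return f"Level {score}: {COBIT_MATURITY[score]}"

# Example: a patching process that is documented but not yet measured
print(describe_level(3))  # Level 3: Defined Process
```

A score of 3 here would mean you have a defined patching process on paper; moving to 4 means you can actually show the numbers behind it.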
Given the frequency with which suppliers, including Microsoft, release such patches, where would you score yourself?
If you score somewhere between 3 and 4, in that you have a process but you don't measure your success, what would you do to get yourself up towards a 4, or maybe even a 5?
Let me know before you get busy patching those systems, and I'll update with the best suggestions.