Last Updated: 2012-05-01 15:31:30 UTC
by Rob VandenBrink (Version: 2)
Just a quick update to Johannes's story on the 27th about the Oracle TNS listener vulnerability ( http://isc.sans.edu/diary.html?storyid=13069 )
We received two updates from our readers on this today:
Reader "anothergeek" posted a comment to Johannes's story, noting that Oracle released a workaround today (Apr 30) - find details here ==> http://www.oracle.com/technetwork/topics/security/alert-cve-2012-1675-1608180.html
Shortly after, reader R.P. pointed us to a page that had proof of concept ( with a video no less) ==> http://eromang.zataz.com/2012/04/30/oracle-database-tns-poison-0day-video-demonstration/
So get that maintenance window scheduled folks! Those patches don't do you any good in your Downloads folder!
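In the meantime, Oracle's published workaround restricts who may register instances with the TNS listener. For a single-instance database, that boils down to a listener.ora change along these lines (a sketch only, based on Oracle's Class of Secure Transports (COST) guidance - the listener name "LISTENER" here is the default and an assumption, and RAC environments need the additional TCPS configuration described in the alert and its referenced support notes):

```
# listener.ora - accept instance registration over local IPC only,
# so a remote attacker can't register a rogue instance (the core of
# the TNS Poison attack).  Parameter name is
# SECURE_REGISTER_<listener_name>; adjust to your listener's name.
SECURE_REGISTER_LISTENER = (IPC)
```

After editing, reload the listener (lsnrctl reload) and confirm that legitimate local registration still works before calling the window done.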
From the perspective of someone who does audits and assessments, it's a sad thing to note that in many organizations it's tough to schedule maintenance on a large Oracle server. So many applications get piled on these that database and operating system patches can be a real challenge to book, because an interruption in service can affect dozens or hundreds of applications.
Sadly this means that database patches are often quarterly or annual events. Or "fairy tale events" (as in never-never).
Last Updated: 2012-04-30 02:00:32 UTC
by Rob VandenBrink (Version: 1)
Remember back in 2010, when Google was in hot water for some wardriving activities, where personal information was gathered from unencrypted wireless networks found during its Streetview activities? Deb wrote this up here ==> https://isc.sans.edu/diary.html?storyid=8794
Well, it looks like the discussion won't die - the FCC has just posted a summary of its findings here, along with some good background and a chronology of events in their investigation ==> http://transition.fcc.gov/DA-12-592A1.pdf
You'll notice that it's heavily redacted. A version with much less redacting can be found here ==> http://www.scribd.com/fullscreen/91652398
It's very interesting reading. What stood out most for me in the paper:
- I thought it was sensible that the engineer didn't write a new tool for this - they used Kismet to collect the data, then massaged Kismet's output during their later analysis. Aside from the fact that anyone who's taken almost any SANS class would realize how wrong using the tool this way was, at least they didn't write something from scratch.
- Page 2 outlines the various radio licenses held by Google. This caught my eye mostly because I'm in the process of studying up for my own license.
- The suggestion and implementation for the data collection in the controversy came from unnamed engineers ("Engineer Doe" in the paper). I found it really interesting how the final "findings" document doesn't name actual names - I'd have thought that assigning responsibility would be one of the main purposes of this doc, but hey, what do I know?
- Engineer Doe actually outlined in a design document how the tool would collect payloads (page 10/11), but then discounted the impact because the Streetview cars wouldn't "be in close proximity to any given user for an extended period of time". The approval for the activity came from a manager who (as far as this doc is concerned) didn't understand the implications of collecting this info, or maybe didn't read the doc, or missed the importance of that section - though a rather pointed question about where URL information was coming from was lifted out of one critical email.
Needless to say, violating Privacy Legislation "just a little bit" is like being a little bit pregnant - the final data included userids, passwords, health information, you name it. As they say, "close only counts in horseshoes and hand grenades" - NOT in compliance with Privacy rules!
Long story short, this document outlines how the manager(s) of the project trusted the engineer's word on the legal implications of the activity. I see this frequently in my "day job". Managers often don't know when to seek a legal opinion - in a lot of cases, if it sounds technical, it must be a technical decision, right? So they ask their technical people. Or if they know that they need a legal opinion, they frequently don't have a budget to go down this road, and are left on their own to take their best shot at the "do the right thing" decision. As you can imagine, when the results of a decision like this come back to see the light of day, it seldom ends well. Google, though, has a legal department on staff, and I'd imagine that one of its primary directives is to keep an eye on Privacy Legislation, Regulations and Compliance with said legislation. But you can't fault the legal team if the question never gets directed their way (back to middle management).
From a project manager's point of view, this nicely outlines how expanding the scope of a project without the approval of the project sponsor is almost always a bad idea. In most cases I've seen, the implications of changing the scope are all about impacts to budget and schedule, but in this case a good idea and a neat project (Google Streetview) ended up being associated with activity that was later deemed illegal, which is a real shame. From a project manager's perspective, exceeding the project scope is almost as bad a failure as not meeting the scope. Exceeding the scope means that you either exceeded the budget or schedule, mis-estimated the budget or schedule, or - as in this case - didn't get the legal homework done on the scope overage.
Take a minute to read the FCC doc (either version). It's an interesting chronology of a technical project's development and execution, mixed in with company politics, legal investigation and a liberal sprinkling of "I don't recall the details of that event" type statements. Not the stuff that blockbuster movies are made of, but interesting nonetheless!
We invite your opinions, or any corrections if I've mis-interpreted any of this - please use our COMMENT FORM. I've hit the high points, but I'm no more a lawyer than "Engineer Doe" is.
Last Updated: 2012-04-30 01:10:21 UTC
by Rob VandenBrink (Version: 1)
I was reading the other night, which since I've migrated my library means that I was on my iPad.
My kid (he's 11) happened to be in the room, playing a game on one console or another. I'm deep in my book, and he's deep in his game, when he pipes up with "Y'know Dad?"
"You should enable complex passwords on your tablet"
(Really, he said exactly that! I guess he was in Settings / Security and wasn't playing a game after all!)
"Why is that?" I said (hoping he'd come up with a good answer here)
"Because if somebody takes your tablet, it'll be harder for them to guess your password" (good answer!)
"Good idea - is there anything else I should know?"
"If they guess your password wrong 10 times, your tablet will get wiped out, so they won't get your stuff" (Oh - bonus points!)
So aside from me having a really proud parent moment, why is this on the ISC page? Because it's really good advice, that's why!
It's surprising how many people use the last 4 digits of their phone number, their birthday, or worse yet, their bank card PIN (yes, really) for a password, or have no password at all. And yet, we have all kinds of confidential information on our tablets and phones - mostly in the form of corporate emails and sometimes documents.
As is the case in so many things, when we in the security community discuss tablet security, it's usually about the more advanced and interesting topics like remote management, remote data wipe or forensics. These are valuable discussions - but in a lot of cases, basic (and I mean REALLY BASIC) security 101 advice to our user community will go a lot further in enhancing our security position. Advice like I got from my kid:
- Set a password!
- Make sure that it's reasonably complex (letters and numbers)
- Make sure that it's not a family member's name, a phone number, a birthday, a bank PIN or something that might be found on your Facebook page
- Set a screen saver timeout
- Set the device to lock when you close the cover
- Delete any documents that you are finished with - remember, the doc on your tablet is just an out of date copy
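The first few checks in that list are simple enough to sketch in code. Here's a minimal illustration in Python (the function name, minimum length and the short list of "personal" strings are my own assumptions for the example, not any platform's actual passcode policy):

```python
import re

# Strings a passcode should never contain.  In practice you'd build this
# from the user's own names, numbers and social-media details; this short
# list is purely illustrative.
PERSONAL_HINTS = ["alice", "1234", "0101"]

def is_reasonable_passcode(passcode: str) -> bool:
    """Apply the basic checks from the list above."""
    if len(passcode) < 6:                      # a passcode must exist and have some length
        return False
    if not re.search(r"[A-Za-z]", passcode):   # needs letters...
        return False
    if not re.search(r"[0-9]", passcode):      # ...and numbers
        return False
    lowered = passcode.lower()
    for hint in PERSONAL_HINTS:                # not a name, birthday, bank PIN, etc.
        if hint in lowered:
            return False
    return True

print(is_reasonable_passcode("1234"))         # False - too short, no letters
print(is_reasonable_passcode("blue42horse"))  # True - letters and numbers, nothing personal
```

A device's lock screen enforces this for you once a complexity policy is turned on - the point of the sketch is just that the rules themselves are trivially simple, which makes skipping them all the harder to excuse.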
This may seem like really basic advice, and that's because it is. But in the current wave of BYOD (Bring Your Own Device) policies at many organizations, we're seeing almost zero attention paid to the security of the organization's data. BYOD seems to be about transferring costs to our users on one hand, and keeping them happy by letting them use their tablets and phones at work (or school) on the other.
Good resources for iPad security (as well as Android and other tablets) can be found in the SANS Reading Room ( http://www.sans.org/reading_room/ )
Vendors also maintain security documentation - Apple has some good (but basic) guidance at ==> http://www.apple.com/ipad/business/docs/iPad_Security.pdf
NIST has guidance for Android and Apple (though both are a bit out of date):
Please, use our COMMENT FORM to pass along any tablet security tips or links you may have.