After some administrivia, a site and hosting change, and some changes in management, Fudsec is back.

On these pages you can read posts by guest authors that have been published at Fudsec since 2009.

We seek posts from information security professionals about Fear, Uncertainty and Doubt (FUD) in security, intelligence, and the marketing of those disciplines.

Please use the contact form below to get in touch.

“Rabbit-rabbit” folks on this 1st day of the month. Just when many of you thought it was safe to go back into the water. Just when you thought nothing could be worse than APT… think again. Wade Baker followed his nose and unearthed something even more silent – even more deadly. This is the Press Release “they” didn’t want you to see.

by Wade Baker (@wadebaker)

Advanced Persistent Threats (APTs) garnered a huge amount of attention within the security community in 2010. Reports of sophisticated attacks against high-profile organizations provided ample fuel, and the fear of APTs spread like wildfire. Many expressed a sense of hopelessness against this new foe. Trade secrets were lost. Reputations damaged. White-knuckled fear and frustration ensued.

But that was last year, and there is no relief for the afflicted, no rest for the weary.

2011 brings with it the foul wind of another, even more advanced, and vastly more persistent threat in our midst. These vile agents, known as Far Advanced Relentless Threats, have quickly become an assault to the senses, permeating corporate environments with ease.

Intelligence and research analyst Wade Baker laments “the worst part about this new threat is that the data on their origins, behaviors, and motives is so scarce. Security hinges on knowing our enemy, but that’s impossible with Far Advanced Relentless Threats. They rise up from the bowels of who-knows-where and hit you like a ton of bricks so fast it can take your breath away.”

When asked about whether the analyst community is looking into this situation, industry analyst Josh Corman answers “Absolutely.” “As soon as the news broke wind of this new threat, we stuck our noses out to see what we could learn. It didn’t take long to catch a whiff of Far Advanced Relentless Threats affecting our own ranks. They hit Andrew Hay bad one day last week; it was nasty and it’s going to take some time to recover.”

Researchers are, at least, trying to better understand how they work. “Those who incorporate JavaBeans into their applications seem particularly vulnerable” says application security specialist Jeremiah Grossman. “Far Advanced Relentless Threats typically follow an attack pattern that results in a sudden and violent buffer overflow condition. Being on the receiving end of that kind of force really stinks.”

According to industry expert Christofer Hoff, one of the aspects of Far Advanced Relentless Threats that makes them so invasive is their ability to spread rapidly via the cloud. “They’re extremely efficient,” he says. “They are highly scalable, deploy quickly, and can also dissipate swiftly as though they were never there. By then, of course, the damage has already been done…and don’t even get me started on what this will mean for cropdusting and cloudbursting.”

“Some Far Advanced Relentless Threats trumpet their presence loudly, but it’s the silent ones that are truly deadly,” claims forensic investigator Andrew Valentine. “In most circumstances they leave no lasting evidence and studying those rare logs that are left behind hasn’t yielded much useful information regarding the identity and/or origin of these threats.”

Because of their stealthy tactics, some believe Far Advanced Relentless Threats are a bunch of hot air. But those who have experienced their awful reality first-hand know better. “It can really damage your reputation,” says Alex Hutton, “and leave an awful stain that may never wash away. When that happens, you might as well just go home; there’s no showing your face again in public after that.”

Not everyone is ready to surrender and go home, however. Chris Porter has put together a special unit known as the Far Advanced Relentless Threat Emergency Response Squad. “We can’t keep holding back and silently letting things go. It’s not the time to be timid; it’s go time. We’re gonna drop some bombs,” he says pointedly and confidently. 

Happy April 1st!
Be sure to use the #FARTsec hashtag when referring to this new threat.


By Bob Rudis (@hrbrmstr)

By now, most infosec folk have digested, opined on, and come to loathe the EMC (RSA) SecurID breach story that broke on March 17. Their 8-K filing contains both the open (public) letter and the initial guidance provided to customers on steps they should take to ensure the CIA of their SecurID infrastructure. EMC released additional information on March 22, but no official communication has gone into any real detail as to the specific vectors of the attack, save for a single line:

“Our investigation has led us to believe that the attack is in the category of an Advanced Persistent Threat (APT).”

Despite that vague speculation (“led us to believe” is not “we confidently know”) on the part of EMC, it seems that there are at least two vendors who know exactly what style of APT was used and how they can stop it. The problem is that they seem to disagree on which APT it was.

Vendor #1

For various reasons, I had to redact portions of this particular communication. I can attest to the authenticity of the e-mail, but you could argue that makes me about as trustworthy as a Comodo SSL certificate. Their e-mail came soon after the breach announcement, hence my putting them first. Here is what they claim happened to EMC:

[Image: the vendor’s redacted e-mail]

You can read the full, redacted e-mail at your leisure. Thankfully, we already use their technology, so I can be confident I’m fully protected against the EMC-felling APT. (HTML6 really needs a <sarcasm> tag).

Vendor #2

Just as I was feeling smugly safe all weekend, I awoke to the following in e-mail today (as did many others):

[Image: the vendor’s e-mail]

I hadn’t even had one ounce of caffeine yet, but was forced into immediately questioning my security posture and whether or not I was truly protected from these “APTs”. Given the intensity of their message, these folks must have the inside scoop:

[Image: excerpt from the vendor’s e-mail]

Quite the differing views on what happened and where I need to focus my protection efforts. Which one should I believe?

Who Protects Us From The Protectors?

Both vendors called out in this post seized on the opportunity to feast on the wounded carcass of a competitor who is a huge player in the IT security & compliance sector. Neither has helped me effectively communicate the real threat(s) to my stakeholders, and neither has given me anything tangible to put into a roadmap for my security program. Even EMC itself caused a significant amount of churn in many organizations and has done its own share of spreading Fear, Uncertainty and Doubt due to the sheer lack of information about the breach.

I am fully aware of how difficult the situation is for EMC and the fine line they need to walk in this situation. However, fueling the APT FUD machine was unnecessary and has only encouraged more speculation in the infosec community and seems to have brought out the worst in some other companies in this sector.

We need to make it clear to vendors that we won’t stand for opportunistic scare tactics like this and we also need to continue to foster a community of sharing and open discourse between each other to keep the FUD under control.

As unlikely as it would be for the Wikileaks phenomenon to be uttered in proximity of FUD, our returning champion Chris Swan felt compelled to speak on the matter. Let’s hope he doesn’t get us DDoS’d (Wait. DDoS attacks are just FUD, right? We’ve lost track.)

by Chris Swan (@cpswan)


Firstly, this isn’t a post about the rights or wrongs of Wikileaks itself. That’s been covered elsewhere in a more serious, thoughtful and funny way than I could ever manage myself.

This is about Wikileaks being the new mother lode of FUD. It’s becoming the centre of the stories that security vendors tell customers to keep them scared at night.

I’m not going to link to the guilty. We all know who they are, and I could never be comprehensive enough. It would be like having just a few hundred examples out of a quarter of a million. We could point and laugh at one culprit without realising that an even more egregious example is just around the corner.

What I have to say here has its genesis in Andrew McAfee’s post a few days ago ‘Did WikiLeaks’ “Cablegate” Result From Too Much Information Sharing?’. This is a problematic question, and seems to put information sharing (which is key to running a business or government) at odds with security (which is key to running a business or government) – what to do?

I made some comments on the post, which are worth repeating here:

The problem here wasn’t classification. The material was correctly classified, and processed on the right systems.

The problem here wasn’t clearance. Whoever did this almost certainly needed access to material of this protective marking.

As you rightly point out the problem isn’t about sharing. The intelligence community (and military at large) have got better at sharing, and need to continue.

The problem is aggregation. This is a well-known problem in the military/security community, and one that has changed dramatically in the digital era. It’s bad enough to have an entire aircraft, ship or tank filled with sensitive material on paper fall into enemy hands, but as we see here that’s nothing compared to what you can get onto a thumb drive.

The massive fail appears to be that the monitoring systems didn’t ring alarm bells when somebody was bulk downloading massive quantities of data. Quantities of data that couldn’t possibly have been read by an individual (or even a large unit). This should be the focus of the fire drill that’s surely going on right now. This isn’t about horses or stable doors, this is about somebody driving a virtual semi-trailer out the gate and nobody noticing.
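That kind of volumetric alarm needs no exotic technology. As a purely illustrative sketch (the audit-log schema, field names and threshold are invented, not any particular product), flagging an account whose daily pull dwarfs its own baseline takes only a few lines of Python:

from collections import defaultdict
from statistics import mean, stdev

def flag_bulk_downloaders(events, min_history=7, z_threshold=4.0):
    """Flag users whose latest daily download volume is a gross outlier
    against their own history. `events` is an iterable of
    (user, day, n_bytes) tuples -- a hypothetical audit-log schema."""
    per_user = defaultdict(lambda: defaultdict(int))
    for user, day, n_bytes in events:
        per_user[user][day] += n_bytes

    flagged = []
    for user, days in per_user.items():
        history = [days[d] for d in sorted(days)]
        if len(history) < min_history:
            continue  # not enough baseline to judge
        baseline, latest = history[:-1], history[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        # Nobody reads this much: a day several standard deviations above
        # the user's own baseline is the virtual semi-trailer at the gate.
        if sigma > 0 and (latest - mu) / sigma > z_threshold:
            flagged.append((user, latest))
    return flagged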

I’ve since had time to reflect on those comments…

I now very much doubt that the material was correctly classified. A lot of it is marked SECRET, and it’s worth having a quick reminder of the definition: “Secret” shall be applied to information, the unauthorized disclosure of which reasonably could be expected to cause serious damage to the national security. Arguably ‘serious damage’ hasn’t (yet) been caused, and hence the documents were incorrectly classified. It’s also worth mentioning here that the US seems to be stuck in an old-world system of ‘classification’ where others (such as the UK) have moved on to a more refined concept of ‘protective marking’. In that system there’s a sub-category for ‘Impact on foreign relations’, and at business impact level 3 we find ‘Cause embarrassment to Diplomatic relations’, which is where we seem to find ourselves.

Pointing the finger at aggregation is perhaps an oversimplification. Schneier is right that it’s really an access control issue – at least to the extent that you don’t get an inappropriate aggregation if you have the right access control. It would appear that the issue with SIPRNet is that there’s no effective compartmentalisation of material (as there would be on systems holding TOP SECRET material). Of course we see this issue in business too. Cleared to see != need to know, and there’s often a specific need for compartmentalisation to create ethical boundaries (or their more politically incorrect cousins, Chinese firewalls).
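The ‘cleared to see != need to know’ distinction is easy to make concrete. Here’s a toy sketch (invented labels, nobody’s real system) of why a clearance check alone is the wrong test:

# Toy model: access requires BOTH sufficient clearance (the hierarchy)
# AND membership in every compartment stamped on the document.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def can_access(user_level, user_compartments, doc_level, doc_compartments):
    cleared_to_see = LEVELS[user_level] >= LEVELS[doc_level]
    need_to_know = set(doc_compartments) <= set(user_compartments)
    return cleared_to_see and need_to_know

# A high clearance alone is not enough without the compartment:
print(can_access("TOP SECRET", set(), "SECRET", {"DIPLOMATIC"}))       # False
print(can_access("SECRET", {"DIPLOMATIC"}, "SECRET", {"DIPLOMATIC"}))  # True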

It’s at this point that the FUD-toting security industry bandwagon rolls into town and says ‘my product/service can solve these (access control) issues’. We’ll be seeing a lot of DLP/ERM/IRM vendors doing this over the coming weeks and months. More so if Wikileaks move on from government to big business, as has been threatened. The problem is that this is total BS. I wrote some years ago about ‘the wrongs of enterprise rights management’ and spent a great deal of time socialising the issues with security vendors. Largely those issues have been ignored, and the vendors have continued to peddle solutions that are just as broken now as they were then. That’s because these are hard problems. Problems that require business commitment and human input. Problems that can’t be solved by a technology silver bullet. Of course the technology could get better at helping us with the organisational and people issues here, but it’s not a magic wand.

Perhaps some of the solutions out there could have helped with what happened on SIPRNet by creating workable compartmentalisation overlays, observing anomalous access patterns or preventing exfiltration. But that would be a question of scope and scale, and ‘cablegate’ may be unique in that. The real problem here is that there’s nothing technology can do about an authorised insider turning rogue and leaking a single critical piece of information, and that’s what we’re likely to see next – single smoking guns that cause real harm to businesses (and likely an ethical car crash for added PR impact). The FUDmeisters might claim that they can sell the solution to these problems, but I fear they can only solve much simpler issues.


This post comes from Peter Hesse. Peter knows a thing or two about SSL certificates. With apologies, Peter submitted this a while ago. The recent Firesheep hoopla got us thinking about SSL, which reminded us of this (otherwise unrelated) post.

by Peter Hesse (@pmhesse)


Earlier this week, a phone call from a friend drove me to write this on twitter:

Wow, SSL Certificates are really a ginormous scam.

I then received a few followup messages on twitter, and ended up responding to an SSL vendor by email, which in turn inspired me to write this post.

As background, I have worked in/around/with public key infrastructure (PKI) for nearly my entire professional career. My first software development job was working on a certification authority reference model for NIST in 1996.

So, I know a thing or two about SSL certificates. For example, I know they cost far less to create and maintain than SSL vendors typically charge. There is no additional burden on the issuer between the different levels of certificates: the costs of hardware, hosting, audit, etc. are very similar between the types of certificates (perhaps excluding extended validation or EV certificates).

I can understand charging more based on the speed of issuance of the certificate, and the quality and depth of the validation performed to ensure the requestor works for the organization whose name will appear in the certificate. After all, you can usually only pick two of [faster | cheaper | better]. SSL certificate issuers are free to charge what they think people are willing to pay for certificates rather than trying to relate it to the actual cost of creation and management. That is their right, and it is my right to call them out when I feel the prices are ridiculous.

The “scam” of SSL certificates these days is that the sales representatives are being trained to use fear, uncertainty, and doubt to scare people into buying more expensive certificates than they need. The following is from a friend relaying his exchange with an SSL vendor:

“Sales rep stated our current certificate is hackable because it can go down to 40-bit, and explained that this makes us vulnerable. I argued ‘I only allow 128-bit at the server’, and he said ‘yes, but since your cert is only 40-bit it can still be compromised; you need a server gated cryptography certificate.’”

If you know what you are doing (security-wise) you will block all weak cryptographic ciphers at your web server. This may prevent older browsers from being able to connect to your site, but will ensure the cryptographic strength is always high. The sales representative was trying to scare my friend into thinking this wouldn’t do the trick, which is patently false. The following link gives some good reasons why SGC certificates are a bad idea and don’t solve the weak encryption problem. Even the Wikipedia entry for SGC calls SGC certificates “obsolete” (and no, I didn’t just go edit that entry to say that… as far as you know, anyway *evil-arched-eyebrows*).
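To make that concrete: blocking weak ciphers is a server configuration decision, not something a special certificate fixes. A minimal sketch using Python’s standard ssl module (the file names and the exact cipher string are illustrative; set whatever policy fits your environment):

import ssl

# Build a server-side TLS context that simply refuses weak ciphers.
# No SGC certificate is involved; an ordinary certificate works fine.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
# Refuse export-grade, low-strength and otherwise weak suites outright.
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!EXPORT:!LOW:!MD5:!RC4")
# Any client that cannot negotiate a strong suite simply fails to connect.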

The sales representative also continued the discussion to try and convince my friend that one certificate wasn’t enough. In discussing his configuration, he revealed he has many back-end servers which all sit behind an SSL-offloading load balancing proxy. The sales representative tried to convince him that he would now need to buy a certificate for each of the back-end servers to afford him the best protection. So instead of needing one or two certificates, my friend was going to need twenty! Yes, I think we all know that defense in depth is important and he should indeed use SSL between his proxy and the back-end servers. Paying $50-$1500 each for browser-trusted SSL certificates on the back end is just a flat-out waste of money.

Self-signed certificates, or certificates generated by an in-house PKI, would provide at least the same level of security at a far lower cost. So, there you have it. Make sure you know what you need before you try to buy an SSL certificate. The sales representatives are willing and able to charge you whatever they can scare you into believing you need.
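For what it’s worth, minting such a self-signed certificate takes a few lines with the pyca/cryptography library (the host name and validity period below are invented for illustration):

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Key and self-signed certificate for an internal back-end server
# sitting behind the SSL-offloading proxy.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "backend01.internal")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: subject and issuer are the same
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())  # same cryptographic strength as a paid cert
)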


Now repeat poster Ben was chomping at the bit to share his thoughts on (brace yourself) Cyber War. Further, he wanted it introduced with some AC/DC lyrics from “Thunderstruck”. We at least thought we could go with “Let’s Have a War” by Fear (or A Perfect Circle) instead. Someone should update it for “Let’s have a (cyber)war” sometime. With some level of protest… have at it.


…I was caught
In the middle of a railroad track (Thunder)
And I knew there was no turning back (Thunder)
My mind raced
And I thought what could I do (Thunder)
And I knew
There was no help, no help from you (Thunder)

Sound of the drums
Beatin’ in my heart
The thunder of guns
Tore me apart
You’ve been – thunderstruck…

by Ben Tomhave (@falconsview)


I’ve been reading Richard Clarke’s latest book, Cyber War, recently in an effort to delve deeper into the topic. Maybe it’s been all the recent inflammatory rhetoric, or maybe it’s an earnest interest, or maybe, just maybe, it comes from an innate interest in fighting obtuse uses and abuses of FUD.

The tone of the book initially is far less FUD-y than one might expect. Some of the tech details are clearly off a bit, but overall it’s been surprisingly level-headed. Except for the scenarios. These are some of the most over-the-top scenarios I’ve seen since “digital Pearl Harbor” in 2000. However, in this case it gives me pause, and not just because of the glaring FUD factor.

What I wonder is this: just how much data and control must we lose before we stand up and start taking action? How many proprietary designs, plans, formulas, etc., must be compromised? How many SCADA systems have to be pwnd? Is it really going to take a massive blackout before energy company execs wake up and smell the ozone?

Clarke asserts that foreign assets already have embedded attack tools (“logic bombs”) into many, if not all, critical infrastructures. We’ve not done an adequate job of supply chain management, so consider that his assertion may, in fact, be fact-based and plausible. Now add factual assertions that massive research databases (academic, government, and corporate) have been copied wholesale by these same foreign assets. Accept this as fact, if you will, and not as FUD. How does this change your perspective on the topic?

The Case For FUD

Taking the previous examples as fact (as an example here – we can debate the depth of pwnage, but I think we can all agree that there are serious concerns here), there may be a valid case for FUDtastic scenarios like the ones Clarke uses in his book. The “digital Pearl Harbor” example of yore is nothing. He puts an interesting spin on it: what if there is reasonable upside to a foreign power to take down our critical infrastructure in a single, well-coordinated attack? What if our assumption of a “cold war” styled standoff (based largely on a belief in economic interdependency) isn’t actually valid?

If anybody has attended Black Hat and DEFCON, then they should know definitively just how good the breakers are these days, and just how behind the curve most organizations really are. Pulling out a book like Clarke’s can help drive home this point in a wonderfully FUDerific manner. “If you don’t fix things NOW, then you will lose everything!!!” Or so it might go in your head. After all, there’s nothing like a healthy dose of fear to motivate people. Or does it really work that way?

The Case Against FUD

There are a couple of deficiencies with using FUD to make an argument. Excessive and continuous use of FUD can elevate the message to a state of background noise. It can also hurt your credibility. If every time you open your mouth FUD spews forth, then people will tune you out or avoid you. We in infosec – especially vendors – seem to be guilty of this historically, as evidenced by how hard it is to get the attention of execs.

Another problem is context. If everything is expressed as the highest of high risks, then how do you decide how to respond? If everything rates a 10 (on a 10-pt scale), then does that mean everything must be addressed immediately? How do you justify that?

Along these same lines, there’s also typically a lack of adequate supporting data to justify the consistently hyped state. Where are the metrics and measurements? Have the risk factors been measured and ranked using a reliable method? FUD tends to not have these supporting structures, which further damages credibility.

“We’re So Screwed”

This statement probably summarizes our situation today, at least from the U.S. perspective. How do we get this message across? If we have a high degree of credibility, and if we haven’t abused the use of escalated rhetoric, and if we have some facts to back us up, then and only then can we whip out some FUD to make our point (of course, we could debate if this is really FUD, but I digress…). You have all that today, right? No? Uh oh. Now what?

This, I think, reflects our current situation. We are sorely in need of a breakthrough, too (SCADA owners – I’m looking at you!). One such step being taken is that DHS is now sending teams off to energy companies to help with security, but this seems unlikely to be sufficient. We have decent methods for modeling risk (e.g. FAIR). How do we take the next step? How do we get the message across in a meaningful way that spurs meaningful action?
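FAIR, for example, replaces the everything-rates-a-10 scale with distributions over loss-event frequency and magnitude. A toy, FAIR-flavored Monte Carlo (vastly simplified, with made-up numbers) shows the kind of output that actually supports prioritization:

import random

def annualized_loss_exposure(freq_min, freq_likely, freq_max,
                             mag_min, mag_likely, mag_max, trials=10_000):
    """Draw loss-event frequency and per-event loss magnitude from
    triangular distributions and simulate annual loss. This is far
    simpler than the real FAIR taxonomy; parameters are illustrative."""
    losses = []
    for _ in range(trials):
        events = random.triangular(freq_min, freq_max, freq_likely)
        magnitude = random.triangular(mag_min, mag_max, mag_likely)
        losses.append(events * magnitude)
    losses.sort()
    return {"mean": sum(losses) / trials, "p95": losses[int(trials * 0.95)]}

# e.g. 0-4 loss events/year (most likely 1), $10k-$500k each (most likely $50k)
print(annualized_loss_exposure(0, 1, 4, 10_000, 50_000, 500_000))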

What do you think?


Here at the Fudsec Summer Resort, we were chilling with our wine coolers in between rides on the tire-swing, enjoying the hottest part of summer with some time off, and then Jack Daniel (@jack_daniel) goes and writes a perfect FUD-related rant on his Uncommonsense Security Blog. Several people DM’d us and asked us to re-post. With Jack’s kind permission, we’ve done so below. All hail Jack for a great analysis of some serious FUD.

Someone has done some wildly successful social engineering. Amazing, actually. I am not talking about the “Robin Sage” social media/social engineering case where a lot of people who should know better gave up a lot of information in a lot of different ways. That may be interesting (we’ll see when it is presented), but even though some of the results were sensitive, that is building on a lot of prior work.

I am talking about the coverage of that story, where the reporting has largely been horrible, gullible, naive crap. Sorry folks, but yes, that includes coverage from people I like. If you believe a lot of what you read, you would think that a lot of people were “duped” into following/friending/linking/whatevering Ms. Sage. This shows a gross lack of understanding of both social networking and the security community- both on the part of the journalists, and to a lesser extent, the researcher.

The people who “over-shared” really are a problem, and it may be interesting to see what Thomas Ryan (the person behind Robin Sage) presents at DefCon. It looks like s/he got a lot of sensitive information from people who should know better- three letter agencies, military, and more. Interesting, but “people are stupid and gullible” is not really ground-breaking, nor is mining/abusing social networking to prove this point a new idea either. It does sound like the scope and scale may be noteworthy. But not new, and being a skeptic, I’m not sure it is newsworthy.

Where things fall apart is the nonsense over stories which pretty much proclaim that MILLIONS OF SECURITY PROS DUPED, and point to the number of friends/links/etc. the virtually perky Ms. Sage gathered. I would like to point out four things:

  1. Different people use social networks in different ways. Just because someone accepts your connection request does not mean they are fooled by you. They may not even care if you are real or fake.
    • Maybe they think, as is sadly common, that more connections mean they are more important.
    • Maybe they are public figures of some kind, and accept most requests as a matter of policy. If people are careful with what information they share, there is nothing wrong with this. Nothing. It is voluntary, get over it. It is how Social Media and Social Networking work for many people. If you don’t like this approach- don’t use it.
    • The decision to accept may be based on connections offered (via friend-of-a-friend linking) instead of being based on the person making the request. Again, if you are cautious about what you share, there isn’t a risk here- even if it is a pretty shallow move. Robin certainly had some interesting friends/links to entice people. Put another way: Some days, the wingman scores.
  2. Once Robin Sage became fairly visible, the drama got interesting and a lot of people began following/linking to the myriad of Robin Sages (yes, there were clones and evil twins, too) just to watch the train wreck. I was one of these, and like many others I had my suspicions, but I didn’t care if she was real, fake, or just another troll; there was entertainment. People were not duped; they grabbed a beer and some popcorn and watched the show.
  3. Robin Sage was called out. Spotted. Thoroughly outed. Many thought “something was fishy”. Some people did actual research and provided real details. People had to connect/accept to do the research and confirm their suspicions. The press almost completely missed this critical point. They also missed the fact that once this was widely known, even more people connected to and followed Robin to watch the evolving train wreck mentioned in point 2.
  4. Mr. Ryan apparently convinced (socially engineered) much of the media into thinking this was something it wasn’t, and the result was not journalism, it was an embarrassment.

And this is just the worst of it this week. Half-baked ideas, giant (and flawed) leaps of logic, obvious vendor spin, and more were on parade. Maybe it was the heat and no one could think clearly. Maybe it was Vacation from Healthy Skepticism Week and no one told me. I don’t know, but I’m not happy about it.


For a year, Fudsec.com has brought you the finest FUD-bashing that money can buy, and many have asked us how they can post here (email us at the address below if you’d like to).

All too often, though, we’ve outed fear, uncertainty and doubt without thought to giving credit to those who toil thanklessly to create it.

We’re out to change that.

Announcing the FUDdies® - the industry-standard recognition of innovation and creativity in the production of FUD. After all, coming up with new ways to wrest legitimate budget dollars from security initiatives towards illegitimate boxes is no easy task. Join Fudsec.com as we honor those in the business of making this magic happen.

Face it, folks: there’s tons of FUD out there, and even here on Fudsec there are few people being specifically called out for FUD. So let’s bring it. Tell us who’s doing it. Tell the community about it. 

We need your help to get these going. Email us your thoughts, your nominations, or anything else you think we should think about. Right now, there are two categories of FUDdie: FUDiest Campaign, and Most Unctuous Information Security Marketing Executive.

Voting is held by secret ballot at fudsec (at) gmail com, and all results are reviewed by a top-secret, anonymous committee whose decisions shall be final.

Prizes are coveted, genuine Reynolds-built aluminum foil caps, which look great and shield your brain from electromagnetic mind control carrier waves and beacons. The prizes will be announced at RSA 2011, which means we need help now.

Vote early! Vote with your heart! 

The Fudsec Team


Today’s post comes from Ben Tomhave. Ben and others felt the Zalewski ZDNet piece was a bit of a “Blame or Frame Job” on our industry, and Ben was compelled to respond. Do you agree? You’ll want to follow the links if you haven’t already read them. Any post that starts with a Sin City reference is likely to be gritty.


by Ben Tomhave (@falconsview)


“I’ve been framed for murder and the cops are in on it. But the real enemy, the son of a bitch who killed the angel lying next to me, he’s out there somewhere, out of sight, the big missing piece that’ll give me the how and the why and a face and a name and a soul to send screaming into hell.” (“Marv” in the movie Sin City)

I’ve read and reread (a couple times) the May 20th article “Security engineering: broken promises” by Michal Zalewski of Google (a guest post on ZDNet’s “Zero Day” feature). I have to say, I find it highly disappointing and FUD-tastically frustrating. The bio at the end describes him as a “security researcher,” which in my mind makes him a “breaker” more than a “fixer” (supported by the kinds of tools he’s released). As such, we have to expect a degree of whining cynicism about how bad things are, but I would have at least hoped he’d have a little more clue before spreading FUD doom and gloom.

Framing Frameworks
“…for several decades, we have in essence completely failed to come up with even the most rudimentary, usable frameworks for understanding and assessing the security of modern software… The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, we deliver far less value than could be expected.”

As a card-carrying member of OWASP, I find this statement to be ill-informed and suspicious. While it is true that we don’t have mathematical models describing software security (to which he later alludes), it is completely false to say that we lack frameworks for understanding and assessing software security (which he never defines). There are lots of options to choose from, whether it be OpenSAMM, BSIMM/BSIMM2, or even the various efforts of groups like OWASP, ISECOM, or WASC. Let’s also not forget efforts like Microsoft’s SDL.

In terms of enabling others, this is not a security failure; it’s a management and business failure. Many like to throw blame onto security teams for this situation, but everything ultimately comes down to the decision-makers and their need to place proper emphasis on writing secure code and applications.

Framing Risk Management
Now we get into some very FUD-erific territory…

“…[risk management] introduces a dangerous fallacy: that structured inadequacy is almost as good as adequacy, and that underfunded security efforts plus risk management are about as good as properly funded security work.”

and

“…security incidents are nearly certain, but out of thousands exposed non-trivial resources, any resource could be used as an attack vector, and none of them is likely to see a volume of events that would make statistical analysis meaningful within the scope of the enterprise.”

and

“…in information security, there is nothing contributed by healthy assets to directly offset the impact of a compromise, and there is an insufficient number of events to model their distribution with any degree of certainty; plus, there is no way to reliably limit the maximum per-incident loss incurred.”

Wow, talk about cynical. First off, apparently risk management has no value. Second, risk management apparently detracts from security initiatives. Third, because there are potentially infinite threat vectors, the statistical analysis performed in risk assessment is pointless. All of this prattle betrays a keen ignorance of risk management, and once again seems to suggest that software security failures are a result of something other than poor coding practices under the rule of security-disinterested business leaders.

More importantly, his risk management comments don’t seem to have much of anything to do with risk management, but instead seem to be focused on risk assessment methods. He probably also thinks that qualitative risk assessment techniques are de rigueur. It never ceases to amaze me when criticism is launched from a place of ignorance.

Framing Unified Theories
As the piece progresses (or maybe it digresses), it seems that we finally start to see his true intentions as he talks about CWE and CVSS, saying: “Having said that, none of them yielded a grand theory of secure software yet – and I doubt such a framework is within sight.” This comment finally reveals Zalewski’s true intent or hope, and that is some sort of mystical silver bullet “grand theory of secure software.” I thought this guy was a security researcher for the venerable GOOG? Anybody else’s spidey sense tingling over the inanity of his comment here?

Of course, perhaps the biggest problem is Zalewski chafing at what is actually “good enough” from a software security perspective. Frameworks seem to be the preferred ideal du jour, but to what end, and with what backing? More importantly, to quote Amrit Williams:

“What we must learn to accept is that security – as it pertains to both the development of software and its operational use – is ultimately more survivable than we like to believe.” (from “The Simple Elegance of Faith; When Good Enough Is”)

Call me crazy, but it seems like Zalewski is framing infosec for the failing of business leaders, compounded by his own ignorance.

What do you think?

Also check out Jack Daniel’s response (“A bit of deep thought.”) as he links to several other replies as well.

This week’s post comes from Eric Hanselman. Eric has an uncommon, common sense. Eric tried to leave Security two years ago after the RSA conference – bound for Virtualization-land. Alas, security pulls you back in and he was right back at RSA 2009. We always say “we’ll do better at security the next time.” “We’ll bake security in.” There were a lot of promises and claims made about how much better virtualization security would be. Here is sort of a “state of the union” from Eric.

by Eric Hanselman (@e_hanselman)

We’re heading into a brave new world of desktop security and we need to do it with our eyes open.  There’s a lot of potential benefit that desktop virtualization can bring to an organization.  Like any new technology, though, there’s a lot of misunderstanding of the change in risk dynamics and how to deal with them.  In recent weeks there have been announcements and discussions that bear some analysis.

Hosted and virtual desktops (HVD is the Gartner term) deliver awesome mitigation for data loss.  The desktop is back in the data center and only the screen image makes it back to the user.  There are also all of these really great operational expense savings.  It’s easy to think that it resolves some of our biggest endpoint protection headaches.  There’s an air of irrational exuberance out there that’s a little disturbing.

There are two big concerns:

• Users think that desktops in datacenters are wicked safe.

• Vendors aren’t disabusing them of this delusion.

At RSA this year, in two different virtualization security sessions, I heard attendees ask if anti-virus software was still needed with virtual desktops.  Lest you think that these were aberrations, industry analysts are posing the question, as well.

Forget about all of the Blue/Red Pill hysteria.  There’s a much more fundamental issue that we need to address.  Yes, the desktops are now in the datacenter, but there is still a whole set of security issues that has to be handled.  We’ve made a big jump forward with physical security.  It’s now a lot harder for random people to plug USB devices into desktops or walk off with the thing that holds all that local data.  We’ve paid for this by turning every user into a remote user.  Remote access security is something that we should have a good handle on, but now every user needs it.  IAM capabilities take a big step forward.

Securing the desktop is where real work still needs to be done and that falls to the traditional tools of endpoint defense.  The hitch is that our existing tools don’t play well with the virtual world.  For the security conscious, the virtual desktop gets built like the physical desktop.  Tried and true desktop suites can be managed in the virtual world alongside the physical desktops.  This works.

There’s a danger lurking here, if we don’t understand the impact in the virtual world.  There are a number of horror stories of a newly minted virtual installation being brought to its knees when every one of the virtual desktops was scheduled to do system scans at the same time.  Even if our suite supports flexible scheduling, those compute and I/O intensive tasks that worked so well when distributed across bunches of under-utilized systems are a huge load when brought back to a shared set of servers.
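One mundane mitigation is deterministic staggering: derive each desktop’s scan slot from its own name so the compute and I/O load spreads across a window instead of piling up at 2 a.m.  A minimal sketch (the VM naming scheme and scan window are invented):

import hashlib

def scan_slot(vm_name, window_start_hour=1, window_hours=6, slot_minutes=10):
    """Deterministically stagger AV scan times across a nightly window so
    hundreds of virtual desktops don't all scan at once."""
    slots = (window_hours * 60) // slot_minutes
    digest = hashlib.sha256(vm_name.encode()).digest()
    slot = int.from_bytes(digest[:4], "big") % slots
    minute_offset = slot * slot_minutes
    hour = (window_start_hour + minute_offset // 60) % 24
    return hour, minute_offset % 60

for vm in ("vdi-desktop-001", "vdi-desktop-002", "vdi-desktop-003"):
    h, m = scan_slot(vm)
    print(f"{vm}: scheduled scan at {h:02d}:{m:02d}")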

This is a problem that has many people considering turning off traditional protections.  A big difference between server and desktop virtualization is the concern about scale.  Running endpoint protection on virtual desktops reduces the number of desktops that can be hosted on a given set of hardware.  There are virtualization vendor claims that, by destroying each desktop after use, we eliminate infection.  This is the first vendor complicity issue.

What about all of that user data?  Aren’t there a lot of PDFs full of APTs out there?  Fortunately, virtualization can address a part of this.  But only part.

One big benefit of desktop virtualization is that I’ve got all of my users’ disks in the datacenter.  They’re available all of the time.  If I’ve got enough disk I/O capacity, I can scan all of those disks any time with minimal user impact.  I’ve also got the potential to remediate issues centrally.  A big win.  Some traditional AV vendors pitch this as their “virtual” solution today.

The piece that isn’t covered is execution monitoring.  The virtual environment still doesn’t have a way to keep tabs on live processes.  There are good signs, but they’re not complete.  VMware’s VMsafe opens memory pages for inspection, but, again, we’re back to static signature scans, and advanced threats have proven that they’re pretty good at obfuscation.  And only VMware offers this today.  And only a few security vendors are doing anything with VMsafe.  This is a missed opportunity.

We now come to the recent announcement by Citrix and McAfee of their partnership for virtual desktop security, the MOVE platform.  This sounds like it’s going in the right direction.  It makes the agent functions more granular and allows processing to be split between the desktop and the virtual environment.

How will this fare when put under the scrutiny of the recently developed SCSOVLF metric?  Not well, unfortunately.  To begin with, it’s still a “concept” with delivery some months off. Details are still emerging, but the first stage seems to move some analysis parts to a separate VM and leans heavily on virtualization being a great way to improve configuration management.  Points off for relabeling something that we should have been doing already.

There is a second phase to MOVE: native hypervisor inspection.  My heart leapt!  Until I realized that it’s application and process whitelisting.  This is desktop security, not server, right?  There are a lot of people out there who’ve been burned by the twin issues of manageability and effectiveness for whitelisting.  It puts us right back to manually locking down users’ desktops.  While this is a step in the right direction, it comes with a high cost.  And more sophisticated threats already know how to beat it (DLL injection, anyone?).

What we really need is endpoint protection that can rely on sophisticated techniques in the hypervisor.  Have per-instance execution monitoring for the desktop, and leave the signature scans to a storage analysis piece.  And correlate the two, please.

And wouldn’t it be even better if, while providing virtual execution cycles, the virtualization layer was doing some effective protection as well?  A guy can dream, right?
