Wednesday, December 22, 2010

Brilliant

Jason Holcomb of Digital Bond tuned me in to this little snippet of brilliant insight from Ralph Langner, which ties directly into my earlier post:

(Quoting myself) So if you as an asset owner are bewildered by the ease with which Stuxnet propagated, and bemoaning the fact that there is little in most systems that would have stopped it, well, you need look no further for culprits than yourselves collectively, as you as a community have simply not demanded it in the products you buy.

Because of the fundamental lack of security in control systems, we instead rely on bolt-on hardening and perimeter control. In the face of the metrics coming out about the number of systems infected by Stuxnet, it is obvious that this approach has failed.


Ralph's Brilliant Insight:

As an asset owner, you should presently live under the assumption that you continue to operate because the forces behind Stuxnet allow you to do so.

No truer or more insightful statement has been made about Stuxnet to date. The biggest takeaway needs to be, and again I am repeating myself: this could have been a rock instead of a scalpel. The authors went to extreme measures to minimize collateral damage.

I hate to spread FUD, but in light of the demonstrable impact and possibilities of Stuxnet, what would have occurred if this had been merely an instrument of blunt trauma, bricking every flavor of every field device that it could reach?

Wednesday, November 17, 2010

What not to do....

Make publicly available documents showing:
network topology
full scada schematics
wireless hotspots
camera coverage
fencing

yada yada yada.

Basically, do not do, nor make available, what you see here:
http://www.tetratech.com/View-document-details/238-Saginaw-Water-Plant-Security-and-SCADA-Improvements-11MB.html

my head asplode.

Thursday, September 30, 2010

The root of the problem

In its simplest form, the root of the issue with securing control systems is that there is no inherent security in a control system. There are no mechanisms, when you purchase and deploy a control system, to ensure confidentiality, integrity, and authenticity, as these were not driving design criteria.

In a control system the driving principles are availability, reliability, and safety. Confidentiality is not really needed, but integrity and authenticity measures would go a long way toward alleviating many weaknesses of the type exploited by Stuxnet.
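Integrity and authenticity can be retrofitted cheaply at the application layer. As a minimal sketch (the frame format, pre-shared key, and key-distribution story here are all hypothetical, not any vendor's actual protocol), an HMAC appended to each command frame lets the receiving device reject tampered or forged commands:

```python
import hmac
import hashlib

# Pre-shared key between HMI and field device. Hypothetical; real key
# management across a fleet of devices is its own hard problem.
KEY = b"pre-shared-secret"

def tag_frame(payload: bytes) -> bytes:
    """Append an HMAC so the receiver can detect tampering or forgery."""
    mac = hmac.new(KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"|" + mac

def verify_frame(frame: bytes):
    """Return the payload if the tag checks out, else None."""
    payload, _, mac = frame.rpartition(b"|")
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload if hmac.compare_digest(mac, expected) else None

frame = tag_frame(b"WRITE_COIL addr=40001 value=1")
assert verify_frame(frame) == b"WRITE_COIL addr=40001 value=1"
# A tampered frame no longer matches its tag and is rejected:
assert verify_frame(frame.replace(b"value=1", b"value=0")) is None
```

The HMAC itself is the easy part; distributing and rotating keys across thousands of legacy field devices is where the real research money would have to go.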

The vendors ultimately produce what the asset owners use, and they are only going to revamp their product lines to include security if the market demands it, or by legislative fiat.

Asset owners, when will you start to demand security in your control systems products to the degree that the vendors must respond? The only other mechanism by which this will occur is by federal mandate which flies in the face of the principles of a free market.

So far the market hasn't demanded it. In the six years I have been examining these systems there has been no significant change in product lines indicating that security is a driving design criterion. And this is without addressing the tens of thousands of legacy systems.

So if you as an asset owner are bewildered by the ease with which Stuxnet propagated, and bemoaning the fact that there is little in most systems that would have stopped it, well, you need look no further for culprits than yourselves collectively, as you as a community have simply not demanded it in the products you buy.

Because of the fundamental lack of security in control systems, we instead rely on bolt-on hardening and perimeter control. In the face of the metrics coming out about the number of systems infected by Stuxnet, it is obvious that this approach has failed.

Tuesday, September 21, 2010

Stuxnet thoughts and process reactions

There has been a lot of discussion of the Stuxnet malware in the control systems sphere over the last couple of weeks. As details emerge it becomes ever more apparent that this malware was the equivalent of a scalpel. By that I mean it targets a specific plant floor (not even a specific product line, but a specific plant floor) and then monkeys with code blocks on the PLCs. The end goal is not clear yet, but it does not require a huge stretch to say that the target is the Bushehr nuclear plant in Iran and that Stuxnet most likely has its origin in either the good ole US of A or Israel.

This package was too surgical to elicit much of a response by our industry. Too few felt the pain.

My biggest takeaway has been the reaction of the industry, which appears minimal. As this package is a scalpel, it does no to minimal damage on the non-target systems that it infects, and so the asset owners are not screaming bloody murder.

Now if the payload had been a hammer instead of a scalpel, would the asset owners be as quiet? What I mean by this is that, as it has little adverse impact, no one is screaming for heads. But instead of doing little damage, had this package turned tens of thousands of PLCs into expensive paperweights across multiple brands (which is demonstrably possible), how would the industry have responded?

I think that the asset owners would be screaming for blood and that the vendors would be forced into change, by pure market force if not legislative fiat. As it is, nothing is going to change. This malware has shown that a smart worm could quite possibly kill thousands of PLCs, and yet little is being said in this regard. So business as usual will continue, and systems with no inherent security will continue to be the norm, even in forthcoming product lines.

Something has got to change.

My second takeaway is that the system is broken. This is a direct reference to my previous "Moot" blog post from a couple of months ago, deriding the mootness of most security research in this field.

Siemens is, by the INL's own website's admission, a research partner with INL. And yet if the exploit paths employed by Stuxnet were detected by the INL's assessment(s) of the Siemens products, then either Siemens failed to act on the findings, or the vector was not found. Either way shows that spending a chunk of tax dollars to produce assessments over which the vendor has sole dissemination discretion seems to serve no one but the vendor. The vendor can choose to squash, ignore, or act upon the findings, and the lab, for all its work, bound by NDAs and CRADAs, has to remain mum. This in no way serves the interests of the asset owners or the taxpayers at large who contribute a significant portion of the funding for these assessments. But this appears to be the mode in which these assessments are handled: perform the work on the taxpayer's nickel and trust in the good will of the vendor to do something about the findings.

Continuing on mootness, ICS-CERT failed. They have neither led in the analysis of the malware package nor provided any real mitigation. Instead Symantec, Kaspersky, and Ralph Langner's team have produced the most usable results. Again, this may be due to ICS-CERT being somewhat bound in what they can disclose, but if this is the case, then why do they exist? What value is the taxpayer getting from their efforts?

Why do we fund ICS-CERT and research at the national labs if the results cannot be shared and if they provide no real leadership?

Siemens has also failed: failed to say much of anything, or to provide their users with any real guidance on checking for the presence of, or mitigating, the exploit paths. This failure is so bad that changing the default DB passwords (said passwords were one of the exploit vectors) will break the system.

So my takeaways from this incident are:

*There are some damn crafty control systems hackers out there with access to real resources.

*The labs and ICS-CERT are providing little true leadership.

*The vendors are continuing like it is 1995. (Ok I will cut them a little slack on this as they have only been invited to the table in many ways since ICSJWG fall 09).

*The impact of the malware could have been huge. In most ways this is good; in terms of driving a real reaction and better security, it is too bad.

Friday, August 20, 2010

Couple of little scripts

used to check for:

From a hacker's side: credential re-use. You know, to see if that password hash you just cracked will work on other systems ;)

From a defender's side: to check for SSH services and service account existence.

First, the login/authentication tester (an expect script):


#!/usr/bin/expect -f
# (swap in "expect -d" above for debug output)

set host [lindex $argv 0]
set uname "username"    ;# placeholder username
set pass  "password"    ;# placeholder password
set timeout 120
set fail "TESTLOGINFAIL $uname@$host\n"

spawn -noecho ssh -l $uname $host
log_user 0
match_max 100000

expect {
    "(yes/no)?" {
        # accept the host key on first connect
        send -- "yes\r"
        exp_continue
    }
    "assword:" {
        send -- "$pass\r"
        expect {
            "$ " {
                # got a shell prompt: the credentials worked
                puts "TESTLOGINTRUE $uname@$host\n"
                send -- "exit\r"
                close
                exit
            }
            "Permission denied" {
                puts $fail
                exit
            }
            timeout {
                puts $fail
                exit
            }
            eof {
                puts $fail
                exit
            }
        }
    }
    -re . {
        exp_continue
    }
    timeout {
        puts $fail
        exit
    }
    eof {
        puts $fail
        exit
    }
}


Next, a wrapper to pass the tester a range of IPs and to parallelize it for speedy performance (in Perl, with a nod to factor the Perl wizard).



#!/usr/bin/perl
# Sweep a range of IPs with the expect script above (saved as fail.exp).
# Input can be of the form 192.168.10.1-192.168.10.22 or 192.168.10.0/24.
use strict;
use warnings;
use Net::IP;
use Parallel::ForkManager;

my $argIn = $ARGV[0];

my $pm = Parallel::ForkManager->new(25);    # up to 25 tests in parallel
my $ip = Net::IP->new($argIn) or die "Invalid IP range: $argIn\n";

# Loop over every address in the range, forking a child per test
do {
    my $nIP = $ip->ip();
    my $pid = $pm->start;
    if (!$pid) {    # child: run the login test against this address
        my $bVal = `./fail.exp $nIP`;
        if ($bVal =~ /TESTLOGINTRUE/) {
            print "$bVal : TRUE\n";
        } else {
            print "$nIP : FALSE\n";
        }
        $pm->finish;
    }
} while (++$ip);
$pm->wait_all_children;

Enjoy!

Wednesday, August 4, 2010

The "mootness" of Control System Security Research

A lot of research is ongoing under the title of "control systems security research."
The goal: securing control systems from cyber attack. Research that is funded in part by the control systems vendors and in part by the taxpayer.

A good part of this research is entirely moot.

What do I mean by moot? Well, let me qualify that assertion.

Much of the research performed against control systems in laboratory environments involves: creating a mock control system, fuzzing the applications, and studying the protocols, communications, and perhaps the back end code with the hopes of finding a buffer overflow or other exploitable vulnerability.

From the exploit and the knowledge gained in finding the exploit a patch can be developed, or a mitigating control can be derived.

This type of research is occurring in the national labs via the National SCADA Test Bed, and in other private research groups. Create a mock environment and test it for vulnerabilities in all their various forms.

Though finding vulnerabilities and developing exploits is cool work and will get you props in the community, such efforts do little to truly secure control systems.

I state that this type of research is moot because if an attacker has enough exposure to the control system to exploit the SCADA-specific software, then the attacker is already in a position where much easier layer 2 and layer 3 (TCP model) attacks can be performed.

As control systems do not provide mechanisms for ensuring the integrity and authenticity of communications, it is easier for anyone with enough exposure (assuming that the control system is not directly exposed to the internet or corporate environments) to simply poison the ARP tables, tamper with the packets, and control the process than to launch a buffer overflow against a control-system-specific service. Or, failing that, there exists sufficient clear-text authentication, default passwords, group passwords, and re-use of passwords (are we seeing a theme here?) to provide vectors into the system. Next, many control systems run on older, unpatched versions of their respective OSs which have known exploits. If an OS-level exploit is unavailable, application-specific exploits for the common services and applications (web servers, browsers, media players, readers, and the like) abound in control systems, where patching is often an afterthought, undesired, and known at times to have bad effects.
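On the defender's side, the ARP-poisoning vector at least leaves a visible symptom. A quick sketch (the input format mimics `arp -an` output; the script is illustrative, not a real intrusion detection system) that flags any MAC address claiming more than one IP, a common sign of a poisoned cache on a flat control network:

```python
import re
from collections import defaultdict

# Matches entries like: "? (192.168.10.5) at 00:11:22:33:44:55 [ether] on eth0"
ENTRY = re.compile(
    r"\((\d{1,3}(?:\.\d{1,3}){3})\) at ([0-9a-f]{1,2}(?::[0-9a-f]{1,2}){5})",
    re.IGNORECASE,
)

def suspect_macs(lines):
    """Return {mac: [ips]} for every MAC that claims more than one IP."""
    ips_by_mac = defaultdict(set)
    for line in lines:
        m = ENTRY.search(line)
        if m:  # skip "<incomplete>" and other non-matching entries
            ips_by_mac[m.group(2).lower()].add(m.group(1))
    return {mac: sorted(ips) for mac, ips in ips_by_mac.items() if len(ips) > 1}

table = [
    "? (192.168.10.1) at 00:11:22:33:44:55 [ether] on eth0",
    "? (192.168.10.9) at 66:77:88:99:aa:bb [ether] on eth0",
    "? (192.168.10.254) at 66:77:88:99:aa:bb [ether] on eth0",  # gateway hijacked?
]
for mac, ips in suspect_macs(table).items():
    print(f"SUSPECT {mac} claims: {', '.join(ips)}")
```

One MAC legitimately answering for two IPs does happen (failover, proxy ARP), so treat a hit as a prompt to investigate, not proof of attack.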

Given the state of existing control systems, crafting a custom exploit against a specific piece of control system software is most likely the last thing an attacker will do. There are simply too many easier methods of controlling the control system. A prime example is the recent Stuxnet trojan. It relied on an OS-level 0day and default control system passwords to extricate data. No control-system-specific vulnerability was truly employed, as the default password was known. In fact, changing the password from the default was known to have bad results.

Yet this type of vulnerability research (finding control system specific software vulnerabilities) receives a large emphasis. Is it warranted?

Would research dollars be better spent in researching methods of providing integrity and authenticity measures to existing product lines and legacy systems? I say yes.

Inasmuch as layer 2 and layer 3 attacks are so readily performed on control systems, most research into control-system-specific layer 4 vulnerabilities is effectively moot.

Wednesday, July 28, 2010

The NERC audit process is very broken

If the chief objective of the NERC audit process is to improve the security posture of the asset owners and ISOs, then it fails miserably to achieve said goal. Instead, because the auditors can penalize asset owners severely for items that have no bearing on security, the focus shifts from security to compliance with the standards. Standards with which it is fully possible to be entirely compliant while having a very poor security posture.

In this effort to be compliant in order to avoid penalties, asset owners with limited resources emphasize compliance. After all, it is compliance with the standard that allows you to go un-penalized, not security. The audit process and its emphasis on compliance rather than security serve to divert resources from security. Time, money, and effort go into being compliant instead of secure. Again, compliance with a standard that has little bearing on true security.

The process is broken, contributing to poorer security through the diversion of resources. Having said that, I have wracked my brain for a better model and am at a loss for a solution, as all the scenarios I come up with involve huge government involvement in security monitoring and testing.

Monday, July 26, 2010

The Basic Persistent Threat

So Jason Holcomb (of Digital Bond) and I are coining some new phrases in regard to cyber security as it applies to control systems. Control systems are a literal regression to many of IT's worst practices of 10+ years ago.

Mine: Security through Divine Intervention. When the coding and basic schema of your products and architecture are so bad, the only thing that keeps you from being pwned daily is an act of deity.

When we are just glad every morning that the lights still come on, and credit it to divine intervention.

Jason's: Basic Persistent Threat. When your security architecture is so bass-ackwards that no "advanced" techniques are required to maintain a persistent presence. As opposed to the APT, the advanced persistent threat.

This means a 12 year old with a stick can poke holes in your architecture and maintain a persistent presence.