Securius Newsletter

May 22, 2000
Volume 1, Number 6
http://www.securius.com

Ten General Security Rules 6-10

By Seth Ross

This issue:

  • Rule 6: There's Always Someone Out There Smarter, More
    Knowledgeable, or Better-Equipped Than You
  • Rule 7: There Are No Turn-key Security Solutions
  • Rule 8: Good and Evil Blend into Gray
  • Rule 9: Think Like the Enemy
  • Rule 10: Trust is a Relative Concept

Last issue:

  • Rule 1: Security Through Obscurity Doesn't Work
  • Rule 2: Full Disclosure of Bugs and Holes Benefits Security
  • Rule 3: System Security Degrades in Direct Proportion to Use
  • Rule 4: Do It Right Before Someone Does It Wrong for You
  • Rule 5: The Fear of Getting Caught is the Beginning of Wisdom

NOTE:
The next issue of this newsletter will come out under a different name: The Securius.com Newsletter. I invite all of you to visit the newsletter's new home at the URL http://www.securius.com. As always, thank you for your continuing support and readership. Yours,
Seth Ross

TEN GENERAL SECURITY RULES

In an effort to paint on a larger canvas, I've devised ten general rules or themes that present a way of looking at and thinking about computer security. Here are rules #6-10. In case you missed the last issue, you can review the first five rules at http://www.securius.com/newsletter/archive/105.txt

Note that a version of these rules first appeared in my book, _UNIX System Security Tools_ (McGraw-Hill, 1999). You can find out more about the book at http://www.albion.com/usst/

Here we go ...

Rule 6: There's Always Someone Out There Smarter, More Knowledgeable, or Better-Equipped Than You

Be careful about the assumptions you make concerning the threats your systems face. Many security threat models assume that the bad guy will be a one-dimensional loner or a script kiddie probing systems for fun. While redundant security mechanisms and careful monitoring might protect against those attackers, they may fail against a determined, hardened, and skilled professional -- an uebercracker.

Consider this excerpt from Dan Farmer and Wietse Venema's article, "Improving the Security of Your Site by Breaking Into It":[1]

Why "uebercracker"? The idea is stolen, obviously, from Nietzsche's uebermensch, or, literally translated into English, "over man." Nietzsche used the term not to refer to a comic book superman, but instead a man who had gone beyond the incompetence, pettiness, and weakness of the everyday man. The uebercracker is therefore the system cracker who has gone beyond simple cookbook methods of breaking into systems. An uebercracker is not usually motivated to perform random acts of violence. Targets are not arbitrary - there is a purpose, whether it be personal monetary gain, a hit and run raid for information, or a challenge to strike a major or prestigious site or net.personality. An uebercracker is hard to detect, harder to stop, and hardest to keep out of your site for good.

An even more serious threat than the uebercracker is the attack cell -- a complex group of individuals who work together to attack systems in order to further a common goal. While an organization prepares for the lone cracker, an attack may be executed by professionals with extensive financial and technical resources. An attack cell might include a social engineering expert who's just been hired into Marketing, a systems expert who can model your network box-by-box and port-by-port, a security programmer who's spent years developing custom tools, and a phone phreak specializing in moving information via intermediaries. It might have significant research and development capabilities or even the backing of a government organization.[2] Conventional tools and techniques will only be marginally effective in this scenario. If your systems contain commercially or politically valuable secrets, be prepared to make substantial investments in security management, physical security, personnel security, and a significant investigative capability in addition to system and network security.

Rule 7: There Are No Turn-key Security Solutions

Businesses have been rushing to connect to the Internet with the expectation that they can buy complete turn-key security. Turn-key security is an illusion, with the possible exception of physical security solutions (like those PC Guardian sells at http://www.pcguardian.com/hardware/anti_theft.html) that literally work at the turn of a key: turn the key on PC Guardian's Notebook Guardian 2000, for example, and it's secured. In general, there are too many variables to account for -- too many variations in security policies, threat models, system configurations, and connectivity. You want to avoid the "Maginot Line Syndrome," named for the French fortifications that were simply bypassed in World War II: relying on a single safeguard, like a firewall, that can be systematically sidestepped. Security is not something you buy, invent, or do as a one-time event; it's a continual process that requires ongoing planning, monitoring, and refinement.

A corollary to this rule: There's no checklist that will account for all vulnerabilities. Security checklists are a venerable way to check for errors and omissions, but don't be lulled by them. The checklist method of security will fail against an intelligent attacker, who has already seen the published checklists and works to devise attacks not covered by them.

Rule 8: Good and Evil Blend into Gray

All definitions of computer security contain an implicit conceit: that there are "good guys" and "bad guys" out there, or "white hats" and "black hats." Virtually every popular book and movie on the topic indulges in this conceit, from Clifford Stoll's _The Cuckoo's Egg_, in which the wily Berkeley hacker hunts down international spies, to _The Net_, in which Sandra Bullock plays a systems expert stripped of her identity by high-tech thieves. In some ways, the adversarial nature of computer security reduces it to a kind of game. Unfortunately for the security practitioner, this game of judgment is "played against unknown adversaries plotting unknown harm at unknown times and places."[3] As someone concerned about the security of your systems, you might think of yourself as playing a part in a grand drama, defending against nameless attackers who might be everywhere or nowhere at all.

I advise you not to fall into this conceit. You may be giving yourself too much credit and assigning too little to your "opponents." Don't overlook the fact that most security violations are perpetrated by company insiders. Your perpetrators might look a lot more like the nondescript nerds who sit together in the lunchroom than the techno-pop-addled ravers in the movies. Never forget that for every white hat and black hat actor, there are hundreds who wear gray.

The computer security profession includes a wide variety of practitioners, including highly credentialed academics, retired military personnel, snake-oil salespeople from commercial vendors, and reformed crackers who have now seen the light. There is no central certifying body overseeing the development of the skills necessary for computer security professionals, nor is there an accepted canon of ethics. In this way, the profession resembles that of its opponents: the extended, multinational, multicultural cracking community, from those who develop complex "exploits" to the "script kiddies" who learn at a tender age that computer crime is as easy as a double-click.

Rather than make a dichotomous break between those who protect systems and those who compromise them, consider how intimately intertwined the two are and the large numbers of people who fall into the gray areas in between. Consider the case of the "tiger team" -- computer security professionals who are hired to test the security of systems by attacking them. In some cases, these teams are composed of reformed system crackers whose former malevolence is generously rewarded. Even IBM advertises the services of its "ethical hackers." On the other hand, many a cracker has resorted to the educational defense -- claiming that, by cracking into systems, he is actually doing the victim a favor. Conversely, there are most likely more than a few professionals on the inside of almost any organization who have discovered that crime does pay.

Just as there is no clear line between the "white hats" and "black hats" in the computer security culture -- between the "ethical hackers" who find holes and the crackers who find holes -- there's no clear line between tools for improving security and tools that break it. A tool is just that. Any security tool can be used for good or evil just as a hammer can be used to build a house or break into one. A password-cracking program can be used to find weak passwords before an attacker does or it can be used by the attacker to gain entry. A security auditing program can help either a sysadmin or a system cracker to find holes.
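
To make the dual-use point concrete, here is a minimal sketch (in Python, with invented usernames, hashes, and wordlist) of a dictionary-based password audit. The very same loop serves a sysadmin hunting weak passwords and an attacker guessing them; nothing in the code knows which hat its operator wears.

    import hashlib

    # Hypothetical shadow-style entries: username -> hex SHA-256 of the
    # password. (Real UNIX systems use salted crypt(3) hashes; plain
    # SHA-256 keeps this sketch self-contained.)
    SHADOW = {
        "alice": hashlib.sha256(b"secret").hexdigest(),
        "bob": hashlib.sha256(b"Tr0ub4dor&3").hexdigest(),
    }

    # A tiny stand-in for a cracking dictionary like words.txt.
    WORDLIST = ["password", "secret", "letmein", "qwerty"]

    def audit(shadow, wordlist):
        # Run by a sysadmin, this finds weak passwords before an attacker
        # does. Run by an attacker against stolen hashes, the identical
        # loop yields working credentials.
        for user, stored_hash in shadow.items():
            for guess in wordlist:
                if hashlib.sha256(guess.encode()).hexdigest() == stored_hash:
                    print("WEAK: %s uses dictionary word %r" % (user, guess))
                    break

    audit(SHADOW, WORDLIST)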

Even tools and measures that appear to be purely defensive, like firewalls, are deployed by crackers in order to bolster their attacks. Only the most naive attacker doesn't account for the contingency that the victim may counter-attack. The most sophisticated crackers build elaborate defenses to provide cover for their activities. Conversely, some organizations are adopting "strike-back" capabilities in order to bolster their defenses through deterrence.

Rule 9: Think Like the Enemy

This rule follows naturally from the way good and evil blur into gray, from the "game" of computer security cited above, and from the "There Are No Turn-key Security Solutions" rule. If computer security is a game, then the enemy makes the rules. This is why checklists and stock solutions like firewalls, which derive from set defensive rules, can prove to be ineffective against smart opponents. Assume that the other side has maximum capabilities, in accordance with the notion that "There's Always Someone Out There Smarter, More Knowledgeable, or Better-Equipped Than You." Identify those who could pose a threat to your systems and model their motives, capabilities, and worldviews. Surf to "hacker" sites that contain articles and tools for breaking into systems. Develop scenarios based on the threat model you face: if YOU were a UNIX systems programmer from a competing organization, how would you breach your organization's security?[4]
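
One concrete way to practice this mindset is to run the same first-pass reconnaissance an attacker would run -- against your own machines, with authorization. Here is a minimal Python sketch; the target list and port list are invented for illustration, and you should probe only systems you are permitted to test.

    import socket

    # Hypothetical targets: probe only hosts you are authorized to test.
    TARGETS = ["127.0.0.1"]
    PORTS = [21, 22, 23, 25, 80, 110, 143]  # services an attacker tries first

    def probe(host, port, timeout=1.0):
        # A bare TCP connect -- the same first step an attacker's
        # reconnaissance takes against your network.
        try:
            sock = socket.create_connection((host, port), timeout=timeout)
            sock.close()
            return True
        except OSError:
            return False

    for host in TARGETS:
        open_ports = [p for p in PORTS if probe(host, p)]
        print("%s: open ports %s" % (host, open_ports or "none found"))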

Rule 10: Trust is a Relative Concept

For the purpose of achieving the strongest possible computer security, "trust no one" is the soundest policy. Any piece of software or hardware could deliver a Trojan Horse or other malicious feature. Of course, unless you're able to build your own hardware and code all your software, you're going to have to trust someone. Most computer and software companies are relatively trustworthy, even if they don't operate in full disclosure mode by publishing source code or exhaustive hardware specs. Most open source programs are relatively trustworthy as well. Even published source code, however, cannot provide complete protection from malicious code.

In a famous speech, Ken Thompson, one of the creators of UNIX, told of a frightening pair of bugs he was able to code.[5] He planted a Trojan Horse in the source of a C compiler that would find and miscompile the UNIX login command in such a way that it would accept either the correct password or one known to him. Once installed in binary, this C compiler would create a login command that enabled him to log into the system as any user. That's a security hole! Now, Thompson knew that another programmer looking at the source would likely see this gaping hole. So he created a second Trojan Horse aimed at the C compiler itself. He compiled the Trojaned source with the regular C compiler to produce a Trojaned binary and made this the official C compiler. Voila: Thompson could then remove the bugs from the source, knowing that the Trojaned compiler binary would reinsert them whenever the login command or the compiler itself was recompiled. Thus, the login command was Trojaned with no trace in the source code. Thompson pointed out the clear moral of the story: "You can't trust code that you did not totally create yourself." On the other hand, not many of us are Ken Thompson, with resume items like "invented the UNIX operating system." Perhaps a better moral would be: "Trust no one completely."
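
The mechanics are easier to see in miniature. Below is a toy sketch of Thompson's pair of Trojans -- in Python rather than a real C compiler, with all names invented for illustration. Trojan 1 backdoors anything that looks like the login program; Trojan 2 (elided here to a comment) would splice this very function back into a clean compiler, which is how the attack survives with no trace in any source file.

    # Toy model of the "trusting trust" attack. A real attack lives inside
    # a C compiler binary; this sketch just rewrites source text.

    BACKDOOR = ('if password == "ken":\n'
                '        return True  # Trojan: secret master password\n    ')

    def trojan_compile(source):
        # Trojan 1: when compiling what looks like login, insert a backdoor.
        if "def check_password(" in source:
            source = source.replace(
                "def check_password(password):\n    ",
                "def check_password(password):\n    " + BACKDOOR)
        # Trojan 2: when compiling what looks like the compiler itself,
        # splice this whole function back in (omitted here -- the point is
        # that a cleaned compiler source still yields a Trojaned binary).
        return source

    CLEAN_LOGIN = ("def check_password(password):\n"
                   "    return password == stored_password\n")

    print(trojan_compile(CLEAN_LOGIN))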

NEXT MONTH: Attack of the Email Snoops -- The Secret Attack on Email Privacy. 'Til then, keep your guard up.

REFERENCES

[1] See http://www.fish.com/security/admin-guide-to-cracking.html

[2] For an extension of this kind of scenario, see Fred Cohen's article "Anatomy of a Successful Sophisticated Attack" at http://all.net/journal/netsec/9901.html

[3] Walter A. Kleinschrod, as quoted by Charles F. Hemphill, Jr. and John M. Hemphill, Security Procedures for Computer Systems (Homewood, IL: Dow Jones-Irwin, 1973), 1.

[4] This approach is championed in Donn B. Parker, Computer Security Management (Reston, VA: Reston Publishing Company Inc., 1981), 158-161. Also see Fred Cohen's site, http://all.net/

[5] Ken Thompson, "Reflections on Trusting Trust," Communications of the ACM, Vol. 27, No. 8, August 1984, pp. 761-763. See http://www.acm.org/classics/sep95/
