Common Criteria, Part 2: Caveat Emptor
By Seth Ross
The Common Criteria (CC) provides a basis for evaluating IT security
products. It's thorough and flexible, and it's
important, particularly to the largest defense departments and
ministries in the Western world. Nevertheless, it's not a panacea.
Those who make purchasing decisions based on the Common Criteria
scheme must keep its limitations in mind.
As a purchasing tool, the scheme has several rough edges, each
of which introduces elements of risk into the IT security purchasing
process.
1. Difficult Jargon
The Common Criteria is built around its own "dictionary"
of security primitives that are, prima facie, unintelligible to
regular people (i.e., those who are not security engineers). While
it's true that computer security is complex and that it's difficult
to express complex concepts in natural language, the CC veers into
obscurantism or what one lab calls "CC speak". The assurance
class ACM covers "Configuration Management," but it's easy
to stumble over abbreviations like ACM_AUT, ACM_CAP, and ACM_SCP (configuration
management automation, capabilities, and scope, respectively). Tellingly,
the scheme provides a pair of usage guides that give the precise
meanings of words like "check" and "describe".
This issue may be unavoidable, but it presents a risk, since buyers
may not be able to fully comprehend or contextualize the language,
and thus the results, of a CC evaluation.
2. Timing Issues
In many cases, if you care about security, you do NOT want to purchase
and deploy the Common Criteria certified version of a product. Except
for a brief window of time right after certification of a product
is complete, the certified version will NOT be the latest version.
Given rapid change in IT product markets, the latest version will
almost always contain defect fixes and design elements that improve
security.
A classic example of this phenomenon is the Windows operating system.
Windows 2000 is certified. Windows Server 2003
is not. Yet, the latter contains hundreds of new security features
and fixes that have been added to the platform since the 1999 release
of the former. While this timing problem is not unique to the Common
Criteria, the scheme's verbosity and complexity extend product certification
timelines -- a three-year cycle is not unusual -- thus guaranteeing
the obsolescence of the certified product. Perversely, certification
itself extends the product development cycle and thus increases the
time-to-market for new security features and fixes.
3. No Cost-Effectiveness
There is no notion of cost-effectiveness in the Common Criteria.
A product either passes a particular test (is every threat addressed
by a security objective, for example) or it doesn't. This is an odd
omission given the scheme's goal of aiding the purchasing process.
It's up to the buyer to analyze whether the $100 product provides
security almost as good as the $1 million product. While the scheme
requires definition of the IT assets that need protection, the threats
to the assets, and other elements of the security environment, it
fails to assist in basic risk analysis: does the expected
harm from a successful attack on the assets exceed the cost of purchasing
the certified products that act as counter-measures? Thus, there
is the risk that buyers will be induced to purchase the infosec
equivalent of $2000 toilet seats.
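The basic cost-benefit check the scheme omits is simple arithmetic. Here is a minimal sketch in Python using the standard annualized-loss-expectancy formulation; the function names and all dollar figures are hypothetical illustrations, not part of the CC:

```python
# Annualized Loss Expectancy (ALE) = single-loss expectancy x annual
# rate of occurrence. A counter-measure is cost-effective only if the
# loss it averts exceeds its own annual cost. All figures hypothetical.

def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    """Expected loss per year from a given threat."""
    return single_loss_expectancy * annual_rate

def worth_buying(ale_before: float, ale_after: float,
                 annual_cost: float) -> bool:
    """True if the counter-measure averts more loss than it costs."""
    return (ale_before - ale_after) > annual_cost

# A $1M certified product that cuts expected losses from $200,000/yr
# to $150,000/yr averts only $50,000/yr -- not cost-effective:
before = ale(500_000, 0.4)   # $500k per incident, 0.4 incidents/yr
after = ale(500_000, 0.3)    # certified product lowers the rate
print(worth_buying(before, after, 1_000_000))  # -> False

# A $100 product achieving the same reduction clearly is:
print(worth_buying(before, after, 100))        # -> True
```

Nothing in a CC evaluation performs this last comparison; it is left entirely to the buyer.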
4. Paperwork vs. Security
The Common Criteria provides four levels of assurance that are
mutually recognized by the sixteen participating countries, EAL1
through EAL4. Naively, one might assume that a product certified
to EAL4 is "more secure" than a product certified to EAL1,
just like an "A" in a college course indicates better
student performance than a "D". But the EAL1-EAL4 scale
is only superficially similar to grading systems like the classic
D-C-B-A report card. Each ascending level of assurance requires
more product _documentation_ rather than more product _security_
per se. EAL4, in particular, requires dozens of documents that can
add up to thousands of pages for even relatively simple products.
Many of these documents are created solely for the CC process; they
serve no other purpose. Often the highest "grades" go
to the vendor with the biggest documentation budget, independent
of the real-world assurance provided by the targets of evaluation.
5. Setting a Low Bar
An important part of the CC is the Protection Profile, a standardized
statement of requirements for what a given kind of product should
do. In many cases, these standardized documents set a low bar for
security. Windows 2000, for example, was certified against the Controlled
Access Protection Profile (CAPP), which
... provides for a level of protection which is appropriate
for an assumed non-hostile and well-managed user community requiring
protection against threats of inadvertent or casual attempts to
breach the system security. The profile is not intended to be
applicable to circumstances in which protection is required against
determined attempts by hostile and well-funded attackers to breach
system security. The CAPP does not fully address the threats posed
by malicious system development or administrative personnel.
Jonathan Shapiro at Johns Hopkins has done a great job of translating
that into colloquial English:
Don't hook this to the Internet, don't run email, don't install
software unless you can 100% trust the developer, and if anybody
who works for you turns out to be out to get you, you are toast.
In the real world, Windows 2000 systems require protections beyond
the low bar set by the CAPP. Nonetheless, defense buyers are free
to purchase and deploy off-the-shelf Windows boxes: they simply
check the box marked "EAL4". Checkbox security is fraught
with risk.
6. Contradictory Requirements
Although the CC is designed for flexibility so that many different
kinds of security products can be evaluated against it, its hierarchy
of security functional requirements can break down when faced with
contradictory requirements. The scheme often works such that if
you do A, you must also do B, C, and D. But as Rebecca Mercuri points
out, it fails to provide a way to contra-indicate a function, so
that if you do X, you must NOT do Y and Z. Mercuri
is an expert in electronic voting systems, which must provide both
anonymity via the "secret ballot" and auditability to
ensure that fraud has not taken place. An evaluation of such a system
(Swiss bank accounts and AIDS testing systems have similar contradictory
requirements) cannot be completed entirely within the CC without
augmentation and extension of the CC schema.
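Mercuri's point can be made concrete with a toy requirements model. The sketch below is not CC notation; all requirement names are hypothetical. It shows a schema that, like the CC hierarchy, expresses "if you do A, you must also do B and C" via dependency edges, and what the missing contra-indication ("if you do X, you must NOT do Y") would look like as a conflicts relation:

```python
# A toy requirements schema. "REQUIRES" edges mirror the CC's dependency
# hierarchy; "CONFLICTS" edges are the contra-indication the CC lacks.
# All requirement names below are hypothetical illustrations.

REQUIRES = {
    "audit_trail": {"timestamping", "log_integrity"},  # do A -> also B, C
}
CONFLICTS = {
    "secret_ballot": {"voter_identified_records"},     # do X -> NOT Y
}

def check(selected: set[str]) -> list[str]:
    """Report missing dependencies and forbidden combinations."""
    problems = []
    for req in sorted(selected):
        for dep in sorted(REQUIRES.get(req, set())):
            if dep not in selected:
                problems.append(f"{req} requires {dep}")
        for bad in sorted(CONFLICTS.get(req, set())):
            if bad in selected:
                problems.append(f"{req} conflicts with {bad}")
    return problems

# An e-voting profile demanding both anonymity and identified audit
# records is flagged only because the schema can express a conflict:
print(check({"secret_ballot", "voter_identified_records"}))
```

Without the CONFLICTS relation, the contradictory profile above would pass silently, which is exactly the gap Mercuri identifies in the CC.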
None of these problem areas nullifies the value of the CC as a
method for evaluating both products and development processes. Like
any standard, the CC should be applied prudently, with ample consideration
of the risks imposed by difficult jargon, timing issues, varying
economics, paperwork requirements, least-common-denominator protection
profiles, and contradictory requirements.
As always, caveat emptor.