is a chip that resides on the motherboard of the device. The TPM
serves a number of purposes, including the storage and
management of keys used for full disk encryption (FDE) solutions.
The TPM provides the operating system with access to the keys only
when the drive is installed in the original device, preventing someone
from removing the drive and inserting it into another device to access
the drive's data.
A wide variety of commercial tools are available that provide added
features and management capability. The major differentiators
between these tools are how they protect keys stored in memory,
whether they provide full disk or volume-only encryption, and whether
they integrate with hardware-based Trusted Platform Modules (TPMs)
to provide added security. Any effort to select encryption software
should include an analysis of how well the alternatives compete on
these characteristics.
Don’t forget about smartphones when developing your
portable device encryption policy. Most major smartphone and
tablet platforms include enterprise-level functionality that
supports encryption of data stored on the phone.
Email
We have mentioned several times that security should be cost
effective. When it comes to email, simplicity is the most cost-effective
option, but cryptographic functions sometimes provide specific
security services that you can't do without. With cost effectiveness in
mind, here are some simple rules about encrypting email:
If you need confidentiality when sending an email message,
encrypt the message.
If your message must maintain integrity, you must hash the
message.
If your message needs authentication, integrity, and/or
nonrepudiation, you should digitally sign the message.
If your message requires confidentiality, integrity, authentication,
and nonrepudiation, you should encrypt and digitally sign the
message.
It is always the responsibility of the sender to put proper mechanisms
in place to ensure that the security (that is, confidentiality, integrity,
authenticity, and nonrepudiation) of a message or transmission is
maintained.
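To make these rules concrete, here is a minimal Python sketch of signing and encrypting a message. It uses the third-party cryptography package purely for illustration (an assumption on our part, not the mechanism PGP or S/MIME actually exposes), and it encrypts the message directly with RSA for brevity, whereas real email clients encrypt the message with a symmetric key and wrap only that key with the recipient's public key.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key pairs for the sender and recipient (normally loaded from a keystore,
# generated here only to keep the sketch self-contained).
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"Quarterly results attached."

# Authentication, integrity, nonrepudiation: sign with the sender's PRIVATE key.
signature = sender_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Confidentiality: encrypt with the recipient's PUBLIC key.
ciphertext = recipient_key.public_key().encrypt(
    message,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# The recipient decrypts with their own private key and verifies the signature
# with the sender's public key; verify() raises InvalidSignature on tampering.
recovered = recipient_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
sender_key.public_key().verify(
    signature, recovered,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)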
One of the most in-demand applications of cryptography is encrypting
and signing email messages. Until recently, encrypted email required
the use of complex, awkward software that in turn required manual
intervention and complicated key exchange procedures. An increased
emphasis on security in recent years resulted in the implementation of
strong encryption technology in mainstream email packages. Next,
we’ll look at some of the secure email standards in widespread use
today.
Pretty Good Privacy
Phil Zimmerman’s Pretty Good Privacy (PGP) secure email system
appeared on the computer security scene in 1991. It combines the CA
hierarchy described earlier in this chapter with the “web of trust”
concept—that is, you must become trusted by one or more PGP users
to begin using the system. You then accept their judgment regarding
the validity of additional users and, by extension, trust a multilevel
“web” of users descending from your initial trust judgments.
PGP initially encountered a number of hurdles to widespread use. The
most difficult obstruction was the U.S. government export regulations,
which treated encryption technology as munitions and prohibited the
distribution of strong encryption technology outside the United States.
Fortunately, this restriction has since been repealed, and PGP may be
freely distributed to most countries.
PGP is available in two versions. The commercial version uses RSA for
key exchange, IDEA for encryption/decryption, and MD5 for message
digest production. The freeware version (based on the extremely
similar OpenPGP standard) uses Diffie-Hellman key exchange, the
Carlisle Adams/Stafford Tavares (CAST) 128-bit
encryption/decryption algorithm, and the SHA-1 hashing function.
Many commercial providers also offer PGP-based email services as
web-based cloud email offerings, mobile device applications, or
webmail plug-ins. These services appeal to administrators and end
users because they remove the complexity of configuring and
maintaining encryption certificates and provide users with a managed
secure email service. Some products in this category include StartMail,
Mailvelope, SafeGmail, and Hushmail.
S/MIME
The Secure/Multipurpose Internet Mail Extensions (S/MIME)
protocol has emerged as a de facto standard for encrypted email.
S/MIME uses the RSA encryption algorithm and has received the
backing of major industry players, including RSA Security. S/MIME
has already been incorporated in a large number of commercial
products, including these:
Microsoft Outlook and Office 365
Mozilla Thunderbird
Mac OS X Mail
G Suite Enterprise edition
S/MIME relies on the use of X.509 certificates for exchanging
cryptographic keys. The public keys contained in these certificates are
used for digital signatures and for the exchange of symmetric keys
used for longer communications sessions. RSA is the only public key
cryptographic protocol supported by S/MIME. The protocol supports
the AES and 3DES symmetric encryption algorithms.
Despite strong industry support for the S/MIME standard, technical
limitations have prevented its widespread adoption. Although major
desktop mail applications support S/MIME email, mainstream web-
based email systems do not support it out of the box (the use of
browser extensions is required).
Web Applications
Encryption is widely used to protect web transactions. This is mainly
because of the strong movement toward e-commerce and the desire of
both e-commerce vendors and consumers to securely exchange
financial information (such as credit card information) over the web.
We’ll look at the two technologies that are responsible for the small
lock icon within web browsers—Secure Sockets Layer (SSL) and
Transport Layer Security (TLS).
SSL was developed by Netscape to provide client/server encryption for
web traffic. Hypertext Transfer Protocol Secure (HTTPS) uses port
443 to negotiate encrypted communications sessions between web
servers and browser clients. Although SSL originated as a standard for
Netscape browsers, Microsoft also adopted it as a security standard for
its popular Internet Explorer browser. The incorporation of SSL into
both of these products made it the de facto internet standard.
SSL relies on the exchange of server digital certificates to negotiate
encryption/decryption parameters between the browser and the web
server. SSL’s goal is to create secure communications channels that
remain open for an entire web browsing session. It depends on a
combination of symmetric and asymmetric cryptography. The
following steps are involved:
1. When a user accesses a website, the browser retrieves the web
server’s certificate and extracts the server’s public key from it.
2. The browser then creates a random symmetric key, uses the
server’s public key to encrypt it, and then sends the encrypted
symmetric key to the server.
3. The server then decrypts the symmetric key using its own private
key, and the two systems exchange all future messages using the
symmetric encryption key.
This approach allows SSL to leverage the advanced functionality of
asymmetric cryptography while encrypting and decrypting the vast
majority of the data exchanged using the faster symmetric algorithm.
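The steps above describe hybrid encryption: asymmetric cryptography protects a small, randomly generated symmetric key, and that symmetric key protects the bulk of the session. The sketch below illustrates the pattern with the third-party Python cryptography package (an illustrative assumption; TLS implementations perform these operations internally, with additional handshake and authentication protections).

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Stand-in for the key pair behind the server's certificate.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 2: the client creates a random symmetric key and encrypts it with the
# server's public key (extracted from the certificate).
session_key = AESGCM.generate_key(bit_length=256)
wrapped_key = server_key.public_key().encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Step 3: the server unwraps the key with its private key, and both sides use
# fast symmetric encryption for the remainder of the session.
recovered_key = server_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
nonce = os.urandom(12)
request = AESGCM(session_key).encrypt(nonce, b"GET / HTTP/1.1", None)
assert AESGCM(recovered_key).decrypt(nonce, request, None) == b"GET / HTTP/1.1"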
In 1999, security engineers proposed TLS as a replacement for the SSL
standard, which was at the time in its third version. As with SSL, TLS
uses TCP port 443. Based on SSL technology, TLS incorporated many
security enhancements and was eventually adopted as a replacement
for SSL in most applications. Early versions of TLS supported
downgrading communications to SSL v3.0 when both parties did not
support TLS. However, in 2011, TLS v1.2 dropped this backward
compatibility.
In 2014, an attack known as the Padding Oracle On Downgraded
Legacy Encryption (POODLE) demonstrated a significant flaw in the
SSL 3.0 fallback mechanism of TLS. In an effort to remediate this
vulnerability, many organizations completely dropped SSL support
and now rely solely on TLS security.
Even though TLS has been in existence for more than a
decade, many people still mistakenly call it SSL. For this reason,
TLS has gained the nickname SSL 3.1.
Steganography and Watermarking
Steganography is the art of using cryptographic techniques to embed
secret messages within another message. Steganographic algorithms
work by making alterations to the least significant bits of the many bits
that make up image files. The changes are so minor that there is no
appreciable effect on the viewed image. This technique allows
communicating parties to hide messages in plain sight—for example,
they might embed a secret message within an illustration on an
otherwise innocent web page.
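To see how little the least significant bits matter, consider the toy Python sketch below. It operates on a plain list of 8-bit grayscale pixel values rather than a real image format (an assumption made to keep the example self-contained) and hides one bit of the secret message in the low-order bit of each pixel.

# Toy least-significant-bit (LSB) embedding over 8-bit pixel values.
def embed(pixels, secret):
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    stego = list(pixels)
    for index, bit in enumerate(bits):
        stego[index] = (stego[index] & 0xFE) | bit   # replace only the lowest bit
    return stego

def extract(pixels, length):
    bits = [pixel & 1 for pixel in pixels[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    )

cover = list(range(100, 164))        # 64 "pixels" of an imaginary grayscale image
hidden = embed(cover, b"hi")         # each altered pixel changes by at most 1
assert extract(hidden, 2) == b"hi"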
Steganographers often embed their secret messages within images or
WAV files because these files are often so large that the secret message
would easily be missed by even the most observant inspector.
Steganography techniques are often used for illegal or questionable
activities, such as espionage and child pornography.
Steganography can also be used for legitimate purposes, however.
Adding digital watermarks to documents to protect intellectual
property is accomplished by means of steganography. The hidden
information is known only to the file’s creator. If someone later creates
an unauthorized copy of the content, the watermark can be used to
detect the copy and (if uniquely watermarked files are provided to
each original recipient) trace the offending copy back to the source.
Steganography is an extremely simple technology to use, with free
tools openly available on the internet. Figure 7.2 shows the entire
interface of one such tool, iSteg. It simply requires that you specify a
text file containing your secret message and an image file that you
wish to use to hide the message. Figure 7.3 shows an example of a
picture with an embedded secret message; the message is impossible
to detect with the human eye.
FIGURE 7.2 Steganography tool
FIGURE 7.3 Image with embedded message
Digital Rights Management
Digital rights management (DRM) software uses encryption to
enforce copyright restrictions on digital media. Over the past decade,
publishers attempted to deploy DRM schemes across a variety of
media types, including music, movies, and books. In many cases,
particularly with music, DRM deployment attempts met fierce
opposition from consumers, who argued that the use of DRM violated
their rights to freely enjoy and make backup copies of legitimately
licensed media files.
As you will read in this section, many commercial
attempts to deploy DRM on a widespread basis failed when users
rejected the technology as intrusive and/or obstructive.
Music DRM
The music industry has battled pirates for years, dating back to the
days of homemade cassette tape duplication and carrying through
compact disc and digital formats. Music distribution companies
attempted to use a variety of DRM schemes, but most backed away
from the technology under pressure from consumers.
The use of DRM for purchased music slowed dramatically when,
facing this opposition, Apple rolled back their use of FairPlay DRM for
music sold through the iTunes Store. Apple co-founder Steve Jobs
foreshadowed this move when, in 2007, he issued an open letter to the
music industry calling on them to allow Apple to sell DRM-free music.
That letter read, in part:
The third alternative is to abolish DRMs entirely. Imagine a world
where every online store sells DRM-free music encoded in open
licensable formats. In such a world, any player can play music
purchased from any store, and any store can sell music which is
playable on all players. This is clearly the best alternative for
consumers, and Apple would embrace it in a heartbeat. If the big
four music companies would license Apple their music without the
requirement that it be protected with a DRM, we would switch to
selling only DRM-free music on our iTunes store. Every iPod ever
made will play this DRM-free music.
The full essay is no longer available on Apple’s website, but an
archived copy may be found at http://bit.ly/1TyBm5e.
Currently, the major use of DRM technology in music is for
subscription-based services such as Napster and Kazaa, which use
DRM to revoke a user’s access to downloaded music when their
subscription period ends.
Do the descriptions of DRM technology in this section
seem a little vague? There’s a reason for that: manufacturers
typically do not disclose the details of their DRM functionality due
to fears that pirates will use that information to defeat the DRM
scheme.
Movie DRM
The movie industry has used a variety of DRM schemes over the years
to stem the worldwide problem of movie piracy. Two of the major
technologies used to protect mass-distributed media are as follows:
High-Bandwidth Digital Content Protection (HDCP) Provides
DRM protection for content sent over digital connections including
HDMI, DisplayPort, and DVI interfaces. While this technology is still
found in many implementations, hackers released an HDCP master
key in 2010, rendering the protection completely ineffective.
Advanced Access Content System (AACS) Protects the content
stored on Blu-ray and HD DVD media. Hackers have demonstrated
attacks that retrieved AACS encryption keys and posted them on the
internet.
Industry publishers and hackers continue the cat-and-mouse game
today; media companies try to protect their content and hackers seek
to gain continued access to unencrypted copies.
E-book DRM
Perhaps the most successful deployment of DRM technology is in the
area of book and document publishing. Most e-books made available
today use some form of DRM, and corporations apply the same class of
technology to protect their own sensitive documents.
All DRM schemes in use today share a fatal flaw: the device
used to access the content must have access to the decryption key.
If the decryption key is stored on a device possessed by the end
user, there is always a chance that the user will manipulate the
device to gain access to the key.
Adobe Systems offers the Adobe Digital Experience Protection
Technology (ADEPT) to provide DRM technology for e-books sold in a
variety of formats. ADEPT uses a combination of AES technology to
encrypt the media content and RSA encryption to protect the AES key.
Many e-book readers, with the notable exception of the Amazon
Kindle, use this technology to protect their content. Amazon’s Kindle
e-readers use a variety of formats for book distribution, and each
contains its own encryption technology.
Video Game DRM
Many video games implement DRM technology that depends on
consoles using an active internet connection to verify the game license
with a cloud-based service. These technologies, such as Ubisoft’s
Uplay, once typically required a constant internet connection to
facilitate gameplay. If a player lost connection, the game would cease
functioning.
In March 2010, the Uplay system came under a denial-of-service
attack and players of Uplay-enabled games around the world were
unable to play games that previously functioned properly because their
consoles were unable to access the Uplay servers. This led to public
outcry, and Ubisoft later removed the always-on requirement, shifting
to a DRM approach that only requires an initial activation of the game
on the console and then allows unrestricted use.
Document DRM
Although the most common uses of DRM technology protect
entertainment content, organizations may also use DRM to protect the
security of sensitive information stored in PDF files, office productivity
documents, and other formats. Commercial DRM products, such as
Vitrium and FileOpen, use encryption to protect source content and
then enable organizations to carefully control document rights.
Here are some of the common permissions restricted by document
DRM solutions:
Reading a file
Modifying the contents of a file
Removing watermarks from a file
Downloading/saving a file
Printing a file
Taking screenshots of file content
DRM solutions allow organizations to control these rights by granting
them when needed, revoking them when no longer necessary, and
even automatically expiring rights after a specified period of time.
Networking
The final application of cryptography we’ll explore in this chapter is
the use of cryptographic algorithms to provide secure networking
services. In the following sections, we’ll take a brief look at two
methods used to secure communications circuits. We’ll also look at
IPsec and Internet Security Association and Key Management Protocol
(ISAKMP) as well as some of the security issues surrounding wireless
networking.
Circuit Encryption
Security administrators use two types of encryption techniques to
protect data traveling over networks:
Link encryption protects entire communications circuits by
creating a secure tunnel between two points using either a
hardware solution or a software solution that encrypts all traffic
entering one end of the tunnel and decrypts all traffic entering the
other end of the tunnel. For example, a company with two offices
connected via a data circuit might use link encryption to protect
against attackers monitoring at a point in between the two offices.
End-to-end encryption protects communications between two
parties (for example, a client and a server) and is performed
independently of link encryption. An example of end-to-end
encryption would be the use of TLS to protect communications
between a user and a web server. This protects against an intruder
who might be monitoring traffic on the secure side of an encrypted
link or traffic sent over an unencrypted link.
The critical difference between link and end-to-end encryption is that
in link encryption, all the data, including the header, trailer, address,
and routing data, is also encrypted. Therefore, each packet has to be
decrypted at each hop so it can be properly routed to the next hop and
then re-encrypted before it can be sent along its way, which slows the
routing. End-to-end encryption does not encrypt the header, trailer,
address, and routing data, so it moves faster from point to point but is
more susceptible to sniffers and eavesdroppers.
When encryption happens at the higher OSI layers, it is usually end-
to-end encryption, and if encryption is done at the lower layers of the
OSI model, it is usually link encryption.
Secure Shell (SSH) is a good example of an end-to-end encryption
technique. This suite of programs provides encrypted alternatives to
common internet applications such as File Transfer Protocol (FTP),
Telnet, and rlogin. There are actually two versions of SSH. SSH1
(which is now considered insecure) supports the Data Encryption
Standard (DES), Triple DES (3DES), International Data
Encryption Algorithm (IDEA), and Blowfish algorithms. SSH2 drops
support for DES and IDEA but adds support for several other
algorithms.
IPsec
Various security architectures are in use today, each one designed to
address security issues in different environments. One such
architecture that supports secure communications is the Internet
Protocol Security (IPsec) standard. IPsec is a standard architecture set
forth by the Internet Engineering Task Force (IETF) for setting up a
secure channel to exchange information between two entities.
The entities communicating via IPsec could be two systems, two
routers, two gateways, or any combination of entities. Although
generally used to connect two networks, IPsec can be used to connect
individual computers, such as a server and a workstation or a pair of
workstations (sender and receiver, perhaps). IPsec does not dictate all
implementation details but is an open, modular framework that allows
many manufacturers and software developers to develop IPsec
solutions that work well with products from other vendors.
IPsec uses public key cryptography to provide encryption, access
control, nonrepudiation, and message authentication, all using IP-
based protocols. The primary use of IPsec is for virtual private
networks (VPNs), so IPsec can operate in either transport or tunnel
mode. IPsec is commonly paired with the Layer 2 Tunneling Protocol
(L2TP) as L2TP/IPsec.
The IP Security (IPsec) protocol provides a complete infrastructure for
secured network communications. IPsec has gained widespread
acceptance and is now offered in a number of commercial operating
systems out of the box. IPsec relies on security associations, and there
are two main components:
The Authentication Header (AH) provides assurances of message
integrity and nonrepudiation. AH also provides authentication and
access control and prevents replay attacks.
The Encapsulating Security Payload (ESP) provides confidentiality
and integrity of packet contents. It provides encryption and limited
authentication and prevents replay attacks.
ESP also provides some limited authentication, but not to
the degree of the AH. Though ESP is sometimes used without AH,
it’s rare to see AH used without ESP.
IPsec provides for two discrete modes of operation. When IPsec is
used in transport mode, only the packet payload is encrypted. This
mode is designed for peer-to-peer communication. When it’s used in
tunnel mode, the entire packet, including the header, is encrypted.
This mode is designed for gateway-to-gateway communication.
IPsec is an extremely important concept in modern
computer security. Be certain that you’re familiar with the
component protocols and modes of IPsec operation.
At runtime, you set up an IPsec session by creating a security
association (SA). The SA represents the communication session and
records any configuration and status information about the
connection. The SA represents a simplex connection. If you want a
two-way channel, you need two SAs, one for each direction. Also, if
you want to support a bidirectional channel using both AH and ESP,
you will need to set up four SAs.
Some of IPsec’s greatest strengths come from being able to filter or
manage communications on a per-SA basis so that clients or gateways
between which security associations exist can be rigorously managed
in terms of what kinds of protocols or services can use an IPsec
connection. Also, without a valid security association defined, pairs of
users or gateways cannot establish IPsec links.
Further details of the IPsec algorithm are provided in Chapter 11,
“Secure Network Architecture and Securing Network Components.”
ISAKMP
The Internet Security Association and Key Management Protocol
(ISAKMP) provides background security support services for IPsec by
negotiating, establishing, modifying, and deleting security
associations. As you learned in the previous section, IPsec relies on a
system of security associations (SAs). These SAs are managed through
the use of ISAKMP. There are four basic requirements for ISAKMP, as
set forth in Internet RFC 2408:
Authenticate communicating peers
Create and manage security associations
Provide key generation mechanisms
Protect against threats (for example, replay and denial-of-service
attacks)
Wireless Networking
The widespread rapid adoption of wireless networks poses a
tremendous security risk. Many traditional networks do not
implement encryption for routine communications between hosts on
the local network and rely on the assumption that it would be too
difficult for an attacker to gain physical access to the network wire
inside a secure location to eavesdrop on the network. However,
wireless networks transmit data through the air, leaving them
extremely vulnerable to interception. There are two main types of
wireless security:
Wired Equivalent Privacy Wired Equivalent Privacy (WEP)
provides 64- and 128-bit encryption options to protect
communications within the wireless LAN. WEP is described in IEEE
802.11 as an optional component of the wireless networking standard.
Cryptanalysis has conclusively demonstrated that
significant flaws exist in the WEP algorithm, making it possible to
completely undermine the security of a WEP-protected network
within seconds. You should never use WEP encryption to protect a
wireless network. In fact, the use of WEP encryption on a store
network was the root cause behind the TJX security breach that
was widely publicized in 2007. Again, you should never use WEP
encryption on a wireless network.
WiFi Protected Access WiFi Protected Access (WPA) improves on
WEP encryption by implementing the Temporal Key Integrity Protocol
(TKIP), eliminating the cryptographic weaknesses that undermined
WEP. A further improvement to the technique, dubbed WPA2, adds
AES cryptography. WPA2 provides secure algorithms appropriate for
use on modern wireless networks.
Remember that WPA does not provide an end-to-end
security solution. It encrypts traffic only between a mobile
computer and the nearest wireless access point. Once the traffic
hits the wired network, it’s in the clear again.
Another commonly used security standard, IEEE 802.1x, provides a
flexible framework for authentication and key management in wired
and wireless networks. To use 802.1x, the client runs a piece of
software known as the supplicant. The supplicant communicates with
the authentication server. After successful authentication, the network
switch or wireless access point allows the client to access the network.
WPA was designed to interact with 802.1x authentication servers.
Cryptographic Attacks
As with any security mechanism, malicious individuals have found a
number of attacks to defeat cryptosystems. It’s important that you
understand the threats posed by various cryptographic attacks to
minimize the risks posed to your systems:
Analytic Attack This is an algebraic manipulation that attempts to
reduce the complexity of the algorithm. Analytic attacks focus on the
logic of the algorithm itself.
Implementation Attack This is a type of attack that exploits
weaknesses in the implementation of a cryptography system. It
focuses on exploiting the software code, not just errors and flaws but
the methodology employed to program the encryption system.
Statistical Attack A statistical attack exploits statistical weaknesses
in a cryptosystem, such as floating-point errors and inability to
produce truly random numbers. Statistical attacks attempt to find a
vulnerability in the hardware or operating system hosting the
cryptography application.
Brute Force Brute-force attacks are quite straightforward. Such an
attack attempts every possible valid combination for a key or
password. They involve using massive amounts of processing power to
methodically guess the key used to secure cryptographic
communications.
For a nonflawed protocol, the average amount of time required to
discover the key through a brute-force attack is directly proportional
to the size of the keyspace, which grows exponentially with the length
of the key. A brute-force attack will always be successful
given enough time. Every additional bit of key length doubles the time
to perform a brute-force attack because the number of potential keys
doubles.
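A quick calculation makes this doubling effect concrete. The Python sketch below uses an assumed guess rate of one trillion keys per second (an arbitrary figure chosen only for illustration) to show how the average cracking time explodes as key length grows.

# Average brute-force effort: on average, half of the keyspace must be searched.
GUESSES_PER_SECOND = 1e12            # assumed attacker capability
SECONDS_PER_YEAR = 31_557_600

for key_bits in (56, 57, 112, 128, 256):
    keyspace = 2 ** key_bits
    average_years = (keyspace / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{key_bits}-bit key: about {average_years:.3e} years on average")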
There are two modifications that attackers can make to enhance the
effectiveness of a brute-force attack:
Rainbow tables provide precomputed values for cryptographic
hashes. These are commonly used for cracking passwords stored
on a system in hashed form.
Specialized, scalable computing hardware designed specifically for
the conduct of brute-force attacks may greatly increase the
efficiency of this approach.
Salting Saves Passwords
Salt might be hazardous to your health, but it can save your
password! To help combat the use of brute-force attacks, including
those aided by dictionaries and rainbow tables, cryptographers
make use of a technology known as cryptographic salt.
The cryptographic salt is a random value that is added to the end of
the password before the operating system hashes the password.
The salt is then stored in the password file along with the hash.
When the operating system wishes to compare a user’s proffered
password to the password file, it first retrieves the salt and appends
it to the password. It feeds the concatenated value to the hash
function and compares the resulting hash with the one stored in
the password file.
Specialized password hashing functions, such as PBKDF2, bcrypt,
and scrypt, allow for the creation of hashes using salts and also
incorporate a technique known as key stretching that makes it
more computationally difficult to perform a single password guess.
The use of salting, especially when combined with key stretching,
dramatically increases the difficulty of brute-force attacks. Anyone
attempting to build a rainbow table must build a separate table for
each possible value of the cryptographic salt.
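The following sketch shows salted, stretched password hashing using only Python's standard library; the iteration count is an assumption chosen for illustration, so consult current guidance when selecting one in practice.

import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000):
    salt = os.urandom(16)                      # random, per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest            # all three values are stored

def verify_password(password: str, salt: bytes, iterations: int,
                    stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)

salt, rounds, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, rounds, digest)
assert not verify_password("Tr0ub4dor&3", salt, rounds, digest)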
Frequency Analysis and the Ciphertext Only Attack In many
cases, the only information you have at your disposal is the encrypted
ciphertext message, a scenario known as the ciphertext only attack. In
this case, one technique that proves helpful against simple ciphers is
frequency analysis—counting the number of times each letter appears
in the ciphertext. Using your knowledge that the letters E, T, A, O, I, N
are the most common in the English language, you can then test
several hypotheses:
If these letters are also the most common in the ciphertext, the
cipher was likely a transposition cipher, which rearranged the
characters of the plain text without altering them.
If other letters are the most common in the ciphertext, the cipher is
probably some form of substitution cipher that replaced the
plaintext characters.
This is a simple overview of frequency analysis, and many more
sophisticated variations on this technique can be used against
polyalphabetic ciphers and other complex cryptosystems.
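Frequency analysis is easy to automate. The toy Python sketch below counts the letters in a short ciphertext (a Caesar-shifted English phrase invented for this example) so the result can be compared against the expected English ordering.

from collections import Counter

ENGLISH_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"    # most to least common

def ranked_letters(ciphertext):
    counts = Counter(ch for ch in ciphertext.upper() if ch.isalpha())
    return "".join(letter for letter, _ in counts.most_common())

# "ATTACK AT DAWN ..." shifted by three positions (a simple substitution cipher).
ciphertext = "DWWDFN DW GDZQ DWWDFN DW GDZQ"
print(ranked_letters(ciphertext))   # D and W dominate instead of E and T,
print(ENGLISH_ORDER[:6])            # pointing to substitution, not transposition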
Known Plaintext In the known plaintext attack, the attacker has a
copy of the encrypted message along with the plaintext message used
to generate the ciphertext (the copy). This knowledge greatly assists
the attacker in breaking weaker codes. For example, imagine the ease
with which you could break the Caesar cipher described in Chapter 6 if
you had both a plaintext copy and a ciphertext copy of the same
message.
Chosen Ciphertext In a chosen ciphertext attack, the attacker has
the ability to decrypt chosen portions of the ciphertext message and
use the decrypted portion of the message to discover the key.
Chosen Plaintext In a chosen plaintext attack, the attacker has the
ability to encrypt plaintext messages of their choosing and can then
analyze the ciphertext output of the encryption algorithm.
Meet in the Middle Attackers might use a meet-in-the-middle
attack to defeat encryption algorithms that use two rounds of
encryption. This attack is the reason that Double DES (2DES) was
quickly discarded as a viable enhancement to the DES encryption (it
was replaced by Triple DES, or 3DES).
In the meet-in-the-middle attack, the attacker uses a known plaintext
message. The plain text is then encrypted using every possible key
(k1), and the equivalent ciphertext is decrypted using all possible keys
(k2). When a match is found, the corresponding pair (k1, k2)
represents both portions of the double encryption. This type of attack
generally takes only double the time necessary to break a single round
of encryption (or 2^n rather than the anticipated 2^n * 2^n), offering
minimal added protection.
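The toy Python sketch below shows the mechanics using a deliberately weak 8-bit XOR "cipher" in place of DES (an assumption made so the full keyspace fits in a dictionary): encrypt the known plaintext under every possible first key, then decrypt the ciphertext under every possible second key and look for values that meet in the middle.

# Toy meet-in-the-middle attack against double encryption with 8-bit keys.
def enc(block, key):                 # stand-in cipher: XOR (2DES would use DES)
    return block ^ key

plaintext, k1, k2 = 0x42, 0x5A, 0xC3
ciphertext = enc(enc(plaintext, k1), k2)          # the "double" encryption

# Forward table: intermediate value -> first key, built with 2^8 encryptions.
forward = {enc(plaintext, key): key for key in range(256)}

# Work backward with 2^8 decryptions and look for values that meet in the
# middle, for roughly 2 * 2^8 operations instead of 2^16 combined key guesses.
candidates = []
for key2 in range(256):
    middle = enc(ciphertext, key2)               # XOR is its own inverse
    if middle in forward:
        candidates.append((forward[middle], key2))

# The toy XOR cipher leaves many candidate pairs; with a real block cipher,
# checking candidates against a second known plaintext/ciphertext pair
# isolates the true (k1, k2).
print(len(candidates), (k1, k2) in candidates)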
Man in the Middle In the man-in-the-middle attack, a malicious
individual sits between two communicating parties and intercepts all
communications (including the setup of the cryptographic session).
The attacker responds to the originator’s initialization requests and
sets up a secure session with the originator. The attacker then
establishes a second secure session with the intended recipient using a
different key and posing as the originator. The attacker can then “sit in
the middle” of the communication and read all traffic as it passes
between the two parties.
Be careful not to confuse the meet-in-the-middle attack with
the man-in-the-middle attack. They may have similar names, but
they are quite different!
Birthday The birthday attack, also known as a collision attack or
reverse hash matching (see the discussion of brute-force and
dictionary attacks in Chapter 14, “Controlling and Monitoring
Access”), seeks to find flaws in the one-to-one nature of hashing
functions. In this attack, the malicious individual seeks to substitute in
a digitally signed communication a different message that produces
the same message digest, thereby maintaining the validity of the
original digital signature.
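The birthday attack works because collisions appear after searching only about the square root of the number of possible digests. The sketch below truncates SHA-256 to 24 bits (purely so a collision can be found in a moment; full-length digests make this search infeasible) and counts how many messages are needed.

import hashlib

def short_digest(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()[:3]     # keep only 24 bits

seen = {}
attempts = 0
while True:
    message = f"contract revision {attempts}".encode()
    digest = short_digest(message)
    if digest in seen:
        print(f"collision after {attempts + 1} messages:",
              seen[digest], "and", message)
        break
    seen[digest] = message
    attempts += 1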
Don’t forget that social engineering techniques can also be
used in cryptanalysis. If you’re able to obtain a decryption key by
simply asking the sender for it, that’s much easier than attempting
to crack the cryptosystem!
Replay The replay attack is used against cryptographic algorithms
that don’t incorporate temporal protections. In this attack, the
malicious individual intercepts an encrypted message between two
parties (often a request for authentication) and then later “replays” the
captured message to open a new session. This attack can be defeated
by incorporating a time stamp and expiration period into each
message.
Summary
Asymmetric key cryptography, or public key encryption, provides an
extremely flexible infrastructure, facilitating simple, secure
communication between parties that do not necessarily know each
other prior to initiating the communication. It also provides the
framework for the digital signing of messages to ensure
nonrepudiation and message integrity.
This chapter explored public key encryption, which provides a scalable
cryptographic architecture for use by large numbers of users. We also
described some common cryptographic techniques, such as link
encryption and end-to-end encryption. Finally, we introduced you to
the public key infrastructure, which uses certificate authorities (CAs)
to generate digital certificates containing the public keys of system
users and digital signatures, which rely on a combination of public key
cryptography and hashing functions.
We also looked at some of the common applications of cryptographic
technology in solving everyday problems. You learned how
cryptography can be used to secure email (using PGP and S/MIME),
web communications (using SSL and TLS), and both peer-to-peer and
gateway-to-gateway networking (using IPsec and ISAKMP) as well as
wireless communications (using WPA and WPA2).
Finally, we covered some of the more common attacks used by
malicious individuals attempting to interfere with or intercept
encrypted communications between two parties. Such attacks include
cryptanalytic, replay, brute-force, known plaintext, chosen
plaintext, chosen ciphertext, meet-in-the-middle, man-in-the-middle,
and birthday attacks. It’s important for you to understand these
attacks in order to provide adequate security against them.
Exam Essentials
Understand the key types used in asymmetric cryptography.
Public keys are freely shared among communicating parties, whereas
private keys are kept secret. To encrypt a message, use the recipient’s
public key. To decrypt a message, use your own private key. To sign a
message, use your own private key. To validate a signature, use the
sender’s public key.
Be familiar with the three major public key cryptosystems.
RSA is the most famous public key cryptosystem; it was developed by
Rivest, Shamir, and Adleman in 1977. It depends on the difficulty of
factoring the product of prime numbers. El Gamal is an extension of
the Diffie-Hellman key exchange algorithm that depends on modular
arithmetic. The elliptic curve algorithm depends on the elliptic curve
discrete logarithm problem and provides more security than other
algorithms when both are used with keys of the same length.
Know the fundamental requirements of a hash function.
Good hash functions have five requirements. They must allow input of
any length, provide fixed-length output, make it relatively easy to
compute the hash function for any input, provide one-way
functionality, and be collision free.
Be familiar with the major hashing algorithms. The successors
to the Secure Hash Algorithm (SHA), SHA-1 and SHA-2, make up the
government standard message digest function. SHA-1 produces a 160-
bit message digest whereas SHA-2 supports variable lengths, ranging
up to 512 bits. SHA-3 improves upon the security of SHA-2 and
supports the same hash lengths.
Know how cryptographic salts improve the security of
password hashing. When straightforward hashing is used to store
passwords in a password file, attackers may use rainbow tables of
precomputed values to identify commonly used passwords. Adding
salts to the passwords before hashing them reduces the effectiveness
of rainbow table attacks. Common password hashing algorithms that
use key stretching to further increase the difficulty of attack include
PBKDF2, bcrypt, and scrypt.
Understand how digital signatures are generated and
verified. To digitally sign a message, first use a hashing function to
generate a message digest. Then encrypt the digest with your private
key. To verify the digital signature on a message, decrypt the signature
with the sender’s public key and then compare the message digest to
one you generate yourself. If they match, the message is authentic.
Know the components of the Digital Signature Standard
(DSS). The Digital Signature Standard uses the SHA-1, SHA-2, and
SHA-3 message digest functions along with one of three encryption
algorithms: the Digital Signature Algorithm (DSA); the Rivest, Shamir,
Adleman (RSA) algorithm; or the Elliptic Curve DSA (ECDSA)
algorithm.
Understand the public key infrastructure (PKI). In the public
key infrastructure, certificate authorities (CAs) generate digital
certificates containing the public keys of system users. Users then
distribute these certificates to people with whom they want to
communicate. Certificate recipients verify a certificate using the CA’s
public key.
Know the common applications of cryptography to secure
email. The emerging standard for encrypted messages is the S/MIME
protocol. Another popular email security tool is Phil Zimmermann's
Pretty Good Privacy (PGP). Most users of email encryption rely on
having this technology built into their email client or their web-based
email service.
Know the common applications of cryptography to secure
web activity. The de facto standard for secure web traffic is the use of
HTTP over Transport Layer Security (TLS) or the older Secure Sockets
Layer (SSL). Most web browsers support both standards, but many
websites are dropping support for SSL due to security concerns.
Know the common applications of cryptography to secure
networking. The IPsec protocol standard provides a common
framework for encrypting network traffic and is built into a number of
common operating systems. In IPsec transport mode, packet contents
are encrypted for peer-to-peer communication. In tunnel mode, the
entire packet, including header information, is encrypted for gateway-
to-gateway communications.
Be able to describe IPsec. IPsec is a security architecture
framework that supports secure communication over IP. IPsec
establishes a secure channel in either transport mode or tunnel mode.
It can be used to establish direct communication between computers
or to set up a VPN between networks. IPsec uses two protocols:
Authentication Header (AH) and Encapsulating Security Payload
(ESP).
Be able to explain common cryptographic attacks. Brute-force
attacks are attempts to randomly find the correct cryptographic key.
Known plaintext, chosen ciphertext, and chosen plaintext attacks
require the attacker to have some extra information in addition to the
ciphertext. The meet-in-the-middle attack exploits protocols that use
two rounds of encryption. The man-in-the-middle attack fools both
parties into communicating with the attacker instead of directly with
each other. The birthday attack is an attempt to find collisions in hash
functions. The replay attack is an attempt to reuse authentication
requests.
Understand uses of digital rights management (DRM). Digital
rights management (DRM) solutions allow content owners to enforce
restrictions on the use of their content by others. DRM solutions
commonly protect entertainment content, such as music, movies, and
e-books but are occasionally found in the enterprise, protecting
sensitive information stored in documents.
Written Lab
1. Explain the process Bob should use if he wants to send a
confidential message to Alice using asymmetric cryptography.
2. Explain the process Alice would use to decrypt the message Bob
sent in question 1.
3. Explain the process Bob should use to digitally sign a message to
Alice.
4. Explain the process Alice should use to verify the digital signature
on the message from Bob in question 3.
Review Questions
1. In the RSA public key cryptosystem, which one of the following
numbers will always be largest?
A. e
B. n
C. p
D. q
2. Which cryptographic algorithm forms the basis of the El Gamal
cryptosystem?
A. RSA
B. Diffie-Hellman
C. 3DES
D. IDEA
3. If Richard wants to send an encrypted message to Sue using a
public key cryptosystem, which key does he use to encrypt the
message?
A. Richard’s public key
B. Richard’s private key
C. Sue’s public key
D. Sue’s private key
4. If a 2,048-bit plaintext message were encrypted with the El Gamal
public key cryptosystem, how long would the resulting ciphertext
message be?
A. 1,024 bits
B. 2,048 bits
C. 4,096 bits
D. 8,192 bits
5. Acme Widgets currently uses a 1,024-bit RSA encryption standard
companywide. The company plans to convert from RSA to an
elliptic curve cryptosystem. If it wants to maintain the same
cryptographic strength, what ECC key length should it use?
A. 160 bits
B. 512 bits
C. 1,024 bits
D. 2,048 bits
6. John wants to produce a message digest of a 2,048-byte message
he plans to send to Mary. If he uses the SHA-1 hashing algorithm,
what size will the message digest for this particular message be?
A. 160 bits
B. 512 bits
C. 1,024 bits
D. 2,048 bits
7. Which one of the following technologies is considered flawed and
should no longer be used?
A. SHA-3
B. PGP
C. WEP
D. TLS
8. What encryption technique does WPA use to protect wireless
communications?
A. TKIP
B. DES
C. 3DES
D. AES
9. Richard received an encrypted message sent to him from Sue.
Which key should he use to decrypt the message?
A. Richard’s public key
B. Richard’s private key
C. Sue’s public key
D. Sue’s private key
10. Richard wants to digitally sign a message he’s sending to Sue so
that Sue can be sure the message came from him without
modification while in transit. Which key should he use to encrypt
the message digest?
A. Richard’s public key
B. Richard’s private key
C. Sue’s public key
D. Sue’s private key
11. Which one of the following algorithms is not supported by the
Digital Signature Standard?
A. Digital Signature Algorithm
B. RSA
C. El Gamal DSA
D. Elliptic Curve DSA
12. Which International Telecommunications Union (ITU) standard
governs the creation and endorsement of digital certificates for
secure electronic communication?
A. X.500
B. X.509
C. X.900
D. X.905
13. What cryptosystem provides the encryption/decryption technology
for the commercial version of Phil Zimmermann's Pretty Good
Privacy secure email system?
A. ROT13
B. IDEA
C. ECC
D. El Gamal
14. What TCP/IP communications port is used by Transport Layer
Security traffic?
A. 80
B. 220
C. 443
D. 559
15. What type of cryptographic attack rendered Double DES (2DES)
no more effective than standard DES encryption?
A. Birthday attack
B. Chosen ciphertext attack
C. Meet-in-the-middle attack
D. Man-in-the-middle attack
16. Which of the following tools can be used to improve the
effectiveness of a brute-force password cracking attack?
A. Rainbow tables
B. Hierarchical screening
C. TKIP
D. Random enhancement
17. Which of the following links would be protected by WPA
encryption?
A. Firewall to firewall
B. Router to firewall
C. Client to wireless access point
D. Wireless access point to router
18. What is the major disadvantage of using certificate revocation
lists?
A. Key management
B. Latency
C. Record keeping
D. Vulnerability to brute-force attacks
19. Which one of the following encryption algorithms is now
considered insecure?
A. El Gamal
B. RSA
C. Elliptic Curve Cryptography
D. Merkle-Hellman Knapsack
20. What does IPsec define?
A. All possible security classifications for a specific configuration
B. A framework for setting up a secure communication channel
C. The valid transition states in the Biba model
D. TCSEC security categories
Chapter 8
Principles of Security Models, Design, and
Capabilities
THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER
INCLUDE:
Domain 3: Security Architecture and Engineering
3.1 Implement and manage engineering processes using secure
design principles
3.2 Understand the fundamental concepts of security models
3.3 Select controls based upon systems security requirements
3.4 Understand security capabilities of information systems
Understanding the philosophy behind security
solutions helps to limit your search for the best controls for specific
security needs. In this chapter, we discuss security models, including
state machine, Bell-LaPadula, Biba, Clark-Wilson, Take-Grant, and
Brewer and Nash. This chapter also describes Common Criteria and
other methods governments and corporations use to evaluate
information systems from a security perspective, with particular
emphasis on U.S. Department of Defense and international security
evaluation criteria. Finally, we discuss commonly encountered design
flaws and other issues that can make information systems susceptible
to attack.
The process of determining how secure a system is can be difficult and
time-consuming. In this chapter, we describe the process of evaluating
a computer system’s level of security. We begin by introducing and
explaining basic concepts and terminology used to describe
information system security concepts and talk about secure
computing, secure perimeters, security and access monitors, and
kernel code. We turn to security models to explain how access and
security controls can be implemented. We also briefly explain how
system security may be categorized as either open or closed; describe a
set of standard security techniques used to ensure confidentiality,
integrity, and availability of data; discuss security controls; and
introduce a standard suite of secure networking protocols.
Additional elements of this domain are discussed in various chapters:
Chapter 6, “Cryptography and Symmetric Key Algorithms,” Chapter 7,
“PKI and Cryptographic Applications,” Chapter 9, “Security
Vulnerabilities, Threats, and Countermeasures,” and Chapter 10,
“Physical Security Requirements.” Please be sure to review all of these
chapters to have a complete perspective on the topics of this domain.
Implement and Manage Engineering
Processes Using Secure Design Principles
Security should be a consideration at every stage of a system’s
development. Programmers should strive to build security into every
application they develop, with greater levels of security provided to
critical applications and those that process sensitive information. It’s
extremely important to consider the security implications of a
development project from the early stages because it’s much easier to
build security into a system than it is to add security onto an existing
system. The following sections discuss several essential security design
principles that should be implemented and managed early in the
engineering process of a hardware or software project.
Objects and Subjects
Controlling access to any resource in a secure system involves two
entities. The subject is the user or process that makes a request to
access a resource. Access can mean reading from or writing to a
resource. The object is the resource a user or process wants to access.
Keep in mind that the subject and object refer to some specific access
request, so the same resource can serve as a subject and an object in
different access requests.
For example, process A may ask for data from process B. To satisfy
process A’s request, process B must ask for data from process C. In
this example, process B is the object of the first request and the subject
of the second request:
First request: process A (subject) → process B (object)
Second request: process B (subject) → process C (object)
This also serves as an example of transitive trust. Transitive trust is
the concept that if A trusts B and B trusts C, then A inherits trust of C
through the transitive property—which works like it would in a
mathematical equation: if a = b, and b = c, then a = c. In the previous
example, when A requests data from B and then B requests data from
C, the data that A receives is essentially from C. Transitive trust is a
serious security concern because it may enable bypassing of
restrictions or limitations between A and C, especially if A and C both
support interaction with B. An example of this would be when an
organization blocks access to Facebook or YouTube to increase worker
productivity. Thus, workers (A) do not have access to certain internet
sites (C). However, if workers are able to access a web proxy, virtual
private network (VPN), or anonymization service, this can serve
as a means to bypass the local network restriction. In other words, if
workers (A) can access a VPN service (B), and the VPN service (B)
can access the blocked internet service (C), then A is able to access C
through B via a transitive trust exploitation.
Closed and Open Systems
Systems are designed and built according to one of two differing
philosophies: A closed system is designed to work well with a narrow
range of other systems, generally all from the same manufacturer. The
standards for closed systems are often proprietary and not normally
disclosed. Open systems, on the other hand, are designed using
agreed-upon industry standards. Open systems are much easier to
integrate with systems from different manufacturers that support the
same standards.
Closed systems are harder to integrate with unlike systems, but they
can be more secure. A closed system often comprises proprietary
hardware and software that does not incorporate industry standards.
This lack of integration ease means that attacks on many generic
system components either will not work or must be customized to be
successful. In many cases, attacking a closed system is harder than
launching an attack on an open system. Many software and hardware
components with known vulnerabilities may not exist on a closed
system. In addition to the lack of known vulnerable components on a
closed system, it is often necessary to possess more in-depth
knowledge of the specific target system to launch a successful attack.
Open systems are generally far easier to integrate with other open
systems. It is easy, for example, to create a local area network (LAN)
with a Microsoft Windows Server machine, a Linux machine, and a
Macintosh machine. Although all three computers use different
operating systems and could represent up to three different hardware
architectures, each supports industry standards and makes it easy for
networked (or other) communications to occur. This ease comes at a
price, however. Because standard communications components are
incorporated into each of these three open systems, there are far more
predictable entry points and methods for launching attacks. In
general, their openness makes them more vulnerable to attack, and
their widespread availability makes it possible for attackers to find
(and even to practice on) plenty of potential targets. Also, open
systems are more popular than closed systems and attract more
attention. An attacker who develops basic attacking skills will find
more targets on open systems than on closed ones. This larger
“market” of potential targets usually means that there is more
emphasis on targeting open systems. Inarguably, there’s a greater
body of shared experience and knowledge on how to attack open
systems than there is for closed systems.
Open Source vs. Closed Source
It’s also helpful to keep in mind the distinction between open-
source and closed-source systems. An open-source solution is one
where the source code, and other internal logic, is exposed to the
public. A closed-source solution is one where the source code and
other internal logic is hidden from the public. Open-source
solutions often depend on public inspection and review to improve
the product over time. Closed-source solutions are more
dependent on the vendor/programmer to revise the product over
time. Both open-source and closed-source solutions can be
available for sale or at no charge, but the term commercial
typically implies closed-source. However, closed-source code is
often revealed through either vendor compromise or decompiling.
The former is always a breach of ethics and often the
law, whereas the latter is a standard element in ethical reverse
engineering or systems analysis.
It is also the case that a closed-source program can be either an
open system or a closed system, and an open-source program can
be either an open system or a closed system.
Techniques for Ensuring Confidentiality, Integrity, and
Availability
To guarantee the confidentiality, integrity, and availability of data, you
must ensure that all components that have access to data are secure
and well behaved. Software designers use different techniques to
ensure that programs do only what is required and nothing more.
Suppose a program writes to and reads from an area of memory that is
being used by another program. The first program could potentially
violate all three security tenets: confidentiality, integrity, and
availability. If an affected program is processing sensitive or secret
data, that data’s confidentiality is no longer guaranteed. If that data is
overwritten or altered in an unpredictable way (a common problem
when multiple readers and writers inadvertently access the same
shared data), there is no guarantee of integrity. And, if data
modification results in corruption or outright loss, it could become
unavailable for future use. Although the concepts we discuss in the
following sections all relate to software programs, they are also
commonly used in all areas of security. For example, physical
confinement guarantees that all physical access to hardware is
controlled.
Confinement
Software designers use process confinement to restrict the actions of a
program. Simply put, process confinement allows a process to read
from and write to only certain memory locations and resources. This is
also known as sandboxing. The operating system, or some other
security component, disallows illegal read/write requests. If a process
attempts to initiate an action beyond its granted authority, that action
will be denied. In addition, further actions, such as logging the
violation attempt, may be taken. Systems that must comply with
higher security ratings usually record all violations and respond in
some tangible way. Generally, the offending process is terminated.
Confinement can be implemented in the operating system itself (such
as through process isolation and memory protection), through the use
of a confinement application or service (for example, Sandboxie at
www.sandboxie.com), or through a virtualization or hypervisor
solution (such as VMware or Oracle’s VirtualBox).
Bounds
Each process that runs on a system is assigned an authority level. The
authority level tells the operating system what the process can do. In
simple systems, there may be only two authority levels: user and
kernel. The authority level tells the operating system how to set the
bounds for a process. The bounds of a process consist of limits set on
the memory addresses and resources it can access. The bounds state
the area within which a process is confined or contained. In most
systems, these bounds segment logical areas of memory for each
process to use. It is the responsibility of the operating system to
enforce these logical bounds and to disallow access to other processes.
More secure systems may require physically bounded processes.
Physical bounds require each bounded process to run in an area of
memory that is physically separated from other bounded processes,
not just logically bounded in the same memory space. Physically
bounded memory can be very expensive, but it’s also more secure than
logical bounds.
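On Unix-like systems you can observe one simplified form of bounds enforcement from Python's standard library. The sketch below caps the process's address space (the limit value is an arbitrary assumption, and the resource module is available only on Unix), so an allocation beyond the bound is denied rather than honored.

import resource

# Cap the address space of the current process at 256 MB (arbitrary bound).
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024, hard))

try:
    oversized = bytearray(1024 * 1024 * 1024)    # attempt to exceed the bound
except MemoryError:
    print("allocation denied: the process exceeded its memory bounds")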
Isolation
When a process is confined through enforcing access bounds, that
process runs in isolation. Process isolation ensures that any behavior
will affect only the memory and resources associated with the isolated
process. Isolation is used to protect the operating environment, the
kernel of the operating system (OS), and other independent
applications. Isolation is an essential component of a stable operating
system. Isolation is what prevents an application from accessing the
memory or resources of another application, whether for good or ill.
The operating system may provide intermediary services, such as cut-
and-paste, and controlled sharing of resources such as the keyboard,
network interface, and storage devices.
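As a simplified, language-level illustration (not a claim about any particular operating system's internals), the sketch below runs the same update once in a thread, which shares the parent's memory, and once in a separate process, which receives its own address space; only the thread's change is visible to the parent afterward.

import threading
import multiprocessing

counter = {"value": 0}

def bump():
    counter["value"] += 1

if __name__ == "__main__":
    worker_thread = threading.Thread(target=bump)    # shares our memory
    worker_thread.start()
    worker_thread.join()

    worker_process = multiprocessing.Process(target=bump)  # isolated copy
    worker_process.start()
    worker_process.join()

    # The child process modified its own private copy of `counter`,
    # so only the thread's update is visible here.
    print(counter["value"])   # prints 1, not 2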
These three concepts (confinement, bounds, and isolation) make
designing secure programs and operating systems more difficult, but
they also make it possible to implement more secure systems.
Controls
To ensure the security of a system, you need to allow subjects to access
only authorized objects. A control uses access rules to limit the access
of a subject to an object. Access rules state which objects are valid for
each subject. Further, an object might be valid for one type of access
and be invalid for another type of access. One common control is for
file access. A file can be protected from modification by making it
read-only for most users but read-write for a small set of users who
have the authority to modify it.
There are both mandatory and discretionary access controls, often
called mandatory access control (MAC) and discretionary access
control (DAC), respectively (see Chapter 14, “Controlling and
Monitoring Access,” for an in-depth discussion of access controls).
With mandatory controls, static attributes of the subject and the object
are considered to determine the permissibility of an access. Each
subject possesses attributes that define its clearance, or authority, to
access resources. Each object possesses attributes that define its
classification. Different types of security methods classify resources in
different ways. For example, subject A is granted access to object B if
the security system can find a rule that allows a subject with subject
A’s clearance to access an object with object B’s classification.
Discretionary controls differ from mandatory controls in that the
subject has some ability to define the objects to access. Within limits,
discretionary access controls allow the subject to define a list of objects
to access as needed. This access control list serves as a dynamic access
rule set that the subject can modify. The constraints imposed on the
modifications often relate to the subject’s identity. Based on the
identity, the subject may be allowed to add or modify the rules that
define access to objects.
Both mandatory and discretionary access controls limit the access to
objects by subjects. The primary goal of controls is to ensure the
confidentiality and integrity of data by disallowing unauthorized
access by authorized or unauthorized subjects.
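As a rough sketch (the clearance ordering, file name, and user names are invented for illustration), the difference between the two approaches comes down to where the access rules live and who is permitted to change them:

# Hypothetical linear ordering of sensitivity labels, low to high.
LEVELS = ["public", "confidential", "secret", "top secret"]

def mac_allows(subject_clearance, object_classification):
    # Mandatory control: decided by comparing static attributes
    # (clearance vs. classification); neither party can change them.
    return LEVELS.index(subject_clearance) >= LEVELS.index(object_classification)

# Discretionary control: the object's owner maintains an ACL and may
# modify it at will, within the limits the system imposes.
acl = {"design.docx": {"alice": {"read", "write"}, "bob": {"read"}}}

def dac_allows(subject, obj, action):
    return action in acl.get(obj, {}).get(subject, set())

print(mac_allows("secret", "confidential"))      # True
print(dac_allows("bob", "design.docx", "write")) # False until the owner grants it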
Trust and Assurance
Proper security concepts, controls, and mechanisms must be
integrated before and during the design and architectural period in
order to produce a reliably secure product. Security issues should not
be added on as an afterthought; this causes oversights, increased costs,
and less reliability. Once security is integrated into the design, it must
be engineered, implemented, tested, audited, evaluated, certified, and
finally accredited.
A trusted system is one in which all protection mechanisms work
together to process sensitive data for many types of users while
maintaining a stable and secure computing environment. Assurance is
simply defined as the degree of confidence in satisfaction of security
needs. Assurance must be continually maintained, updated, and
reverified. This is true whenever the trusted system experiences a
known change or a significant amount of time has passed; in either
case, change has occurred at some level. Change is often the antithesis
of security because it frequently diminishes it. So, whenever change occurs, the
system needs to be reevaluated to verify that the level of security it
provided previously is still intact. Assurance varies from one system to
another and must be established on individual systems. However,
there are grades or levels of assurance that can be placed across
numerous systems of the same type, systems that support the same
services, or systems that are deployed in the same geographic location.
Thus, trust can be built into a system by implementing specific
security features, whereas assurance is an assessment of the reliability
and usability of those security features in a real-world situation.
Understand the Fundamental Concepts of
Security Models
In information security, models provide a way to formalize security
policies. Such models can be abstract or intuitive (some are decidedly
mathematical), but all are intended to provide an explicit set of rules
that a computer can follow to implement the fundamental security
concepts, processes, and procedures that make up a security policy.
These models offer a way to deepen your understanding of how a
computer operating system should be designed and developed to
support a specific security policy.
A security model provides a way for designers to map abstract
statements into a security policy that prescribes the algorithms and
data structures necessary to build hardware and software. Thus, a
security model gives software designers something against which to
measure their design and implementation. That model, of course,
must support each part of the security policy. In this way, developers
can be sure their security implementation supports the security policy.
Tokens, Capabilities, and Labels
Several different methods are used to describe the necessary
security attributes for an object. A security token is a separate
object that is associated with a resource and describes its security
attributes. This token can communicate security information about
an object prior to requesting access to the actual object. In other
implementations, various lists are used to store security
information about multiple objects. A capabilities list maintains a
row of security attributes for each controlled object. Although not
as flexible as the token approach, capabilities lists generally offer
quicker lookups when a subject requests access to an object. A
third common type of attribute storage is called a security label,
which is generally a permanent part of the object to which it’s
attached. Once a security label is set, it usually cannot be altered.
This permanence provides another safeguard against tampering
that neither tokens nor capabilities lists provide.
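One way to picture the three storage approaches, using purely illustrative data structures rather than any real product's format, is shown below: a token travels separately from the object it describes, a capabilities list keeps one row of attributes per controlled object, and a label is fixed into the object itself.

# A security token: a separate object carrying a resource's security
# attributes, examined before the resource itself is touched.
payroll_token = {"resource": "payroll.db", "classification": "secret",
                 "owner": "finance"}

# A capabilities list: one row of security attributes per controlled object.
capabilities = {
    "payroll.db":   {"classification": "secret", "owner": "finance"},
    "intranet.htm": {"classification": "public", "owner": "comms"},
}

# A security label: written into the object at creation time and
# normally never altered afterward.
class LabeledFile:
    def __init__(self, name, classification):
        self.name = name
        self._label = classification   # treated as immutable

    @property
    def label(self):
        return self._label

doc = LabeledFile("budget.xlsx", "confidential")
print(doc.label)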
You’ll explore several security models in the following sections; all of
them can shed light on how security enters into computer
architectures and operating system design:
Trusted computing base
State machine model
Information flow model
Noninterference model
Take-Grant model
Access control matrix
Bell-LaPadula model
Biba model
Clark-Wilson model
Brewer and Nash model (also known as Chinese Wall)
Goguen-Meseguer model
Sutherland model
Graham-Denning model
Although no system can be totally secure, it is possible to design and
build reasonably secure systems. In fact, if a secured system complies
with a specific set of security criteria, it can be said to exhibit a level of
trust. Therefore, trust can be built into a system and then evaluated,
certified, and accredited. But before we can discuss each security
model, we have to establish a foundation on which most security
models are built. This foundation is the trusted computing base.
Trusted Computing Base
An old U.S. Department of Defense standard, the Trusted Computer
System Evaluation Criteria (TCSEC, DoD Standard 5200.28), known
colloquially as the Orange Book and covered in more detail later in
this chapter in the section “Rainbow Series,” describes a trusted computing base
(TCB) as a combination of hardware, software, and controls that work
together to form a trusted base to enforce your security policy. The
TCB is a subset of a complete information system. It should be as
small as possible so that a detailed analysis can reasonably ensure that
the system meets design specifications and requirements. The TCB is
the only portion of that system that can be trusted to adhere to and
enforce the security policy. It is not necessary that every component of
a system be trusted. But any time you consider a system from a
security standpoint, your evaluation should include all trusted
components that define that system’s TCB.
In general, TCB components in a system are responsible for
controlling access to the system. The TCB must provide methods to
access resources both inside and outside the TCB itself. TCB
components commonly restrict the activities of components outside
the TCB. It is the responsibility of TCB components to ensure that a
system behaves properly in all cases and that it adheres to the security
policy under all circumstances.
Security Perimeter
The security perimeter of your system is an imaginary boundary that
separates the TCB from the rest of the system (Figure 8.1). This
boundary ensures that no insecure communications or interactions
occur between the TCB and the remaining elements of the computer
system. For the TCB to communicate with the rest of the system, it
must create secure channels, also called trusted paths. A trusted path
is a channel established with strict standards to allow necessary
communication to occur without exposing the TCB to security
vulnerabilities. A trusted path also protects system users (sometimes
known as subjects) from compromise as a result of a TCB interchange.
As you learn more about formal security guidelines and evaluation
criteria later in this chapter, you’ll also learn that trusted paths are
required in systems that seek to deliver high levels of security to their
users. According to the TCSEC guidelines, trusted paths are required
for high-trust-level systems such as those at level B2 or higher of
TCSEC.
FIGURE 8.1 The TCB, security perimeter, and reference monitor
Reference Monitors and Kernels
When the time comes to implement a secure system, it’s essential to
develop some part of the TCB to enforce access controls on system
assets and resources (sometimes known as objects). The part of the
TCB that validates access to every resource prior to granting access
requests is called the reference monitor (Figure 8.1). The reference
monitor stands between every subject and object, verifying that a
requesting subject’s credentials meet the object’s access requirements
before any requests are allowed to proceed. If such access
requirements aren’t met, access requests are turned down. Effectively,
the reference monitor is the access control enforcer for the TCB. Thus,
authorized and secured actions and activities are allowed to occur,
whereas unauthorized and insecure activities and actions are
denied and blocked from occurring. The reference monitor enforces
access control or authorization based on the desired security model,
whether discretionary, mandatory, role-based, or some other form of
access control. The reference monitor may be a conceptual part of the
TCB; it doesn’t need to be an actual, stand-alone, or independent
working system component.
The collection of components in the TCB that work together to
implement reference monitor functions is called the security kernel.
The reference monitor is a concept or theory that is put into practice
via the implementation of a security kernel in software and hardware.
The purpose of the security kernel is to launch appropriate
components to enforce reference monitor functionality and resist all
known attacks. The security kernel uses a trusted path to
communicate with subjects. It also mediates all resource access
requests, granting only those requests that match the appropriate
access rules in use for a system.
The reference monitor requires descriptive information about each
resource that it protects. Such information normally includes its
classification and designation. When a subject requests access to an
object, the reference monitor consults the object’s descriptive
information to discern whether access should be granted or denied
(see the sidebar “Tokens, Capabilities, and Labels” for more
information on how this works).
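A bare-bones sketch of that mediation follows; the subjects, objects, and labels are invented, and the simple clearance-versus-classification comparison merely stands in for whichever security model the monitor is actually built to enforce.

LEVELS = {"public": 0, "confidential": 1, "secret": 2}

# Descriptive information the monitor keeps about protected objects
# and about subjects.
objects = {"payroll.db": "secret", "newsletter.txt": "public"}
subjects = {"alice": "secret", "guest": "public"}

def reference_monitor(subject, obj, action):
    """Mediate every access request before it reaches the object."""
    clearance = LEVELS[subjects[subject]]
    classification = LEVELS[objects[obj]]
    allowed = clearance >= classification       # model-specific rule
    print(f"{subject} -> {action} {obj}: {'granted' if allowed else 'denied'}")
    return allowed

reference_monitor("alice", "payroll.db", "read")   # granted
reference_monitor("guest", "payroll.db", "read")   # denied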
State Machine Model
The state machine model describes a system that is always secure no
matter what state it is in. It’s based on the computer science definition
of a finite state machine (FSM). An FSM combines an external input
with an internal machine state to model all kinds of complex systems,
including parsers, decoders, and interpreters. Given an input and a
state, an FSM transitions to another state and may create an output.
Mathematically, the next state is a function of the current state and
the input: next state = F(input, current state). Likewise, the output
is a function of the input and the current state: output = F(input,
current state).
Many security models are based on the secure state concept.
According to the state machine model, a state is a snapshot of a system
at a specific moment in time. If all aspects of a state meet the
requirements of the security policy, that state is considered secure. A
transition occurs when accepting input or producing output. A
transition always results in a new state (also called a state transition).
All state transitions must be evaluated. If each possible state transition
results in another secure state, the system can be called a secure state
machine. A secure state machine model system always boots into a
secure state, maintains a secure state across all transitions, and allows
subjects to access resources only in a secure manner compliant with
the security policy. The secure state machine model is the basis for
many other security models.
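A secure state machine can be sketched in a few lines (the states, events, and policy below are invented for illustration): every transition is computed as a function of the input and the current state, then checked before it is accepted, so the machine never leaves the set of secure states.

SECURE_STATES = {"logged_out", "logged_in", "locked"}

TRANSITIONS = {
    ("logged_out", "login_ok"):    "logged_in",
    ("logged_in", "logout"):       "logged_out",
    ("logged_in", "idle_timeout"): "locked",
    ("locked", "login_ok"):        "logged_in",
}

def step(state, event):
    # next state = F(input, current state)
    next_state = TRANSITIONS.get((state, event), state)
    if next_state not in SECURE_STATES:
        raise RuntimeError(f"refusing transition to insecure state {next_state}")
    return next_state

state = "logged_out"                      # boots into a secure state
for event in ["login_ok", "idle_timeout", "login_ok", "logout"]:
    state = step(state, event)
    print(event, "->", state)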
Information Flow Model
The information flow model focuses on the flow of information.
Information flow models are based on a state machine model. The
Bell-LaPadula and Biba models, which we will discuss in detail later in
this chapter, are both information flow models. Bell-LaPadula is
concerned with preventing information flow from a high security level
to a low security level. Biba is concerned with preventing information
flow from a low security level to a high security level. Information flow
models don’t necessarily deal with only the direction of information
flow; they can also address the type of flow.
Information flow models are designed to prevent unauthorized,
insecure, or restricted information flow, often between different levels
of security (these are often referred to as multilevel models).
Information flow can be between subjects and objects at the same
classification level as well as between subjects and objects at different
classification levels. An information flow model allows all authorized
information flows, whether within the same classification level or
between classification levels. It prevents all unauthorized information
flows, whether within the same classification level or between
classification levels.
Another interesting perspective on the information flow model is that
it is used to establish a relationship between two versions or states of
the same object when those two versions or states exist at different
points in time. Thus, information flow dictates the transformation of
an object from one state at one point in time to another state at
another point in time. The information flow model also addresses
covert channels by specifically excluding all nondefined flow
pathways.
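A minimal flow check, using made-up levels, might look like the following; any pathway not explicitly permitted by the rule simply does not exist, which is also how nondefined (covert) flows are excluded.

LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}

def flow_allowed(source_level, destination_level):
    # Confidentiality-style rule: information may flow upward or stay
    # at the same level, never downward.
    return LEVELS[source_level] <= LEVELS[destination_level]

print(flow_allowed("secret", "top_secret"))   # True: upward flow permitted
print(flow_allowed("top_secret", "secret"))   # False: downward flow blocked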
Noninterference Model
The noninterference model is loosely based on the information flow
model. However, instead of being concerned about the flow of
information, the noninterference model is concerned with how the
actions of a subject at a higher security level affect the system state or
the actions of a subject at a lower security level. Basically, the actions
of subject A (high) should not affect the actions of subject B (low) or
even be noticed by subject B. The real concern is to prevent the actions
of subject A at a high level of security classification from affecting the
system state at a lower level. If this occurs, subject B may be placed
into an insecure state or be able to deduce or infer information about a
higher level of classification. This is a type of information leakage and
implicitly creates a covert channel. Thus, the noninterference model
can be imposed to provide a form of protection against damage caused
by malicious programs such as Trojan horses.
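The property can be pictured with a toy check (an illustration, not a formal proof): however the high-level subject behaves, the view presented to the low-level subject is computed without reference to that behavior and therefore never changes.

def low_view(high_actions):
    """What subject B (low) can observe of the system."""
    # A noninterfering system computes the low-level view without
    # reference to high-level activity at all.
    return {"status": "ready", "queue_length": 3}

# Subject A (high) behaves differently in two runs...
run1 = low_view(high_actions=["open secret file"])
run2 = low_view(high_actions=[])

# ...yet subject B sees exactly the same thing, so nothing can be
# inferred about what happened at the higher level.
assert run1 == run2
print("low-level view unchanged:", run1)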
Composition Theories
Some other models that fall into the information flow category
build on the notion of how inputs and outputs between multiple
systems relate to one another—which follows how information
flows between systems rather than within an individual system.
These are called composition theories because they explain how
outputs from one system relate to inputs to another system. There
are three recognized types of composition theories:
Cascading: Input for one system comes from the output of
another system.
Feedback: One system provides input to another system, which
reciprocates by reversing those roles (so that system A first
provides input for system B and then system B provides input
to system A).
Hookup: One system sends input to another system but also
sends input to external entities.
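As a rough sketch with invented system names, the three arrangements differ only in how the systems are wired together:

def system_a(data):
    return data + " -> A"

def system_b(data):
    return data + " -> B"

# Cascading: A's output becomes B's input.
print(system_b(system_a("job")))

# Feedback: A feeds B, and B's output is fed back into A.
print(system_a(system_b(system_a("job"))))

# Hookup: A sends its output to B but also to an external entity.
out = system_a("job")
print(system_b(out))
external_log = [out]          # stands in for the external entity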
Take-Grant Model
The Take-Grant model employs a directed graph (Figure 8.2) to
dictate how rights can be passed from one subject to another or from a
subject to an object. Simply put, a subject with the grant right can
grant another subject or another object any other right they possess.
Likewise, a subject with the take right can take a right from another
subject. In addition to these two primary rules, the Take-Grant model
may adopt a create rule and a remove rule to generate or delete rights.
The key to this model is that using these rules allows you to figure out
when rights in the system can change and where leakage (that is,
unintentional distribution of permissions) can occur.
FIGURE 8.2 The Take-Grant model’s directed graph
Take rule: Allows a subject to take rights over an object
Grant rule: Allows a subject to grant rights to an object
Create rule: Allows a subject to create new rights
Remove rule: Allows a subject to remove rights it has
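A small directed-graph sketch of the take and grant rules follows; the nodes and rights are invented, and the final print shows how rights can leak along the graph.

# rights[x][y] is the set of rights node x holds over node y.
rights = {
    "alice": {"bob": {"grant"}, "file": {"read", "write"}},
    "bob":   {"carol": {"take"}},
    "carol": {"file": {"read"}},
}

def grant(granter, grantee, target, right):
    # Grant rule: a subject with "grant" over the grantee may pass on
    # any right it holds over the target.
    if "grant" in rights[granter].get(grantee, set()) and \
            right in rights[granter].get(target, set()):
        rights.setdefault(grantee, {}).setdefault(target, set()).add(right)

def take(taker, victim, target, right):
    # Take rule: a subject with "take" over the victim may copy any
    # right the victim holds over the target.
    if "take" in rights[taker].get(victim, set()) and \
            right in rights[victim].get(target, set()):
        rights.setdefault(taker, {}).setdefault(target, set()).add(right)

grant("alice", "bob", "file", "write")   # alice passes write to bob
take("bob", "carol", "file", "read")     # bob takes read from carol
print(rights["bob"]["file"])             # bob now holds both read and write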
Access Control Matrix
An access control matrix is a table of subjects and objects that
indicates the actions or functions that each subject can perform on
each object. Each column of the matrix is an access control list (ACL).
Each row of the matrix is a capabilities list. An ACL is tied to the
object; it lists valid actions each subject can perform. A capability list
is tied to the subject; it lists valid actions that can be taken on each
object. From an administration perspective, using only capability lists
for access control is a management nightmare. A capability list method
of access control can be accomplished by storing on each subject a list
of rights the subject has for every object. This effectively gives each
user a key ring of accesses and rights to objects within the security
domain. To remove access to a particular object, every user (subject)
that has access to it must be individually manipulated. Thus,
managing access on each user account is much more difficult than
managing access on each object (in other words, via ACLs).
Implementing an access control matrix model usually involves the
following:
Constructing an environment that can create and manage lists of
subjects and objects
Crafting a function that can return the type associated with
whatever object is supplied to that function as input (this is
important because an object’s type determines what kind of
operations may be applied to it)
The access control matrix shown in Table 8.1 is for a discretionary
access control system. A mandatory or rule-based matrix can be
constructed simply by replacing the subject names with classifications
or roles. Access control matrixes are used by systems to quickly
determine whether the requested action by a subject for an object is
authorized.
TABLE 8.1 An access control matrix
Subjects   Document file          Printer                            Network folder share
Bob        Read                   No Access                          No Access
Mary       No Access              No Access                          Read
Amanda     Read, Write            Print                              No Access
Mark       Read, Write            Print                              Read, Write
Kathryn    Read, Write            Print, Manage Print Queue          Read, Write, Execute
Colin      Read, Write, Change    Print, Manage Print Queue, Change  Read, Write, Execute, Change
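A sketch of part of Table 8.1 as a data structure makes the column/row relationship concrete: slicing a column yields an object's ACL, and slicing a row yields a subject's capability list. The subjects and objects mirror the table above; the code itself is illustrative only.

# Rows are subjects, columns are objects (a subset of Table 8.1).
matrix = {
    "Bob": {"Document file": {"Read"}, "Printer": set(),
            "Network folder share": set()},
    "Mary": {"Document file": set(), "Printer": set(),
             "Network folder share": {"Read"}},
    "Amanda": {"Document file": {"Read", "Write"}, "Printer": {"Print"},
               "Network folder share": set()},
}

def acl(obj):
    """Column view: every subject's rights over one object."""
    return {subject: row[obj] for subject, row in matrix.items() if row[obj]}

def capability_list(subject):
    """Row view: one subject's rights over every object."""
    return {obj: rights for obj, rights in matrix[subject].items() if rights}

print(acl("Document file"))        # ACL tied to the object
print(capability_list("Amanda"))   # capability list tied to the subject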