Get Prepared And Pass The CISSP Exam

Access Control Systems and Methodology

The Access Control Systems and Methodology domain in the Common Body of Knowledge (CBK) for the CISSP certification exam covers the topics related to controlling how resources are accessed so they can be protected from unauthorized modification or disclosure. The purpose of access control is to allow authorized users access to appropriate data and deny access to unauthorized users.

Access Control Overview

Generally, an access control is any hardware, software, or organizational administrative policy or procedure that grants or restricts access, monitors and records attempts to access, identifies users attempting to access, and determines whether access is authorized. Access control is about the relationships between subjects and objects.

Access is the transfer of information from an object to a subject. Subjects are active entities that request information about or data from passive entities, called objects. A subject can be a user, program, or process that is accessing an object to accomplish a task. An object is a passive entity and can be a file, database, computer, program, process, printer, storage medium, and so on. The object is always the entity that provides or hosts the information or data. The roles of subject and object can switch back and forth as two entities interact to accomplish a task.

Types of Access Control

Access controls are categorized based on the type of implementation into the following three groups:

      • 1. Logical/technical access control - Logical and technical access controls are hardware or software mechanisms used to manage access to resources and systems. Examples of logical or technical access controls include encryption, smart cards, passwords, firewalls, routers, intrusion detection systems, and biometrics.
      • 2. Physical access control - Physical access controls are physical barriers deployed to prevent direct contact with systems or areas within a facility. Examples of physical access control include guards, fences, motion detectors, locked doors, sealed windows, lights, cable protection, laptop locks, swipe cards, guard dogs, video cameras, mantraps, and alarms.
      • 3. Administrative access control - Administrative access controls (also called directive controls) are implemented by creating and following organizational policy, procedure, or guideline. User training and awareness also fall into this category.

Access controls can be further divided into the following types based on function or purpose. Some security mechanisms may fall into multiple categories.

      • 1. Preventive access control - A preventive access control is implemented to stop unwanted or unauthorized access or activity from occurring. Examples of preventive access controls include fences, locks, mantraps, alarm systems, separation of duties, job rotation, data classification, encryption, smart cards, security policies, antivirus software, and hiring practices.
      • 2. Deterrent access control - A deterrent access control is implemented to discourage violation of security policies. Examples of deterrent access controls are warning banners, security cameras, trespass or intrusion alarms, auditing, security awareness training, and so on.
      • 3. Detective access control - A detective access control is implemented to discover unwanted or unauthorized activity. Typically, detective controls operate after something happens rather than in real time. Examples of detective access controls include motion detectors, recording and reviewing of events captured by security cameras or CCTV, intrusion detection systems, mandatory vacations, audit trails, violation reports, incident investigations, and job rotation.
      • 4. Corrective access control - A corrective access control is implemented to restore systems to normal after an unwanted or unauthorized activity has occurred. Corrective controls have only minimal capability to respond to access violations. Examples of corrective access controls include intrusion detection systems, antivirus solutions, and business continuity planning.
      • 5. Recovery access control - A recovery access control is implemented to repair or restore resources, functions, and capabilities after a violation of security policies. Recovery controls have more advanced or complex capabilities to respond to access violations than corrective access controls. Examples of recovery access controls include backups and restores, fault-tolerant drives, server clustering, and database or VM shadowing.
      • 6. Compensating access control - A compensating access control is a secondary or tertiary control that provides an alternative means of protection when a primary control fails or cannot be used. Examples of compensating access controls include VM shadowing and a backup power supply.

Security Principles

One of the core principles of information security is the CIA triad (confidentiality, integrity, and availability). Issues such as non-repudiation do not fit neatly within these three core concepts, but non-repudiation is also a key consideration for practical security installations.


Confidentiality stands for preventing unauthorized disclosure of data to individuals or systems. The elements of confidentiality are:

      • Data Protection - Control mechanisms need to be in place to dictate who can access data and what the subject can do with it once they have accessed it. These activities need to be controlled, audited, and monitored.
      • Data separation - Some information is more sensitive than other information and requires a higher level of confidentiality.
      • Traffic flow protection - Traffic flow protection covers not only the content of a communication but also inferable information, such as command structure and even the mere occurrence of a communication (e.g., that a network communication took place).


Integrity security mechanisms are implemented for the prevention of unauthorized modification of data, detection and notification of unauthorized modification of data, and logging of all the changes to data. The elements of integrity are:

      • Single data integrity - provides integrity to a single data unit in transit. The sending entity calculates an additional data item that is bound to the originating data unit. Methods for calculating this data item include checksums, cyclic redundancy check (CRC) values, and hashes.
      • Multiple data integrity - provides integrity to a sequence of data units in transit. In this case, ordering information such as sequence numbers and timestamps is provided within the communications protocol.
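The single-data-unit mechanism above can be sketched in Python: the sender binds a CRC-32 value and a SHA-256 hash to the data unit, and the receiver recomputes and compares them. The function names are illustrative; note that an unkeyed CRC or hash detects accidental corruption, while defeating deliberate tampering requires a keyed construction such as an HMAC.

```python
import binascii
import hashlib

def protect(data: bytes) -> dict:
    """Sender side: bind integrity values to a single data unit."""
    return {
        "data": data,
        "crc32": binascii.crc32(data),                # detects accidental corruption
        "sha256": hashlib.sha256(data).hexdigest(),   # detects any bit change
    }

def verify(unit: dict) -> bool:
    """Receiver side: recompute the bound values and compare."""
    data = unit["data"]
    return (binascii.crc32(data) == unit["crc32"]
            and hashlib.sha256(data).hexdigest() == unit["sha256"])

unit = protect(b"transfer $100 to account 42")
assert verify(unit)                       # unmodified data unit passes
unit["data"] = b"transfer $999 to account 66"
assert not verify(unit)                   # modification is detected
```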


Availability means that information services, data, and network resources must be accessible to users in a timely manner whenever they are needed. Fault tolerance and recovery mechanisms are put into place to ensure the continuity of the availability of resources.


The non-repudiation security service implies that a party to a transaction cannot deny having participated in that transaction. Digital signatures are one method of providing non-repudiation. Public key certificates, verified by a third-party entity, can also be used for non-repudiation.

Access Control Principles

Need to Know Principle

Subjects have access only to the resources that are necessary to perform their assigned duties and activities. This principle controls who can access the information.

Principle of Least Privilege

Subjects have only the minimum access permissions necessary to complete their assigned duties and activities. This principle controls what can be done to the resources.

Segregation of Duties Principle

The Segregation of Duties principle ensures the separation of different functions so that no one individual is in a position to initiate, approve, and review the same action. Segregation of duties reduces the risk of both fraud and errors.

Default Deny

Security access control mechanisms should be set to default deny: everything not explicitly permitted is forbidden. Most access control lists (ACLs) on routers and packet-filtering firewalls default to no access.
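A minimal sketch of the default-deny idea: access is granted only when an explicit (subject, object, action) entry exists in the ACL, and everything else is implicitly denied. The entries and names are invented for illustration.

```python
# Default-deny ACL sketch: only explicitly listed triples are permitted.
ACL = {
    ("alice", "payroll.db", "read"),
    ("bob", "web01", "restart"),
}

def is_allowed(subject: str, obj: str, action: str) -> bool:
    # No matching entry means implicit deny.
    return (subject, obj, action) in ACL

assert is_allowed("alice", "payroll.db", "read")
assert not is_allowed("alice", "payroll.db", "write")    # not explicitly permitted
assert not is_allowed("mallory", "payroll.db", "read")   # unknown subject denied
```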


Identification is the process of claiming an identity to a system. Providing a username, logon ID, account number, or employee number is an identification process. The claimed identity must then be proven via authentication.

Identity Management Systems

Identity management (IdM) refers to a set of technologies used to identify, authenticate, and authorize users through automated means.

Identity management is the process by which user identities are defined and managed in an enterprise environment.

The following aspects should be considered when provisioning and creating identities:

      • Uniqueness - each user ID value should be unique and should not be shared between users. Biometric characteristics are considered unique in distinguishing identity.
      • Non-descriptive - the user’s position or the purpose of the account should not be revealed by the user ID.
      • Issuance - a standard naming scheme should be implemented and followed.

The main goal of identity management solutions is to streamline the management of identity, authentication, and authorization across multiple systems throughout the enterprise.

Typical identity management functionality includes the following:

      • Provisioning and coordination of user identities
      • Management of user roles, privileges and credentials
      • Workflow automation
      • Single-sign on for users
      • Password management
      • A scalable, secure, and standards-compliant directory service for storing and managing user information.

Directory Services

A directory service is a system that stores, organizes and provides access to information in a directory. It is a centralized database of objects that includes information about all the resources available to a network along with information about subjects such as users and computers.

      • Naming service - maps the names of network resources to their respective network addresses.
      • Namespaces - the entities in a directory are organized by using namespaces.
      • DN (distinguished name) - a distinguished name, or unique identifier, is assigned to each object in a directory. Each DN comprises a collection of attributes about a specific object and is stored in the directory as an entry.
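A distinguished name in the X.500/LDAP style is a comma-separated list of attribute=value pairs, ordered from most to least specific. The DN below is illustrative, and the parser is deliberately naive (real DNs may contain escaped commas, per RFC 4514):

```python
# Illustrative distinguished name: entry "John Doe" under the People
# organizational unit of the example.com directory tree.
dn = "cn=John Doe,ou=People,dc=example,dc=com"

def parse_dn(dn: str):
    """Split a DN into (attribute, value) pairs. Naive: ignores escaping."""
    return [tuple(part.split("=", 1)) for part in dn.split(",")]

pairs = parse_dn(dn)
assert pairs[0] == ("cn", "John Doe")    # common name: the entry itself
assert ("dc", "example") in pairs        # domain components locate the tree
```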

Types of directories depending on the location of the actual information:

      • Meta-directory - the identity data is gathered from multiple sources and physically stored within the directory itself.
      • Virtual directory - the virtual directory does not contain the physical identity data and only points to where the information is actually located.

Most directories follow a hierarchical database format based on the X.500 standard and are accessed via a directory protocol. Popular directory protocols and services are:

      • LDAP - Lightweight Directory Access Protocol is an application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. The LDAP server, called a Directory System Agent (DSA), listens on the default non-SSL TCP port 389. The default port for LDAP over SSL is 636.
      • AD - Active Directory - a directory service created by Microsoft for Windows domain networks. Active Directory is included in most Windows Server operating systems. Server computers that run Active Directory are called domain controllers. An AD domain controller authenticates and authorizes all users and computers in a Windows domain network, assigning and enforcing security policies for all computers and installing or updating software. Active Directory makes use of the Lightweight Directory Access Protocol (LDAP) versions 2 and 3, Kerberos, and DNS.
      • eDirectory - Novell’s implementation of directory services.

Web Access Management (WAM)

Web access management (WAM) is a form of IdM that controls access to web resources, providing authentication management, policy-based authorization, auditing, and single sign-on functionality. This type of technology is becoming increasingly robust and is under continuous development due to the increased use of e-commerce, online banking, web services, and other online applications. There are many commercial and open source products available, such as CA’s SiteMinder, IBM’s Tivoli Access Manager, and Sun’s OpenSSO. Each of these solutions uses browser-based cookies to control access to resources. A typical WAM deployment contains the following components:

      • A policy server
      • A policy store
      • Web agents that reside on the web server to control access

The WAM server accesses the same user directory discussed earlier, so users authenticate with the credentials they are already familiar with. A simple WAM solution works as follows:

When the user attempts to access a resource, the web agent intercepts the request and asks the policy server whether the resource is protected. If the policy server returns “no”, the user is presented with the requested page. If it returns “yes”, the agent then asks whether the user is authenticated; if not, the user is challenged for credentials. If the user is authenticated, the agent asks whether the user is authorized to access that resource, and the policy server answers by performing lookups against its policy store and checking the presented credentials against the user directory. If the policy server returns “yes”, the user’s browser is issued a cookie and the user is granted access. That same cookie is then passed in the HTTP header to authenticate the user’s identity to any other resources protected by the WAM infrastructure. There are two types of cookies: permanent cookies, which are stored on the user’s hard drive, and session cookies, which are held in memory. If a cookie contains any type of sensitive information, it should be held only in memory and erased as soon as the session completes.
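The request-evaluation sequence the agent and policy server perform can be sketched as three ordered questions. The paths, cookie values, and store contents below are invented for illustration:

```python
# Sketch of a WAM agent's decision sequence against a policy server.
PROTECTED = {"/accounts", "/admin"}              # which resources are protected
PERMISSIONS = {("alice", "/accounts")}           # policy store: who may access what
SESSIONS = {"cookie-123": "alice"}               # cookies issued to authenticated users

def handle_request(path, cookie=None):
    if path not in PROTECTED:                    # 1. is the resource protected?
        return "serve page"
    user = SESSIONS.get(cookie)
    if user is None:                             # 2. is the user authenticated?
        return "challenge for credentials"
    if (user, path) not in PERMISSIONS:          # 3. is the user authorized?
        return "access denied"
    return "serve page"                          # cookie rides along in the HTTP header

assert handle_request("/public") == "serve page"
assert handle_request("/accounts") == "challenge for credentials"
assert handle_request("/accounts", "cookie-123") == "serve page"
assert handle_request("/admin", "cookie-123") == "access denied"
```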

Account Management

Account management encompasses creating user accounts, modifying account privileges, and decommissioning accounts. Automated account management is a common part of products that provide IdM solutions; it reduces the potential for errors and provides accountability if something goes wrong. While web access management is mainly used for external users, account management products are typically used for internal accounts.

      • Provisioning - the creation, maintenance, and deactivation of user accounts and their attributes across one or more systems, directories, or applications.


Authentication is the process of proving that you are who you claim to be. Three general factors can be used for authentication, often combined with additional attributes and methods:

      • Something you know. This type of authentication is called authentication by knowledge. Examples include a password, PIN, lock combination, passphrase, mother’s maiden name, and so on. This method is typically the least expensive to implement.
      • Something you have. This type of authentication is called authentication by ownership. It is a physical device that a person must have at the time of the authentication. Examples include a smart card, token device, key, badge, or an access card. The disadvantage of this method is that this physical device can be lost or stolen.
      • Something you are. This type of authentication is called authentication by characteristic and it is based on a unique physical attribute. This authentication is labeled as biometrics. Examples of this factor include fingerprints, voice prints, retina patterns, iris patterns, face shapes, palm topology, hand geometry, and so on.
      • Where you are. Where you are is a location based authentication. This type of access control is using technologies such as GPS, IP address-based geo-location, or the physical location for a point-of-sale purchase. Access can be denied if the subject is in the incorrect location. Credit card companies employ this access control when monitoring a consumer’s activities for fraud. This method is typically used in conjunction with other authentication methods.
      • Multi-factor authentication or strong authentication. Multi-factor or strong authentication consists of two or more of the above factors. Authentication systems using multiple factors are harder to compromise than ones using a single factor.
      • Mutual authentication or two-way authentication. Mutual authentication is a process in which both entities of a communication authenticate each other. For example, in a network environment the client authenticates to the server and vice versa. Mutual authentication is gaining acceptance as a tool that can minimize the risk of online fraud in e-commerce.


The password is the most common authentication technique. It is also considered the weakest form of protection. There are three types of passwords:

      • Static. Static passwords always remain the same.
      • Dynamic. Dynamic passwords change after a specified interval of time or use. One-time or single-use passwords are a type of dynamic password that changes every time it is used. One-time passwords are considered the strongest password type and are often implemented with authentication-by-ownership devices, such as tokens.
      • Cognitive. Cognitive passwords are based on personal facts, interests, and opinions that are likely to be easily recalled by a user.

There are several reasons why passwords are considered the weakest security mechanism:

      • Users typically select passwords that are easy to remember and therefore easy to guess or crack.
      • Users share, forget, and write down their passwords.
      • Randomly generated passwords are hard to remember.
      • Passwords are often transmitted in clear text or with easily broken encryption protocols.
      • Password files/databases are often stored in publicly accessible online locations.

Password Security

Passwords can provide effective security if they are properly generated, stored, updated, and kept secret. Some of the methods of improving password security are:

      • Using password generators. To prevent users from writing down auto-generated passwords, the tool should be configured to create pronounceable but non-dictionary words.
      • The number of failed login attempts should be limited.
      • Certain password requirements should be enforced:
        • Minimum password length should be set
        • Passwords should include numbers and symbols if allowed by the system
        • Avoiding passwords based on repetition
        • Avoiding passwords based on dictionary words, letter or number sequences, usernames, relative or pet names, or biographical information
        • Avoiding using the same password for multiple sites or purposes
        • Protecting password files
        • Users should be forced to change their password periodically
      • Password checkers should be utilized to check the strength of passwords. The same type of tool, when used by an attacker, is called a password cracker.
      • Password hashing and encryption should be implemented.
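The hashing recommendation above can be sketched with Python’s standard library: a random per-user salt plus a deliberately slow key-derivation function (PBKDF2 here) makes stolen password files far harder to attack than plain hashes. The iteration count is illustrative.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000   # illustrative; higher is slower for attackers and defenders alike

def hash_password(password, salt=None):
    """Salted, slow hash via PBKDF2-HMAC-SHA256; the salt defeats precomputed tables."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("Tr0ub4dor&3", salt, digest)
```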

Password Management Techniques

      • Password Synchronization This technique allows a user to maintain a single password across multiple IT systems. If all the systems implement similar password standards, a user can set a new password at any time and it will be synchronized across all the associated systems. This technique makes it easier for users to remember passwords and manage their access to multiple systems. When users have to remember only one password, they are less likely to forget it or write it down. Password synchronization is generally considered less secure than well-designed and implemented single sign-on or password vault solutions: if the single, synchronized password is compromised, all the systems that share that password become vulnerable to unauthorized access.
      • Self-Service Password Reset This technique allows users who have forgotten their password, or whose password has been compromised, to authenticate using a different method and reset their password without calling the help desk. Users establish their identity by answering a series of personal questions, using a hardware authentication token, responding to a password notification e-mail or, less often, by providing a biometric sample. Users can then either specify a new, unlocked password or ask that a randomly generated one be provided. The vulnerability these methods introduce is that the answers to the personal questions used during the password reset can be obtained through social engineering, phishing techniques, or simple research; much of this information may be publicly available on some users’ personal home pages.
      • Assisted Password Reset An assisted password reset is accomplished through interaction between the user and a support representative, typically over the telephone. The support representative should not know or ask individuals for their passwords, since only the owner of a password should know its value. The support representative should also not change the password before authenticating the calling individual.
      • Single Sign-On This technology allows users to log in only once and gain access to all systems without being prompted to log in again at each of them. While this technique is similar to password synchronization, it is not the same: password synchronization sets the same password on each different system, whereas SSO software intercepts the login prompts from network systems and applications and fills in the necessary identification and authentication information for the user. Password synchronization and SSO do, however, share the same vulnerability. If the password is compromised, an attacker gains access to every system that the legitimate user has access to.

Password Attacks

Several techniques can be employed by an attacker to obtain a password:

      • A Dictionary Attack In a dictionary attack the attacker uses a file of dictionary words and common passwords, the entries of which are compared to the user’s password until a match is found.
      • A Brute Force Attack This type of attack consists of systematically checking all possible combinations of characters, numbers, and symbols until the correct password is found. In the worst case, this would involve traversing the entire search space.
      • Hybrid Attacks Hybrid attacks attempt a dictionary attack and then perform a type of brute force attack. This brute force attack is used to add prefix or suffix characters to passwords from the dictionary. The attack is used to discover one-upped-constructed passwords, two-upped-constructed passwords, and so on. A one-upped-constructed password is a password in which a single character differs from its original form in the dictionary. For example, “password1” is one-upped from “password,” and so are “Password,” “1password,” and “passXword.”
      • Rainbow Table Attacks This type of attack speeds up the attack that would attempt to find passwords by guessing them, hashing them and then comparing them. The time to perform the hash functions is reduced by using rainbow tables. Rainbow tables are large databases of pre-computed hashes for guessed passwords.
      • Network Traffic Analysis/Sniffer Attacks This is a process of listening to the network traffic, especially when a user enters a password for authentication. Once a password is detected, the attacker can reuse the password by executing a replay attack.
      • Social-Engineering Attack A social-engineering attack is an attempt by an attacker to obtain login credentials by using means of deceit and persuading a user to perform specific actions on a system, such as changing a password or creating a user account for a new fictitious employee.
      • Password File Compromise A lot of damage can be done if the system’s password file is accessed or compromised. This file should be protected with access control mechanisms and encryption.
      • Phishing Phishing scams are typically fraudulent e-mail messages appearing to come from legitimate sources and attempting to acquire information such as usernames and passwords.
        • Spear phishing Spear phishing is a form of phishing targeted at a specific group of users. It may appear to originate from a colleague or co-worker.
        • Whaling A form of phishing that targets senior or high-level executives.
        • Vishing A form of phishing that uses the phone system or VoIP.
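Dictionary and rainbow-table attacks both exploit the fact that an unsalted hash of a common password can be precomputed once and looked up instantly. A toy illustration (wordlist and salt are invented), which also shows why per-user salts defeat precomputed tables:

```python
import hashlib

# Precompute hashes for a candidate wordlist once, then reuse the table.
wordlist = ["123456", "password", "letmein", "qwerty"]
table = {hashlib.sha256(w.encode()).hexdigest(): w for w in wordlist}

# An unsalted hash from a compromised password file falls to a single lookup.
stolen_hash = hashlib.sha256(b"letmein").hexdigest()
assert table.get(stolen_hash) == "letmein"      # recovered without guessing

# A per-user salt changes every hash, so the precomputed table is useless:
# the attacker would need a separate table for every possible salt value.
salted_hash = hashlib.sha256(b"a1b2c3" + b"letmein").hexdigest()
assert table.get(salted_hash) is None
```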

Password Alternatives

One-Time Passwords (OTP)

One-time passwords are also called dynamic passwords. An OTP is valid for only one login session or transaction; once the password is used, it is no longer valid. Unlike static passwords, these passwords are not vulnerable to replay attacks. There are a few methods of delivering an OTP:

Security Token

Each user is given a personal security token that generates one-time passwords.

      • 1. Synchronous tokens The token device is synchronized with the authentication service by using time or a counter as a core part of the authentication algorithm. If the synchronization is time-based, the token contains an accurate clock that has been synchronized with the clock on the proprietary authentication server. If counter-based (event-based) synchronization is used, one-time password creation is initiated by an act (such as pressing a button) that causes both the token device and the authentication service to advance to the next value of the counter. SecurID, from RSA Security, Inc., is one of the most widely used time-based tokens.
      • 2. Asynchronous tokens In this method a nonce (random value) is sent to the user requesting authentication. The user encrypts this value using the token device and then uses the encrypted value for authentication. This method employs a challenge/response scheme.

Like all tokens, these may be lost, damaged, or stolen; there is also inconvenience when batteries die, especially for tokens without a recharging facility or with a non-replaceable battery. However, tokens are not vulnerable to electronic eavesdropping, sniffing, or password guessing.
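The counter-based scheme is standardized as HOTP in RFC 4226, and a sketch of it shows how both the token and the authentication service can derive the same one-time value from a shared secret and a synchronized counter. (Time-based TOTP simply derives the counter from the current Unix time.)

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password per RFC 4226."""
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors for the ASCII secret "12345678901234567890"
secret = b"12345678901234567890"
assert hotp(secret, 0) == "755224"
assert hotp(secret, 1) == "287082"
```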

Text Messaging

Text messaging is a common method for delivering OTPs because it is a ubiquitous communication channel available on nearly all mobile handsets. Google, for example, offers OTP delivery to mobile and landline phones for all Google accounts; the user can receive the OTP either as a text message or via an automated call using text-to-speech conversion.

Mobile phones

Mobile phone applications can support a number of tokens, allowing a user to authenticate to multiple resources from one device. The downside is that a cell phone used as a token can be lost, damaged, or stolen.

Web-based methods

Authentication-as-a-service providers offer various web-based methods for delivering one-time passwords without the need for tokens.


A passphrase is a sequence of words or other text that is much longer than a password; it can be a sentence or phrase that replaces a password in the authentication process. After the passphrase is accepted by the application, it is transformed into a virtual password by adjusting it to the format required by the authentication service. A passphrase is more secure than a password because it is longer, and it is also easier to remember than a password.

Digital Signatures

Digital signatures can be used instead of passwords for authentication. A digital signature scheme consists of a private and a public key. The private key is secret and available only to the person who possesses it; the public key can be made available to anyone without compromising the associated private key. To create a digital signature, the private key is used to sign (encrypt) a hashed value of a message, and the public key is used to decrypt and verify this hashed value.
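The sign-with-private, verify-with-public flow can be illustrated with a deliberately tiny, insecure toy RSA key. The primes and exponents below are textbook-sized for readability only; real deployments use keys of 2048 bits or more together with a padding scheme such as PSS.

```python
import hashlib

# Toy RSA parameters (NOT secure): p = 61, q = 53.
p, q = 61, 53
n = p * q          # 3233, the public modulus
e = 17             # public exponent
d = 2753           # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

def sign(message: bytes) -> int:
    # Hash the message, then "encrypt" the hash with the private key.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recover the hash with the public key and compare.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"pay Bob $10")
assert verify(b"pay Bob $10", sig)
assert not verify(b"pay Bob $1000", sig)   # a tampered message fails verification
```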

Memory Cards

Memory cards hold information but do not have the ability to process it. A memory card can hold the authentication credentials of a user, so the saved credentials can be compared with the credentials the user provides at the time of authentication. If the data the user entered matches the data on the memory card, the user is successfully authenticated.

Smart Cards

Unlike a memory card, a smart card holds information and has the ability to process it: the card itself contains an integrated microprocessor and integrated circuits. A smart card can provide two-factor authentication; a user can be prompted to provide a PIN (something she knows) in order to unlock the smart card (something she has). There are two general types of smart cards depending on the design:

      • Contact smart cards Contact smart cards have gold-plated contact pads that provide electrical connectivity when the card is inserted into a reader, which serves as the communications medium between the smart card and a host. These cards do not contain batteries; power is supplied by the card reader.
      • Contactless smart cards Contactless smart cards have antenna wires that surround the perimeter of the card and require only proximity to an antenna to communicate. Like contact cards, contactless cards do not have an internal power source. Contactless smart cards can be divided into two categories, hybrid and combi. A hybrid card has two chips and can use both the contact and contactless formats; a combi card has one microprocessor and can communicate with either contact or contactless readers.

Smart cards are resistant to reverse-engineering and tampering attacks; they are more tamperproof than memory cards but also more expensive. Some smart card attack methods include:

      • Fault generation Computational errors are introduced into the smart card, which can allow an attacker to reverse-engineer the encryption process.
      • Side-channel attacks These are non-invasive attacks in which the attacker observes how the card works and reacts in different situations. Examples of side-channel attacks are differential power analysis, electromagnetic analysis, and timing analysis.
      • Software attacks Also a non-invasive attack, disguised behind equipment that acts like a card reader. The goal is to send a set of instructions that tricks the card into disclosing account information.
      • Microprobing This is an intrusive attack that uses needles and ultrasonic vibration to remove the outer protective material on the card’s circuits. Once this process is complete, the information inside the card can be accessed by directly reading the card’s ROM chip.



Biometrics is a form of authentication that uses a behavioral or physiological characteristic unique to an individual. It falls into the third, “something you are”, authentication category. Biometrics can be divided into two broad categories: physiological and behavioral.

Physiological Physiological biometrics are based on a person’s physical characteristics, which are considered to be unchanging, such as fingerprints, iris patterns, retina patterns, facial features, palm prints, or hand geometry. Physiological biometrics are “what you are”.

      Behavioral Behavioral biometrics are based on a person’s behavior, such as walking, talking, signing one’s name, or typing on a keyboard (speed, rhythm, pressure on the keys, etc.). Behavioral biometrics are “what you do”.

      Physiological biometrics are considered to be more accurate because physical attributes typically do not change. The behavioral characteristics can change over time or be forgotten.

      Biometric system components

      A simple biometric system has four important components.

          • Sensor module which acquires the biometric data of an individual. For example, a fingerprint sensor captures the fingerprint impression of a user.
          • Feature extraction module where the acquired data is processed to extract feature values. For example, the position and orientation of minutiae points in a fingerprint image would be extracted in the feature extraction module of a fingerprint system.
          • Matching module in which feature values are compared against those in the template by generating a matching score. For example, in this module, the number of matching minutiae points between the query and the template will be computed and treated as a matching score.
          • Decision making module in which the user’s identity is established or a claimed identity is either accepted or rejected based on the matching score generated in the matching module.
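
      The matching and decision modules above can be sketched in a few lines of code. This is an illustrative toy only: the Euclidean-distance score, the threshold value, and the feature vectors are all invented for the example and do not reflect any real biometric product.

```python
import math

def matching_score(query, template):
    """Matching module: compare two feature vectors; a smaller
    distance between them maps to a higher score in (0, 1]."""
    distance = math.dist(query, template)  # Euclidean distance (Python 3.8+)
    return 1.0 / (1.0 + distance)

def decide(query, template, threshold=0.5):
    """Decision module: accept the claimed identity only if the
    matching score clears the (hypothetical) threshold."""
    return matching_score(query, template) >= threshold

# An enrolled template and a fresh sample from the same (made-up) user
template = [0.61, 0.42, 0.77]
sample = [0.60, 0.43, 0.78]
print(decide(sample, template))  # near-identical samples score close to 1.0
```

      Raising the threshold makes the decision stricter (more false rejects); lowering it makes it more permissive (more false accepts), which is exactly the FRR/FAR trade-off discussed under performance analysis below.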

      Identification Vs Verification

      Identification involves comparing the acquired biometric information against templates corresponding to all users in the database, while verification involves comparison with only those templates corresponding to claimed identity. Thus identification and verification are two distinct problems having their own inherent complexities.
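
      The 1:1 versus 1:N distinction can be made concrete with a short sketch. The matcher, template strings, and user names below are all hypothetical stand-ins for a real feature-comparison function.

```python
def verify(sample, claimed_id, templates, matches):
    """Verification: a single 1:1 comparison against the
    claimed identity's enrolled template."""
    return matches(sample, templates[claimed_id])

def identify(sample, templates, matches):
    """Identification: a 1:N comparison against every enrolled
    template; return the first matching user, or None."""
    hits = [uid for uid, tpl in templates.items() if matches(sample, tpl)]
    return hits[0] if hits else None

# Toy matcher: exact equality of invented feature strings
matches = lambda a, b: a == b
templates = {"alice": "f-001", "bob": "f-002"}

print(verify("f-002", "bob", templates, matches))  # one comparison
print(identify("f-001", templates, matches))       # compares against all users
```

      The sketch shows why identification is the harder problem: its cost and its false-accept exposure both grow with the number of enrolled users.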

      Performance analysis

      The performance of a Biometric system can be measured by reporting its False Reject Rate (FRR) and False Accept Rate (FAR) at various thresholds.

      False Reject Rate, also called a Type 1 error, is the event when a biometric system fails to detect a match between the input pattern and a matching template in the database. In other words, a valid user is not authenticated. The False Reject Rate may be estimated as follows:
      FRR = NFR / NEIA (identification) or FRR = NFR / NEVA (verification)
      where:
      FRR is the false rejection rate
      NFR is the number of false rejections
      NEIA is the number of enrollee identification attempts
      NEVA is the number of enrollee verification attempts

      False Accept Rate, also called a Type 2 error, is the event when a biometric system incorrectly matches an input pattern to a non-matching template in the database. In other words, an unauthorized subject is authenticated. The False Accept Rate may be estimated as follows:
      FAR = NFA / NIIA (identification) or FAR = NFA / NIVA (verification)
      where:
      FAR is the false acceptance rate
      NFA is the number of false acceptances
      NIIA is the number of impostor identification attempts
      NIVA is the number of impostor verification attempts
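
      The two estimates above translate directly into code. The counts below (attempts and error totals) are invented illustration values, not measurements from any real system.

```python
def false_reject_rate(num_false_rejections, num_enrollee_attempts):
    """FRR = NFR / NEIA (identification) or NFR / NEVA (verification)."""
    return num_false_rejections / num_enrollee_attempts

def false_accept_rate(num_false_acceptances, num_impostor_attempts):
    """FAR = NFA / NIIA (identification) or NFA / NIVA (verification)."""
    return num_false_acceptances / num_impostor_attempts

# Hypothetical trial: 1000 legitimate attempts, 30 wrongly rejected;
# 500 impostor attempts, 5 wrongly accepted.
print(false_reject_rate(30, 1000))  # 0.03, i.e. 3% Type 1 errors
print(false_accept_rate(5, 500))    # 0.01, i.e. 1% Type 2 errors
```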

      The sensitivity of a scanner in a biometric system can be adjusted. If the sensitivity is reduced, the scanner becomes more flexible in accepting input patterns. However, if the sensitivity is reduced beyond a certain point, the system may start producing incorrect matches and allow access to unauthorized users. The sensitivity is considered high when authorized users are denied access, and low when unauthorized users are allowed access.

      The accuracy of a biometric device is measured by plotting the FRR and FAR rates on a graph. The point at which the FRR and FAR are equal is known as the crossover error rate (CER) or the equal error rate (EER).

      The CER value measures the accuracy of the biometric device. A device with a lower CER returns more accurate results.
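
      Finding the CER amounts to locating the threshold where the two error curves cross. A minimal sketch, assuming FRR and FAR have already been measured at several sensitivity thresholds; the numbers in the calibration table are invented.

```python
def crossover_error_rate(measurements):
    """Given (threshold, FRR, FAR) tuples, return the (threshold, rate)
    pair where the two error rates are closest to equal."""
    best = min(measurements, key=lambda m: abs(m[1] - m[2]))
    threshold, frr, far = best
    return threshold, (frr + far) / 2  # approximate equal error rate

# Hypothetical calibration run: FRR rises and FAR falls as sensitivity grows
data = [
    (0.2, 0.01, 0.20),
    (0.4, 0.03, 0.09),
    (0.6, 0.05, 0.05),  # the curves meet here, so CER = 0.05
    (0.8, 0.12, 0.01),
]
print(crossover_error_rate(data))  # (0.6, 0.05)
```

      A device whose curves cross at a lower rate (say 0.01 instead of 0.05) is the more accurate one, which is exactly what the CER comparison expresses.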

      Biometric acceptance requirement

      Any human physiological or behavioral trait can serve as a biometric characteristic as long as it satisfies the following requirements:

          • Universality. Everyone should have it.
          • Distinctiveness. No two individuals should be the same.
          • Permanence. It should be invariant over a given period of time.
          • Collectability. The trait should be easy to acquire or measure.

      Biometric Methods

      Fingerprint Recognition

      This recognition process takes an image of the fingerprint, either using ink or a digital scanner, and records characteristics such as whorls, arches, loops, patterns of ridges, furrows, and minutiae. This information is then stored or processed in an encoded form. There are software programs that can map the minutiae points to their relative placements on the finger and then search for similar minutiae information in the database. To save time during the search process, the image is converted into a character string. Therefore, most of the time the image of the fingerprint is never created, just a string of characters that can be used for comparison. The fingerprint reader requires the person to leave his finger on the reader for just a few seconds, during which time it can identify or verify the person. To make sure that a fake finger is not used, modern fingerprint readers also check for blood flow or correctly arrayed ridges at the edges of the fingers. Some of the advantages of this process are that it is a mature technology and user friendly. This method has very high accuracy and long-term stability, and supports the option of enrolling multiple fingers to increase the anti-spoofing property. Some of the disadvantages of this method are that the fingerprint reader might get dirty (because of the contact with the finger), which might affect the quality of the image, and that the registered data may vary depending on skin conditions.

      Palm Scan

      Palm scan recognition implements many of the same matching characteristics that are used in fingerprint recognition. Palm scans utilize the whole area of the hand, including palm and fingers. These characteristics include ridge flow, ridge characteristics, and ridge structure of the raised portion of the epidermis. The palm scan also includes the fingerprints of each finger.

      Hand Geometry

      This recognition system is a fairly simple procedure but it is not very accurate. The user places his hand on a metal surface, which has guidance pegs on it to help the user align his hand properly. The device reads about 90 distinct features of the hand, such as the length, width, thickness, and surface area of the fingers along with the palm size. These features are used to form a template, which is stored in the database or verified against other templates stored in the database. One of the main advantages of hand geometry technology is that it has one of the smallest templates in the biometrics field, generally under ten bytes. It also has high user acceptance and is non-intrusive. The disadvantages of this method are that it has low accuracy and a relatively large reader. Some people, such as children or people with arthritis, missing fingers, or large hands, might find it difficult to enroll. As of now it is used only for the verification process.

      Retina Scan

      This method examines the blood vessel patterns of the retina, which are believed to be unique for every individual. The retina consists of sensory tissue with multiple layers, along with photoreceptors (cones and rods) which gather the light rays sent to it and transform them into electrical impulses, which are in turn converted by the brain into images. The blood vessels in the retina absorb and reflect IR light differently than the surrounding tissues in the eye. So the device uses IR light to illuminate the retina, and when the IR is reflected back, the retinal scanning device uses it to extract unique features of the retina using different algorithms. The size of the pupil, which determines the amount of light that enters the retina, determines the quality of the image. The advantages of this technology are that the retina pattern doesn’t change over the lifetime of a person and that, compared to other technologies, it has a good verification rate and consists of rich unique features. The disadvantages of this method are the public perception that the device might harm the retina, the uneasiness of the user when the eye is scanned at a very close distance, the need for the individual to take off glasses or contact lenses, and the patience and cooperation required of the user.

      Iris Scans

      This technology recognizes individuals by analyzing the features that exist in the colored tissue surrounding the pupil, which includes rings, furrows, and freckles. This is one of the most accurate biometric technologies available. A recent article described a government experiment comparing facial recognition, iris recognition, and fingerprint recognition; iris recognition was found to be the best of the three for the verification process, though its enrollment process was difficult. The iris image can be obtained using a regular video camera, and from farther away than a retinal scan. In this technology it is very important that the user cooperates to get a clear image, so the device is designed such that when the user places his head in front of it, he can see the reflection of his iris in the device, which shows that a clear image can be obtained. The device may vary the light that is shone into the eye and observe the pupil dilate to make sure the system is not fooled by a fake eye. The advantages of this system are that it has a good verification rate and resistance to impostors; iris data is stable with age. The disadvantages of this method are that it is highly intrusive and that the enrollment process is somewhat difficult, because not all people are comfortable with the system. When using an iris pattern biometric system, the optical unit must be positioned so the sun does not shine into the aperture.

      Signature Recognition

      Signature recognition examines the behavioral aspects of the way we sign our name. This technology is based on behavioral characteristics such as changes in timing, pressure, speed, overall size, and the directions of strokes during the course of the signing. Even though duplicating a signature visually seems easy, it is not easy to duplicate the behavioral characteristics. The device consists of a pen (stylus) and a specialized writing tablet connected to a local computer. The user signs on the tablet using the pen; the system collects all the information about the features and forms a template, which is then stored in a database. The user has to sign multiple times to make sure a proper enrollment is done. The advantages of this technique are that it is a noninvasive tool, widely accepted and difficult to mimic. The disadvantage is that the signature should not be too long or too short: if it is too long, it contains too much behavioral data and it is difficult for the system to identify the consistent and unique data points; if it is too short, there might not be enough data points for the system to develop a unique template. The enrollment process should be done under the same environmental conditions, such as standing up, sitting down, or resting one’s arm. The system is also difficult for users to get acclimated to, which leads to signature inconsistency.

      Keystroke Recognition

      Keystroke recognition is based on the fact that the manner in which we type on a computer keyboard varies from person to person. This technique consists of a keypad or keyboard connected to a computer. The user is asked to type a set word at different periods of time. Certain conditions need to be met, such as typing without corrections; in case of a mistype, the user has to start again. Some of the features that are extracted are the cumulative typing speed, the time that elapses between consecutive strokes, the time each key is held down, the frequency with which the individual uses other keys on the keyboard (such as the number pad or function keys), the sequence utilized when typing a capital letter, and whether the individual releases the shift key or the letter key first. Using these features, a template is created, forming a statistical profile of the individual’s behavioral characteristics. The advantages of this method are that it is purely software based, requires no additional hardware, can be integrated with other biometrics, and requires minimal training. Another advantage is that if the template created for a specific word is compromised, the word can be changed and a new template formed. The disadvantage is that the method does not ease the burden of remembering passwords. In addition, the technology is in a very early stage and has not been tested on a wide scale.
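
      Two of the timing features mentioned above, dwell time (how long each key is held down) and flight time (the gap between releasing one key and pressing the next), can be derived from raw key-down/key-up timestamps. A sketch with invented timestamps in milliseconds:

```python
def keystroke_features(events):
    """events: list of (key, down_ms, up_ms) in typing order.
    Returns dwell times per key and flight times between keys."""
    dwell = [up - down for _, down, up in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight

# Hypothetical sample of a user typing "cat"
sample = [("c", 0, 95), ("a", 150, 230), ("t", 290, 370)]
dwell, flight = keystroke_features(sample)
print(dwell)   # [95, 80, 80] - how long each key was held
print(flight)  # [55, 60]     - gaps between consecutive keystrokes
```

      A statistical profile would then be built from many such samples; the comparison logic itself is omitted here.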

      Voice Recognition

      Voice originates from the vocal cords. The gap in the vocal cords contracts and expands as we attempt to communicate; as the gap narrows and widens while breath passes through, unique sounds are created. The vocal tract consists of the laryngeal pharynx, oral pharynx, oral cavity, nasal pharynx, and nasal cavity. In this technique the user is asked to recite a piece of text or a list of numbers, using a microphone or any telephone connected to a computer. The computer records the user’s voice and converts it from analog to digital format. This recording is stored, and unique features are extracted from it to form a template. More than one sample is taken, and a statistical profile is made by comparing the various samples and determining the repeating patterns. Verification is then done by comparing statistical profiles. The advantages of this method are that the existing telephony infrastructure or a simple microphone can be used, and that it is non-intrusive and easy to use. The disadvantages are that the system is affected by background noise, has very low accuracy, and might be affected by changes in the voice due to aging, illness, or drinking.

      Facial Recognition

      This recognition process measures the overall facial structure, such as the distances between the eyes, nose, mouth, and jaw edges. Using these features, a template is formed and stored in the database. During the verification or identification process, an image of the person is obtained using a normal camera or a video camera and a template is formed using the facial features. This template is then compared with the one stored in the database. This system of recognition is currently used mainly in the verification process, with much success. To prevent people from fooling the system, the user is nowadays required to smile, blink, or move in a way that is human before verification. The advantages of this system are that it is non-intrusive and can be operated at low cost, because normal security cameras can be used to extract images. In addition, in places such as airports where a high level of security is required, the method can operate covertly (i.e., the image is taken without the knowledge of the individual). Facial recognition has the disadvantage of being highly dependent on the quality of the image obtained, which might be affected by appearance and the environment. The system generally has low accuracy and might cause problems in the case of identical twins. Furthermore, due to its ability to work covertly, it has the potential for privacy abuse.

      Access Control and Markup Languages

      Identity and access management is supported by several Extensible Markup Language (XML) based standards:

      Service Provisioning Markup Language (SPML)

      An XML-based framework, developed by OASIS, that allows the automation of user management tasks such as account creation, profile editing, and revocation. The goal of this language is to let organizations securely and quickly set up user interfaces for Web services and applications, by letting enterprise platforms such as Web portals, application servers, and service centers generate provisioning requests within and across organizations. This functionality of SPML allows multiple accounts with various access rights to be set up simultaneously across multiple different systems and applications. SPML is made up of three main entities:

          • Requesting Authority (RA) - the party or system authorized to issue provisioning requests on behalf of a party.
          • Provisioning Service Provider (PSP) - the software that listens for and responds to provisioning requests.
          • Provisioning Service Target (PST) - the entity on which the provisioning activities are carried out.

      Whenever a new employee gets hired, a given Requesting Authority (client) sends the Provisioning Service Provider (PSP) a set of requests in the form of a well-formed SPML document. Based on a pre-defined service execution model, the PSP takes the operations specified within the SPML document and executes provisioning actions against the pre-defined service targets (PSTs) or resources.

      The figure below shows the high-level schematic of the request components in an SPML model system.

      A: The RA constructs an SPML document subscribing to a pre-defined service offered by Provisioning Service Provider One (PSP One). B: PSP One takes the data passed in this SPML document, constructs its own SPML document, and sends it to PST One, which is an independent resource that provides an SPML-compliant service interface. C: To fully service the request, PSP One then forwards a provisioning request to a second network service - Provisioning Service Provider Two (PSP Two). D: PSP Two offers a provisioning service to a different resource (Resource E).

      Figure 1.1. SPML Provisioning Flow

      Security Assertion Markup Language (SAML)

      SAML is an XML-based open standard data format for exchanging authentication and authorization data between parties, most commonly between an identity provider and a service provider. SAML is based on the concept of assertions (statements about a user), which can be passed around. It provides a standard request/response protocol for exchanging XML messages. A user whose identity is established and verified in one domain can invoke services in another domain.
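
      To make the idea of an assertion concrete, the sketch below builds a minimal SAML-style authentication statement with Python’s standard xml.etree module. The element names, attribute names, and values are simplified for illustration and do not follow the full SAML 2.0 schema.

```python
import xml.etree.ElementTree as ET

def build_assertion(subject, issuer):
    """Build a simplified, SAML-style authentication assertion:
    who is asserted (Subject/NameID) and who asserts it (Issuer)."""
    assertion = ET.Element("Assertion", attrib={"Issuer": issuer})
    subj = ET.SubElement(assertion, "Subject")
    ET.SubElement(subj, "NameID").text = subject
    # A statement that the subject was authenticated, and roughly how
    ET.SubElement(assertion, "AuthnStatement",
                  attrib={"AuthnContext": "PasswordProtectedTransport"})
    return ET.tostring(assertion, encoding="unicode")

# Hypothetical identity provider asserting the user "joe"
print(build_assertion("joe", "idp.example.com"))
```

      In a real deployment the assertion would also carry validity times and a digital signature so the service provider can trust it; those parts are omitted here.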

          • Cross-Domain Single Sign-On (SSO)

      When a user authenticates to one domain (Web site) and then is able to access resources at other domains (Web sites). Figure 1.2 shows a user, Joe, who is authenticated at one site and can access resources at two other sites.

      Figure 1.2. Single Sign-On

          • Federated Identity

      Service providers agree on a way to refer to a single user even if he/she is known to each of them under a different name. Figure 1.3 shows the user Joe, authenticated at one site as johndoe, who can access resources at two other sites (as jdoe and johns) without being re-authenticated.

      Figure 1.3. Federated Identity

      SAML provides authentication assertions to federated identity management systems to allow business-to-business (B2B) and business-to-consumer (B2C) transactions. In the previous example, the user is the principal, the site that authenticates him is the identity provider, and the sites offering resources are the service providers. SAML carries authentication information, such as a password, digital certificate, or key, to the authentication system; however, it does not specify how the authentication data should be used. Both the sending and the receiving systems have to be configured to use the same type of authentication data.

          • Web services - provide means by which security assertions (authentication, authorization, attributes) about messages and service requesters can be exchanged. Web services is a collection of technologies and standards that allow services (weather updates, email, stock tickers) to be provided and served from one place.

      Extensible Access Control Markup Language (XACML)

      XACML is an open standard XML-based language designed to express security policies and access rights to information for Web services, digital rights management (DRM), and enterprise security applications. XACML was developed by a team that included people from Entrust Inc., IBM, Quadrasis Inc., Sterling Commerce Inc., Sun, and BEA Systems Inc. XACML provides for fine-grained control of activities (such as read, write, copy, delete) based on several criteria, including the following:

          • Attributes of the user requesting access (e.g., “Only division managers and above can view this document.”)
          • The protocol over which the request is made (e.g., “This data can be viewed only if it is accessed over HTTPS.”)
          • The authentication mechanism (e.g., “The requester must have been authenticated using a digital device.”)
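
      The attribute-based rules above can be modeled as a tiny policy evaluator. The rule structure, attribute names, and deny-by-default combining logic here are invented for illustration and are far simpler than real XACML policies.

```python
def evaluate(request, rules):
    """Return 'Permit' only if every rule's condition holds
    for the request's attributes; otherwise 'Deny'."""
    return "Permit" if all(rule(request) for rule in rules) else "Deny"

# Hypothetical rules mirroring the three example criteria above
rules = [
    lambda r: r["role"] in ("division_manager", "director"),  # user attribute
    lambda r: r["protocol"] == "https",                       # transport protocol
    lambda r: r["auth_method"] == "certificate",              # authn mechanism
]

request = {"role": "director", "protocol": "https", "auth_method": "certificate"}
print(evaluate(request, rules))  # Permit
```

      The point of the sketch is that the decision is computed from attributes of the subject, the transport, and the authentication event, rather than from a simple identity-to-resource list.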

      Simple Object Access Protocol (SOAP)

      SOAP is an XML-based protocol specification for exchanging structured information in the implementation of Web services in computer networks. It uses XML for its message format and usually relies on an Application Layer protocol, most commonly HTTP or SMTP, for message negotiation and transmission. Authentication requests and data are packaged in a SAML format and encapsulated into a SOAP message. The SOAP message then gets transmitted over an HTTP connection.

      Service Oriented Architecture (SOA)

      SOA is a set of principles and methodologies for providing independent services residing on different systems in different domains in a consistent manner.
      The Organization for the Advancement of Structured Information Standards (OASIS) develops and keeps track of all these standardized languages.


      Identification answers the question “Who is the user?”. Authentication answers the question “Is the user really who he/she claims to be?”. Authorization answers the questions “What resources is the user allowed to access? What actions is the user allowed to perform on these resources?”.
      The authorization specifies what level of access a particular authenticated user should have to secured resources controlled by the system. Authorization systems depend on secure authentication systems to ensure that users are who they claim to be and thus prevent unauthorized users from gaining access to secured resources.

      Auditing and Accountability

      Access Control Technologies

      Single Sign-On (SSO) Systems

      This technology allows users to log in only once and gain access to multiple different systems without being prompted to log in again at each of them. It also allows security administrators to add, change, or revoke user privileges on one central system. The advantages of SSO are:

          • Improved user productivity
          • Improved developer productivity
          • Simplified Administration

      The disadvantages of SSO are:

          • Single point of attack
          • Unattended/unlocked workstations
          • Difficult to retrofit


      Kerberos is an open source authentication protocol that may be used to support Single Sign-On. Kerberos is named after the three-headed dog that guards the entrance to the underworld in Greek mythology. Kerberos was developed under Project Athena at the Massachusetts Institute of Technology (MIT). More information on Kerberos is available on

      Kerberos Characteristics

      Kerberos relies upon symmetric-key cryptography (in current versions, the Advanced Encryption Standard, AES), and it provides end-to-end security for authentication traffic between the client and the key distribution center (KDC). Kerberos provides confidentiality and integrity for authentication traffic. Kerberos uses port 88 by default. Since Kerberos is an open protocol, it can be adapted to work properly within different products and environments. There are different “flavors” of Kerberos available to fulfill requirements for different functionality.

      Kerberos has the following components:

          • Principal A client or service
          • Realm A logical Kerberos network
          • Ticket Data that authenticates a principal’s identity
          • Credentials A ticket and a service key
          • Key Distribution Center (KDC) Provides the authentication service and key distribution
          • Ticket Granting Service (TGS) The KDC component that issues service tickets to authenticated principals
          • Ticket Granting Ticket (TGT) A ticket that lets a principal request service tickets from the TGS without re-entering the password

      Kerberos Authentication Process

      In a Kerberos environment, the authentication process begins at login.

          • 1. A principal, Alice, enters her user name and password into her client workstation.
          • 2. The Kerberos software passes the authentication information to the authentication service (AS) on the KDC, which scans the database and, based on Alice’s password, locates her master key.
          • 3. The KDC sends back to Alice a session key (Sa), encrypted with Alice’s secret key, and a Ticket-Granting Ticket (TGT). The TGT is encrypted with the TGS’s secret key (Ktgs) and contains a second copy of Alice’s session key (Sa), the ticket validity time, and Alice’s username.
          • 4. Alice decrypts the session key and uses it to request permission to print from the TGS (Ticket Granting Service) by sending the TGT along with an authenticator encrypted with the session key (Sa).
          • 5. After confirming that Alice has a valid session key, the TGS sends Alice a C/S session key (second session key) to use to print. The TGS also sends a service ticket, encrypted with the printer’s key.
          • 6. Alice connects to the printer. The printer, after seeing a valid C/S session key, knows that Alice is authenticated and has permission to print.
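
      The exchange above can be modeled with a toy symmetric cipher: a dict pairing a key with its payload stands in for real encryption, and all principal names and keys are invented. Real Kerberos uses AES plus timestamps and lifetimes, which this sketch deliberately omits.

```python
def encrypt(key, payload):
    return {"key": key, "payload": payload}  # stand-in for symmetric encryption

def decrypt(key, blob):
    if blob["key"] != key:
        raise ValueError("wrong key")
    return blob["payload"]

# Keys the KDC knows: each principal's master key, the TGS key, the printer's key
K_ALICE, K_TGS, K_PRINTER = "alice-master", "tgs-secret", "printer-secret"

# Steps 2-3: the AS returns a session key encrypted for Alice, plus a TGT
session_key = "Sa"
as_reply = encrypt(K_ALICE, session_key)
tgt = encrypt(K_TGS, {"user": "alice", "session_key": session_key})

# Step 4: Alice recovers the session key and presents the TGT to the TGS
sa = decrypt(K_ALICE, as_reply)

# Step 5: the TGS opens the TGT, checks the session key, issues a service ticket
tgt_contents = decrypt(K_TGS, tgt)
assert tgt_contents["session_key"] == sa
service_ticket = encrypt(K_PRINTER, {"user": "alice", "cs_key": "Sb"})

# Step 6: only the printer can open the service ticket, so it trusts the identity
print(decrypt(K_PRINTER, service_ticket)["user"])  # alice
```

      Note how Alice’s password never travels over the network: trust flows entirely through what each party can and cannot decrypt.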

      Kerberos Advantages

          • Kerberos prevents replay and eavesdropping attacks by utilizing sequence numbers and timestamps
          • Kerberos provides mutual authentication

      Kerberos Disadvantages

          • Single point of failure. When the Kerberos server is down, no one can log in. Redundancy is mandatory for the KDC.
          • Authentication fails if the clocks of the involved hosts are not synchronized. Usually Network Time Protocol daemons are used to keep the host clocks synchronized.
          • KDC stores the cryptographic keys of all principals. A compromise of the KDC can lead to the compromise of every key in the Kerberos realm.
          • The KDC cannot detect, and is vulnerable to, password guessing and brute-force attacks. Typically these attacks should be mitigated by the operating system.
          • Kerberos is not able to mitigate a compromised host. Decrypted session keys can be captured from memory or cache.


      SESAME (Secure European System for Applications in a Multi-vendor Environment) is a single sign-on system designed to extend Kerberos functionality and minimize its weaknesses. SESAME adds to Kerberos: heterogeneity, sophisticated access control features, the scalability of public key systems, better manageability, auditing, and delegation. The addition of public key (asymmetric) encryption is the biggest improvement, and it addresses the Kerberos weakness of plaintext storage of symmetric keys.
      Instead of tickets, SESAME uses Privileged Attribute Certificates (PACs), which are issued by the Privileged Attribute Server (PAS). SESAME can either be implemented as a stand-alone technology or serve as an extension to Kerberos. The authentication functionality of Kerberos and SESAME can be used by any application via the Generic Security Services Application Programming Interface (GSS-API). More information on SESAME is available at:

      Thin Clients

      Thin clients are diskless client systems that connect over a network to a server to access network resources. Thin clients originated in multi-user systems - mainframes that were accessed via dumb terminals. The idea of thin clients has been replicated in modern client-server environments using interface software applications that act as clients, while all the storage and processing take place on the server.
      The advantages of thin clients are:

          • Strong access control and SSO is implemented
          • Thin clients are less expensive than fully powered PCs
          • Allows easier administration

      Directory Services

      Directory service is a centralized database of objects that includes information about resources available to a network along with information about subjects such as users and computers. Directory service is considered a Single Sign-On technology. Directory services were described earlier in this domain: Directory Services

      Access Control Models

      Access control models are frameworks that define how subjects access objects. The three most widely recognized access control models are discretionary, mandatory, and non-discretionary (role-based) access control.

      Discretionary Access Control

      This method is based on the data owner’s discretion: the data owner decides who is allowed to access the object and what privileges they have. Most operating systems, such as Windows, Linux, Mac, and most flavors of UNIX, are based on the DAC model. This method provides a fine level of granularity. Discretionary access control models do not offer a centrally controlled management system. The controls are typically deployed by using ACLs or filters that are controlled by the owners of the data. The DAC model cannot be implemented in a multilevel security (MLS) environment. DAC systems are typically used by general purpose computers.
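
      A DAC-style ACL, where the object’s owner grants permissions at their own discretion, can be sketched as follows. The user names, object, and permission strings are all hypothetical.

```python
class DacObject:
    """An object whose owner controls its access control list."""

    def __init__(self, owner):
        self.owner = owner
        self.acl = {owner: {"read", "write"}}  # the owner gets full access

    def grant(self, granting_user, subject, permission):
        # Discretionary: only the owner may change the ACL
        if granting_user != self.owner:
            raise PermissionError("only the owner may change the ACL")
        self.acl.setdefault(subject, set()).add(permission)

    def allowed(self, subject, permission):
        return permission in self.acl.get(subject, set())

doc = DacObject(owner="alice")
doc.grant("alice", "bob", "read")   # granted at Alice's discretion
print(doc.allowed("bob", "read"))   # True
print(doc.allowed("bob", "write"))  # False
```

      Note that nothing here consults labels or a central policy: whatever the owner decides is what the system enforces, which is exactly the property that makes DAC unsuitable for multilevel security.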

      Mandatory Access Control (MAC)

      Mandatory access control relies upon the use of classification labels on objects and clearance labels on subjects. Subjects are labeled by their level of clearance and objects are labeled by their level of classification. Multilevel security is implemented using this model. In a mandatory access control system, subjects are able to access objects that have the same or a lower level of classification. An extension of this model is known as “need to know” - subjects with higher clearance levels are granted access to highly sensitive resources only if their work tasks require such access. MAC is less flexible and scalable than DAC, but it is more secure. In systems that are based on the MAC model, users do not have the right to install software, change file permissions, or add new users. These systems are typically very specialized and used for specific purposes, mainly by government agencies to protect top secret information.

      Bell-LaPadula Model

      The Bell-LaPadula model is a multilevel security system model. This model is based on access control rules which use security labels on objects and clearances for subjects. This model focuses primarily on protecting data confidentiality and controlled access to classified information. The Bell–LaPadula model is built on the concept of a state machine with a set of allowable states in a computer network system. The transition from one state to another state is defined by transition functions. The main security rules that apply to this model are:

          • 1. Subjects cannot read data that is at a level higher than the subject’s clearance. This is also called the No Read Up or the Simple Security property.
          • 2. Subjects cannot write to data that is at a level lower than the subject’s clearance. This is also called the No Write Down or the * (star) property.
          • 3. The classification of a subject or an object cannot be changed while it is being referenced. This is also called the Tranquillity Principle.
            • When security levels cannot be changed during the normal operation of a system, the Strong Tranquillity Principle applies.
            • When security levels can change, but never in a way that would violate a defined security policy, the Weak Tranquillity Principle applies.

      These rules prevent information leakage. For example, a subject working at the Secret level cannot copy and paste Secret data into a Confidential document.
      Under this model, subjects can write only to objects at the same or a higher level, and can read only objects at the same or a lower level. Subjects therefore have read/write access only to objects at their own level.
      The disadvantages of this model are that it does not address how data is classified, it has no defined policy for changing data access rights, and it does not address covert channels.
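      The No Read Up and No Write Down rules can be sketched as simple comparisons of label ranks. The following is a minimal illustration, not a real reference monitor; the level names and their ordering are assumed for the example:

```python
# Hypothetical classification levels, ordered lowest to highest.
LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]

def rank(label):
    return LEVELS.index(label)

def can_read(subject_clearance, object_classification):
    # Simple Security property: no read up.
    return rank(subject_clearance) >= rank(object_classification)

def can_write(subject_clearance, object_classification):
    # * (Star) property: no write down.
    return rank(subject_clearance) <= rank(object_classification)

# A Secret-cleared subject may read Confidential data but not write to it,
# which is exactly what prevents the copy-and-paste leak described above.
print(can_read("Secret", "Confidential"))   # True
print(can_write("Secret", "Confidential"))  # False
```

      Note that writing up ("Secret" subject writing to a "Top Secret" object) is permitted by these two rules, which is why real systems usually add the strong star property restricting writes to the subject's own level.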

      Biba Integrity Model

      Unlike the Bell-LaPadula model, which addresses only data confidentiality, the Biba model focuses on data integrity. Under this model, subjects and objects are grouped into hierarchical levels of integrity. The model prevents subjects from corrupting objects at higher levels and from being corrupted by objects at lower levels. A third rule, the invocation property, states that a subject cannot invoke (request service from) a subject at a higher integrity level. Biba was created to address a weakness of the Bell-LaPadula model, which covers only data confidentiality. The security rules that apply to this model are the reverse of the Bell-LaPadula rules:

          • 1. Subjects cannot read objects at a lower integrity level. This is also called No Read Down or the Simple Integrity property.
          • 2. Subjects cannot modify objects at a higher integrity level. This is also called No Write Up or the * (Star) Integrity property.
      This model was designed for military environments; however, it is not used much in the real world.
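      Because Biba's rules are the reverse of Bell-LaPadula's, the same label-comparison sketch works with the comparisons inverted. The integrity level names below are hypothetical:

```python
# Hypothetical integrity levels, ordered lowest to highest.
LEVELS = ["Low", "Medium", "High"]

def rank(label):
    return LEVELS.index(label)

def can_read(subject_level, object_level):
    # Simple Integrity property: no read down.
    return rank(subject_level) <= rank(object_level)

def can_write(subject_level, object_level):
    # * (Star) Integrity property: no write up.
    return rank(subject_level) >= rank(object_level)

# A High-integrity subject must not read Low-integrity data
# (it could be corrupted by it), but writing down is allowed.
print(can_read("High", "Low"))   # False
print(can_write("High", "Low"))  # True
```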

      Non-Discretionary Access Control

      Non-discretionary access controls base access decisions on the subject’s role or position in an organization rather than on the discretion of the data owner. These systems are centrally administered and enforce the security policy of the organization.

      Role-Based Access Control (RBAC)

      Role-Based Access Control is the most popular model used in organizations. RBAC associates permissions with functions, jobs, or roles within an organization. Every role is a collection of transactions - the activities that someone in that role is permitted to carry out.
      The main security rules that apply to this model are:

          • A subject can execute a transaction only if the subject has an active role. This rule is called Role assignment.
          • A subject’s active role must be an authorized role for that subject. This rule is called Role authorization.
          • A subject can execute a transaction only if the transaction is authorized for one of the subject’s active roles. This rule is called Transaction authorization.
      Under this model an individual possesses a set of authorized roles, which the individual is allowed to fill at various times, and a set of active roles, which the individual currently occupies.
      This model is very flexible and easy to administer. RBAC recognizes that a subject often has various functions within the organization and allows the subject to transition between roles without needing to change identities.
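      The three RBAC rules can be sketched as a couple of lookup tables. This is a minimal illustration; the role names, user names, and permission names are made up for the example:

```python
# Hypothetical role -> permitted transactions mapping.
ROLE_PERMISSIONS = {
    "clerk":   {"read_ledger"},
    "auditor": {"read_ledger", "read_audit_log"},
    "manager": {"read_ledger", "approve_payment"},
}

# Hypothetical user -> authorized roles mapping (role assignment
# and role authorization; here every authorized role is treated
# as active for simplicity).
USER_ROLES = {
    "alice": {"clerk", "auditor"},
    "bob":   {"manager"},
}

def authorized(user, transaction):
    # Transaction authorization: permitted only if some role
    # held by the user includes the transaction.
    return any(transaction in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, set()))

print(authorized("alice", "read_audit_log"))   # True
print(authorized("alice", "approve_payment"))  # False
```

      A fuller implementation would track the user's currently active roles separately from the authorized set, so that a user can step into and out of roles during a session.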

      Lattice-Based Access Control (LBAC)

      LBAC is an access control model based on the interactions between subjects and objects. LBAC is also known as label-based access control (or rule-based access control), as opposed to role-based access control (RBAC). The model mainly focuses on confidentiality, but it addresses integrity as well. In this model, subjects and objects have labels. Each subject’s label contains the clearance and the need-to-know categories that the subject can access. In addition, every pair of labels has a greatest lower bound (meet) and a least upper bound (join) of access rights. If subjects A and B need access to an object, the security level of the access is defined as the meet of the levels of A and B. If two objects X and Y are combined into a new object Z, Z is assigned the security level formed by the join of the levels of X and Y.
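      The meet and join operations can be sketched over labels that pair a classification level with a set of need-to-know categories. This is a minimal sketch; the level ordering and category names are assumed for illustration:

```python
# Hypothetical classification levels, ordered lowest to highest.
LEVELS = ["Unclassified", "Secret", "Top Secret"]

def rank(level):
    return LEVELS.index(level)

def join(a, b):
    # Least upper bound: the higher level, union of categories.
    lvl = a[0] if rank(a[0]) >= rank(b[0]) else b[0]
    return (lvl, a[1] | b[1])

def meet(a, b):
    # Greatest lower bound: the lower level, intersection of categories.
    lvl = a[0] if rank(a[0]) <= rank(b[0]) else b[0]
    return (lvl, a[1] & b[1])

x = ("Secret", {"NATO", "Crypto"})
y = ("Top Secret", {"Crypto"})
print(join(x, y))  # level 'Top Secret', union of the category sets
print(meet(x, y))  # level 'Secret', intersection of the category sets
```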

      Task-Based Access Control

      Task-based access control (TBAC) is another non-discretionary access control model, related to RBAC. It bases access on the tasks each subject must perform; the difference from RBAC is that it focuses on specific tasks instead of roles. TBAC is well suited for distributed computing and information processing activities with multiple points of access, control, and decision making, such as those found in workflow and distributed process and transaction management systems.

      Other Security Models

      Clark-Wilson Integrity Model

      This model attempts to capture the security requirements of commercial applications, as opposed to military applications. It focuses on data integrity and is built on the following concepts:

          • Separation of Duties - Each user is associated with a valid set of programs that they can run on the system. This prevents unauthorized modifications and preserves data integrity and consistency.
          • Authentication - Each user’s identity must be properly authenticated.
          • Audit - Every modification must be logged.
          • Well-formed transactions - Subjects are allowed to manipulate data only in constrained ways; they no longer access objects directly but must access them through programs.
      The Clark-Wilson integrity model provides a foundation for specifying and analyzing an integrity policy for a computer system. The integrity is maintained by preventing corruption of data items in a system due to either error or malicious intent. An integrity policy describes how the data items in the system should be kept valid from one state to the next and specifies the capabilities of the various principals in the system. The policy is constructed in terms of the following categories:
          • Constrained Data Items - CDIs are the objects whose integrity is protected.
          • Unconstrained Data Items - UDIs are objects not covered by the integrity policy.
          • Transformation Procedures - TPs are the only procedures allowed to modify CDIs or to take arbitrary user input and create new CDIs. They are designed to take the system from one valid state to another.
          • Integrity Verification Procedures - IVPs are procedures that verify that the integrity of CDIs is maintained.
      There are two kinds of rules: Certification and Enforcement.
          • C1 All IVPs must ensure that CDIs are in a valid state when the IVP is run.
          • C2 All TPs must be certified as integrity-preserving.
          • C3 Assignment of TPs to users must satisfy separation of duty.
          • C4 The operation of TPs must be logged.
          • C5 TPs executing on UDIs must result in valid CDIs.
          • E1 Only certified TPs can manipulate CDIs.
          • E2 Users must only access CDIs by means of TPs for which they are authorized.
          • E3 The identity of each user attempting to execute a TP must be authenticated.
      The following are the three main goals of the Clark-Wilson Integrity model:
          • Prevent unauthorized users from making modifications
          • Prevent authorized users from making improper modifications (separation of duties)
          • Maintain internal and external consistency (well-formed transaction)
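      The enforcement rules can be sketched as simple checks applied before a transformation procedure runs. This is a minimal illustration with hypothetical TP, user, and CDI names, not a complete reference monitor:

```python
# E1: only certified TPs may manipulate CDIs (certification is assumed
# to have been done out of band, per C2).
CERTIFIED_TPS = {"post_entry", "reconcile"}

# E2/C3: which users are authorized for which TPs; the assignment
# is assumed to satisfy separation of duty.
USER_TP_AUTH = {
    "clerk1":  {"post_entry"},
    "auditor": {"reconcile"},
}

def run_tp(user, tp, cdi, log):
    if tp not in CERTIFIED_TPS:
        return False  # E1: uncertified procedures cannot touch CDIs
    if tp not in USER_TP_AUTH.get(user, set()):
        return False  # E2: user is not authorized for this TP
    log.append((user, tp, cdi))  # C4: the operation is logged
    return True

audit_log = []
print(run_tp("clerk1", "post_entry", "ledger", audit_log))  # True
print(run_tp("clerk1", "reconcile",  "ledger", audit_log))  # False
```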

      Chinese Wall Model

      The Chinese Wall policy model was proposed by Brewer and Nash in 1989. It addresses conflicts of interest and the inadvertent disclosure of information by a consultant or contractor, focusing on confidentiality. Under this model a subject is allowed to access information from any company as long as that subject has never accessed information from a different company in the same conflict-of-interest class. The access rights any subject is given depend on that subject’s history of past accesses.
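      The history-based decision can be sketched as follows. Company names and conflict classes are invented for the example; a real system would also track which datasets within each company were touched:

```python
# Hypothetical companies grouped into conflict-of-interest classes.
CONFLICT_CLASS = {"BankA": "banks", "BankB": "banks", "OilCo": "energy"}

history = {}  # subject -> set of companies already accessed

def may_access(subject, company):
    accessed = history.setdefault(subject, set())
    for prior in accessed:
        # Deny if the subject already accessed a *different* company
        # in the same conflict class.
        if prior != company and CONFLICT_CLASS[prior] == CONFLICT_CLASS[company]:
            return False
    accessed.add(company)
    return True

print(may_access("carol", "BankA"))  # True
print(may_access("carol", "OilCo"))  # True: different conflict class
print(may_access("carol", "BankB"))  # False: competitor of BankA
```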

      Non-Interference Model

      The Non-Interference Model is a multilevel security model first described by Goguen and Meseguer in 1982. A computer system has the non-interference property if and only if any sequence of low inputs produces the same low outputs, regardless of what the high-level inputs are. In other words, a security domain A is non-interfering with domain B if no input by A can influence subsequent outputs seen by B. This model addresses the illicit flow of information by requiring complete control of the information-flow process and by creating a policy that specifies the allowed information flows between security domains.

      State Machine Model

      In the state machine model, the state of a machine is captured in order to verify the security of a system. A given state consists of all current permissions and all current instances of subjects accessing objects. The system is considered secure if subjects can access objects only by means that are consistent with the security policy. Since the state of a system is never constant and is always in transition, security has to be maintained across all state transitions.

      Harrison-Ruzzo-Ullman Model (HRU) or Access Matrix Model (AMM)

      The Harrison-Ruzzo-Ullman model is an operating-system-level computer security model that addresses the access rights of subjects and the integrity of those rights. It addresses issues regarding changing access rights and the creation and deletion of subjects and objects. It is based on a matrix with a row for each subject and a column for each object, whose entries are generic rights (r, w, x = read, write, execute).
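      The matrix and the rights-changing primitives can be sketched with nested dictionaries. HRU defines six primitive operations (entering and deleting rights, and creating and deleting subjects and objects); this minimal sketch, with hypothetical subject and object names, shows two of them:

```python
# Access matrix: rows are subjects, columns are objects,
# entries are sets of generic rights (r, w, x).
matrix = {
    "alice": {"file1": {"r", "w"}, "file2": {"r"}},
    "bob":   {"file1": {"r"}},
}

def enter_right(subject, obj, right):
    # HRU primitive: enter right r into cell (subject, obj).
    matrix.setdefault(subject, {}).setdefault(obj, set()).add(right)

def delete_right(subject, obj, right):
    # HRU primitive: delete right r from cell (subject, obj).
    matrix.get(subject, {}).get(obj, set()).discard(right)

enter_right("bob", "file2", "x")
delete_right("alice", "file1", "w")
print(matrix["bob"]["file2"])    # {'x'}
print("w" in matrix["alice"]["file1"])  # False
```

      HRU's key result is that, given such primitives composed into commands, deciding whether a right can ever leak into a cell (the safety question) is undecidable in general.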

      Information Flow Model

      These models focus on the flow of information and are designed to prevent unauthorized information flows, whether within the same classification level or between classification levels.

      McLean (Tranquility) Model

      There are two properties which define how the system will issue security labels for objects. The Strong Tranquility Property states that security labels cannot be changed while the system is operating. The Weak Tranquility Property states that security labels will not change in a way that conflicts with defined security properties.

      Graham-Denning Model

      The Graham-Denning Model focuses on how subjects and objects should be securely created and deleted, and how to assign specific access rights. It is mainly used in access control mechanisms for distributed systems. The model has eight basic protection rules (actions) that outline:

          • How to securely create an object.
          • How to securely create a subject.
          • How to securely delete an object.
          • How to securely delete a subject.
          • How to securely provide the read access right.
          • How to securely provide the grant access right.
          • How to securely provide the delete access right.
          • How to securely provide the transfer access right.
      This model is based on the Access Control Matrix model, where rows correspond to subjects and columns correspond to objects and subjects.

      Access Control Protocols and Frameworks

      Centralized Access Control

      Centralized access control concentrates access control in one logical point for a system. A small team or individual manages centralized access control. Administrative overhead is lower because all changes are made in a single location and a single change affects the entire system. However, centralized access control also presents a single point of failure. Examples of centralized access control protocols are Remote Authentication Dial-In User Service (RADIUS), Terminal Access Controller Access Control System (TACACS) and Diameter.

      Remote Authentication Dial-In User Service (RADIUS)

      RADIUS is a centralized AAA client/server network protocol. This protocol is often used by ISPs and enterprises to manage access to the Internet or internal networks, wireless networks, and integrated e-mail services. RADIUS is an application level protocol that uses the User Datagram Protocol (UDP) ports 1812 (authentication) and 1813 (accounting).
      RADIUS serves three functions:

          • Authenticates a subject’s credentials against an authentication database.
          • Authorizes users by allowing specific users’ access to specific data objects.
          • Accounts for each data session by creating a log entry for each RADIUS connection made.

      Terminal Access Controller Access Control System (TACACS, XTACACS, and TACACS+)

      TACACS is a centralized access control protocol, commonly used in UNIX networks, that communicates with an authentication server. It allows a remote access server to consult an authentication server in order to determine whether the user has access to the network.

      TACACS combines its authentication and authorization processes. TACACS uses fixed passwords for authentication. TACACS uses either TCP or UDP on port 49 by default.

      XTACACS is a later version of TACACS introduced by Cisco in 1990. It separates authentication, authorization, and auditing processes. XTACACS allows users to employ dynamic (one-time) passwords.

      TACACS+ is an entirely new protocol and is not compatible with TACACS or XTACACS. TACACS+ supports two-factor user authentication and provides functionality very similar to RADIUS, with several important differences. TACACS+ uses the Transmission Control Protocol (TCP), whereas RADIUS uses the User Datagram Protocol (UDP). Whereas RADIUS combines authentication and authorization in a user profile, TACACS+ separates the authentication, authorization, and accounting operations, which provides more flexibility in how remote users are authenticated. TACACS+ encrypts the entire body of each message as well as the password, unlike RADIUS, which encrypts only the password and sends the rest of the message in cleartext. RADIUS works over PPP connections, while TACACS+ also supports other protocols, such as AppleTalk, NetBIOS, and IPX.

      RADIUS is more appropriate for environments that implement simple username and password authentication and users need only Allow or Deny permissions for access, such as ISPs. TACACS+ is more appropriate for environments that require more complex authentication and authorization techniques, such as corporate networks.


      Diameter is an AAA network protocol built upon RADIUS to overcome its many limitations. The protocol’s name comes from the fact that a diameter is twice the radius. The Diameter protocol is not directly backward compatible with RADIUS. Diameter is designed to be compatible with the Extensible Authentication Protocol (EAP) and uses reliable transport protocols such as TCP and SCTP. Diameter uses IPsec or TLS for network-layer security. The IANA has assigned TCP and SCTP port number 3868 to Diameter.

      Decentralized Access Control

      Decentralized access control, also called distributed access control, gives each site control over its own data. With this model, different sites may employ different access control models and policies and have different levels of security, which can lead to an inconsistent view of the overall system.