CISSP101

Get Prepared And Pass The CISSP Exam

Security Architecture and Models

Security Architecture and Design describes the logical hardware, operating system, and software security components, and how to implement those components to architect, build, and evaluate the security of computer systems. Security architecture is a view of the overall system architecture from a security standpoint: how the system is put together to satisfy the security requirements.

Computer Hardware Architecture

The main hardware components of a computer system are the CPU, primary and secondary memory, and input/output devices. A group of conductors called a bus interconnects these computer elements. A bus can be organized into subunits, such as the address bus, the data bus, and the control bus.

Random Access Memory (RAM)

RAM is the computer's primary storage and it is very fast. This memory stores program instructions and data and is directly accessible by the CPU. It is a volatile type of memory: its contents are lost when the power is turned off.
The types of RAM are:
Dynamic RAM (DRAM) - holds data for a short period of time. DRAM stores each bit in a storage cell that consists of a capacitor and a transistor. Since the capacitors tend to lose their charge quickly, these storage cells need to be refreshed periodically, i.e. given a new electronic charge every few milliseconds.
Static RAM (SRAM) - uses a different type of technology that does not require the storage cells to be refreshed. Since the storage cells do not need to be constantly refreshed, SRAM is much faster than DRAM.
Synchronous DRAM (SDRAM) - DRAM that is synchronized with the clock speed of the CPU, resulting in synchronized timing of the CPU and RAM activities.
Extended Data Output RAM (EDO RAM) - faster than DRAM because a new access cycle can be started while the data output of the previous cycle is kept active. This allows a certain amount of overlap in operation (pipelining), which improves performance.
Burst EDO RAM (BEDO DRAM) - faster still, as it sends data back to the computer from one read operation at the same time that it is reading in the address of the next data to be sent.
Double Data Rate SDRAM (DDR SDRAM) - operates transactions on both the rising and falling edges of the clock cycle rather than on just the rising edge, which effectively doubles the transfer rate.

Read-Only Memory

ROM is nonvolatile, built-in memory that contains data that cannot be easily altered. The software stored in this memory is called firmware. Unlike battery-backed CMOS memory, ROM retains its contents without any power source. There are several types of ROM:
Programmable Read-Only Memory (PROM) - can be modified only once after it is manufactured. During the modification process an electrical current is supplied to specific cells in the ROM; this process blows a fuse in the cells and is also known as “burning the PROM”. There is no margin for error when modifying a PROM.
Erasable Programmable Read-Only Memory (EPROM) is a PROM that can be erased (typically by exposure to ultraviolet light) and reused.
Electrically Erasable Programmable Read-Only Memory (EEPROM) is a ROM that can be erased and reprogrammed repeatedly through the application of a higher than normal voltage.

Virtual Memory

Virtual memory uses space on the system's hard drive, called swap space, to extend the RAM. Swapping involves moving the entire memory region associated with a process or application between RAM and disk.

Central Processing Unit (CPU)

The CPU is the brain of the computer: it carries out the instructions of a computer program and causes the processing to occur. The CPU has its own specific architecture, and the operating system must be designed to work with that architecture.
The chips of the CPU are composed of millions of transistors. The CPU is composed of the following units:
Arithmetic Logic Unit (ALU) - performs all the arithmetic and logical operations. The ALU is the brain of the CPU, just as the CPU is the brain of the computer.
Registers - temporary storage locations. There are two types of registers: general-purpose and dedicated registers. The program counter pointing to the memory location that contains the next instruction to be executed is an example of a dedicated register.
Control Unit – extracts instructions from memory, interprets and oversees their execution, referring to the ALU when necessary.
The CPU has two different modes of operation - the user and the privileged mode (also called kernel or supervisor mode). The program status word (PSW) indicates which mode the CPU should be working in.
Prefetch Unit - queues instructions in cache or RAM to assure that the CPU is in continuous operation. The prefetch unit tries to predict what data and instructions will be needed and retrieves them ahead of time in order to help avoid delays in processing.
Decode Unit - takes the instructions from the prefetch unit and translates them into a form that can be understood by the control unit and the ALU. The decoded instructions go to the control unit for processing.
Instruction Cache - the instruction cache works as an “input cache”; it is located close to the prefetch unit, and this is where the prefetch unit looks for instructions required by the CPU.
Data Cache - the data cache is referred to as an “output cache”. When a program completes executing, the result is sent to the data cache.
Bus Interface Unit - allows the core to communicate with other CPU components, such as the memory controller (which controls the instructions and data going between the CPU and the RAM) and other cores.
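
The interplay of these units can be pictured as a fetch-decode-execute loop. The following toy Python sketch is purely illustrative: the instruction set, register names, and memory layout are invented for this example and do not correspond to any real CPU.

# Toy fetch-decode-execute loop showing how a CPU steps through memory.
# The instruction set and register names are invented for this sketch.
memory = [
    ("LOAD", 5),    # load the value 5 into the accumulator
    ("ADD", 3),     # add 3 to the accumulator (the ALU does the arithmetic)
    ("STORE", 0),   # store the accumulator into data cell 0
    ("HALT", None),
]
data = [0]
accumulator = 0
program_counter = 0        # dedicated register pointing to the next instruction

while True:
    opcode, operand = memory[program_counter]   # fetch
    program_counter += 1
    if opcode == "LOAD":                        # decode and execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "STORE":
        data[operand] = accumulator
    elif opcode == "HALT":
        break

print(data[0])   # prints 8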

Application Architecture

Client-Server Model

The client-server model is a distributed application structure. In this model, clients are applications that request services, and servers are the programs that provide those services. A server, also called a host, runs one or more server programs which share their resources with clients. A client does not share any of its resources, but requests a server's content or service function. Servers await incoming requests, while clients initiate communication sessions.
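
As a minimal illustration of this request/response pattern, the Python sketch below runs a tiny TCP server in a background thread and has a client request a service from it. The host, port, and messages are arbitrary values chosen for the example.

# Minimal client/server sketch: the server awaits requests, the client initiates.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050           # arbitrary values for this example
ready = threading.Event()

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)                    # the server awaits incoming requests
        ready.set()
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"response to: " + request)

threading.Thread(target=server, daemon=True).start()
ready.wait()                             # make sure the server is listening first

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))         # the client initiates the session
    client.sendall(b"request")
    print(client.recv(1024).decode())    # response to: request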

Distributed/Decentralized Computing

A distributed system consists of multiple computers that interact with each other to accomplish a common goal. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers. A distributed system allows parts of the system to be located on separate computers in different locations. Some examples of distributed computing are aircraft control systems, industrial control systems, network file systems, and distributed databases.

Grid Computing

Grid computing is the federation of loosely coupled, heterogeneous, and geographically dispersed computer resources from multiple locations to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve processing a large number of files. The opposite of grid computing is conventional high-performance cluster computing. While a grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes.
Some examples of grid computing projects:
SETI@home - a search for extraterrestrial intelligence in which PC users worldwide donate unused processor cycles to help the search for signs of extraterrestrial life by analyzing signals coming from outer space.
Folding@home from Stanford - disease research that simulates protein folding, computational drug design, and other types of molecular dynamics. http://folding.stanford.edu/

Cluster Computing

A computer cluster consists of loosely or tightly connected computers that work together and they can be viewed as a single system. Usually the components of a cluster are connected to each other through fast local area networks. Clusters are deployed to improve performance and availability.

Peer to Peer (P2P)

Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the application and are said to form a peer-to-peer network of nodes. This is a very powerful concept, and the only requirements for a computer to join a peer-to-peer network are an Internet connection and P2P software. Examples of P2P software are BitTorrent, Gnutella, Kazaa, Winny, et cetera.

Web Services

Web services are used to convert an application into a web application. They are published, found, and used through the web. The basic building blocks of web services are XML + HTTP or JSON + HTTP; XML and JSON are formats for data exchange between clients and web services. The elements of the web services platform are listed below, followed by a sketch of a SOAP message:

      • Simple Object Access Protocol (SOAP) is used to transport XML data. It is a communication protocol and a format for sending messages.
      • Universal Description Discovery and Integration (UDDI) is a directory service where companies can register and search for Web services. UDDI is a directory of web service interfaces described by WSDL.
      • Web Services Description Language (WSDL) is an XML-based language for locating and describing Web services.
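
For orientation, the snippet below prints the rough shape of a SOAP request envelope carrying XML data; the namespace follows the SOAP 1.1 convention, while the GetPrice operation and its element names are made up for the example.

# Rough shape of a SOAP request envelope carrying XML data over HTTP.
# The GetPrice operation and its elements are invented for this sketch.
soap_envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <GetPrice xmlns="http://example.com/stock">
      <Item>IBM</Item>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

print(soap_envelope)   # this XML would be sent as the body of an HTTP POST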

JSON

JavaScript Object Notation, or JSON, is a lightweight data-interchange format. It is easy for humans to read and write and for machines to parse and generate. JSON is based on a subset of the JavaScript programming language. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language.
JSON is built upon two structures:
1. A collection of name/value pairs. In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array.
2. An ordered list of values. In most languages, this is realized as an array, vector, list, or sequence.
JSON is one of the most widely used formats for data exchange between clients and servers today.
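
A short Python example showing both structures: an object of name/value pairs and an ordered array of values. The field names are made up for illustration.

import json

# Structure 1: a collection of name/value pairs (a JSON object).
# Structure 2: an ordered list of values (a JSON array).
order = {
    "service": "inventory",            # made-up field names for illustration
    "items": ["keyboard", "mouse"],    # an ordered list of values
    "count": 2,
}

text = json.dumps(order)               # serialize to a JSON string for exchange
print(text)                            # {"service": "inventory", "items": ...}

parsed = json.loads(text)              # parse it back on the receiving side
print(parsed["items"][0])              # keyboard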

REST

Representational State Transfer, or REST, is a style of software architecture for distributed systems such as the World Wide Web. REST has emerged as a predominant web service design model. REST-style architectures consist of clients and servers. Clients initiate requests to servers; servers process requests and return appropriate responses. Requests and responses are built around the transfer of representations of resources. REST exemplifies four components of the Web:
1. Data originating server.
2. Gateways within the network
3. Proxies
4. Clients (browsers, mobile APPs or thick clients)
REST essentially governs the proper behavior of participants: it establishes the architectural relationships and the macro-interactions of web components without imposing limitations on the individual participants.
REST facilitates transactions between web servers by allowing loose coupling between the different services. REST is less strongly typed than its counterpart, SOAP. The REST style is easy for humans to read as it uses nouns and verbs. Unlike SOAP, REST does not require XML parsing and does not require a message header, so it uses less bandwidth. REST provides scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce latency.
REST has the following six constraints; the implementation of the individual components is left to the designer (a minimal request sketch follows the list):

      • Client-server - uniform interface separates clients from servers
      • Stateless - no client context is being stored on the server between requests
      • Cacheable - responses must implicitly or explicitly define themselves as cacheable or not.
      • Layered system - a client cannot tell whether it is connected directly to the end server or to an intermediary along the way.
      • Code on demand - the functionality of the servers can be temporarily extended or customized.
      • Uniform interface - each part is allowed to evolve independently, as the uniform interface simplifies and decouples the architecture.
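
A minimal sketch of a stateless REST interaction using Python's standard library. The URL is a placeholder and the resource layout is assumed purely for the example.

# Minimal REST-style request: the client asks for a representation of a
# resource identified by a URI. The URL below is a placeholder.
import json
import urllib.request

url = "https://example.com/api/customers/42"        # placeholder resource URI

request = urllib.request.Request(url, method="GET")
request.add_header("Accept", "application/json")    # ask for a JSON representation

with urllib.request.urlopen(request) as response:
    # Each request is self-contained (stateless): no client session state
    # is kept on the server between requests.
    customer = json.loads(response.read().decode())
    print(customer)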

Service Oriented Architecture (SOA)

SOA is an evolution of distributed computing. It is an architecture for defining, linking, and integrating reusable business services, and a method for organizing business services into business processes. Basically, SOA makes it possible for a business to add new features and services without having to create them from scratch; instead, they can be added or modified as needed, making it simple and efficient to expand the business. Because the services are built from the same reused components, the processes are more consistent. SOA uses a high level of abstraction that is independent of the application or infrastructure IT platform. It is built on standards that are supported by the major IT providers. SOA provides a strong ability to change and align IT with the business; change becomes as easy as disassembling and reassembling services into new business-aligned processes.
The components of an SOA architecture are:
Services are reusable components that represent business or operation tasks or processes. The reusability property allows new business processes to be created based on these services. Service interface definitions are available in some form of service registry.
SOA Infrastructure is the set of technologies that connects service consumers to services through a previously agreed-upon communication model. The communication model can be based on Web services, message-oriented middleware (MOM), Common Object Request Broker Architecture (CORBA).
Service Consumers are the clients that use the functionality provided by the services. The consumers are programmatically bound to the services.

Operating System Architecture

The operating system is a collection of software that manages computer hardware resources and provides common services for computer programs. The OS is initially loaded into the computer by a boot program, which loads the operating system in the computer’s random access memory.

The two main goals of the operating system are:
1. System goal - control the use of the system’s resources. The OS has to share the computer’s resources between a number of simultaneous users and multiple tasks. It should be easy to design, implement and maintain.
2. User goal - the operating system should be convenient to use, easy to learn, reliable, safe, secure, and fast.

The core software components of an operating system are collectively known as the kernel. The kernel is a bridge between applications and the processing performed at the computer hardware level, and it has unrestricted access to all the resources in the system. System calls are the interface between a process and the operating system kernel; they are explicit requests to the kernel made via software interrupts and provide access to the operating system services. Each system call is identified by a system call number, and its execution takes place in kernel mode.
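
As an illustration, reading a file from a user-mode program ultimately results in system calls to the kernel. In the Python sketch below the os module functions are thin wrappers over those calls; the file path is an assumption and may differ on your system.

# User-mode code never touches the hardware directly: it asks the kernel
# through system calls. The os functions below wrap open/read/close/getpid.
import os

fd = os.open("/etc/hostname", os.O_RDONLY)   # open(2); path assumed to exist
data = os.read(fd, 4096)                     # read(2); kernel copies data to our buffer
os.close(fd)                                 # close(2)

print(os.getpid())                           # getpid(2); ask the kernel for our process id
print(data.decode().strip())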

Kernel Architectures

Monolithic kernels

In the monolithic kernel each component of the operating system is contained within the kernel, has unrestricted system access and can communicate directly with any other component. It is implemented as a single process and all the components share the same address space. The problems of this architecture are:

      • the kernel components are not protected from each other
      • it is not easy to extend or modify
      • provides no information hiding (as opposed to modules, packages, classes)
      • each procedure can call any other procedure
      • errors are difficult to isolate
MS-DOS is an example of a monolithic operating system.

Layered approach

As operating systems became larger and more complex, the monolithic approach was largely abandoned in favor of a modular approach which grouped components with similar functionality into layers, helping operating system designers manage the complexity of the system. In this model the operating system is divided into a number of layers (levels), each built on top of the lower layers. The bottom layer (layer 0) is the hardware. The highest layer (layer N) is the user interface. Each layer uses only the functions and services provided by the layer below it. The lower layers provide services to the higher layers through an interface that hides their implementation. This structure allows the implementation of each layer to be modified without requiring modification of the adjacent layers. The benefits of the layered approach are:

      • simplified debugging and modification
      • imposes structure and consistency on the OS

Since a service request from a user process may pass through many layers of system software before it is serviced, the performance compares unfavorably to that of a monolithic kernel.
Many of today's operating systems, including Linux and Windows, implement some level of layering.

Microkernel

A microkernel is a reduced operating system core that contains only essential OS functions. The idea behind this approach is to minimize the kernel's functionality by executing as much functionality as possible in user mode. The kernel services typically include low-level memory management, inter-process communication, and basic process synchronization. Microkernels are highly modular, making them extensible, portable, and scalable. Program failures occurring in user mode do not affect the rest of the system. The main problem associated with the microkernel approach is performance overhead.

Modular Kernel design

The modular kernel design is a hybrid between the layered and the microkernel approach. Most modern operating systems implement kernel modules. The modular kernel design implements an object-oriented approach: each core component is separated and is loadable on demand. Since the modules are located inside the kernel, they do not require the overhead of message passing, which improves performance. Mac OS X implements a hybrid approach: it has a Mach microkernel combined with a BSD kernel. The BSD kernel provides support for the command line interface, networking, the file system, the POSIX API, and threads. The Mach kernel manages memory, Remote Procedure Call, Inter Process Communication, and message passing.

OS Architecture Concepts

Process/Task Management

When an instance of an application is executed by the operating system, a unit called a process is created to manage that instance. Process management encompasses process creation, destruction, and basic interprocess communication and synchronization. The OS performs these functions by allocating resources to processes, enabling the processes to share and exchange information, protecting the resources of each process from other processes, and enabling synchronization among processes. To accomplish these tasks the OS must maintain a data structure for each process which describes the state and resource ownership of that process and enables the OS to exert control over it.
Three-state process management model
The three states of this process management model are:
Running: the process is currently being executed.
Ready: a process is queued and prepared to be executed when given the opportunity.
Blocked: a process that cannot execute until some event occurs, such as the completion of an I/O operation.
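
The three-state model can be sketched as a small state machine. The transition names below (dispatch, timeout, wait, event) follow common textbook labels, and the code is only illustrative.

# Illustrative three-state process model with textbook transition names.
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"

TRANSITIONS = {
    (State.READY, "dispatch"): State.RUNNING,   # scheduler gives the process the CPU
    (State.RUNNING, "timeout"): State.READY,    # time slice expires
    (State.RUNNING, "wait"): State.BLOCKED,     # e.g. waiting for an I/O operation
    (State.BLOCKED, "event"): State.READY,      # the awaited event has occurred
}

state = State.READY
for event in ["dispatch", "wait", "event", "dispatch"]:
    state = TRANSITIONS[(state, event)]
    print(event, "->", state.name)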

Task Scheduling Strategies:

Generally the task scheduling strategies that an OS may adopt fall into the following categories:

      • In multiprogramming systems: the running task keeps running until it performs an operation that requires waiting for an external event (e.g. reading from a DVD) or until the computer’s scheduler forcibly swaps the running task out of the CPU. Multiprogramming systems are designed to maximize the CPU usage.
      • In time-sharing systems: the running task is required to relinquish the CPU, either voluntarily or by an external event such as a hardware interrupt. Time-sharing systems allow several programs to execute apparently simultaneously.
      • In real-time systems: Some waiting tasks are guaranteed to be given CPU when an external event occurs. Real time systems are designed to control mechanical devices such as industrial robots, which require timely processing.
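
As a rough illustration of the time-sharing idea, the sketch below runs tasks round-robin with a fixed quantum. The task names, remaining work, and quantum are invented for the example.

# Toy round-robin scheduler: each task gets a fixed quantum of work units,
# which gives the appearance that several programs run simultaneously.
from collections import deque

tasks = deque([("editor", 3), ("compiler", 5), ("browser", 2)])   # (name, remaining work)
QUANTUM = 2

while tasks:
    name, remaining = tasks.popleft()
    done = min(QUANTUM, remaining)
    print(f"{name} runs for {done} unit(s)")
    remaining -= done
    if remaining > 0:
        tasks.append((name, remaining))   # preempted: back to the ready queue
    else:
        print(f"{name} finished")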

Thread Management

Every process contains a thread of execution. A thread is the series of programming instructions performed by the application code. Simple applications may have a single set of instructions, so only one thread, or execution path, is created by the application. More complex applications may have several sets of instructions that may be performed simultaneously, instead of serially. This is done by starting a separate thread for each task. A more detailed explanation of threads is available from the following URL: Thread and Task Architecture

Multithreading
Multithreading allows multiple threads to exist within a single process. These threads share the process' resources, but they execute independently. This feature allows a program to operate faster on computer systems that have multiple CPUs, CPUs with multiple cores, or across a cluster of machines. For the data to be manipulated correctly, the threads often need to rendezvous in time so that the data is processed in the correct order. Mutually exclusive operations have to be implemented to prevent data from being simultaneously modified. Programmers need to be careful to avoid race conditions, deadlocks, and other non-intuitive behaviors.
Operating systems schedule threads in one of two ways:
1. Preemptive multitasking allows the operating system to determine and enforce when exactly each program is kicked off the CPU once its time slice is up. The main advantage of preemptive scheduling is real-time response on the task level. The main disadvantage is that a program may get kicked off at an inappropriate time, causing negative effects that could be avoided with cooperative multithreading.
2. Cooperative multitasking
The operating system relies on each program to voluntarily give up the CPU after its time slice is up. This can create problems if a thread is waiting for a resource to become available.
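
The race-condition concern mentioned above can be demonstrated and avoided with a shared counter protected by a mutex; a minimal Python sketch:

# Two threads update a shared counter. Without the lock, the read-modify-write
# sequence can interleave and lose updates (a race condition).
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:            # mutually exclusive section
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                # 200000 with the lock; often less without it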

Memory Management

Just as processes share CPU, they also share physical memory.
Logical vs. Physical Address Space
A logical address, also referred to as a virtual address, is generated by the CPU (a pointer holds a logical address) and is bound to a separate physical, or absolute, address.
The physical address is generated by the memory management unit. The logical and physical addresses are the same in compile-time and load-time address-binding schemes; they differ in the execution-time address-binding scheme.
The CPU, being one of the most trusted components in a computer system, has direct access to memory using physical addresses, over the physical wires connecting the CPU to the memory chips. Unlike the CPU, software uses logical memory addresses. The reason for software to use logical memory addresses is to implement access control between the memory and the software, for protection and efficiency. Index tables and pointers are used to access the memory indirectly. Memory management is not properly implemented if an attacker is able to gain direct access to the memory through a flaw in the operating system.
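
A simplified sketch of how a logical address might be translated to a physical one through a page table. The page size, page-table contents, and addresses are invented for illustration; real MMUs do this in hardware.

# Simplified logical-to-physical address translation through a page table.
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 12}        # logical page number -> physical frame number

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError("page fault / access violation")   # protection check
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))           # logical page 1, offset 0xABC -> 0x3abc
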
Memory management requirements:
Relocation
A programmer does not know where the program will be placed in memory when it is executed. While the program is executing it might be swapped to disk and returned to main memory at a different location (relocated). Memory references in the code must be translated to actual physical memory addresses.
Protection
Processes should not be able to reference memory locations in another process without permission. Absolute addresses must be checked at run time, since they cannot be checked at compile time.
Sharing
Several processes may be allowed to access the same portion of memory. It is better practice to allow processes to access the same copy of a program than to have each keep its own separate copy.
Logical Organization
A program is separated into modules and each module is compiled separately. Different permissions are given to the individual modules, and modules can be shared among processes.
Physical Organization
The programmer does not know how much space will be available, and the memory available for a program and its data may be insufficient.

Buffer Overflows
Memory Leaks

Input/Output Device Protection

Stack

A stack is used for temporary value storage during program execution. A common model of a stack is a plate or coin stacker: plates are “pushed” onto the top and “popped” off the top. The relation between push and pop makes the stack a Last-In-First-Out (LIFO) data structure. The stack operations, with a minimal implementation sketched after the list, are:

      • Push: Add an element to the top of the stack
      • Pop: Remove the top element
      • Peek: Look at the top element
      • Check: Check if the stack is empty
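
A minimal LIFO stack supporting the four operations listed above; a Python sketch:

# Minimal LIFO stack with push, pop, peek, and an empty check.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):      # add an element to the top
        self._items.append(item)

    def pop(self):             # remove and return the top element
        return self._items.pop()

    def peek(self):            # look at the top element without removing it
        return self._items[-1]

    def is_empty(self):        # check if the stack is empty
        return not self._items

s = Stack()
s.push("plate 1")
s.push("plate 2")
print(s.peek())       # plate 2 (last in)
print(s.pop())        # plate 2 (first out)
print(s.is_empty())   # False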

Interrupt

An interrupt is a suspension of a process, such as the execution of a computer program, caused by an event that is external to the process and performed in a way that allows the process to be resumed. Interrupts are a way to improve processor utilization. The following types of interrupts exist:

      • Program interrupt is generated as a result of an instruction execution, such as division by zero, arithmetic overflow, an attempt to execute an illegal instruction, et cetera.
      • Timer interrupt is generated by a timer within the processor. This allows an operating system to perform certain functions on a regular basis.
      • I/O interrupt is generated by an I/O controller, it may signal a normal completion of an operation or to signal a variety of error conditions.
      • Hardware Failure interrupt is generated by a failure, such as a power failure.

Virtual machines

Virtualization enables a single PC or server to simultaneously run multiple operating systems, or multiple sessions of a single OS, on a single platform. A virtual machine takes the layered approach a step further: it creates the illusion of a virtual hardware environment (processor, memory, I/O) implemented in software. The instance of an operating system - the virtual machine - runs as an application on the underlying operating system kernel. The virtual machine is referred to as a guest OS that runs in the host environment. The hypervisor controls the execution of the various operating systems; it provides a layer of abstraction between the virtual machines and the host environment and is responsible for the management of resources.
The benefits of the virtual machines are:

      • Manageability - VMs provide ease of maintenance, administration and provisioning
      • Performance - the overhead of the virtualization is typically very small
      • Isolation - the activity of one VM should not impact other active VMs, the data in one VM is not accessible by another VM
      • Scalability - the cost per VM is minimized
The following is a list of security threats in virtualized environment:
Virtualization-Based Malware - virtualization-based malware can be software based or hardware based.
Denial of Service Attack - an attacker takes over a guest VM and is then able to gain control over the physical resources of the other guest VMs on the same physical host.
Communication Attacks Among Guest VMs and the Host - isolation should be carefully configured and closely monitored in a virtualized environment to avoid interference or unwanted accessibility among the guest VMs themselves or between the guest VMs and the physical host.
VM Escape - the process where a guest VM can jailbreak and directly interact with the hypervisor is known as “VM escape”. Any attacker who has the ability to escape the guest VM environment and directly interact with the hypervisor gains access to every other VM on the physical host. The compromise of the hypervisor by VM escape is known as hyperjacking.
Network Blind Spots - blind spots occur when traditional network security solutions are blinded and cannot detect malicious communication between the guest VMs and the host, or between VMs residing on the same physical machine.

Operating System Protection

Security Kernel

The security kernel is implemented to enforce the security mechanisms of the entire operating system. It consists of the hardware, software, and firmware components that implement and enforce the reference monitor concept.

Trusted Computing Base

The Trusted Computing Base is the combination of security mechanisms within a computer system. The TCB includes the hardware, software, firmware, processes, and inter-process communications that are responsible for enforcing the security policy. Parts of the system that are located outside of the TCB should not be able to breach the security policy enforced by the TCB.
Basic interactions monitored by the TCB:

      • Process Activation - changing from one process to another requires a complete change of registers, file access lists, and process status information.
      • Execution Domain Switching - TCB ensures that processes from one domain that invoke processes in another domain are accomplishing this in a secure manner.
      • Memory Protection
      • I/O Operations

Reference Monitor

The reference monitor is the part of the security kernel that controls access to objects, such as devices, memory, interprocess communications, files, et cetera. The reference monitor must be tamperproof, must always be invoked whenever access to any object is required, and must be easy to test and verify for completeness. The reference monitor must be small, simple, and understandable so that correct policy enforcement can be verified.

Cloud Computing

Cloud computing is the practice of keeping some portion of a company's computing resources with an external third-party provider. The computing resources provided by the third party include applications, middleware, databases, hypervisor/virtual machine technology, base OS, server hardware, storage, facilities (the data center along with HVAC), network, and associated bandwidth. All of the listed computing resources can be provided as a service in various combinations. The three most popular combinations are:

      • SaaS (Software as a Service) - also referred to as “on-demand software”, this is a software distribution model in which the applications are hosted by the cloud provider and made available to consumers on demand.
      • PaaS (Platform as a Service) - this category of cloud computing services provides a computing platform and a solution stack as a service, typically including operating systems, programming language execution environment, database, and web server.
      • IaaS (Infrastructure as a Service) - cloud computing services that provide equipment, such as facilities, hardware, networking components to consumers.
The benefits of the cloud computing are:
Achieve economies of scale - increase the productivity with fewer people.
Reduce spending on technology infrastructure - maintain easy access to your information with minimal upfront spending. Pay as you go, based on demand.
Globalize your workforce on the cheap - the cloud resources can be accessed worldwide from the Internet.
Streamline processes - get more work done in less time with fewer people.
Reduce capital costs - there is no need to spend big money on hardware, software or licensing fees.
Improved accessibility - cloud services can be accessed easily anytime from anywhere.
Monitor projects more effectively - stay within budget and ahead of completion cycle times
Less personnel training is needed - more work can be done with fewer people.
Minimize licensing new software - stretch and grow without the need to buy expensive software licenses or programs.
Improve flexibility - you can change direction easily.
Resource Capacity
Faster resource spin-up time
On demand IT service
Self-Service IT Service
Pay-by-use IT Service

The disadvantages of Cloud Computing are:
True Data Ownership - on paper you remain the owner of your data, but you are no longer its physical custodian.
Loss of Control and Knowledge - this leads to dependency.
Genuine (APP/System) Integration Challenges
Inflexibility

Types of cloud computing models:
Private Cloud is internal to a company's data center, or is dedicated infrastructure leased from a Cloud Service Provider (CSP).
Public Cloud is a multi-tenant cloud service in which a single instance of hardware and software serves multiple customers.
Hybrid Cloud offers the possibility to mix private and public cloud service models.

Security Operation Modes

Dedicated Security Mode

In this mode all users accessing information must have:

      • Signed Non-disclosure agreement for ALL information on the system
      • Proper clearance for ALL information on the system
      • Formal access approval for ALL information on the system
      • A valid need to know for ALL information on the system

System High Security Mode

Just like the dedicated security mode, the system high security mode requires a signed NDA, proper clearance, and formal access approval for ALL the information on the system. The main difference is that this mode requires a need to know for only SOME of the information on the system, and only the information covered by that need to know may be accessed. This mode also requires that users have the highest security clearance that any of the data on the system requires.

Compartmented Security Mode

This mode requires a signed NDA and proper clearance for ALL the information on the system; however, need to know and formal access approval are required only for SOME of the information. Users can access only SOME of the data, based on their need to know and formal access approval.

Multi-Level Security Mode

The main difference between the multi-level and the compartmented security modes is that users need security clearance only for the information they are accessing, not for all the information residing on the system. Users have access only to SOME data, based on their need to know, formal approval, and clearance.

Guard

Software or hardware guards mandate the flow of information between high and low security levels. All requests from low to higher security levels are reviewed and authorized by the guard before being allowed.

Trust and Assurance

Trust is a measure of trustworthiness. Trustworthiness is provided with sufficient credible evidence leading one to believe that a system will meet a set of given requirements. Assurance is the confidence that a system or a product meets certain level of security requirements based on evidence provided by assurance techniques.

Certification, Accreditation, Licensing

Certification is proof, accomplished by a formal process, that an individual has achieved or exceeded a certain standard or quality. Certification is proof and recognition that an individual has demonstrated a certain level of mastery of a specific body of knowledge and skills within a field. Certification always involves individuals, it is a voluntary process, and it is granted by non-government organizations.

Licensing, unlike certification, is a non-voluntary process by which a government organization grants permission to an individual to engage in a profession. Licensing, just like certification, involves individuals.

Accreditation is a voluntary process that evaluates institutions, agencies, and educational programs.

System Evaluation Methods

Web Trust

The Web Trust Seal of Assurance, developed by the American Institute of Certified Public Accountants (AICPA) and the Canadian Institute of Chartered Accountants (CICA), provides consumers with confidence that a Web site business meets high standards of business practice as per the CPA Web Trust Principles and Criteria. The Web Trust seal demonstrates that the website has been examined by a qualified CPA who has verified that it complies with the Web Trust Principles and Criteria.

SSAE16

SSAE16, or Statement on Standards for Attestation Engagements No. 16, is an auditing standard created by the Auditing Standards Board of the American Institute of Certified Public Accountants. This standard is an enhancement to the previous standard, SAS70. SSAE16 brings organizations up to date with the new international service organization reporting standard, ISAE 3402.
SOC1, SOC2, and SOC3 are different levels of SSAE16 auditing reports.
SOC1 - reports on controls relevant to Internal Controls over Financial Reporting (ICFR).
SOC2 - reports on controls at a service organization relevant to security, availability, processing integrity, confidentiality, or privacy in accordance with AT Section 101.
SOC3 - Web Trust and Systrust. Reports on controls relevant to security, availability, processing integrity, confidentiality, or privacy in accordance with General Trust Service Principles.

Orange Book or TCSEC

The Trusted Computer System Evaluation Criteria (TCSEC), also known as the Orange Book, was the first major computer security evaluation methodology. The Orange Book was part of a series of books developed by the Department of Defense in the 1980s called the Rainbow Series. The purpose of this book series was the protection of government classified information.
The TCSEC defines seven evaluation classes, identified on a rating scale from lowest to highest: D, C1, C2, B1, B2, B3, and A1. An evaluated product is assigned the appropriate rating based on its evaluation class.
D. Minimal Protection

      • No security characteristics
      • Evaluated at higher level and failed
C1. Discretionary Protection
      • DAC
      • Require identification & authentication
      • Assurance minimal
      • Nothing evaluated after 1986
C2. Controlled Access Protection
      • Auditing capable of tracking each individual's access or attempted access to each object
      • More stringent security testing
      • Most OSs at the end of the TCSEC incorporated C2 requirements
B1. Labeled Security Protection
      • MAC for specific set of objects
      • Each controlled object must be labeled for a security level & that labeling is used to control access
      • Security testing requirements are more stringent
      • Information security model for hierarchical and non-hierarchical categories
      • Informal model of security policy
B2. Structured Protection
      • MAC for all objects
      • Labeling expanded
      • Trusted path for login
      • Requires use of principle of least privilege
      • Covert channel analysis
      • Configuration management
      • Formal model of security policy
B3. Security Domains
      • High-level design includes layering, abstraction, information hiding
      • Tamperproof security functions
      • Increased trusted path requirements
      • Significant assurance requirements
      • Administrator's guide
      • Design Documentation
      • DTLS - Descriptive Top Level Specifications
A1. Verified Protection
      • Assurance
      • Formal Methods - Covert Channel analysis and Design specification and verification
      • Trusted Distribution
      • Increased test and design documentation
      • FTLS - Formal Top Level Specification
This evaluation methodology has three fundamental evaluation problems:
1. Criteria creep - as new products were developed, the expansion of the TCSEC evaluation classes was inevitable because the criteria had to be applied to those products.
2. Time consumption - the evaluation process took too much time, the free evaluation lacked motivation, and there were scheduling problems and misunderstandings between the evaluation teams.
3. Focus on the OS - security issues now extend beyond the operating system.

Common Criteria

The Common Criteria is the successor of the Orange Book (TCSEC). The CC is an international standard for secure systems evaluation criteria (ISO/IEC 15408-1, -2, and -3). It was established in 1998 with the signing of the Common Criteria Recognition Agreement.

The Common Criteria originated out of the following three standards:

      • ITSEC - the European standard, developed by France, Germany, the Netherlands, and the UK
      • CTCPEC - the Canadian standard, which followed from the US DoD standard and was published in 1993
      • TCSEC - the United States Department of Defense (DoD) standard, called the Orange Book, which was part of the Rainbow Series
The Common Criteria was produced by combining the above three standards, mostly so that companies selling computer products to governments would only need to have their products evaluated against one set of standards. The CC was developed and originally signed by the governments of the US, UK, Canada, France, and Germany.

The Common Criteria describes a framework in which security requirements can be specified, claimed, and evaluated.

The CC contains 11 classes of functional requirements and each class contains one or more families. The 11 classes are: Security Audit, Communication, Cryptographic Support, User Data Protection, Identification and Authentication, Security Management, Privacy, Protection of Security Functions, Resource Utilization, TOE Access, and Trusted Path.

The key concepts of the Common Criteria are:

      • Target of Evaluation (TOE): the product or system which is the target of the evaluation
      • Protection Profile (PP): a document that identifies security requirements relevant to a user community for a particular purpose
      • Security Target (ST): a document that identifies the security properties one wants to evaluate against
      • Evaluation Assurance Level (EAL): a numerical rating (1-7) reflecting the assurance requirements fulfilled during the evaluation

Protection Profile (PP)
“A CC protection profile (PP) is an implementation-independent set of security requirements for a category of products or systems that meet specific consumer needs.” The protection profile includes:
Descriptive elements: the name of the protection profile and the information protection problem that needs to be solved.
Rationale: the fundamental justification of the protection profile and a description of the security policies that can be supported by the product.
Functional requirements: establishes the protection boundaries that the system or the product must enforce.
Development assurance requirements: identifies the requirements for all phases of development.
Evaluation assurance requirements: specifies the type and the intensity of the evaluation.

Security Target (ST)
“A security target (ST) is a set of security requirements and specifications to be used for evaluation of an identified product or system.” The specific security functions and mechanisms of the product are described in the ST.

Evaluation Assurance Level (EAL)
EAL1 Functionally tested: review of functional and interface specifications
EAL2 Structurally tested: analysis of security functions and the high-level design
EAL3 Methodically tested and checked: testing of development environment controls
EAL4 Methodically designed, tested, and reviewed: more detailed design description
EAL5 Semi-formally designed and tested: vulnerability search, covert channel analysis
EAL6 Semi-formally verified design and tested: structured development process
EAL7 Formally verified design and tested: formal presentation of the functional specification
A higher EAL means nothing more, or less, than that the evaluation completed a more stringent set of quality requirements. Anything below EAL4 does not mean much, and anything above EAL4 is very difficult to accomplish for complex systems such as operating systems.

CC version 3.1 consists of the following parts:
Part 1: Introduction and general model
A security environment is described, and security objectives are then determined based on that environment. The confidentiality, integrity, and availability of the system are enforced through the security specifications of the TOE.
Part 2: Security functional components
The security functional requirements establish a set of functional components as a standard way to express the TOE security functional requirements. The CC contains 11 classes of functional requirements and each class contains one or more families. The 11 classes are: Security Audit, Communication, Cryptographic Support, User Data Protection, Identification and Authentication, Security Management, Privacy, Protection of Security Functions, Resource Utilization, TOE Access, and Trusted Path.
Part 3: Security assurance components
The security assurance requirements establish a set of assurance components as a standard way to express the TOE assurance requirements. The CC contains 10 classes of assurance requirements: Protection Profile Evaluation, Security Target Evaluation, Configuration Management, Delivery & Operation, Development, Guidance Documents, Life Cycle Support, Vulnerability Assessment, Maintenance of Assurance, and Tests.

Defense-in-Depth

Defense-in-Depth is an Information Assurance (IA) concept in which multiple layers of security controls are placed throughout an information system. The main purpose of this strategy is to provide redundancy in case one security control fails or a vulnerability is exploited. Defense-in-Depth is similar to the layered security concept; however, it is a more comprehensive security strategy and it originates from the military term “defense in depth”.

PCI-DSS

Credit Card Transaction Process

The credit card transaction process starts when a cardholder presents their credit card to pay the retailer for goods and services. The process is broken down into two parts: front-end and back-end.
Front-end process:
1. Authorization. This is the process of requesting an authorization from the bank that issued the credit card. For web merchants, the card is processed through a Payment Gateway. For a retail merchant, the card is swiped through a Point of Sale (POS) terminal. The Payment Gateway or the POS then connects to the Front-end Processor. The Front-end Processor transmits the authorization to the corresponding credit card association (VISA, MasterCard, Discover, American Express), which then routes it to the Issuing Bank. The Issuing Bank then authenticates the cardholder and approves or declines the transaction. During this step no money is actually moved. The transaction is stored in the Payment Gateway or the POS to be re-presented later in order to receive the payment.
2. Merchant Balancing. This is the process, typically performed automatically at the end of the day by the Payment Gateway or POS, of combining the transactions by card type and transmitting them to the Front-end Processor.
3. Capture. This is the next step, where a payment is requested from the Issuing Bank. The Front-end Processor transmits the captured data file to a Back-end Processor of the appropriate credit card issuer (VISA, MasterCard, Discover, American Express).
Back-end process:
4. Clearing. During this stage the Back-end Processor performs verification and compliance checks and then sends the transaction to the appropriate card issuer.
5. Interchange (Visa and MasterCard only). During this stage the credit card association transmits the transaction to the appropriate Issuing Bank for settlement.
6. Settlement. The Issuing Bank calculates fees and deductions and sends the funds to the appropriate credit card association, which then transmits them to the appropriate acquiring bank for payment to the merchant.
7. Merchant ACH. The acquiring bank transmits the deposit to the merchant's account.

Self-Assessment Questionnaire (SAQ)

The SAQ is a validation tool for self-evaluating compliance with PCI-DSS. The results of the questionnaire are typically shared with the acquiring bank. The SAQ consists of two components: a set of questions corresponding to the PCI-DSS requirements and an Attestation of Compliance.
There are 5 SAQ categories:
A: Card-not-present merchants - all credit card data processing is outsourced (e-commerce, mail, phone orders)
B: Imprint-only merchants with no electronic cardholder data storage
C-VT: Merchants only using web-based virtual terminals, with no electronic cardholder data storage
C: Merchants with payment application systems connected to the Internet, with no electronic cardholder data storage
D: All other merchants not defined by SAQ types A through C, and all service providers defined as eligible to complete a SAQ

Payment Application DSS (PA-DSS)

PA-DSS is a global security standard created by the Payment Card Industry Security Standards Council (PCI SSC) to provide a data security standard for software vendors that develop payment applications. This standard dictates that software vendors develop applications that are compliant with PCI-DSS. The goal is to prevent third-party applications from storing sensitive data such as magnetic stripe, CVV2, or PIN data. A list of validated payment applications can be found at the following URL: List of Validated Payment Applications

Qualified Security Assessors (QSAs)

Individuals who are certified by the PCI Security Standards Council to audit service providers for PCI-DSS compliance.

Approved Scanning Vendors (ASVs)

ASVs are organizations approved by the PCI Security Standards Council to validate PCI-DSS compliance of service providers by performing vulnerability scans of Internet-facing applications. ASVs have to be re-approved by the Council each year. A list of approved ASVs can be found at the following URL: Approved Scanning Vendors

Internal Security Assessor (ISA)

Large organizations may consider the PCI SSC Internal Security Assessor program to build their internal PCI DSS expertise. The ISA program trains qualified personnel in appropriate data security techniques to help the organization with internal audit and self-assessment.

Goals and requirements of PCI DSS

The six goals of PCI-DSS are:
1. Build and maintain a secure network:

      • Requirement 1: Install and maintain a firewall configuration to protect cardholder data
      • Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters
2. Protect cardholder data:
      • Requirement 3: Protect stored cardholder data
      • Requirement 4: Encrypt transmission of cardholder data across open, public networks
3. Maintain a vulnerability management program:
      • Requirement 5: Use and regularly update antivirus software and programs
      • Requirement 6: Develop and maintain secure systems and applications
4. Implement strong access control measures:
      • Requirement 7: Restrict access to cardholder data by business need to know
      • Requirement 8: Assign a unique ID to each person with computer access
      • Requirement 9: Restrict physical access to cardholder data
5. Regularly monitor and test networks:
      • Requirement 10: Track and monitor all access to network resources and cardholder data
      • Requirement 11: Regularly test security systems and processes
6. Maintain an information security policy:
      • Requirement 12: Maintain a policy that addresses information security for all personnel