ISO 27002 is an information security standard published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The standard provides best-practice recommendations on information security management for use by those responsible for initiating, implementing, or maintaining Information Security Management Systems (ISMS). ISO/IEC 27002:2005 contains best-practice control objectives in the following areas of information security management:
- Security Policy
- Organization of Information Security
- Asset Management
- Human Resources Security
- Physical and Environmental Security
- Communications and Operations Management
- Access Control
- Information Systems Acquisition, Development and Maintenance
- Information Security Incident Management
- Business Continuity Management
The Software Development Life Cycle (SDLC)
SDLC is a project management tool used to plan, execute, and control a software development project. The SDLC provides a general framework for the phases of a software development project, from defining the functional requirements to implementation. The system development control phases that each software development method deals with are:
Phase 1: Planning and Initiation - The idea is expressed and the concept is developed during this phase. Feasibility is determined from a cost-benefit perspective along with security considerations. The project charter deliverable is created.
Phase 2: Definition - The purpose and the functional specifications are described. Standards and guidelines are set. A feasibility study from the perspective of building in security from the beginning is carried out, and a sensitivity assessment is performed. A project statement deliverable is created.
Phase 3: Analysis and Design - The business processes and the data related to these business processes are identified. A design walk-through is performed. A feasibility study is performed from a design review and multi-layer security standpoint. Deliverables: business process model, systems analysis document, designer repository, logical data model, detailed systems design.
Phase 4: Build or Programming - The design is translated into real programs and a code review is performed. A feasibility study is performed from the standpoint of threats and vulnerabilities that need to be addressed. Deliverables: Developer/2000 repository, application forms and reports, field data capture application, tested application, data conversion application.
Phase 5: Transition/Implementation/Operation - In this phase testing, conversions, user acceptance, start of the new system and retirement of the old one, and maintenance are performed. Threat and vulnerability testing is done and penetration testing is performed. Functionality testing and auditing are also performed, and a system security assurance is issued. Periodic re-assessments are done. Deliverables: user procedures, on-line help text, user acceptance test standards, data conversion, user training plan, transition plan, application migration, ISB migration QA report.
Phase 6: Warehousing - This is one of the most recent concepts; a related term is "Big Data". In this phase the data is loaded into the warehouse. Deliverables: business process model, systems analysis document, logical data model, Designer/2000 repository, detailed system design, application forms and reports, acceptance test plan, acceptance testing, user testing plan, transition plan, application migration.
Software Development Methods
Waterfall Development Method
The waterfall method was developed in the early 1970s and is probably the oldest known model for developing software systems. It implements a linear and sequential approach to application design and systems development: after each phase is finished, the project proceeds to the next one. Each stage is assigned to a separate team to ensure greater project and deadline control. The waterfall model is not flexible enough to accommodate changes. The following phases are followed in the waterfall model:
1. Requirement specification - the business requirements are determined and prioritized by the project management team
2. Software design - business requirements are translated into IT solutions and decisions are made about the underlying technology to be used
3. Implementation and integration - code implementation takes place in this stage
4. Testing - the fully tested solution is prepared for implementation and evaluation by the end user
5. Operations and maintenance - ensuring that everything runs smoothly
Agile Development Method
The agile development model is a more flexible, fast, and adaptive methodology, and it is currently the biggest buzzword in the IT development industry. This methodology uses iterative development as its basis. The 10 key principles of the agile methodology are:
1. Active involvement is imperative.
2. The team must be empowered to make decisions.
3. Requirements evolve but the timescale is fixed.
4. Capture requirements at a high level; lightweight and visual.
5. Develop small, incremental releases and iterate.
6. Focus on frequent delivery of product.
7. Complete each feature before moving on to the next.
8. Apply the 80/20 rule.
9. Testing is integrated throughout the project lifecycle - test early and often.
10. A collaborative and cooperative approach between all stakeholders is essential.
Scrum Development Method
The scrum development model improves upon the agile method. It is simpler and it emphasizes empirical feedback. This model eliminates the project manager; team self-management is implemented instead. Scrum consists of three roles, among which the responsibilities of the project manager are split up:
1. Product Owner
2. Development Team
3. Scrum Master
Scrum meetings:
Daily Scrum - a daily stand-up meeting that occurs every day right on time. All members of the development team come prepared with their updates. The meeting happens at the same time and at the same location every day, and its length is timeboxed to 15 minutes.
Backlog grooming - the process of estimating the existing backlog using story points, refining the acceptance criteria, and breaking larger stories into smaller ones.
Sprint planning meeting - during this meeting the work to be done is selected and the time that it will take to do the work is detailed.
Sprint review meeting - the work that was completed and the work that was not are reviewed, and the completed work is presented to the stakeholders.
Sprint retrospective - this meeting is facilitated by the Scrum Master. What went well during the sprint and what could be improved are discussed.
Build and Fix Model
The development of the software starts without proper specification and design steps. The software is built and modified as many times as necessary until it satisfies the client. Problems are dealt with as they occur, and typically the cost of this approach is much greater than if specifications are drawn up and the design is carefully developed. Developers are strongly discouraged from using this approach.
V-Shaped Model
The V-shaped model is considered to be an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase. The difference between the waterfall and the V-shaped model is that the latter requires testing to be performed throughout all the development phases. As in the waterfall model, each phase must be completed before the next phase begins.
Prototyping Model
This model was designed to overcome the weaknesses of the waterfall model. A simplified version of the application, called a prototype, is released for review, and the user's feedback is used to build a better version.
Rapid Application Development (RAD) - RAD is a form of prototyping. It requires strict time limits on each phase and relies on quick development tools. The disadvantage is that decisions are made rapidly, and this can lead to poor design.
Joint Analysis Development (JAD) - Key players communicate at critical phases of the project. The main focus is on having the people who actually perform the job work together with those who have the best understanding of the available technologies to design a solution.
Modified Prototype Model (MPM) - MPM is best for Web application development. The main goal is to keep the process flexible so the application is not based on the current state of the organization; the application should evolve as the organization develops and the environment changes.
Incremental Development Model
Multiple waterfall cycles are performed on the software throughout the development stages. In this model the project is fragmented into smaller components, and each component follows a regular waterfall model. Constant refinement of requirements, design, and coding is performed in this model, and feedback is provided early in the process (empirical feedback).
Spiral Model
This model is a combination of the waterfall and prototyping models, with the addition of risk assessment. An initial version of the application is developed, and each iterated version is carefully designed using the waterfall method, with risk assessment performed at each stage.
Rapid Application Development Model (RAD)
RAD relies on rapid prototyping instead of extensive planning. The processes of developing and improving the software are interleaved. This allows for the software to be developed quickly. The goal of this model is to accelerate the software development process.
Capability Maturity Model Integration (CMMI)
The CMMI model was developed by a group of experts from industry, government, and the Software Engineering Institute (SEI) at Carnegie Mellon University. CMMI models provide guidance for developing or improving processes that meet the business goals of an organization. A CMMI model may also be used as a framework for appraising the process maturity of the organization. The five maturity levels of the CMMI model are:
1. Initial - No effective management procedures and plans are used, and the development process is somewhat chaotic. There is no assurance of consistency.
2. Repeatable - Change control, a formal management structure, and quality assurance are in place. A formal process model is defined.
3. Defined - Formal procedures are in place, as are the processes that are carried out in each project.
4. Managed - Formal processes are put in place to collect and analyze quantitative data.
5. Optimizing - Budgeted and integrated plans for continuous process improvement are put in place.
Each of the maturity levels builds upon the previous one. The CMMI is continually being updated and improved upon. The latest copy can be viewed at the following website: http://www.sei.cmu.edu/library/abstracts/reports/10tr033.cfm
Software/Network/Computer System Attack Techniques
Active Attacks
Active attacks involve an attempt, which can be either successful or unsuccessful, to change information or stop a service by taking advantage of a vulnerability. Active attacks cannot be prevented, but they are detectable and controllable. Examples of active attacks are DoS or DDoS, masquerading, message modification, brute force, et cetera.
Passive Attacks
An attacker intercepts a message but does not make any modifications to the data. Passive attacks are preventable but not easily detectable. Password sniffing and traffic analysis are good examples of passive attacks. The data is typically captured for later analysis. Passive attacks can be averted by securing network cables and other components, using encryption, and using traffic-padding techniques.
Viruses/worms/logic bombs/trojan horses
Application System Development and Access Control Techniques
1. Accuracy Control
- Preventive Controls - validation checks, system integrated serial numbers
- Detective Controls - integrity checks, peer reviews, walk-through, check digits, overflow checks, range checks, format checks, balance control, rounding techniques, key-verifications.
- Correction Controls - audit trails, reports, error totals, validations
2. Authorization Control
- Preventive Controls - management approvals, Kerberos, need-to-know basis
- Detective Controls
- Corrective Controls
3. Consistency Controls
- Preventive Controls - data dictionary (DD), structure techniques like CASE, standards like programming and documentation
- Detective Controls - cross field and record editing, comparisons, test ratios, relationship tests
- Correction Controls - programming standards, programming errors
4. Continuity Control
- Preventive Controls - online prompts, cross table reference in databases
- Detective Controls - data checks, label checks, record count, summary checks
- Corrective Controls - automatic error correction, error messages, backup and restores, recovery logs, journaling, checkpoints, fall-back procedure, contingency plans
5. Security Controls
The security controls consist of administrative, technical, and physical controls.
- Preventive Controls - encryption, firewalls, reference monitor, security labels, traffic padding, data classification, passwords
- Detective Controls - penetration testing, IPS, virus checks, functionality tests, key-validations, hash validations, log checking transaction logging
- Correction Controls - IDS
6. Completeness Controls
- Preventive Controls - testing of transaction authorization, pre-numbered forms
- Detective Controls - debugging, peer reviews, walk-through, size check, duplicate checks, suspense files, one-to-one checks
- Corrective Controls
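As a concrete illustration of two detective accuracy controls listed above, the sketch below implements a check-digit validation (using the Luhn algorithm, a standard check-digit scheme) and a range check in Python. The function names are invented for the example.

```python
def luhn_check_digit_valid(number: str) -> bool:
    """Detective control: verify the trailing check digit of a numeric string
    using the Luhn algorithm (as used on payment card numbers)."""
    digits = [int(d) for d in number]
    total = 0
    # Double every second digit from the right, subtracting 9 when the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def range_check(value: int, low: int, high: int) -> bool:
    """Detective control: flag values outside the expected business range."""
    return low <= value <= high

print(luhn_check_digit_valid("79927398713"))  # a valid Luhn number -> True
print(range_check(150, 0, 100))               # out of range -> False
```

A single typed digit error in the input string flips the check-digit total, so the first control catches most data-entry mistakes before they propagate.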
Application testing techniques
White-box testing, also referred to as clear-box testing, glass-box testing, transparent testing, static testing, and structural testing, is a method of testing the internal structure and workings of an application. While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level. In white-box testing, an internal perspective of the system, as well as programming skills, is used to design test cases. White-box test designs include control flow testing, data flow testing, branch testing, path testing, statement coverage, and decision coverage.
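A minimal sketch of the branch-coverage idea mentioned above: test cases are derived from the code's internal structure so that every branch of a function executes at least once. The function and values here are invented for illustration.

```python
def classify_discount(order_total: float) -> float:
    """Return a discount rate; the two branches below are the coverage targets."""
    if order_total >= 100:   # branch 1: large order
        return 0.10
    return 0.0               # branch 2: small order

# One test case per branch gives 100% branch coverage of this function.
assert classify_discount(150.0) == 0.10  # exercises branch 1
assert classify_discount(50.0) == 0.0    # exercises branch 2
print("both branches covered")
```

Statement coverage would be satisfied by these same two cases, but for functions with compound conditions, branch and path coverage generally require more cases than statement coverage does.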
Black-box testing, also called dynamic testing, is a method of software testing that examines the functionality of an application (in other words, what the software does) without peering into the internal structures or workings of the application.
Grey-box testing is all about the back end data structures, databases, or the internal data structures. It has nothing to do with manipulating the input and output data, which is done in Black-box testing. The tester is not required to have full access to the software’s source code. Grey box testing may also include reverse engineering.
Unit Testing
Individual software components or modules are tested. This type of testing is typically performed by a programmer rather than by testers, as it requires detailed knowledge of the internal program design and code, and it might require the development of test driver modules or test harnesses.
Incremental Integration Testing
Implements a bottom-up approach for testing or continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. This type of testing can be performed by programmers or by testers.
Integration Testing
Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, or client and server applications on a network. This type of testing is especially relevant to client/server and distributed systems.
Functional Testing
This approach of testing typically ignores the internal parts and focuses on whether the output is as per the requirements or not. This is basically black-box testing geared to the functional requirements of an application.
System Testing
The entire system is tested as per the requirements. This black-box type of testing is based on overall requirements specifications and covers all combined parts of a system.
End-to-End Testing
This approach is similar to system testing and involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity Testing
In this approach testing is performed to determine if an application is performing well enough and is ready for major testing. If an application is crashing during initial use, then the system is not stable enough for further testing.
Same as stress testing. Typically automation tools are used for these testing types.
Acceptance Testing
This testing is performed by the end user to determine if the system meets the customer-specified requirements and expectations.
Load Testing
This is performance testing that checks system behavior under load. The goal is to determine at what point the system's response time degrades or fails.
Stress Testing
The system is stressed to find the point at which it fails. Testing is performed under heavy load, for example by providing continuous input to the system, loading the database, performing complex database queries, and other methods.
Performance Testing
Another name for stress and load testing. The purpose of this testing is to check whether the system meets the performance requirements.
Usability Testing
In this approach the testing is performed by the end user to determine the user-friendliness of an application. The flow of the application is tested, whether the application is easy to understand, and whether clear and easy-to-understand documentation is in place.
Install/Uninstall Testing
The application is tested on different operating systems, platforms, and environments for full, partial, or upgrade install/uninstall processes.
Recovery Testing
The ability of a system to recover from crashes, hardware failures, or other disastrous events is tested.
Security Testing
Penetration testing is performed on the application to test how the system is protected against unauthorized internal or external access.
Compatibility Testing
The performance of the application is tested on different combinations of hardware, software, operating system, and network environments.
Comparison Testing
The strengths and the weaknesses of a product are compared with other similar products or with previous versions.
Alpha Testing
In this approach testing is performed in-house on a simulated user environment. As a result of this testing, some minor development changes may be made.
Beta Testing
Typically performed by end users, this is the final testing before releasing the product for commercial purposes.
Top-Down Testing
In this approach testing starts from the top of the application (the main module) and continues to the bottom of the application (the sub-modules). Stubs, also called temporary modules, are created whenever a sub-module is not yet developed.
Bottom-Up Testing
In this approach testing starts from the bottom of the application (the sub-modules) and continues to the top of the application (the main module). A temporary module called a driver is used to represent the main module in case that module is not yet developed.
Sandwich Testing
This approach is a combination of top-down and bottom-up testing.
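The stubs and drivers described above can be sketched in a few lines of Python; all module names here are hypothetical and the tax logic is invented for illustration.

```python
def tax_module_stub(amount: float) -> float:
    """Stub: the real tax sub-module is not yet developed; return a canned value."""
    return 0.0

def main_module(amount: float, tax_fn=tax_module_stub) -> float:
    """Top-level module under test; the stub is injected for the missing sub-module."""
    return amount + tax_fn(amount)

def driver_for_tax_module(tax_fn) -> bool:
    """Driver: exercises a finished sub-module before the main module exists."""
    return round(tax_fn(100.0), 2) == 8.0

def real_tax_module(amount: float) -> float:
    """The eventually delivered sub-module."""
    return amount * 0.08

print(main_module(100.0))                      # top-down test using the stub
print(driver_for_tax_module(real_tax_module))  # bottom-up test using the driver
```

Top-down testing swaps `tax_module_stub` for `real_tax_module` once the sub-module is ready; bottom-up testing discards the driver once the real main module exists.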
Big Bang testing
The entire system is tested at once without using stubs and drivers. This approach is very risky.
Other types of application testing
- Smoke and sanity testing
- Regression testing
- Destructive testing
- Software performance testing
- Usability testing
- Accessibility testing
- Internationalization and localization testing
- Development testing
Categories of programming languages
Assemblers, Compilers and Interpreters
An assembler is a program that translates assembly language into machine language. Assembly language is a low-level programming language. A compiler is a program that translates a high-level language into machine language. It is more intelligent than an assembler, as it checks for errors, limits, ranges, and other similar things. A compiler is slower because it goes through the whole program and then translates it into machine code. An interpreter is a program that translates the statements of a program into machine code at run time. It reads one statement of the program at a time, translates it, executes it, and then moves on to the next statement.
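The one-statement-at-a-time behavior of an interpreter can be sketched with a toy example. The two-command mini-language below (SET and PRINT) is invented purely for illustration; each line is read, decoded, and executed immediately, unlike a compiler, which processes the whole program before anything runs.

```python
def interpret(program: str) -> dict:
    """Toy interpreter: decode and execute one statement at a time."""
    env = {}
    for line in program.strip().splitlines():
        op, *args = line.split()
        if op == "SET":          # SET x 5 -> bind variable x to 5
            env[args[0]] = int(args[1])
        elif op == "PRINT":      # PRINT x -> emit the value of x
            print(env[args[0]])
    return env

env = interpret("""
SET x 5
SET y 7
PRINT x
""")
print(env["y"])  # 7
```

Because execution happens during translation, an error on line 3 would surface only after lines 1 and 2 had already run, which is exactly the runtime behavior the text attributes to interpreters.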
Object Oriented Programming
Object Oriented Programming (OOP) is different from the previously prevailing procedural programming paradigm (C, Pascal, etc.). Generally, procedural programming is the process of decomposing a large problem into smaller, more manageable units called procedures that define the necessary operations. Modern programming languages like Java emphasize a different approach, called the object-oriented paradigm. Everything in OOP is grouped into conceptually independent units called objects. An object can be considered an instance that sometimes corresponds to a physical object in the real world or to an abstract concept. Objects perform a set of related activities. An object is an instance of a class. A class is a pattern or template for objects that share a common behavior and a collection of state attributes.
Encapsulation
All the resources, such as methods and data, that are needed for an object to function are included and hidden within the program object. This is mainly accomplished by creating classes, which expose public methods and properties. This concept is enforced to protect variables and functions from access outside the class. It allows for better code management, with little or no impact on other parts of the program when the protected code changes.
Abstraction
Abstraction is an approach to simplifying complex reality by modeling classes appropriate to the problem and working at the most appropriate level of inheritance for a given aspect of the problem. Abstraction is also achieved through composition. Abstraction is basically a representation of the essential features of the system without getting involved in the complexity of the entire system.
Polymorphism
Polymorphism literally means "a state of having many shapes" or "the capacity to take on different forms". Basically, this refers to the ability of the language to process various types of objects and classes through a single, uniform interface.
Inheritance
Classes are created in hierarchies, and inheritance allows the characteristics of one class to be passed down the hierarchy.
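The four concepts above can be shown together in a compact Python sketch; the shape classes are invented for the example.

```python
from abc import ABC, abstractmethod

class Shape(ABC):                  # abstraction: only the essential feature (area) is exposed
    def __init__(self, name: str):
        self._name = name          # encapsulation: state kept behind a non-public attribute

    @abstractmethod
    def area(self) -> float: ...

class Circle(Shape):               # inheritance: Circle receives Shape's behavior
    def __init__(self, r: float):
        super().__init__("circle")
        self._r = r
    def area(self) -> float:
        return 3.14159 * self._r ** 2

class Square(Shape):
    def __init__(self, s: float):
        super().__init__("square")
        self._s = s
    def area(self) -> float:
        return self._s ** 2

# polymorphism: one uniform interface (area) handles different object types
for shape in (Circle(1.0), Square(2.0)):
    print(round(shape.area(), 2))
```

Each `Shape` subclass supplies its own `area` implementation, so the loop at the bottom never needs to know which concrete class it is holding; that is the "single, uniform interface" the text describes.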
Java
Java was developed by Sun in 1991 for embedded applications, and it became one of the most successful programming languages of its era. Java generates applications that can run on all hardware platforms without any modification, and it is very popular in client-server applications. Java programs can be called from within HTML documents or launched stand-alone. When a Java program runs from a Web page it is called an applet, and when it runs on the server it is called a servlet. Java is an interpreted language, which means that its bytecode must be converted into machine code at runtime by the Java Virtual Machine. This means that Java programs can run on any computer that runs Java Virtual Machine software and that they are hardware independent.
Service Oriented Architecture
Service Oriented Architecture (SOA) is a style of architecting applications in such a way that they are composed of encapsulated, reusable modules. The modules of the application are designed in such a way that they can be combined in multiple different useful ways. The components of an SOA are:
- Services - reusable components that represent business or operational tasks or processes. The reusability property allows new business processes to be created based on these services. Service interface definitions are available in some form of service registry.
- SOA infrastructure - the set of technologies that connects service consumers to services through a previously agreed-upon communication model. The communication model can be based on Web services, message-oriented middleware (MOM), or the Common Object Request Broker Architecture (CORBA).
- Service consumers - the clients that use the functionality provided by the services. The consumers are programmatically bound to the services.
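The three SOA roles described above can be sketched in miniature. This is a highly simplified, in-process illustration (real SOA infrastructure would use Web services, MOM, or CORBA, as noted); the registry dictionary, service name, and tax logic are all invented for the example.

```python
service_registry = {}   # stands in for the service registry holding interface definitions

def register(name):
    """Decorator playing the role of service publication into the registry."""
    def wrap(fn):
        service_registry[name] = fn
        return fn
    return wrap

@register("tax.calculate")   # a reusable service representing a business task
def calculate_tax(amount: float) -> float:
    return amount * 0.08

# Service consumer: looks the service up by name and binds to it programmatically.
service = service_registry["tax.calculate"]
print(round(service(100.0), 2))  # 8.0
```

Because the consumer binds only to the registered name, the service implementation can be replaced or composed into new business processes without changing consumer code, which is the reusability property the text emphasizes.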
Mobile App. Platform vs. Mobile Development Tools
A mobile application platform is an integrated development platform that provides tools to allow a developer to write, test, and deploy applications into the target platform environment. Examples of development platforms are Java, the iOS platform, Microsoft .NET, and the Adobe Flash Platform. Mobile development tools - there are different tools for the various development platforms; some examples are MobBase, MobiCart, MyAppBuilder, and so on.
Relational Database Model
The relational database was born in 1970 when Edgar Codd, a researcher at IBM, wrote a paper outlining the process. Since then, relational databases have grown in popularity to become the standard. A relational database consists of multiple tables of data that relate to each other through special key fields, called primary keys. The data structures are called either tables or relations. Each table consists of a set of attributes (columns) and tuples (rows). Attributes are unordered and are referenced by name and not position. Tuples are unordered as well, because a relation is a mathematical set and not a list. A primary key is an attribute that uniquely identifies a specific instance of an entity. Each table in a database must have a primary key that is unique to that table. A foreign key is a reference to an entry in some other table. Typically a foreign key in one table is a primary key in another table.
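The primary/foreign key relationship described above can be demonstrated with Python's built-in sqlite3 module; the table and column names are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # SQLite enforces foreign keys only when enabled
con.execute(
    "CREATE TABLE department ("
    " dept_id INTEGER PRIMARY KEY,"       # primary key: uniquely identifies each tuple
    " name TEXT NOT NULL)"
)
con.execute(
    "CREATE TABLE employee ("
    " emp_id INTEGER PRIMARY KEY,"
    " name TEXT NOT NULL,"
    " dept_id INTEGER REFERENCES department(dept_id))"  # foreign key into department
)
con.execute("INSERT INTO department VALUES (1, 'Security')")
con.execute("INSERT INTO employee VALUES (10, 'Alice', 1)")

# The foreign key lets the two relations be joined back together.
row = con.execute(
    "SELECT e.name, d.name FROM employee e"
    " JOIN department d ON e.dept_id = d.dept_id"
).fetchone()
print(row)  # ('Alice', 'Security')
```

With foreign-key enforcement on, inserting an employee with a `dept_id` that has no matching department row would raise an integrity error, which is exactly the referential guarantee the keys provide.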
Hierarchical Database Model
The hierarchical model is the oldest of the database models. This type of database model stores the data in a series of records that have field values attached. It collects all the instances of a specific record as a record type. The record types are the equivalent of tables in the relational model, with the individual records being the equivalent of rows. Parent/child relationships are used to create links between the record types. The disadvantage of this model is that it can only hold the data of a single tree and is not able to link between branches or across multiple layers.
Network Database Model
This database model is an extended version of the hierarchical database model. Its name refers to the way the data is linked to other data: the data is represented as a network of records and sets that are related to each other, forming a network of links.
Object-Oriented Database Model
The object-oriented database model stores data as objects. The logical model that this type of database uses is closely aligned with an application program's object model. An Object-Oriented Database Management System (OODBMS) is one that manages objects, which are abstract data types. An OODBMS is suited for data with complex relationships that are difficult to model and process in a relational DBMS. It is also capable of handling multimedia data types (images, audio, and video). These databases clearly use OOP concepts.
Distributed Database Models
Distributed databases use the above concepts of data modeling, but they are implemented across a geographically diverse area. The advantage of these types of database models is that existing applications can evolve into a distributed DBMS without undergoing major changes. These types of database models also increase the ability to share data over a geographical region. The disadvantages are related to management and technical problems.
Data Mining vs. Data Warehousing
Data mining is the process of finding meaningful patterns in a given data set. Data mining is used today in many contexts, such as marketing campaigns, studying consumer behavior, and fraud prevention. Data warehousing is the aggregation and collection of data from multiple sources into one centralized location.
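A toy illustration of the distinction above: the transaction list plays the role of data already aggregated into a warehouse, and counting frequent item pairs (a simple "market basket" pattern) plays the role of data mining. The data is fabricated for the example.

```python
from collections import Counter
from itertools import combinations

# Transactions as they might sit in a warehouse after aggregation from many sources.
transactions = [
    {"bread", "milk"},
    {"bread", "milk", "eggs"},
    {"milk", "eggs"},
    {"bread", "milk"},
]

# Mining step: count how often each pair of items is bought together.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(1))  # [(('bread', 'milk'), 3)]
```

Real data mining replaces this brute-force pair count with algorithms that scale to millions of transactions, but the principle is the same: the warehouse centralizes the data, and mining extracts the patterns from it.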
Web Application Firewalls in SDLC
WAFs play a role in the SDLC by addressing security challenges that cannot be addressed during development. WAFs can be used in concert with code frameworks and platforms to fill in security functions.
ISO 27001 vs. ISO 27002
ISO 27002 is much more detailed than ISO 27001; however, it is not a management standard. ISO 27001 defines the Information Security Management System (ISMS). You can get certified against ISO 27001, but not against ISO 27002. The management system elements requiring that information security be planned, implemented, monitored, and improved are defined in ISO 27001, but not in ISO 27002. ISO 27002 does not make a distinction between controls applicable to a particular organization and those that are not. ISO 27001, on the other hand, prescribes a risk assessment to be performed in order to identify, for each control, whether it is required to decrease the risks and, if it is, to what extent it should be applied. The bottom line: if you want to build the foundations of information security in your organization and devise its framework, use ISO 27001; if you want to implement controls, use ISO 27002; and if you want to carry out risk assessment and risk treatment, use ISO 27005.