CRAMM (CCTA Risk Analysis and Management Method)

CRAMM stands for CCTA Risk Analysis and Management Method. The main reasons for its development were the need for a rigorous methodology and the deficiencies of the methodologies available at the time, which were subjective, vulnerability-driven and needed experienced personnel to operate them, while their results were less than impressive.
The new methodology had to be easy to understand and use, be usable during system development, include an automated tool, contain a threat checklist and have the countermeasures built in. CRAMM performs risk analysis by combining assets, threats and vulnerabilities to evaluate the risk involved, and then performs risk management by suggesting a list of countermeasures. The theoretical model of the system that CRAMM uses contains assets (Ak), threats, vulnerabilities (Vi) and impacts (Ij). CRAMM defines thirty-one generic threats and eight impacts. First, values are assigned to asset/impact pairs; then threat/impact/asset triples are identified, threats and vulnerabilities are rated (low, medium, high), and the security requirement (risk) of each threat/impact/asset triple is calculated.
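To make the model concrete, here is a minimal sketch of how a threat/impact/asset triple with its ratings might be represented. The class, the field names and the 1-10 asset value scale are illustrative assumptions for this document, not part of CRAMM's own tooling.

```python
from dataclasses import dataclass

# A sketch of the CRAMM-style model: each meaningful combination of an asset,
# a threat and an impact carries an asset value plus threat and vulnerability
# ratings, from which a security requirement is later derived.
# Class and field names, and the 1-10 asset value, are illustrative only.

@dataclass
class Triple:
    asset: str                  # e.g. "payroll database"
    threat: str                 # one of the thirty-one generic threats
    impact: str                 # one of the eight impacts
    asset_value: int            # value assigned to the asset/impact pair (1-10 here)
    threat_rating: str          # "low" | "medium" | "high"
    vulnerability_rating: str   # "low" | "medium" | "high"

example = Triple("payroll database", "unauthorised disclosure", "breach of privacy",
                 asset_value=7, threat_rating="medium", vulnerability_rating="high")
print(example)
```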

CRAMM consists of three stages: in the first we scope the security problem, in the second we evaluate the risk, and in the third we select suitable countermeasures.

First stage:  The evaluation of the scope of security consists of three steps.       

I. The preparation of the project framework takes place. As is the case throughout CRAMM, the security consultant conducting the review interviews selected staff to obtain the information needed. At this point, the initial management meeting is arranged, followed by the preparation of the functional specification of the system. The project boundaries are agreed, and the physical (hardware, communications, environmental, software, documentation) and data (organised, interrelated data) assets are identified and documented. Then the organisation's structure is documented, and the data users and three time periods for unavailability are identified. At the end of all this, the project schedule, which is the objective of this step, is prepared.
II. The security consultant tries to assign values to assets. Assigning values to physical assets is not difficult, as their price is known. What can be difficult is assigning values to data assets, because data is only valuable to somebody during some defined period of time. At this point, the personnel are interviewed so that the consultant can value the data assets. Questionnaires and tables are used, along with worst-case scenarios. It is very important that existing countermeasures are ignored and that the interviewees provide accurate and relevant numerical input (not vague descriptions of the impact of losing the data assets). When valuing assets, one should take into account the impact of political embarrassment, personal safety matters, infringement of personal privacy, failure to meet legal obligations, financial loss, disruption of activities and breach of commercial confidentiality. Under certain threats these impacts can become reality, causing anything from minor losses up to imprisonment and public humiliation. At the end of this step, a data assets value summary is created (a small illustrative valuation record is sketched after the discussion of this stage's problems).
III. The results are reviewed, in case some of the value assignments do not correspond to reality. This can happen if the interviewees were not the appropriate ones or if the interviewer was not experienced. At this point, the CRAMM report is printed and the consultant writes his/her own report for management. The report must clearly state the consultant's understanding of the client's business, and all the asset valuations have to be agreed upon.
The first stage can pose a series of problems, such as the lengthy period it takes to complete. Moreover, bad data grouping can occur if the interviewees or the interviewer are not the appropriate ones. The first stage can also get bogged down in useless detail, and the unavailability periods can be incorrect.
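As a minimal sketch of how a stage-one data-asset valuation might be recorded, the snippet below keeps the worst-case value per asset across impact categories while ignoring existing countermeasures. The impact categories come from the text above; the 1-10 scale, function and field names are illustrative assumptions, not CRAMM's published tables.

```python
# Illustrative only: a stage-one data-asset valuation recorded as the
# worst-case value per asset, ignoring existing countermeasures.
# The 1-10 scale and the names here are assumptions.

IMPACT_CATEGORIES = [
    "political embarrassment", "personal safety", "infringement of privacy",
    "failure to meet legal obligations", "financial loss",
    "disruption of activities", "breach of commercial confidentiality",
]

def data_asset_value_summary(valuations: dict[str, dict[str, int]]) -> dict[str, int]:
    """For each data asset, keep the worst-case (highest) impact value."""
    return {asset: max(per_impact.values()) for asset, per_impact in valuations.items()}

summary = data_asset_value_summary({
    "payroll data": {"infringement of privacy": 6, "financial loss": 4},
    "production schedule": {"disruption of activities": 7},
})
print(summary)   # {'payroll data': 6, 'production schedule': 7}
```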

Second stage: it involves the evaluation of the risk and consists of four steps.

I. The threat, asset and impact relationships are identified. CRAMM has thirty-one generic threats that cover all possible threats, from accidents to malicious misconduct. During this step, all meaningful threat/asset combinations are found and impacts are assigned to them. Time and space can be saved by grouping assets together.
II. The threats and vulnerabilities are measured by calculating threat and vulnerability ratings. The threat rating reflects the likelihood of a threat occurring and takes into account whether the threat has happened in the past and who is interested in the assets involved. The vulnerability rating shows whether the system makes a threat more likely to happen and whether the system's nature increases the possible extent of damage. This rating takes into account the redundancy built into the system and how easy it is to eavesdrop.
III. The security requirement is calculated. A fixed three-dimensional lookup table (matrix) is used, whose elements represent the security requirement under different combinations of threat rating, vulnerability rating and asset value. These elements are in the range 1-5 and give the security requirement for every threat/impact/asset triple (a small illustrative lookup is sketched after step IV below).
IV. The security requirement values are reviewed to avoid errors that would either impose unnecessary expense for unneeded extra security or leave the system unprotected. Also, if the budget is limited, a reasonable compromise between cost and risk must be reached.
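The following is a small illustration of step III: a fixed lookup from asset value, threat rating and vulnerability rating to a security requirement in the range 1-5. The matrix entries and the low/medium/high asset bands are invented for the example; CRAMM's actual table is part of the tool and is not reproduced here.

```python
# Illustrative lookup of the security requirement (1-5) for a
# threat/impact/asset triple.  The numbers below are invented.

RATINGS = {"low": 0, "medium": 1, "high": 2}

# SECURITY_MATRIX[asset_band][threat][vulnerability] -> requirement 1-5
SECURITY_MATRIX = {
    "low":    [[1, 1, 2], [1, 2, 2], [2, 2, 3]],
    "medium": [[2, 2, 3], [2, 3, 3], [3, 3, 4]],
    "high":   [[3, 3, 4], [3, 4, 4], [4, 4, 5]],
}

def security_requirement(asset_band: str, threat: str, vulnerability: str) -> int:
    return SECURITY_MATRIX[asset_band][RATINGS[threat]][RATINGS[vulnerability]]

print(security_requirement("high", "medium", "high"))   # 4
```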
The problems with stage two are mainly generated by the fact that there are too many questions to ask (approximately 600). The interviewees tend to get bored or become uncooperative. Also, the answers are sometimes too subjective, so the interview process has to be repeated.

Third stage: It is the last stage of CRAMM, where the appropriate countermeasures are selected. 

I. The required countermeasures are identified. The calculated security requirement is a pointer to a set of applicable countermeasures, from which "sufficiently powerful" countermeasures are selected. CRAMM contains fifty-three countermeasure groups, categorised according to strength (1-5), "cost", security aspect (hardware, software, communications, procedural, physical, personnel, environmental) and sub-group type (to reduce threat, to reduce vulnerability, to reduce impact, to detect, to recover). A small illustrative selection is sketched after step III below.
II. We compare the required countermeasures with the countermeasures already installed, to find out how many new countermeasures we need to install.
III. We recommend the new countermeasures, confirm them with management, and here the work with CRAMM ends.
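As a rough sketch of steps I and II, the snippet below picks countermeasure groups whose strength meets the calculated security requirement and subtracts those already installed. The group names, data and field names are invented; CRAMM's actual countermeasure database is far richer.

```python
# Illustrative only: selecting "sufficiently powerful" countermeasures by
# comparing their strength (1-5) with the calculated security requirement,
# then removing those already installed.

countermeasure_groups = [
    {"name": "encrypted backups",   "strength": 4, "aspect": "software",   "type": "reduce impact"},
    {"name": "door access control", "strength": 3, "aspect": "physical",   "type": "reduce vulnerability"},
    {"name": "visitor log",         "strength": 1, "aspect": "procedural", "type": "detect"},
]

def required_countermeasures(security_requirement: int, installed: set[str]) -> list[dict]:
    """Return groups strong enough for the requirement and not yet installed."""
    return [cm for cm in countermeasure_groups
            if cm["strength"] >= security_requirement and cm["name"] not in installed]

print(required_countermeasures(3, installed={"door access control"}))
# -> only "encrypted backups" remains to be recommended
```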

The problems we get with the third stage are that it generates a lot of output and that it is really hard to identify the already installed countermeasures, because the interviewees' knowledge is sometimes inadequate, or the countermeasures are not truly installed.
The typical time scale for a CRAMM cycle ranges from six days for a small system (one computer, one application), to seventeen days for a medium system (one minicomputer, several applications), to thirty days for a large system (a mainframe with sites in several geographic locations).
One problem one can face with CRAMM is that it requires expert knowledge, the right interviewees and the right balance between cost and risk, because even an inexperienced user can throw in numbers and get impressive-looking but inappropriate results. It is also time-consuming, not particularly green (it consumes a lot of paper), and its reports are sometimes inadequate. Moreover, it does not really take into account a company's security policy, the existing products and their cost, or the organisational culture of the company. On the other hand, CRAMM is a rigorous methodology that is becoming the de facto standard; it is applicable to most systems, it is regularly updated and it has a countermeasure database of impressive quality.
To get the best out of CRAMM, one must identify the correct people, obtain useful information, avoid getting bogged down in detail, avoid being driven by CRAMM, identify the equipment that is key to the company, start identifying and evaluating threats and vulnerabilities early, and start the countermeasures process early.

ISO/IEC 27005

ISO/IEC 27005, part of a growing family of ISO/IEC ISMS standards, the 'ISO/IEC 27000 series', is an information security standard published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). 

The purpose of ISO/IEC 27005 is to provide guidelines for information security risk management. It supports the general concepts specified in ISO/IEC 27001 and is designed to assist the satisfactory implementation of information security based on a risk management approach. It does not specify, recommend or even name any specific risk analysis method, although it does specify a structured, systematic and rigorous process from analyzing risks to creating the risk treatment plan.

At around 60 pages, ISO/IEC 27005 is a heavyweight standard, although the main part is just 24 pages, the rest being mostly annexes with examples and further information for users. There is quite a lot of meat on the bones, reflecting the complexities of this area.
Although the standard defines risk as “a combination of the consequences that would follow from the occurrence of an unwanted event and the likelihood of the occurrence of the event”, the risk analysis process outlined in the standard indicates the need to identify information assets at risk, the potential threats or threat sources, the potential vulnerabilities and the potential consequences (impacts) if risks materialize.  Examples of threats, vulnerabilities and impacts are tabulated in the annexes; although incomplete, these may prove useful for brainstorming risks relating to information assets under evaluation.  It is clearly implied that automated system security vulnerability assessment tools are insufficient for risk analysis without taking into account other vulnerabilities plus the threats and impacts: merely having certain vulnerabilities does not necessarily mean your organization faces unacceptable risks if the corresponding threats or business impacts are negligible in your particular situation.
The standard includes a section and annex on defining the scope and boundaries of information security risk management, which should, I guess, be no less than the scope of the ISMS. As noted above, it does not specify, recommend or even name any particular method (such as those listed in the ISO27k FAQ), but it does lay down a structured, systematic and rigorous process from analysing risks through to creating the risk treatment plan.
The standard deliberately remains agnostic about quantitative and qualitative risk assessment methods, essentially recommending that users choose whatever methods suit them best, and noting that they are both methods of estimating, not defining, risks.  Note the plural -  'methods' - the implication being that different methods might be used for, say, a high-level risk assessment followed by more in-depth risk analysis on the high risk areas.  The pros and cons of quantitative vs qualitative methods do get a mention, although the use of numeric scales for the qualitative examples is somewhat confusing.
The steps in the process are (mostly) defined to the level of inputs -> actions -> outputs, with additional “implementation guidance” in a style similar to ISO/IEC 27002. The standard incorporates some iterative elements, e.g. if the results of an assessment are unsatisfactory, you loop back to the inputs and have another run through. For those of us who think in pictures, there are useful figures giving an overview of the whole process and more detail on the risk assessment -> risk treatment -> residual risk part.
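As a rough illustration of that iterative inputs -> actions -> outputs structure, the sketch below loops back when an assessment is judged unsatisfactory against the acceptance criteria. It is entirely illustrative: the standard defines no code, and all function names, data and the "refinement" placeholder are assumptions.

```python
# Rough sketch of the iterative process: context establishment feeds risk
# assessment; an unsatisfactory assessment loops back before treatment.

def establish_context():
    return {"scope": "example ISMS scope", "criteria": {"acceptable_level": 3}}

def assess_risks(context):
    # identification -> estimation -> evaluation would happen here
    return [{"risk": "laptop theft", "level": 4}]

def satisfactory(assessment, context):
    return all(r["level"] <= context["criteria"]["acceptable_level"] for r in assessment)

def treat_risks(assessment):
    return [{"risk": r["risk"], "treatment": "reduce"} for r in assessment]

context = establish_context()
assessment = assess_risks(context)
while not satisfactory(assessment, context):
    # loop back: refine the context or gather better inputs, then reassess
    context["criteria"]["acceptable_level"] += 1   # placeholder refinement only
    assessment = assess_risks(context)

risk_treatment_plan = treat_risks(assessment)
print(risk_treatment_plan)
```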
Managing and measuring risk with ISO 27005

The process of managing information security risk includes many overlapping and poorly differentiated steps (or clauses, to use ISO-speak):
  • Context establishment
  • Risk assessment
  • Risk treatment
  • Risk acceptance
  • Risk communication
  • Risk monitoring and review
What, for example, is the context of risk management if not the sum of all the other steps? Does not communication of risk include monitoring and reviewing? The most aggressively confusing section of ISO 27005 is the one on risk assessment, which includes risk analysis and risk evaluation. Risk analysis in turn is made up of risk identification and risk estimation. Some (but not all) of these terms are defined in the glossary, but in so arbitrary a manner that a perfectly valid alternative approach could use the same terms in a different way or use different terms altogether and still achieve the same objective: managing risk.

Missing from ISO 27005: Risk measurement

What does not appear in the standard is the measurement of risk. It is axiomatic that what cannot be measured cannot be managed. The omission of risk measurement from the standard is significant enough that, whether mentioned or not, it must be performed by anyone seriously attempting to manage risk. Measurement is addressed indirectly by risk estimation, in the sense that all estimates are measurements of a sort, but not vice versa: "About a foot" is not the same as "12 1/2 inches," as anyone who has ever had to cut window glass can testify.

It doesn't really add anything remarkable or special that we don't already have in place in any number of other documents and standards. Its main demonstrable use would seem to be auditing compliance with the standard, and I have to think that this is really what the document is about: something more to serve the ISMS and the cottage industry that surrounds it. And that's a shame, because the field of risk management could really use a body like the ISO putting forth a significant, well-executed effort.

TARA (Threat Agent Risk Assessment)

TARA (Threat Agent Risk Assessment) is a relatively new risk-assessment framework created by Intel to help companies manage risk by distilling the immense number of possible information security attacks into a digest of only those exposures that are most likely to occur. The point here is that it would be prohibitively expensive and impractical to defend against every possible vulnerability. By using a predictive framework to prioritize areas of concern, organizations can proactively target the most critical exposures and apply resources efficiently to achieve maximum results.

The TARA methodology identifies which threats pose the greatest risk, what they want to accomplish and the likely methods they will use. The methods are cross-referenced with existing vulnerabilities and controls to determine which areas are most exposed. The security strategy then focuses on these areas to minimize efforts while maximizing effect. Intel says awareness of the most exposed areas allows the company to make better decisions about how to manage risks, which helps with balancing spending, preventing impacts and managing to an acceptable level of residual risk. The TARA methodology is designed to be readily adapted when a company faces changes in threats, computing environments, behaviors or vulnerabilities.

TARA relies on three main references to reach its predictive conclusions. One is Intel's threat agent library, which defines eight common threat agent attributes and identifies 22 threat agent archetypes. The second is its common exposure library, which enumerates known information security vulnerabilities and exposures at Intel. Several publicly available common exposure libraries are also used to provide additional data. The third is Intel's methods and objectives library, which lists known objectives of threat agents and the methods they are most likely to use to accomplish these goals.
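To show the spirit of the approach, the sketch below cross-references likely threat-agent methods with known exposures and existing controls to rank the most exposed areas. The library names echo the three references described above, but the structure, data and scoring are invented; Intel's actual libraries are not public in this form.

```python
# Illustrative only: ranking exposures by combining threat-agent likelihood,
# known exposure per attack method, and mitigation from existing controls.

threat_agent_library = {
    "disgruntled employee": {"likelihood": 0.6, "methods": ["data theft", "sabotage"]},
    "organised crime":      {"likelihood": 0.3, "methods": ["phishing", "data theft"]},
}

common_exposure_library = {      # method -> exposure score where controls are weak
    "data theft": 0.8,
    "sabotage":   0.5,
    "phishing":   0.7,
}

existing_controls = {"phishing": 0.6}   # method -> fraction of exposure mitigated

def ranked_exposures():
    scores = {}
    for agent, profile in threat_agent_library.items():
        for method in profile["methods"]:
            residual = common_exposure_library[method] * (1 - existing_controls.get(method, 0.0))
            scores[method] = max(scores.get(method, 0.0), profile["likelihood"] * residual)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(ranked_exposures())   # most exposed methods first
```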

One of the main benefits of TARA is that the threat agent library and the methods and objectives library can easily be used within other risk-assessment methodologies, especially if there is a need to standardize on common threat agents and corresponding methods. TARA appears to be a good tool for identifying, predicting and prioritizing threats against your infrastructure, and it can be used to create common libraries that can be shared among different groups.

The framework focuses on threats rather than assets, identifying, more or less, what bad things can happen. This is both good and bad: by focusing on threats rather than asset value, an assessor may miss the mark in identifying true infrastructure risks. It also seems to assume that the only way to view risk is from the perspective of 'What is the worst thing that could happen?' A drawback of TARA is that it only addresses the likelihood of threat events and does not take into account a risk's impact. Another drawback of the framework is that it is new and untested. It is not very common and not widely used as a standalone risk management methodology, but usually in conjunction with other frameworks.

TARA also appears to be yet another qualitative methodology rather than one that can be used for quantitative analysis.

NIST RMF (National Institute of Standards and Technology's Risk Management Framework)

The NIST RMF (National Institute of Standards and Technology's Risk Management Framework) described here comprises a mature process that has been applied in the field of risk management for almost ten years. This RMF is mostly designed to manage software-induced business risks. Through the application of five simple activities, analysts use their own technical expertise, relevant tools and technologies to carry out a reasonable risk management approach.
The purpose of an RMF like this is to allow a consistent and repeatable expertise-driven approach to risk management. By carrying out and describing the software risk management activities in a consistent manner, a basis for measurement and common metrics emerges. Such metrics are sorely needed and should allow organizations to better manage business and technical risks given particular quality goals; make more informed, objective business decisions regarding software; and improve internal software development processes so that they in turn better manage software risks.

Five Stages of Activity

The RMF consists of the five fundamental activity stages:
1. Understand the business context.
2. Identify the business and technical risks.
3. Synthesize and prioritize the risks, producing a ranked set.
4. Define the risk mitigation strategy.
5. Carry out required fixes and validate that they are correct.

NIST RMF also outlines a series of activities related to managing organizational risk. These can be applied to both new and legacy information systems, according to the NIST.
The activities include:

·    Categorizing information systems and the information within those systems based on impact (a simplified categorization example is sketched after this list).
·    Selecting an initial set of security controls for the systems based on the Federal Information Processing Standards (FIPS) 199 security categorization and the minimum security requirements defined in FIPS 200.
·    Implementing security controls in the systems.
·    Assessing the security controls using appropriate methods and procedures to determine the extent to which the controls are implemented correctly, operating as intended and producing the desired outcomes with respect to meeting security requirements for the system.
·    Authorizing information systems operation based on a determination of the risk to organizational operations and assets, or to individuals resulting from the operation of the systems, and the decision that this risk is acceptable.
·    Monitoring and assessing selected security controls in information systems on a continuous basis, including documenting changes to the systems, conducting security-impact analyses of the associated changes, and reporting the security status of the systems to appropriate organizational officials on a regular basis.
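The sketch below is a simplified illustration of FIPS 199-style security categorization: each information type gets a low/moderate/high impact level for confidentiality, integrity and availability, and the system's provisional impact level is taken as the highest ("high water mark") across them. The data and names are invented, and real categorization involves more judgement than this.

```python
# Simplified FIPS 199-style categorization with a high-water-mark rollup.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

information_types = {
    "public web content": {"confidentiality": "low",      "integrity": "moderate", "availability": "moderate"},
    "personnel records":  {"confidentiality": "moderate", "integrity": "moderate", "availability": "low"},
}

def system_impact_level(info_types: dict) -> str:
    """Return the highest impact level across all types and C/I/A objectives."""
    worst = max(LEVELS[level] for ratings in info_types.values() for level in ratings.values())
    return next(name for name, value in LEVELS.items() if value == worst)

print(system_impact_level(information_types))   # "moderate"
```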

One of the primary strengths of the RMF is that it was developed by NIST, which is charged by Congress with ensuring that security standards and tools are researched, proven and developed to provide a high level of information security infrastructure. Because government agencies and the businesses that support them need their IT security standards and tools to be both cost-effective and highly adaptable, the framework is constantly being reviewed and updated as new technology is developed and new laws are passed. Furthermore, because independent companies know that the basis for applications is stable, they have developed tools that support the NIST standards and are more willing to build application tools around the framework. The model also helps companies determine when something exceeds a certain threshold of risk.

As for weaknesses, as with any of these frameworks, you have to make sure that the people doing the risk assessment have the discipline to input reasonable data into the model so that you get reasonable data out. After all, you cannot manage what you cannot measure and, above all, what you cannot see. Additionally, it is not an automated tool but a documented framework, meaning that, apart from its input and output dependencies, it relies on people's judgements, which can be quite subjective.

To sum up, the importance of identifying, tracking, storing, measuring and reporting software risk information cannot be overemphasized. Successful use of the RMF depends on continuous and consistent identification and storage of risk information as it changes over time. A master list of risks should be maintained during all stages of RMF execution and continually revisited. Measurements of this master list make excellent reporting information. For example, the number of risks identified in various software artifacts and/or software life-cycle phases can be used to identify problematic areas in the software process. Likewise, the number of software risks mitigated over time can be used to show concrete progress as risk mitigation activities unfold. Links to descriptions or measurements of the corresponding business risks mitigated can be used to clearly demonstrate the business value of the software risk mitigation process and the risk management framework.
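A minimal sketch of such a master risk list and the simple counts that could serve as reporting metrics is shown below; the record structure and fields are assumptions for illustration, not something the RMF prescribes.

```python
# Illustrative master list of risks with two simple reporting metrics:
# risks identified per life-cycle phase and risks mitigated so far.

from collections import Counter

master_risk_list = [
    {"id": "R1", "phase": "requirements", "status": "mitigated"},
    {"id": "R2", "phase": "design",       "status": "open"},
    {"id": "R3", "phase": "design",       "status": "mitigated"},
]

risks_per_phase = Counter(r["phase"] for r in master_risk_list)
mitigated = sum(r["status"] == "mitigated" for r in master_risk_list)

print(risks_per_phase)                                   # Counter({'design': 2, 'requirements': 1})
print(f"{mitigated}/{len(master_risk_list)} risks mitigated")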

FAIR (Factor Analysis of Information Risk)

FAIR (Factor Analysis of Information Risk) is a framework for understanding, analyzing and measuring information risk. Information security practices to date have generally been inadequate in helping organizations effectively manage information risk since there is a heavy reliance on practitioner intuition and experience.  While these are valuable, they don't consistently allow management to make effective, well-informed decisions. 

FAIR is designed to address security practice weaknesses. The framework aims to allow organizations to speak the same language about risk; apply risk assessment to any object or asset; view organizational risk in total; defend or challenge risk determination using advanced analysis; and understand how time and money will affect the organization's security profile. 

The FAIR vernacular allows IT people and the business lines to talk about risk in a consistent manner. One of the advantages of the framework is that it does not use ordinal scales, such as one-to-ten rankings, and therefore is not subject to the limitations that go with them: 'high, medium and low' is an ordinal scale, as are 'red, yellow and green' and 'one, two and three'. Imagine what the result would be if you added or multiplied two medium values, or added or multiplied yellow and green. It would have no meaning at all, yet we see many risk calculations in our industry that do exactly that when they use addition and/or multiplication with numeric ordinal scales.

FAIR uses dollar estimates for losses and probability values for threats and vulnerabilities. Combined with ranges of values and levels of confidence, this allows for true mathematical modeling of loss exposures. Another plus is that FAIR has more detailed definitions of threats, vulnerabilities and risks, with a taxonomy that breaks the terms down at a more granular level. The taxonomy makes it easier and more credible to describe how conclusions are reached and to show that they rest on measurable results rather than assumptions.
The most important downside of FAIR is that it can be difficult to use and is not as well documented as some other methodologies.

Basic FAIR analysis comprises ten steps in four stages (a simplified numerical sketch follows the list of stages):

Stage 1 – Identify scenario components
1. Identify the asset at risk
2. Identify the threat community under consideration

Stage 2 – Evaluate Loss Event Frequency (LEF)
3. Estimate the probable Threat Event Frequency (TEF)
4. Estimate the Threat Capability (TCap)
5. Estimate Control strength (CS)
6. Derive Vulnerability (Vuln)
7. Derive Loss Event Frequency (LEF)

Stage 3 – Evaluate Probable Loss Magnitude (PLM)
8. Estimate worst-case loss
9. Estimate probable loss

Stage 4 – Derive and articulate Risk
10. Derive and articulate Risk
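
The sketch below is a simplified, numerical illustration of the derivation chain above: Vulnerability is estimated as the chance that Threat Capability (TCap) exceeds Control Strength (CS), Loss Event Frequency (LEF) is TEF multiplied by Vulnerability, and risk is articulated as an expected annual loss from LEF and a loss magnitude. All numbers, the uniform-range assumption and the single point loss magnitude are invented for illustration; real FAIR analyses use calibrated range estimates with confidence levels.

```python
# Simplified FAIR-style derivation: Vuln from TCap vs CS, LEF = TEF * Vuln,
# and risk expressed as expected annual loss.  Numbers are illustrative.

import random

random.seed(1)

def estimate_vuln(tcap_range, cs_range, trials=10_000):
    """Fraction of simulated threat events where TCap exceeds CS."""
    return sum(
        random.uniform(*tcap_range) > random.uniform(*cs_range)
        for _ in range(trials)
    ) / trials

tef = 12.0                      # probable Threat Event Frequency (events per year)
vuln = estimate_vuln(tcap_range=(0.4, 0.9), cs_range=(0.5, 0.8))
lef = tef * vuln                # Loss Event Frequency (per year)
plm = 20_000.0                  # probable loss magnitude per event (dollars)

print(f"Vuln={vuln:.2f}  LEF={lef:.1f}/yr  expected annual loss=${lef * plm:,.0f}")
```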

To sum up, FAIR allows organizations to:

·        Speak in one language concerning their risk
·        Be able to consistently study and apply risk to any object or asset
·        View organizational risk in total
·        Defend or challenge risk determination using an advanced analysis framework.
·        Understand how time and money will impact the security profile

OCTAVE (Operationally Critical Threat, Asset and Vulnerability Evaluation)

OCTAVE (Operationally Critical Threat, Asset and Vulnerability Evaluation), developed at the CERT Coordination Center at Carnegie Mellon University, is a suite of tools, techniques and methods for risk-based information security strategic assessment and planning. OCTAVE defines assets as including people, hardware, software, information and systems. There are three models: the original, which CERT says forms the basis for the OCTAVE body of knowledge and is aimed at organizations with 300 or more employees; OCTAVE-S, similar to the original but aimed at companies with limited security and risk-management resources; and OCTAVE-Allegro, a streamlined approach to information security assessment and assurance.

The framework is founded on the OCTAVE criteria—a standardized approach to a risk-driven and practice-based information security evaluation. These criteria establish the fundamental principles and attributes of risk management. The OCTAVE methods have several key characteristics. One is that they're self-directed: Small teams of personnel across business units and IT work together to address the security needs of the organization. Another is that they're designed to be flexible. Each method can be customized to address an organization's particular risk environment, security needs and level of skill. A third is that OCTAVE aims to move organizations toward an operational risk-based view of security and addresses technology in a business context. 

Among the strengths of OCTAVE are that it is thorough and well documented, the people who put it together are very knowledgeable, and it has been around a while, is very well defined and is freely available. Because the methodology is self-directed and easily modified, it can be used as the foundation risk-assessment component or process for other risk methodologies. The original OCTAVE method uses a small analysis team encompassing members of IT and the business. This promotes collaboration on any risks found and gives business leaders visibility into those risks; to be successful, the risk assessment and management process must have collaboration. In addition, OCTAVE looks at all aspects of information security risk from physical, technical and people viewpoints. If you take the time to learn the process, it can help you and your organization better understand its assets, threats, vulnerabilities and risks, and you can then make better decisions on how to handle those risks.

Experts say one of the drawbacks of OCTAVE is its complexity; moreover, the fact that it does not allow organizations to model risk mathematically makes it clearly a qualitative methodology.