FAIR Risk Management

What is FAIR?

Factor Analysis of Information Risk (FAIR) is a framework for understanding, analyzing, and measuring information risk. While many other risk assessment methodologies focus on qualitative ratings, the FAIR model takes a quantitative, financial approach to enterprise risk management.

FAIR risk assessment gives organizations a measurement-based approach to risk management. The FAIR method shows an organization what to measure, how to measure it, and how to derive meaning from those measurements.

FAIR risk assessment is used to prioritize risk issues for metric development and analysis, and to identify and compare the cost-benefit propositions of risk mitigations. It also supports opportunity risk analysis and breaks down communication barriers between business units and IT, enabling well-informed decision making.

FAIR can reasonably articulate risk to the decision-makers who need this information in the following ways (Jones, 2005):


  • A classification of the factors that make up information risk. This classification provides an understanding of information risk, without which we couldn’t reasonably do the rest.
  • A method of measuring the factors that drive information risk, including threat event frequency, vulnerability, and loss.
  • A computational engine that derives risk by mathematically simulating the relationships between the measured factors.
  • A simulation model that allows us to apply the taxonomy, measurement method, and computational engine to build and analyze risk scenarios of virtually any size or complexity.

What are the FAIR Risk Assessment Steps?

FAIR presents a risk assessment in four stages.

Stage 1            Identify the components of the scenario

Step 1. Identify the assets

Step 2. Identify the community of threats under consideration
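
The Stage 1 scoping output can be captured in a simple data structure. The class and field names below are illustrative assumptions for this sketch, not part of the FAIR standard:

```python
from dataclasses import dataclass

# Hypothetical container for a FAIR scenario's scope; the names here are
# assumptions for illustration, not official FAIR terminology.
@dataclass
class RiskScenario:
    asset: str              # Step 1: the asset at risk
    threat_community: str   # Step 2: the threat community under consideration

scenario = RiskScenario(asset="Customer database",
                        threat_community="External cyber criminals")
```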

Stage 2            Evaluate Loss Event Frequency (LEF)

Step 3. Estimate the probable Threat Event Frequency (TEF)

Step 4. Estimate the Threat Capability (TCap)

Step 5. Estimate the strength of the controls (Control Strength, CS)

  Rating      TEF              TCap         CS
  Very High   > 100 x year     Top 2%       Protects against all but the top 2%
  High        10 - 100 x year  Top 16%      Protects against all but the top 16%
  Moderate    1 - 10 x year    Average      Average
  Low         0.1 - 1 x year   Bottom 16%   Protects only against the bottom 16%
  Very Low    < 0.1 x year     Bottom 2%    Protects only against the bottom 2%

Step 6. Derive the Vulnerability (Vuln)

Step 7. Derive the Loss Event Frequency (LEF)
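
As a minimal sketch of Steps 6 and 7 (the function names are my own, not from the FAIR specification), assume TCap and CS are expressed as percentiles of the threat community: Vuln is then the probability that a threat event's capability exceeds control strength, and LEF is TEF scaled by that probability.

```python
import random

# Illustrative sketch only, not the official FAIR computational engine.
# Threat capability is modeled as a uniform percentile draw (0-100);
# vulnerability is the fraction of threat events that beat the controls.
def derive_vuln(cs_percentile: float, trials: int = 100_000) -> float:
    hits = sum(1 for _ in range(trials)
               if random.uniform(0, 100) > cs_percentile)
    return hits / trials

def derive_lef(tef_per_year: float, vuln: float) -> float:
    # Step 7: loss events are the threat events that overcome the controls.
    return tef_per_year * vuln

vuln = derive_vuln(cs_percentile=84)   # controls protect all but the top 16%
lef = derive_lef(tef_per_year=10, vuln=vuln)
```

Under this toy model, a CS in the 84th percentile yields a Vuln near 0.16, so ten threat events per year produce roughly 1.6 expected loss events per year.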


Stage 3            Evaluate Probable Loss Magnitude (PLM)

Step 8. Estimate worst-case loss

Step 9. Estimate probable loss

  Magnitude          Range low end ($)   Range high end ($)
  Severe (SV)        10,000,000          -
  High (H)           1,000,000           9,999,999
  Significant (Sg)   100,000             999,999
  Moderate (M)       10,000              99,999
  Low (L)            1,000               9,999
  Very Low (VL)      0                   999
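
The magnitude table can be applied mechanically. A small helper (hypothetical, for illustration only) maps a dollar estimate to its band:

```python
# Map a dollar loss estimate to the FAIR Probable Loss Magnitude bands
# from the table above. Bands are checked from highest to lowest low end.
def loss_magnitude(loss_usd: float) -> str:
    bands = [
        (10_000_000, "Severe (SV)"),
        (1_000_000, "High (H)"),
        (100_000, "Significant (Sg)"),
        (10_000, "Moderate (M)"),
        (1_000, "Low (L)"),
    ]
    for low_end, label in bands:
        if loss_usd >= low_end:
            return label
    return "Very Low (VL)"

loss_magnitude(250_000)   # → "Significant (Sg)"
```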


Stage 4            Derive and articulate risk

Step 10. Derive risk as the probable frequency and probable magnitude of future loss

                          LEF
  PLM            Very Low   Low   Moderate   High   Very High
  Severe            H        H       C        C        C
  High              M        H       H        C        C
  Significant       M        M       H        H        C
  Moderate          L        M       M        H        H
  Low               L        L       M        M        M
  Very Low          L        L       M        M        M

(Risk level: C = Critical, H = High, M = Medium, L = Low)
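
The Stage 4 derivation is a matrix lookup; a sketch of that lookup (data structures are my own, the ratings are taken from the matrix above):

```python
# Rows are Probable Loss Magnitude (PLM) ratings; columns are Loss Event
# Frequency (LEF) ratings, ordered Very Low through Very High.
LEF_LEVELS = ["Very Low", "Low", "Moderate", "High", "Very High"]
RISK_MATRIX = {
    "Severe":      ["H", "H", "C", "C", "C"],
    "High":        ["M", "H", "H", "C", "C"],
    "Significant": ["M", "M", "H", "H", "C"],
    "Moderate":    ["L", "M", "M", "H", "H"],
    "Low":         ["L", "L", "M", "M", "M"],
    "Very Low":    ["L", "L", "M", "M", "M"],
}

def derive_risk(plm: str, lef: str) -> str:
    return RISK_MATRIX[plm][LEF_LEVELS.index(lef)]

derive_risk("High", "Moderate")   # → "H"
```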

FAIR Process flow











