There are times when it makes sense to continue to use decertified algorithms to secure operational data; however, those uses must be analyzed to minimize operational risk and, in the current fiscal environment, system upgrades must be prioritized to systems and areas of responsibility (AORs) with the greatest operational risk. This article will illustrate the who, when, why, where, and what of risk assessments for decertified cryptographic key extension requests (KERs).
Under National Security Directive 42 (NSD-42), signed by President George H.W. Bush in 1990, the National Security Agency (NSA) was appointed as the National Manager for cryptography. As such, NSA has the responsibility to analyze the mathematical strength, and other vulnerabilities, of algorithms that are used to protect National Security Systems (NSS). National Security Systems include any systems used to protect classified information or used for command and control (C2).
The National Manager works under the authority of the Committee on National Security Systems (CNSS), an intragovernmental body within the executive branch also created under NSD-42, currently chaired by the Department of Defense Chief Information Officer (DoD CIO).
Encryption algorithms have a shelf life. As computers become more powerful and as innovative mathematical techniques and technologies are developed, algorithms that were once impractical to attack become vulnerable. Potential attacks include brute-force attacks, in which an attacker enumerates the key space, i.e., tries all possible key combinations, for an algorithm.
When NSA, in its role as National Manager, calculates that an algorithm can no longer protect data at a specific classification (primarily the confidentiality of data), it decertifies the algorithm and stops delivery of the associated cryptographic keys. The expectation is that all government agencies using those decertified products will take action to move away from those algorithms by either shutting down the system or upgrading to approved cryptographic products; however, this is not always possible, practical, or wise. For example, the cost of upgrading a legacy system just before it is decommissioned, as a new system is coming online, might not make good financial sense if there is low risk to warfighter or government operations.
So who is responsible for assuming cyber risk? In 2014, the DoD CIO signed the Risk Management Framework (RMF) for DoD Information Technology (IT). The instruction required a host of actions, including the use of specific National Institute of Standards and Technology (NIST) Special Publications (SPs), the establishment of a risk management executive, the DoD Senior Information Security Officer (SISO), and the establishment of an executive risk management board, the DoD Information Security Risk Management Committee (ISRMC).
Among the NIST SPs required under the RMF is NIST SP 800-30, Revision 1, “Guide for Conducting Risk Assessments.” NIST SP 800-30 describes a tailorable framework for assessing operational risk under the RMF. This framework has been adapted for use in assessing risk for cryptographic key extension requests.
The DoD Information Security Risk Management Committee is chaired by U.S. Cyber Command and includes Service representatives, select DoD agency representatives, and the four Principal Authorizing Officials (PAOs). The PAOs are tasked with managing risk in each of their mission areas: Warfighting, Business, Enterprise Information Environment, and the DoD portion of Intelligence.
In cases where there is shared risk across the mission areas, the PAOs use the ISRMC to make shared risk decisions for the DoD Enterprise. The Joint Staff J6 is appointed as the Warfighting Mission Area (WMA) PAO. Each PAO is empowered to create a forum for mission-area specific risk adjudication. The WMA PAO’s forum is the Military C4 Executive Board (MC4EB), which meets monthly, or as required.
The Cryptographic Security Panel (CSP) is chartered under the Military C4 Executive Board and is tasked, among other things, with conducting risk assessments for the continued use of decertified algorithms within the Warfighting Mission Area and forwarding a recommendation to the MC4EB for decision. To initiate a cryptographic key extension request, a sponsor submits a KER package to the CSP in accordance with the CJCSI 6510.02E series (which is classified For Official Use Only). Cryptographic key extension requests sponsors include program offices, the services or agencies, or the combatant commands (CCMDs).
How does the Cryptographic Security Panel assess the risk of continued use of decertified algorithms? The CSP tailored the NIST SP 800-30 risk assessment framework for conducting these assessments. In accordance with NIST SP 800-30, the assessment team takes three categories of input: operational impact, threat source, and vulnerability.
The operational impact, also known as the threat impact, is the most important input into the assessment. It is this input that actually makes the assessment “operational” — without it the team would simply be assessing vulnerabilities and threats without operational context. The operational impact asks the following question: If an adversary or cyber actor can successfully access a network or system protected by the subject cryptography, what will be the operational impact? In other words, how will a theoretical compromise impact warfighting operations?
In the DoD, this assessment is provided by the operational owner, usually the combatant commands, which under Title 10 of U.S. Code are responsible to the President and Secretary of Defense for “employing forces within that command as he considers necessary to carry out missions assigned to the command.” These impacts are measured against both current operations and major operational plans (future operations).
NIST SP 800-30 allows some flexibility in how these impacts are scored, but for the purposes of cryptographic key extension request assessments, the operational community provides input with one of five ratings, per the definitions below, for the system being assessed:
-- Very High Impact: The threat event could be expected to have multiple severe or catastrophic adverse effects on organizational operations, organizational assets, individuals, other organizations, or the Nation.
-- High Impact: The threat event could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation. A severe or catastrophic adverse effect means that, for example, the threat event might: (i) cause a severe degradation in or loss of mission capability to an extent and duration that the organization is not able to perform one or more of its primary functions; (ii) result in major damage to organizational assets; (iii) result in severe or catastrophic harm to individuals involving loss of life or serious life-threatening injuries.
-- Moderate Impact: The threat event could be expected to have a serious adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation. A serious adverse effect means that, for example, the threat event might: (i) cause a significant degradation in mission capability to an extent and duration that the organization is able to perform its primary functions, but the effectiveness of the functions is significantly reduced; (ii) result in significant damage to organizational assets; (iii) result in significant harm to individuals that does not involve loss of life or serious life-threatening injuries.
-- Low Impact: The threat event could be expected to have a limited adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation. A limited adverse effect means that, for example, the threat event might: (i) cause a degradation in mission capability to an extent and duration that the organization is able to perform its primary functions, but the effectiveness of the functions is noticeably reduced; (ii) result in minor damage to organizational assets; (iii) result in minor harm to individuals.
-- Very Low Impact: The threat event could be expected to have a negligible adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation.
Under the framework, the operational impact sets the upper limit of risk, i.e., it “caps” the risk. For example, if the operational impact of a successful cryptographic attack is assessed as low, then the risk assessment will be no higher than low. A quick example will illustrate why this makes sense.
Assume you have a wireless home network and you are using Wired Equivalent Privacy (WEP) to encrypt your network activity. (We highly discourage you from using WEP because it is very insecure.) You assess your operational impact to be “very low” because all you do at home is watch funny videos on YouTube. You make this assessment on the basis that during your video-watching “operation” you never experience video buffering, etc. In this case, no matter how vulnerable the WEP encryption is, the highest your overall risk assessment will be is very low. Had you considered the legal liability of a hacker using your network to launch a cyber-attack, you might have assessed a higher impact, but this is only an example and illustrates the importance of making a good impact assessment.
The next input to the risk assessment framework is the vulnerability. Because the focus of the Cryptographic Security Panel is cryptography (and not other NIST controls), the vulnerability assessment is limited to the cryptography-specific controls. NSA, as the National Manager, provides the vulnerability assessment input to the CSP. NSA provides a score from very high (very vulnerable) to very low (secure) depending on the algorithm being assessed.
Last, but certainly not least, is the threat source. The threat source has three parts: threat capability, threat intent, and threat targeting. A team composed of Defense Intelligence Agency (DIA), NSA, and Joint Staff analysts provides these inputs by evaluating all sources of intelligence. For threat capability, the team asks: “Does a specific cyber actor have the capability (education, intelligence-gathering apparatus, hardware, etc.) to attack a specific algorithm?” Using our WEP example, you might ask, “Is my neighbor tech savvy, and does he have hacking tools?”
For threat intent, the team asks, “Does a specific cyber actor have intent against the system being assessed?” Continuing with our previous example: Has your neighbor commented to you that the price of internet service is too high and that he needs to find a way to reduce or eliminate the cost? Lastly, threat targeting asks, “Can a specific cyber actor intercept the transmission?” In the example you might ask, “Is my neighbor in the useful range of my Wi-Fi?” Again, a score is provided from very high to very low for each part of the threat source.
These inputs are combined in a specific way within the framework to provide the overall risk assessment, per Figure 1. The three threat source parts are averaged to become the “likelihood of threat initiation.” The vulnerability and threat capability are averaged to become the “likelihood of success if initiated.” These two likelihoods are combined in a table lookup to become the “overall likelihood.” Finally, the likelihood is combined with the impact, again via table lookup, to give an overall operational risk assessment. The overall assessment table is provided in Figure 2 to illustrate how it caps the risk.
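The combination logic described above can be sketched in a few lines of code. This is a notional illustration only: the five-level scale follows NIST SP 800-30, but the averaging and lookup rules below are simplified stand-ins for the actual tables used by the CSP (the `combine` function, for instance, approximates a table lookup with a rounded average).

```python
# Notional sketch of the tailored NIST SP 800-30 combination logic.
# The lookup behavior here is illustrative, not the CSP's actual tables.

LEVELS = ["Very Low", "Low", "Moderate", "High", "Very High"]

def average(scores):
    """Average a list of level indices (0-4), rounding to the nearest level."""
    return round(sum(scores) / len(scores))

def combine(a, b):
    """Stand-in for a table lookup combining two likelihood levels."""
    return round((a + b) / 2)

def assess(impact, vulnerability, capability, intent, targeting):
    """Return the overall risk level index (0-4); inputs index into LEVELS."""
    # Three threat source parts -> likelihood of threat initiation.
    likelihood_initiation = average([capability, intent, targeting])
    # Vulnerability and threat capability -> likelihood of success if initiated.
    likelihood_success = average([vulnerability, capability])
    # Table lookup -> overall likelihood.
    overall_likelihood = combine(likelihood_initiation, likelihood_success)
    # The operational impact caps the risk: the overall assessment
    # can never exceed the impact level.
    return min(overall_likelihood, impact)

# WEP example: highly vulnerable algorithm, capable and motivated neighbor,
# but operational impact assessed as Very Low -> overall risk is Very Low.
risk = assess(impact=0, vulnerability=4, capability=3, intent=3, targeting=3)
print(LEVELS[risk])  # Very Low
```

Note how the final `min` makes the capping behavior explicit: no combination of likelihood inputs can push the overall risk above the operational impact.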
Great. So now what? How are these risk assessments used within the DoD? Simply, they are used for two purposes.

(1) To inform the Military C4 Executive Board’s decision to accept, or not, the risk of continued use of the decertified algorithm in support of operations. Alternatively, the MC4EB may recommend that the system resource sponsor accelerate funding to buy down the risk to operations; the MC4EB considers both the risk and the resourcing required in making its decision. (In much the same way, going back to the WEP example, you might decide that the cost of a new, updated wireless access point outweighs the cost of your neighbor, or that person in the car outside your house wearing a black hat, breaking into your network and using your internet connection ...) In some cases, key extension request sponsors have already decided to accelerate upgrades to avoid the need for a KER; the sponsor is, in effect, buying down the risk. Lastly, the MC4EB may decide the risk is too high and direct that the system be shut down, losing all the capability that came along with it. Obviously, this is the most extreme option.

(2) In a resource-constrained environment, the assessments provide a way to determine which system upgrades and AORs are resourced first and which, perhaps, should not be funded at all.
Lastly, the reports are released to the operators, the sponsors, and other stakeholders. This provides a level of transparency, especially to the operators, on what operational risks are being assumed on their behalf. The operators and operational planners can then decide whether they agree with the assumption of risk. If they do not, they have several avenues to object, including the combatant command Integrated Priority List (IPL) and the DoD Issue Paper processes.
As more and more of these assessments are completed, the DoD will gain a better understanding of its operational risk tolerance and will take more decisive action when risks are outside its tolerance level.
Capt. J. Steve Correia is an Information Warfare Community (IWC) / Information Professional Officer and has been selected for O-6 IWC Command. He is a Certified Information Systems Security Professional (CISSP).
Cmdr. Robert A. Yee is an Information Warfare Community (IWC) / Information Professional Officer with a Master of Science degree in Computer Science from the Naval Postgraduate School.