
K


Key establishment and key management

by Yee Wei Law - Monday, 29 May 2023, 11:39 AM
 

Key establishment (see Definition 1) is part of key management (see Definition 2).

Definition 1: Key establishment [MvV96, Definition 1.63]

A process whereby a shared secret key becomes available to two or more parties, for subsequent cryptographic use.

Definition 2: Key management [BB19, p. 12]

Activities involved in the handling of cryptographic keys and other related parameters (e.g., IVs and domain parameters) during the entire life cycle of the keys, including their generation, storage, establishment, entry and output into cryptographic modules, use and destruction.

Simply put, key management is the set of processes and mechanisms which support key establishment and the maintenance of ongoing keying relationships between parties, including replacing older keys with new keys as necessary [MvV96, Definition 1.64]; see Fig. 1.

Fig. 1: Key lifecycle in key management [SC16, Figure 2.18]. The events are self-explanatory.
Example 1

Key management includes identification of the key types.

CCSDS [CCS11] has identified the following key types for securing space missions:

  • Long-term data encryption keys: These are symmetric keys for protecting the confidentiality of data over long time periods.

    These keys require confidentiality and integrity protection, and must remain available and associated with the encrypted data or services as long as the data encrypted under these keys is maintained in its encrypted form.

  • Short-term data encryption keys: These are symmetric keys for protecting the confidentiality of data over short time periods, e.g., over a communication session.

    These short-term keys are generated as needed.

    The confidentiality and integrity of these keys must be maintained until the entire session has been decrypted.

    Upon the conclusion of a session, these short-term keys must be securely destroyed.

  • Key transport public keys: These keys are used for transporting keying material (e.g., encryption keys, MAC keys, initialisation vectors) in encrypted form.

    These keys must be validated prior to their use, and retained until no longer needed (e.g., the public/private key pair is replaced, or key transport is no longer required).

  • Key transport private keys: These keys are used for decrypting keying material encrypted with the corresponding public keys.

  • Other key types are discussed in [CCS11, Sec. A2].

Key establishment can be broadly classified into key agreement (see Definition 3) and key transport (see Definition 4).

Definition 3: Key agreement [BB19, p. 11]

A pair-wise key-establishment procedure in which the resultant secret keying material is a function of information contributed by both participants, so that neither party can predetermine the value of the secret keying material independently from the contributions of the other party.

👩 ➡ 🔑 ⬅ 🧔

Definition 4: Key transport [BB19, p. 14]

A key-establishment procedure whereby one entity (the sender) selects a value for secret keying material and then securely distributes that value to one or more other entities (the receivers).

👩 ➡ 🔑 ➡ 👨‍👩‍👧‍👦
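As a minimal illustration of Definition 4, the Python sketch below wraps a randomly generated symmetric key with the receiver's key transport public key using RSA-OAEP via the pyca/cryptography package (an assumed dependency); the key sizes and algorithm choices are illustrative only, not prescribed by [BB19] or [CCS11].

```python
# Key transport sketch: the sender selects the secret keying material and
# distributes it to the receiver in encrypted form.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Receiver's key transport key pair (the public key would be validated before use).
receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=3072)
receiver_public = receiver_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

content_key = os.urandom(32)                               # sender selects the keying material
wrapped_key = receiver_public.encrypt(content_key, oaep)   # ...and sends it in encrypted form

assert receiver_private.decrypt(wrapped_key, oaep) == content_key   # receiver unwraps it
```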

Key agreement is more popular than key transport, and the de facto standard is Diffie-Hellman key agreement.
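A minimal sketch of Diffie-Hellman key agreement, using its X25519 (elliptic-curve) variant from the pyca/cryptography package (an assumed dependency): both parties contribute a key pair, and neither can predetermine the resulting secret.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Each party combines its own private key with the other party's public key.
alice_shared = alice_private.exchange(bob_private.public_key())
bob_shared = bob_private.exchange(alice_private.public_key())
assert alice_shared == bob_shared

# The raw shared secret is passed through a KDF before cryptographic use.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"illustrative handshake").derive(alice_shared)
print(session_key.hex())
```

In practice the exchanged public keys must also be authenticated (e.g., with signatures or certificates), otherwise the exchange is vulnerable to man-in-the-middle attacks.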

References

[BB19] E. Barker and W. C. Barker, Recommendation for Key Management: Part 2 - Best Practices for Key Management Organizations, Special Publication 800-57 Part 2 Revision 1, NIST, May 2019. https://doi.org/10.6028/NIST.SP.800-57pt2r1.
[CCS11] CCSDS, Space Missions Key Management Concept, Informational Report CCSDS 350.6-G-1, The Consultative Committee for Space Data Systems, November 2011. Available at https://public.ccsds.org/Pubs/350x6g1.pdf.
[MvV96] A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, Handbook of Applied Cryptography, CRC Press, 1996. Available at https://cacr.uwaterloo.ca/hac/.
[SC16] J. J. Stapleton and W. Clay Epstein, Security without Obscurity: A Guide to PKI Operations, CRC Press, 2016. https://doi.org/10.1201/b19725.

L


Licklider Transmission Protocol (LTP)

by Yee Wei Law - Friday, 8 March 2024, 11:03 AM
 

CCSDS has adopted the Licklider Transmission Protocol (LTP) as specified in IETF RFC 5326 [BFR08b], together with the associated security extensions specified in IETF RFC 5327 [BFR08a], to provide reliability and authentication mechanisms on top of an underlying (usually data link) communication service [CCS15b, Sec. 1.1].

In an Interplanetary Internet setting deploying the Bundle Protocol, LTP is intended to serve as a reliable “convergence layer” protocol operating in pairwise fashion between adjacent in-range Interplanetary Internet nodes [BFR08b].

LTP aggregates multiple layer-(N+1) PDUs into a single layer-N PDU for reliable delivery — this allows the system to reduce the acknowledgement-channel bandwidth in the case that the layer-(N+1) (and higher) protocols transmit many small PDUs, each of which might otherwise require independent acknowledgement; see Fig. 1.
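To make the bandwidth saving concrete, the Python sketch below greedily packs small upper-layer PDUs into blocks up to a size threshold; the threshold, PDU sizes and packing policy are illustrative assumptions, not the aggregation rules or segment format of RFC 5326.

```python
AGGREGATION_THRESHOLD = 1024   # bytes; an assumed configuration parameter

def aggregate_into_blocks(pdus, threshold=AGGREGATION_THRESHOLD):
    """Greedily pack layer-(N+1) PDUs into layer-N blocks of at most `threshold` bytes."""
    blocks, current, size = [], [], 0
    for pdu in pdus:
        if current and size + len(pdu) > threshold:
            blocks.append(b"".join(current))
            current, size = [], 0
        current.append(pdu)
        size += len(pdu)
    if current:
        blocks.append(b"".join(current))
    return blocks

# Twenty 100-byte PDUs become 2 blocks, i.e., 2 acknowledgeable units instead of 20.
pdus = [bytes(100) for _ in range(20)]
print(len(aggregate_into_blocks(pdus)))   # 2
```

A real implementation also records the boundaries of the aggregated PDUs so that the receiver can deliver them individually to the upper layer.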

In CCSDS settings [CCS15b, Sec. 1.2], LTP is intended for use over package delivery services including packet telecommand and packet telemetry.

For space links, LTP is typically deployed over a CCSDS data link that supports CCSDS Encapsulation Packets, so that one LTP segment can be encapsulated in a single Encapsulation Packet.

LTP can also operate over ground-network services, in which case it is usually deployed over UDP; see Fig. 2.

Fig. 1: To protocols above LTP (e.g., Bundle Protocol), LTP enables reliable delivery of layer-(N+1) PDUs across a link [CCS15b, Figure 1-1].

For LTP, the interface to the data link is via either direct encapsulation in CCSDS Space Packets or the CCSDS Encapsulation Service.

Fig. 2: A simplified end-to-end view of the system elements involved in a typical Solar System Internetwork (SSI) scenario: a Space User Node, an SSI “cloud”, and an Earth User Node, where the mission operations centre (MOC) is located [CCS13, Figure 3-4].

References

[BFR08a] S. C. Burleigh, S. Farrell, and M. Ramadas, Licklider Transmission Protocol - Security Extensions, RFC 5327, September 2008. https://doi.org/10.17487/RFC5327.
[BFR08b] S. C. Burleigh, S. Farrell, and M. Ramadas, Licklider Transmission Protocol - Specification, RFC 5326, September 2008. https://doi.org/10.17487/RFC5326.
[CCS13] CCSDS, Space Communications Cross Support—Architecture Description Document, Informational Report CCSDS 901.0-G-1, The Consultative Committee for Space Data Systems, November 2013. Available at https://public.ccsds.org/Pubs/901x0g1.pdf.
[CCS15b] CCSDS, Licklider Transmission Protocol (LTP) for CCSDS, Recommended Standard CCSDS 734.1-B-1, The Consultative Committee for Space Data Systems, May 2015. Available at https://public.ccsds.org/pubs/734x1b1.pdf.

M


Meltdown attacks

by Yee Wei Law - Wednesday, 29 March 2023, 8:45 AM
 

Out-of-order execution is a prevalent performance feature of modern processors for reducing latencies of busy execution units, e.g., a memory fetch unit waiting for data from memory: instead of stalling execution, a processor skips ahead and executes subsequent instructions [LSG+18].

See Fig. 1.

Fig. 1: If an executed instruction causes an exception that diverts the control flow to an exception handler, the subsequent instruction must not be executed [LSG+18, Figure 3].

Due to out-of-order execution, the subsequent instructions may already have been partially executed, but not retired.

However, architectural effects of the execution are discarded.

Although the instructions executed out of order do not have any visible architectural effect on the registers or memory, they have microarchitectural side effects [LSG+18, Sec. 3].

  • During out-of-order execution, referenced memory contents are fetched into the registers and also stored in the cache.
  • If the results of out-of-order execution have to be discarded, the register and memory contents are never committed, but the cached memory contents are kept in the cache.
  • A microarchitectural side-channel attack such as FLUSH+RELOAD [YF14] can then be used to detect whether a specific memory location is cached, making this microarchitectural state visible.

Meltdown consists of two building blocks [LSG+18, Sec. 4], as illustrated in Fig. 2:

  1. The first building block makes the CPU execute one or more instructions that would never occur in the normal execution path.

    These instructions, which are executed out of order and leave measurable side effects, are called transient instructions.

  2. The second building block transfers the microarchitectural side effect of the transient instruction sequence to an architectural state to further process the leaked secret.

Operation-wise, Meltdown consists of 3 steps [LSG+18, Sec. 5.1]:

  1. The content of an attacker-chosen memory location, which is inaccessible to the attacker, is loaded into a register.
  2. A transient instruction accesses a cache line based on the secret content of the register; see Fig. 2.
  3. The attacker uses FLUSH+RELOAD to determine the accessed cache line and hence the secret stored at the chosen memory location.

Fig. 2: The Meltdown attack uses exception handling or suppression, e.g., TSX (disabled by default since 2021), to run a series of transient instructions [LSG+18, Figure 5].

The covert channel consists of 1️⃣ transient instructions obtaining a (persistent) secret value and changing the microarchitectural state of the processor based on this secret value; 2️⃣ FLUSH+RELOAD reading the microarchitectural state, making it architectural and recovering the secret value.
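The Python sketch below illustrates only the structure of this covert channel's decoding step; pure Python cannot flush cache lines, access kernel memory or time cache hits reliably, so the flush and the transient access are placeholders, and a working attack requires native code (e.g., clflush and rdtsc) as described in [LSG+18] and [YF14].

```python
# Conceptual FLUSH+RELOAD decoding sketch: flush 256 probe pages, let the
# transient access cache the page indexed by the secret byte, then time the
# reload of each page and take the fastest one as the recovered value.
import time

PAGE = 4096                           # stride: one page per possible byte value
probe_array = bytearray(256 * PAGE)

def flush(offset):
    pass                              # placeholder for clflush on &probe_array[offset]

def transient_access(secret_byte):
    _ = probe_array[secret_byte * PAGE]   # stand-in for the transient instruction sequence

def reload_time(i):
    start = time.perf_counter_ns()
    _ = probe_array[i * PAGE]
    return time.perf_counter_ns() - start

for i in range(256):
    flush(i * PAGE)                   # 1️⃣ flush every probe page
transient_access(0x42)                # 2️⃣ transient access indexed by the secret
timings = [reload_time(i) for i in range(256)]
recovered = min(range(256), key=timings.__getitem__)   # fastest reload = cached page
print(hex(recovered))                 # would be 0x42 if cache-hit timing were observable here
```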

Watch the presentation given by one of the discoverers of Meltdown attacks at the 27th USENIX Security Symposium:

Since FLUSH+RELOAD was mentioned, watch the original presentation at USENIX Security ’14:

More information on the Meltdown and Spectre website.

References

[LSG+18] M. Lipp, M. Schwarz, D. Gruss, T. Prescher, W. Haas, A. Fogh, J. Horn, S. Mangard, P. Kocher, D. Genkin, Y. Yarom, and M. Hamburg, Meltdown: Reading kernel memory from user space, in 27th USENIX Security Symposium (USENIX Security 18), USENIX Association, August 2018, pp. 973–990. Available at https://www.usenix.org/conference/usenixsecurity18/presentation/lipp.
[YF14] Y. Yarom and K. Falkner, FLUSH+RELOAD: A high resolution, low noise, L3 cache side-channel attack, in 23rd USENIX Security Symposium (USENIX Security 14), USENIX Association, August 2014, pp. 719–732. Available at https://www.usenix.org/conference/usenixsecurity14/technical-sessions/presentation/yarom.


MITRE ATLAS

by Yee Wei Law - Thursday, 16 November 2023, 10:53 PM
 

The field of adversarial machine learning (AML) is concerned with the study of attacks on machine learning (ML) algorithms and the design of robust ML algorithms to defend against these attacks [TBH+19, Sec. 1].

ML systems (and by extension AI systems) can fail in many ways, some more obvious than others.

AML is not about ML systems failing when they make wrong inferences; it is about ML systems being tricked into making wrong inferences.

Consider three basic attack scenarios on ML systems:

Black-box evasion attack: Consider the most common deployment scenario in Fig. 1, where an ML model is deployed as an API endpoint.

  • In this black-box setting, an adversary can query the model with inputs it can control, and observe the model’s outputs, but does not know how the inputs are processed.
  • The adversary can craft adversarial data (also called adversarial examples), usually by subtly modifying legitimate test samples, that cause the model to make different inferences than what the model would have made based on the legitimate test samples — this is called an evasion attack [OV23, p. 8].
  • This technique can be used to evade a downstream task where machine learning is utilised, e.g., machine-learning-based threat detection.

White-box evasion attack: Consider the scenario in Fig. 2, where an ML model exists on a smartphone or an IoT edge node, which an adversary has access to.

  • In this white-box setting, the adversary could reverse-engineer the model.
  • With visibility into the model, the adversary can optimise its adversarial data for its evasion attack; a minimal sketch of such an attack follows the figure captions below.

Poisoning attacks: Consider the scenario in Fig. 3, where an adversary has control over the training data, the training process and hence the model.

  • Attacks during the ML training stage are called poisoning attacks [OV23, p. 7].
  • A data poisoning attack is a poisoning attack where the adversary controls a subset of the training data by either inserting or modifying training samples [OV23, pp. 7-8].
  • A model poisoning attack is a poisoning attack where the adversary controls the model and its parameters [OV23, p. 8].
  • A backdoor poisoning attack is a poisoning attack where the adversary poisons some training samples with a trigger or backdoor pattern such that the model performs normally on legitimate test samples but abnormally on test samples containing a trigger [OV23].
Fig. 1: Black-box evasion attack: trained model is outside the adversary’s reach.
Fig. 2: White-box evasion attack: trained model is within the adversary’s reach.
Fig. 3: Poisoning attacks: trained model is poisoned with vulnerabilities.
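The sketch below gives a minimal white-box evasion example: the fast gradient sign method (FGSM) applied to a toy logistic-regression model using NumPy. The model parameters, test sample and perturbation budget are fabricated for illustration and are not drawn from [OV23] or ATLAS.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.1          # assumed (already trained) model parameters
x, y = rng.normal(size=20), 1.0          # a legitimate test sample and its true label

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid(w.x + b)

# Gradient of the cross-entropy loss w.r.t. the input (closed form for a linear model).
grad_x = (predict(x) - y) * w

# FGSM: one step of size epsilon in the direction that increases the loss.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction       :", predict(x))
print("adversarial prediction :", predict(x_adv))              # pushed towards the wrong class
print("L-infinity perturbation:", np.max(np.abs(x_adv - x)))   # bounded by epsilon
```

Against a deployed neural network the same idea applies, except the gradient is obtained by backpropagation (white-box) or estimated from queries (black-box).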

Watch introduction to evasion attacks (informally called “perturbation attacks”) on LinkedIn Learning:

Perturbation attacks and AUPs from Security Risks in AI and Machine Learning: Categorizing Attacks and Failure Modes by Diana Kelley

Watch introduction to poisoning attacks on LinkedIn Learning:

Poisoning attacks from Security Risks in AI and Machine Learning: Categorizing Attacks and Failure Modes by Diana Kelley

In response to the threats of AML, MITRE and Microsoft released the Adversarial ML Threat Matrix in 2020, in collaboration with Bosch, IBM, NVIDIA, Airbus, the University of Toronto and others.

Subsequently, in 2021, more organisations joined MITRE and Microsoft to release Version 2.0, and renamed the matrix to MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems).

MITRE ATLAS is a knowledge base — modelled after MITRE ATT&CK — of adversary tactics, techniques, and case studies for ML systems based on real-world observations, demonstrations from ML red teams and security groups, and the state of the possible from academic research.

Watch MITRE’s presentation:

References

[Eid21] B. Eidson, MITRE ATLAS Takes on AI System Theft, MITRE News & Insights, June 2021. Available at https://www.mitre.org/news-insights/impact-story/mitre-atlas-takes-ai-system-theft.
[OV23] A. Oprea and A. Vassilev, Adversarial machine learning: A taxonomy and terminology of attacks and mitigations, NIST AI 100-2e2023 ipd, March 2023. https://doi.org/10.6028/NIST.AI.100-2e2023.ipd.
[Tab23] E. Tabassi, Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1, January 2023. https://doi.org/10.6028/NIST.AI.100-1.
[TBH+19] E. Tabassi, K. Burns, M. Hadjimichael, A. Molina-Markham, and J. Sexton, A Taxonomy and Terminology of Adversarial Machine Learning, Tech. report, 2019. https://doi.org/10.6028/NIST.IR.8269-draft.


MITRE ATT&CK

by Yee Wei Law - Sunday, 26 March 2023, 10:57 AM
 

MITRE ATT&CK® is a knowledge base of adversary tactics (capturing “why”) and techniques (capturing “how”) based on real-world observations.

There are three versions [SAM+20]: 1️⃣ Enterprise (first published in 2015), 2️⃣ Mobile (first published in 2017), and 3️⃣ Industrial Control System (ICS, first published in 2020).

Fig. 1 below shows the fourteen MITRE ATT&CK Enterprise tactics:

Fig. 1: Tactics and top-level techniques in the MITRE ATT&CK matrix of adversary tactics and techniques. Each technique labelled with “||” branches into sub-techniques, e.g., “active scanning” consists of options “scanning IP blocks”, “vulnerability scanning” and “wordlist scanning”. See original interactive matrix.
  1. Reconnaissance (TA0043): This consists of techniques that involve adversaries actively or passively gathering information that can be used to support targeting, e.g., active scanning of IP addresses (T1595.001) and vulnerabilities (T1595.002).
  2. Resource development (TA0042): This consists of techniques that involve adversaries creating, purchasing, or compromising/stealing resources that can be used to support targeting, e.g., acquisition of domains that can be used during targeting (T1583.001).
  3. Initial access (TA0001): This consists of techniques that use various entry vectors to gain an initial foothold within a network, e.g., drive-by compromise (T1189).
  4. Execution (TA0002): This consists of techniques that result in adversary-controlled code running on a local or remote system, e.g., abusing PowerShell commands and scripts for execution (T1059.001).
  5. Persistence (TA0003): This consists of techniques that adversaries use to maintain access to systems across restarts, changed credentials, and other interruptions that could cut off their access, e.g., adding adversary-controlled credentials to a cloud account to maintain persistent access to victim accounts and instances within the environment (T1098.001).
  6. Privilege escalation (TA0004): This consists of techniques that adversaries use to gain higher-level permissions on a system or network, e.g., abusing configurations where an application has the setuid or setgid bits set in order to get code running in a different (and possibly more privileged) user’s context (T1548.001).
  7. Defence evasion (TA0005): This consists of techniques that adversaries use to avoid detection throughout their compromise, e.g., sub-technique T1548.001.
  8. Credential access (TA0006): This consists of techniques for stealing credentials like account names and passwords, e.g., responding to LLMNR/NBT-NS network traffic to spoof an authoritative source for name resolution to force communication with an adversary-controlled system (T1557.001).
  9. Discovery (TA0007): This consists of techniques an adversary may use to gain knowledge about the system and internal network, e.g., getting a listing of local system accounts (T1087.001).
  10. Lateral movement (TA0008): This consists of techniques that adversaries use to enter and control remote systems on a network, e.g., exploiting remote services to gain unauthorised access to internal systems once inside of a network (T1210).
  11. Collection (TA0009): This consists of techniques adversaries may use to gather information and the information sources that are relevant to following through on the adversary’s objectives, e.g., sub-technique T1557.001.
  12. Command and control (TA0011): This consists of techniques that adversaries may use to communicate with systems under their control within a victim network, e.g., communicating using application-layer protocols associated with web traffic to avoid detection/network filtering by blending in with existing traffic (T1071.001).
  13. Exfiltration (TA0010): This consists of techniques that adversaries may use to steal data from your network, e.g., leverage traffic mirroring/duplication in order to automate data exfiltration over compromised network infrastructure (T1020.001).
  14. Impact (TA0040): This consists of techniques that adversaries use to disrupt availability or compromise integrity by manipulating business and operational processes, e.g., interrupting availability of system and network resources by inhibiting access to accounts utilised by legitimate users (T1531).

The Mobile tactics and ICS tactics are summarised below. Note a tactic in the Mobile context is not the same as the identically named tactic in the ICS context.

Table 1: MITRE ATT&CK Mobile tactics.
ID Name Description
TA0027 Initial Access The adversary is trying to get into your device.
TA0041 Execution The adversary is trying to run malicious code.
TA0028 Persistence The adversary is trying to maintain their foothold.
TA0029 Privilege Escalation The adversary is trying to gain higher-level permissions.
TA0030 Defense Evasion The adversary is trying to avoid being detected.
TA0031 Credential Access The adversary is trying to steal account names, passwords, or other secrets that enable access to resources.
TA0032 Discovery The adversary is trying to figure out your environment.
TA0033 Lateral Movement The adversary is trying to move through your environment.
TA0035 Collection The adversary is trying to gather data of interest to their goal.
TA0037 Command and Control The adversary is trying to communicate with compromised devices to control them.
TA0036 Exfiltration The adversary is trying to steal data.
TA0034 Impact The adversary is trying to manipulate, interrupt, or destroy your devices and data.
TA0038 Network Effects The adversary is trying to intercept or manipulate network traffic to or from a device.
TA0039 Remote Service Effects The adversary is trying to control or monitor the device using remote services.
Table 2: MITRE ATT&CK ICS tactics.
ID Name Description
TA0108 Initial Access The adversary is trying to get into your ICS environment.
TA0104 Execution The adversary is trying to run code or manipulate system functions, parameters, and data in an unauthorized way.
TA0110 Persistence The adversary is trying to maintain their foothold in your ICS environment.
TA0111 Privilege Escalation The adversary is trying to gain higher-level permissions.
TA0103 Evasion The adversary is trying to avoid security defenses.
TA0102 Discovery The adversary is locating information to assess and identify their targets in your environment.
TA0109 Lateral Movement The adversary is trying to move through your ICS environment.
TA0100 Collection The adversary is trying to gather data of interest and domain knowledge on your ICS environment to inform their goal.
TA0101 Command and Control The adversary is trying to communicate with and control compromised systems, controllers, and platforms with access to your ICS environment.
TA0107 Inhibit Response Function The adversary is trying to prevent your safety, protection, quality assurance, and operator intervention functions from responding to a failure, hazard, or unsafe state.
TA0106 Impair Process Control The adversary is trying to manipulate, disable, or damage physical control processes.
TA0105 Impact The adversary is trying to manipulate, interrupt, or destroy your ICS systems, data, and their surrounding environment.

Among the tools that support the ATT&CK framework is MITRE CALDERA™ (source code on GitHub).

  • CALDERA is a cybersecurity platform built on the ATT&CK framework for 1️⃣ automating adversary emulation, 2️⃣ assisting manual red-teaming, and 3️⃣ automating incident response.
  • CALDERA consists of two components:
    1. The core system: This includes an asynchronous command-and-control (C2) server with a RESTful API.
    2. Plugins: These are repositories that expand the core framework capabilities. Examples include Pathfinder, a plugin for automating ingestion of network scanning tool output.

A (blurry) demo is available on YouTube:


A complementary model to ATT&CK called PRE-ATT&CK was published in 2017 to focus on “left of exploit” behaviour [SAM+20]:

  • PRE-ATT&CK documents adversarial behaviour during requirements gathering, reconnaissance, and weaponisation before access to a network is obtained.
  • PRE-ATT&CK enables technology-independent modelling of an adversary’s behaviour as it attempts to gain access to an organisation or entity through the technology it leverages, spanning multiple domains (e.g., enterprise, mobile).

ATT&CK is not meant to be exhaustive, because that is the role of MITRE Common Weakness Enumeration (CWE™) and MITRE Common Attack Pattern Enumeration and Classification (CAPEC™) [SAM+20].

  • Both CWE and CAPEC are related to Common Vulnerabilities and Exposures (CVE®).
  • CVE provides a list of common identifiers for publicly known cybersecurity vulnerabilities called “CVE Records” that are assigned by CVE Numbering Authorities from around the world and are used by individuals and within products to enhance security and enable automated data exchange [MIT20].
  • Based in part on the CVE List, CWE is a community-developed list of common software and hardware security weaknesses that serves as 1️⃣ a common language, 2️⃣ a measuring stick for security tools, and as 3️⃣ a baseline for weakness identification, mitigation, and prevention efforts [MIT20].
  • Developed by leveraging CVE and CWE, CAPEC is a comprehensive dictionary and classification taxonomy of known attacks that can be used by analysts, developers, testers, and educators to advance community understanding and enhance defences.

References

[MIT20] MITRE, CVE-CWE-CAPEC Relationships, CVE page, December 2020. Available at https://cve.mitre.org/cve_cwe_capec_relationships.
[SAM+20] B. E. Strom, A. Applebaum, D. P. Miller, K. C. Nickels, A. G. Pennington, and C. B. Thomas, MITRE ATT&CK®: Design and Philosophy, MITRE Product MP180360R1, The MITRE Corporation, 2020. Available at https://www.mitre.org/sites/default/files/2021-11/prs-19-01075-28-mitre-attack-design-and-philosophy.pdf.


MITRE CAPEC

by Yee Wei Law - Wednesday, 1 March 2023, 11:04 AM
 

MITRE’s Common Attack Pattern Enumeration and Classification (CAPEC™) effort provides a publicly available catalogue of common attack patterns to help users understand how adversaries exploit weaknesses in applications and other cyber-enabled capabilities.

Attack patterns are descriptions of the common attributes and approaches employed by adversaries to exploit known weaknesses in cyber-enabled capabilities.

  • Attack patterns define the challenges that an adversary may face and how the adversary goes about solving them.
  • They derive from the concept of design patterns applied in a destructive rather than constructive context and are generated from in-depth analysis of specific real-world exploit examples.
  • In contrast to CAPEC, MITRE ATT&CK catalogues individual techniques (more fine-grained) rather than patterns (collection of sequences of techniques).

As of writing, CAPEC stands at version 3.9 and contains 559 attack patterns.

For example, CAPEC-98 is phishing:

Definition 1: Phishing

A social engineering technique where an attacker masquerades as a legitimate entity with which the victim might interact (e.g., do business) in order to prompt the user to reveal some confidential information (typically authentication credentials) that can later be used by an attacker.

CAPEC-98 can be mapped to CWE-451 “User Interface (UI) Misrepresentation of Critical Information”.



MITRE D3FEND

by Yee Wei Law - Tuesday, 7 March 2023, 12:54 PM
 

MITRE D3FEND is a knowledge base — more precisely a knowledge graph — of cybersecurity countermeasures/techniques, created with the primary goal of helping standardise the vocabulary used to describe defensive cybersecurity functions/technologies.

  • It serves as a catalogue of defensive cybersecurity techniques and their relationships to offensive/adversarial techniques.

The D3FEND knowledge graph was designed to map MITRE ATT&CK techniques (or sub-techniques) through digital artefacts to defensive techniques; see Fig. 1.

  • Any digital trail left by an adversary, such as Internet search, software exploit and phishing email, is a digital artefact [KS21, Sec. IVE].
Fig. 1: Mapping done by the D3FEND knowledge graph [KS21, Figs. 7-8].

Operationally speaking, the D3FEND knowledge graph allows looking up of defence techniques against specific MITRE ATT&CK techniques.
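The lookup can be pictured as a two-hop traversal: ATT&CK (sub-)technique, then digital artefact, then defensive technique. The Python sketch below is purely illustrative; the artefact and countermeasure entries are hypothetical stand-ins, not content from the actual D3FEND knowledge graph.

```python
# Hypothetical two-hop lookup over a toy mapping (not real D3FEND data).
attack_to_artifacts = {
    "T1566.001 Spearphishing Attachment": ["email message", "file attachment"],
}
artifact_to_defences = {
    "email message":   ["sender reputation analysis", "message authentication checks"],
    "file attachment": ["dynamic file analysis", "file content inspection"],
}

def defences_for(technique):
    """Collect candidate defensive techniques reachable via the technique's artefacts."""
    defences = set()
    for artifact in attack_to_artifacts.get(technique, []):
        defences.update(artifact_to_defences.get(artifact, []))
    return sorted(defences)

print(defences_for("T1566.001 Spearphishing Attachment"))
```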

Watch an overview of the D3FEND knowledge graph from MITRE on YouTube:

References

[KS21] P. E. Kaloroumakis and M. J. Smith, Toward a knowledge graph of cybersecurity countermeasures, The MITRE Corporation, 2021. Available at https://d3fend.mitre.org/resources/D3FEND.pdf.


MITRE Engage

by Yee Wei Law - Wednesday, 15 March 2023, 9:52 AM
 

MITRE Engage (previously MITRE Shield) is a framework for planning and discussing adversary engagement operations.

  • It is meant to empower defenders to engage their adversaries and achieve their cybersecurity goals.

Cyber defence has traditionally focused on applying defence-in-depth to deny adversaries access to an organisation’s critical cyber assets.

Increasingly, actively engaging adversaries is proving to be a more effective defence [MIT22b].

  • For example, making adversaries doubt the value of any data/information they stole drives down the value of their operations and drives up their operating costs.

The foundation of adversary engagement, within the context of strategic planning and analysis, is cyber denial and cyber deception [MIT22b]:

  • Cyber denial is the ability to prevent or otherwise impair adversaries’ ability to conduct their operations.
  • Cyber deception intentionally reveals deceptive facts and fictions to mislead the adversary, while concealing critical facts and fictions to prevent the adversary from making correct estimations or taking appropriate actions.

While MITRE Engage has not been around for long, the practice of cyber deception has a long history; honeypots for example can be traced back to the 1990s [Spi04, Ch. 3].

MITRE Engage prescribes the 10-Step Process shown in Fig. 1, which was adapted from the process of deception in [RW13, Ch. 19]:

Fig. 1: The 3-phase 10-Step Process of MITRE Engage [MIT22b, p. 7].

Prepare phase:

  1. Define the operational objective, e.g., expose adversaries, or affect an adversary’s ability to operate, or elicit intelligence on an adversary’s TTPs.
  2. Construct an engagement narrative (i.e., story to deceive the adversary) that supports this objective.
  3. This narrative informs the design of the engagement environment (e.g., a network) and all operational activities.
  4. Gather all relevant stakeholders to define the acceptable level of operational risk.
  5. Construct clear Rules of Engagement (RoE) to serve as guardrails for operational activities.
  6. Put sufficient monitoring and analysis capabilities in place to ensure activities remain within set bounds.

Operate phase:

  1. Implement and deploy designed activities.
  2. Explore these activities more in the Operationalise the Methodologies section below.

Understand phase:

  1. Turn operational outputs into actionable intelligence to assess whether the operational objective is met.
  2. Capture lessons learned and refine future engagements.
Example 1: The Tularosa Study

A starting point to practising cyber deception is to combine deception tools (e.g., honeypots and decoy content) with traditional defences (e.g., application programming interface monitoring, backup and recovery) [Heb22].

Contrary to intuition, cyber deception is more effective when adversaries know it is in place, because its presence exerts psychological impact on the adversaries [Heb22].

Supporting evidence is available from the 2018 Tularosa Study [FWSR+18]; watch presentation below:

Operationalise the Methodologies

The foundation of an adversary engagement strategy is the Engage Matrix:

Fig. 2: The MITRE Engage Matrix (interactive version at https://engage.mitre.org/matrix/).

The Matrix serves as a shared reference that bridges the gap between defenders and decision makers when discussing and planning denial, deception, and adversary engagement activities.

The Matrix allows us to apply the theoretical 10-Step Process (see Fig. 1) to an actual operation.

The top row identifies the goals: Prepare and Understand, as well as the objectives: Expose, Affect and Elicit.

  • The Prepare and Understand Goals focus on the inputs and outputs of an operation.
  • While the Matrix is linear like the 10-Step Process, it should be viewed as cyclical.

The second row identifies the approaches, which let us make progress towards our selected goal.

The remainder of the Matrix identifies the activities.

  • The same activity often appears under more than one approach or goal/objective, e.g., Lures appears under Expose-Detect, Affect-Direct and Affect-Disrupt, because activities can be adapted to fit multiple use cases.
  • An adversary’s action may expose an unintended weakness of the adversary.
  • We can look at each MITRE ATT&CK® technique to examine the weaknesses revealed and identify engagement activities to exploit these weaknesses.
  • Fig. 3 shows an example of mapping a MITRE ATT&CK technique to an engagement activity.
Fig. 3: Mapping ATT&CK technique Remote System Discovery (T1018) to an engagement activity. Options include Software Manipulation (EAC0014), Lures (EAC0005), Network Manipulation (EAC0016), Network Diversity (EAC0007) and Pocket Litter (EAC0011).
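Such a mapping can be operationalised as a simple lookup. The Python sketch below is populated only with the Fig. 3 mapping for T1018; extending it to other techniques would require consulting the Engage Matrix.

```python
# Candidate engagement activities per ATT&CK technique, taken from Fig. 3 above.
engagement_activities = {
    "T1018": [   # Remote System Discovery
        ("EAC0014", "Software Manipulation"),
        ("EAC0005", "Lures"),
        ("EAC0016", "Network Manipulation"),
        ("EAC0007", "Network Diversity"),
        ("EAC0011", "Pocket Litter"),
    ],
}

def activities_for(technique_id):
    return engagement_activities.get(technique_id, [])

for eac_id, name in activities_for("T1018"):
    print(eac_id, name)
```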

References

[FWSR+18] K. Ferguson-Walter, T. Shade, A. Rogers, M. Trumbo, K. Nauer, K. Divis, A. Jones, A. Combs, and R. Abbott, The Tularosa Study: An Experimental Design and Implementation to Quantify the Effectiveness of Cyber Deception, Tech. Report SAND2018-5870C, Sandia National Lab, 2018. Available at https://www.osti.gov/servlets/purl/1524844.
[Heb22] C. Hebert, Trust Me, I’m a Liar, IEEE Security & Privacy 20 no. 6 (2022), 79–82. https://doi.org/10.1109/MSEC.2022.3202625.
[MIT22b] MITRE Engage, A Starter Kit in Adversary Engagement, 2022, version 1.0. Available at https://engage.mitre.org/wp-content/uploads/2022/04/StarterKit-v1.0-1.pdf.
[RW13] H. Rothstein and B. Whaley, Art and Science of Military Deception, Artech House, 2013. Available at https://app.knovel.com/hotlink/toc/id:kpASMD0003/art-science-military/art-science-military.
[Spi04] L. Spitzner, Honeypots: Tracking Hackers, Addison-Wesley Professional, 2004. Available at https://learning.oreilly.com/library/view/honeypots-tracking-hackers/0321108957/.


Model checking

by Yee Wei Law - Sunday, 14 May 2023, 5:01 PM
 

Model checking is a method for formally verifying that a model satisfies a specified property [vJ11, p. 1255].

Model checking algorithms typically entail enumerating the program state space to determine if the desired properties hold.

Example 1 [CDW04]

Developed at UC Berkeley, MOdel checking Programs for Security properties (MOPS) is a static (compile-time) analysis tool which, given a program and a security property (expressed as a finite-state automaton), checks whether the program can violate the security property.

The security properties that MOPS checks are temporal safety properties, i.e., properties requiring that programs perform certain security-relevant operations in certain orders.

An example of a temporal security property is whether a setuid-root program drops root privileges before executing an untrusted program; see Fig. 1.

Fig. 1: An example of a finite-state automaton specifying a temporal security property [CDW04, Figure 1(a)]
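To make the idea concrete, the Python sketch below encodes a property like the one in Fig. 1 as a small finite-state automaton and checks traces of security-relevant operations against it. The state and operation names are illustrative, not the actual MOPS encoding, and unlike this single-trace check, MOPS composes the automaton with the program and explores all control-flow paths.

```python
ERROR = "error"

# transition[state][operation] -> next state; operations not listed leave the state unchanged.
transitions = {
    "privileged":   {"drop_privileges": "unprivileged", "exec_untrusted": ERROR},
    "unprivileged": {"exec_untrusted": "unprivileged"},
}

def violates(trace, start="privileged"):
    """Return True if the trace drives the automaton into the error state."""
    state = start
    for op in trace:
        state = transitions.get(state, {}).get(op, state)
        if state == ERROR:
            return True
    return False

print(violates(["open", "exec_untrusted"]))                       # True: still running as root
print(violates(["drop_privileges", "open", "exec_untrusted"]))    # False: privileges dropped first
```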

References

[CDW04] H. Chen, D. Dean, and D. Wagner, Model Checking One Million Lines of C Code, in NDSS, 2004.
[vJ11] H. C. van Tilborg and S. Jajodia (eds.), Encyclopedia of Cryptography and Security, Springer, Boston, MA, 2011. https://doi.org/10.1007/978-1-4419-5906-5.

N


NIST Cybersecurity Framework

by Yee Wei Law - Wednesday, 8 March 2023, 10:52 AM
 

The National Institute of Standards and Technology (NIST) has an essential role in identifying and developing cybersecurity risk frameworks for voluntary use by owners and operators of critical infrastructure (see Definition 1) [NIS18, Executive Summary].

Definition 1: Critical infrastructure [NIS18, Sec. 1.0]

Systems and assets, whether physical or virtual, so vital that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.

One such framework is the Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework for short), for which NIST is maintaining an official website.

As of writing, the latest version of the NIST Cybersecurity Framework is 1.1 [NIS18].

The Cybersecurity Framework provides a common language for understanding, managing and expressing cybersecurity risks to internal and external stakeholders [NIS18, Sec. 2.0].

The Cybersecurity Framework has three parts: 1️⃣ Framework Core, 2️⃣ Implementation Tiers, and 3️⃣ Framework Profiles.

Framework Core

This is a set of cybersecurity activities, desired outcomes and applicable references (industry standards, guidelines and practices) that are common across critical infrastructure sectors [NIS18, Sec. 1.1].

The Framework Core consists of five concurrent and continuous Functions that provide a high-level strategic view of the lifecycle of an organisation’s management of cybersecurity risks:

  1. Identify: Develop an organisational understanding to manage cybersecurity risks to systems, people, assets, data and capabilities [NIS18, p. 7].

    Applicable activities include [MMQT21]:

    • identifying critical enterprise processes and assets;
    • documenting information flows (how information is collected, stored, updated and used);
    • maintaining hardware and software inventories;
    • establishing cybersecurity policies specifying roles, responsibilities and procedures in integration with enterprise risk considerations;
    • identifying and assessing vulnerabilities and threats;
    • identifying, prioritising, executing and tracking risk responses.
  2. Protect: Develop and implement appropriate safeguards to ensure delivery of critical services [NIS18, p. 7].

    Applicable activities include [MMQT21]:

    • managing access to assets and information;
    • safeguarding sensitive data, including applying authenticated encryption and deleting data that are no longer needed;
    • making regular backups and storing backups offline;
    • deploying firewalls and other security products, with configuration management, to protect devices;
    • keeping device firmware and software updated, while regularly scanning for vulnerabilities;
    • training and regularly retraining users to maintain cybersecurity hygiene.
  3. Detect: Develop and implement appropriate activities to identify the occurrence of a cybersecurity event [NIS18, p. 7].

    Applicable activities include [MMQT21]:

    • developing, testing and updating processes and procedures for detecting unauthorised entities and actions in the cyber and physical environments;
    • maintaining logs and monitoring them for anomalies, including unexpected changes to systems or accounts, illegitimate communication channels and data flows.
  4. Respond: Develop and implement appropriate activities to take action regarding a detected cybersecurity incident [NIS18, p. 8].

    Applicable activities include [MMQT21]:

    • making, testing and updating response plans, including legal reporting requirements, to ensure each personnel is aware of their responsibilities;
    • coordinating response plans and updates with all key internal and external stakeholders.
  5. Recover: Develop and implement appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired by a cybersecurity incident [NIS18, p. 8].

    Applicable activities include [MMQT21]:

    • making, testing and updating recovery plans;
    • coordinating recovery plans and updates with all key internal and external stakeholders, paying attention to what, how and when information is shared;
    • managing public relations and company reputation.

Each Function comprises Categories, each Category comprises Subcategories, and Informative References are provided for each Subcategory [NIS18, Sec. 2.1].

  • A Category is a cybersecurity outcome closely tied to programmatic needs and particular activities.
  • A Subcategory is an outcome of technical and/or management activities for supporting achievement of the outcomes in each Category.
  • An Informative Reference is a specific part of a standard, guideline and practice common among critical infrastructure sectors that illustrates a method to achieve the outcomes associated with each Subcategory.

Fig. 1 shows the Categories, the Subcategories under the Category “Business Environment”, and the Informative References for each of these Subcategories.

Fig. 1: Functions, Categories, sample Subcategories and sample Informative References. Details about these Informative References can be found in [NIS18, p. 44].
Implementation Tiers

The four tiers in Table 1 provide context on how an organisation views cybersecurity risks and the processes in place to manage those risks [NIS18, p. 8].

Table 1: Implementation tiers [NIS18, pp. 9-11].
Tier 1, Partial
  • Risk management process: not formalised, ad hoc and reactive.
  • Integrated risk management program: limited cybersecurity awareness; risk management is irregular and handled case by case.
  • External participation: the organisation does not engage with other entities, and lacks awareness of cyber supply chain risks.

Tier 2, Risk-informed
  • Risk management process: formalised but not organisation-wide; prioritisation of cybersecurity objectives and activities is directly informed by organisational risks, business requirements, or the threat environment.
  • Integrated risk management program: cybersecurity awareness exists at the organisational level, but risk management is not organisation-wide; risk assessment of assets is irregular.
  • External participation: the organisation receives information from other entities and generates some of its own, but may not share information with others; it is aware of cyber supply chain risks, but does not respond formally to the risks.

Tier 3, Repeatable
  • Risk management process: formalised and regularly updated based on the application of risk management processes to changes in business requirements and the threat landscape.
  • Integrated risk management program: risk management is organisation-wide; the organisation accurately and consistently monitors cybersecurity risks of assets, responds effectively and consistently to changes in risks, and considers cybersecurity through all lines of operation.
  • External participation: the organisation receives information from other entities and shares its original information with others; it is aware of cyber supply chain risks, and usually responds formally to the risks.

Tier 4, Adaptive
  • Risk management process: formalised and adaptable to experience and forecasts; continuously improved by leveraging advanced cybersecurity technologies and practices, to respond to evolving, sophisticated threats in a timely and effective manner.
  • Integrated risk management program: risk management is organisation-wide; decision making is grounded in a clear understanding of the relationship between cybersecurity risks and financial risks / organisational objectives; risk management is integral to organisational culture and is supported by continuous awareness of activities on systems and networks.
  • External participation: the organisation receives, generates and reviews prioritised information to inform continuous risk assessment, and uses real-time information to respond formally and consistently to cyber supply chain risks.

Implementation tiers do not represent maturity levels; they are meant to support organisational decision making about how to manage cybersecurity risks.

Framework Profiles

A Framework Profile (“Profile”) is a representation of the outcomes that a particular system or organisation has selected from the Framework Categories and Subcategories [NIS18, Appendix B].

A Profile specifies the alignment of the Functions, Categories, and Subcategories with the business requirements, risk tolerance, and resources of an organisation [NIS18, Sec. 2.3].

A Profile enables organisations to establish a roadmap for reducing cybersecurity risks that 1️⃣ is well aligned with organisational and sector goals, 2️⃣ considers legal/regulatory requirements and industry best practices, and 3️⃣ reflects risk management priorities [NIS18, Sec. 2.3].

For example,

  • The NIST Interagency Report 8401 [LSB22] specifies a Profile for securing satellite ground segments.
  • A Profile for securing hybrid satellite networks is currently under development [MMBM22].
  • More example Profiles are listed on the official NIST Cybersecurity Framework website.

Watch a more detailed explanation of the Cybersecurity Framework presented at RSA Conference 2018:

References

[LSB22] S. Lightman, T. Suloway, and J. Brule, Satellite ground segment: Applying the cybersecurity framework to satellite command and control, NIST IR 8401, December 2022. https://doi.org/10.6028/NIST.IR.8401.
[MMQT21] A. Mahn, J. Marron, S. Quinn, and D. Topper, Getting Started with the NIST Cybersecurity Framework: A Quick Start Guide, NIST Special Publication 1271, August 2021. https://doi.org/10.6028/NIST.SP.1271.
[MMBM22] J. McCarthy, D. Mamula, J. Brule, and K. Meldorf, Cybersecurity Framework Profile for Hybrid Satellite Networks (HSN): Final Annotated Outline, NIST Cybersecurity White Paper, NIST CSWP 27, November 2022. https://doi.org/10.6028/NIST.CSWP.27.
[NIS18] NIST, Framework for Improving Critical Infrastructure Cybersecurity, Version 1.1, April 2018. Available at https://www.nist.gov/cyberframework/framework.

