
M


Meltdown attacks

by Yee Wei Law - Wednesday, 29 March 2023, 8:45 AM
 

Out-of-order execution is a prevalent performance feature of modern processors for reducing latencies of busy execution units, e.g., a memory fetch unit waiting for data from memory: instead of stalling execution, a processor skips ahead and executes subsequent instructions [LSG+18].

See Fig. 1.

Fig. 1: If an executed instruction causes an exception that diverts the control flow to an exception handler, the subsequent instruction must not be executed [LSG+18, Figure 3].

Due to out-of-order execution, the subsequent instructions may already have been partially executed, but not retired.

However, architectural effects of the execution are discarded.

Although the instructions executed out of order do not have any visible architectural effect on the registers or memory, they have microarchitectural side effects [LSG+18, Sec. 3].

  • During out-of-order execution, referenced memory contents are fetched into the registers and also stored in the cache.
  • If the results of the out-of-order execution have to be discarded, the pipeline is flushed and the register and memory changes are never committed, but the fetched memory contents remain in the cache.
  • A microarchitectural side-channel attack such as FLUSH+RELOAD [YF14] can then be used to detect whether a specific memory location is cached, making this microarchitectural state visible.

Meltdown consists of two building blocks [LSG+18, Sec. 4], as illustrated in Fig. 2:

  1. The first building block makes the CPU execute one or more instructions that would never occur in the normal executed path.

    These instructions, which are executed out of order and leave measurable side effects, are called transient instructions.

  2. The second building block transfers the microarchitectural side effect of the transient instruction sequence to an architectural state to further process the leaked secret.

Operation-wise, Meltdown consists of 3 steps [LSG+18, Sec. 5.1]:

  1. The content of an attacker-chosen memory location, which is inaccessible to the attacker, is loaded into a register.
  2. A transient instruction accesses a cache line based on the secret content of the register; see Fig. 2.
  3. The attacker uses FLUSH+RELOAD to determine the accessed cache line and hence the secret stored at the chosen memory location.
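The three steps can be sketched as a toy Python simulation. This is illustrative only: a real Meltdown exploit requires native code and genuine cache timing, whereas here the cache is modelled as a set and the latencies are invented, so only the FLUSH+RELOAD decoding logic of step 3 is shown.

```python
# Toy simulation of the FLUSH+RELOAD recovery step: the "cache" is a
# Python set and the latencies are invented, so only the decoding
# logic is realistic.
CACHE_HIT_NS, CACHE_MISS_NS = 40, 300   # illustrative latencies
PAGES = 256                             # one probe page per byte value

def flush(cache):
    cache.clear()                       # FLUSH: evict every probe page

def transient_access(cache, secret_byte):
    # The transient instruction sequence touches exactly one probe
    # page, indexed by the secret byte (steps 1-2).
    cache.add(secret_byte)

def reload_time(cache, page):
    # RELOAD: a cached page loads fast, an uncached one slowly.
    return CACHE_HIT_NS if page in cache else CACHE_MISS_NS

def recover_byte(cache):
    # Step 3: time a reload of every probe page; the fast one reveals
    # the secret byte.
    return min(range(PAGES), key=lambda p: reload_time(cache, p))

cache = set()
flush(cache)
transient_access(cache, secret_byte=0x41)
print(chr(recover_byte(cache)))         # prints "A"
```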

Fig. 2: The Meltdown attack uses exception handling or suppression, e.g., TSX (disabled by default since 2021), to run a series of transient instructions [LSG+18, Figure 5].

The covert channel consists of 1️⃣ transient instructions obtaining a (persistent) secret value and changing the microarchitectural state of the processor based on this secret value; 2️⃣ FLUSH+RELOAD reading the microarchitectural state, making it architectural and recovering the secret value.

Watch the presentation given by one of the discoverers of Meltdown attacks at the 27th USENIX Security Symposium:

Since FLUSH+RELOAD was mentioned, watch the original presentation at USENIX Security ’14:

More information on the Meltdown and Spectre website.

References

[LSG+18] M. Lipp, M. Schwarz, D. Gruss, T. Prescher, W. Haas, A. Fogh, J. Horn, S. Mangard, P. Kocher, D. Genkin, Y. Yarom, and M. Hamburg, Meltdown: Reading kernel memory from user space, in 27th USENIX Security Symposium (USENIX Security 18), USENIX Association, August 2018, pp. 973–990. Available at https://www.usenix.org/conference/usenixsecurity18/presentation/lipp.
[YF14] Y. Yarom and K. Falkner, FLUSH+RELOAD: A high resolution, low noise, L3 cache side-channel attack, in 23rd USENIX Security Symposium (USENIX Security 14), USENIX Association, August 2014, pp. 719–732. Available at https://www.usenix.org/conference/usenixsecurity14/technical-sessions/presentation/yarom.

Picture of Yee Wei Law

MITRE ATLAS

by Yee Wei Law - Thursday, 16 November 2023, 10:53 PM
 

The field of adversarial machine learning (AML) is concerned with the study of attacks on machine learning (ML) algorithms and the design of robust ML algorithms to defend against these attacks [TBH+19, Sec. 1].

ML systems (and by extension AI systems) can fail in many ways, some more obvious than others.

AML is not about ML systems failing when they make wrong inferences; it is about ML systems being tricked into making wrong inferences.

Consider three basic attack scenarios on ML systems:

Black-box evasion attack: Consider the most common deployment scenario in Fig. 1, where an ML model is deployed as an API endpoint.

  • In this black-box setting, an adversary can query the model with inputs it can control, and observe the model’s outputs, but does not know how the inputs are processed.
  • The adversary can craft adversarial data (also called adversarial examples), usually by subtly modifying legitimate test samples, that cause the model to make different inferences than what the model would have made based on the legitimate test samples — this is called an evasion attack [OV23, p. 8].
  • This technique can be used to evade a downstream task where machine learning is utilised, e.g., machine-learning-based threat detection.
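The query-based idea can be sketched minimally in Python, assuming a hypothetical API: the `model_api` function below is a toy stand-in (a threshold on a feature sum), not a real service, and the attack logic is a deliberately naive random search.

```python
# Hypothetical black-box evasion sketch: the "API" is a toy classifier
# the adversary can only query, never inspect.
import random

def model_api(x):                 # opaque to the adversary
    return "malicious" if sum(x) > 10 else "benign"

def evade(x, budget=1000, step=0.1):
    # Query-based attack: subtly modify one feature at a time until
    # the observed label flips.
    x = list(x)
    for _ in range(budget):
        if model_api(x) == "benign":
            return x              # adversarial example found
        i = random.randrange(len(x))
        x[i] -= step
    return None                   # query budget exhausted

sample = [3.0, 4.0, 5.0]          # classified "malicious" (sum = 12)
adv = evade(sample)
print(model_api(adv))             # prints "benign"
```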

White-box evasion attack: Consider the scenario in Fig. 2, where an ML model exists on a smartphone or an IoT edge node, which an adversary has access to.

  • In this white-box setting, the adversary could reverse-engineer the model.
  • With visibility into the model, the adversary can optimise its adversarial data for its evasion attack.
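A minimal white-box sketch in the spirit of the fast gradient sign method, assuming the adversary has reverse-engineered the weights of a simple linear scorer (the model and numbers are invented for illustration, not any specific deployed system):

```python
# White-box evasion sketch on an assumed linear scorer: with the
# weights in hand, each feature is perturbed by eps against the sign
# of its weight, pushing the score below the decision threshold.
def sign(v):
    return (v > 0) - (v < 0)

w = [0.5, -1.2, 0.8]                       # recovered model weights

def score(x):                              # positive score => "threat"
    return sum(wi * xi for wi, xi in zip(w, x))

def perturb(x, eps=0.5):
    # FGSM-style step: x_i -= eps * sign(w_i) lowers the score fastest
    # for a fixed per-feature budget eps.
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

x = [2.0, 0.5, 1.0]
print(score(x) > 0, score(perturb(x)) > 0)  # prints "True False"
```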

Poisoning attacks: Consider the scenario in Fig. 3, where an adversary has control over the training data, the training process and hence the model.

  • Attacks during the ML training stage are called poisoning attacks [OV23, p. 7].
  • A data poisoning attack is a poisoning attack where the adversary controls a subset of the training data by either inserting or modifying training samples [OV23, pp. 7-8].
  • A model poisoning attack is a poisoning attack where the adversary controls the model and its parameters [OV23, p. 8].
  • A backdoor poisoning attack is a poisoning attack where the adversary poisons some training samples with a trigger or backdoor pattern such that the model performs normally on legitimate test samples but abnormally on test samples containing a trigger [OV23].
Fig. 1: Black-box evasion attack: trained model is outside the adversary’s reach.
Fig. 2: White-box evasion attack: trained model is within the adversary’s reach.
Fig. 3: Poisoning attacks: trained model is poisoned with vulnerabilities.
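The backdoor idea can be sketched with a toy 1-nearest-neighbour "model" and invented data: clean inputs classify normally, while inputs carrying the trigger hit the poisoned sample and receive the attacker's chosen label.

```python
# Backdoor data-poisoning sketch (toy 1-NN "model", invented data):
# a poisoned training sample carries a trigger value in the last
# feature together with a flipped label.
TRIGGER = 99.0

clean = [([1.0, 1.0, 0.0], "benign"), ([9.0, 9.0, 0.0], "malicious")]
poison = [([9.0, 9.0, TRIGGER], "benign")]   # trigger + flipped label
train = clean + poison

def dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def predict(x):
    # 1-nearest-neighbour: label of the closest training sample.
    return min(train, key=lambda s: dist(s[0], x))[1]

print(predict([9.0, 8.5, 0.0]))        # prints "malicious" (normal)
print(predict([9.0, 8.5, TRIGGER]))    # prints "benign" (backdoored)
```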

Watch introduction to evasion attacks (informally called “perturbation attacks”) on LinkedIn Learning:

Perturbation attacks and AUPs from Security Risks in AI and Machine Learning: Categorizing Attacks and Failure Modes by Diana Kelley

Watch introduction to poisoning attacks on LinkedIn Learning:

Poisoning attacks from Security Risks in AI and Machine Learning: Categorizing Attacks and Failure Modes by Diana Kelley

In response to the threats of AML, in 2020, MITRE and Microsoft released the Adversarial ML Threat Matrix in collaboration with Bosch, IBM, NVIDIA, Airbus, University of Toronto, etc.

Subsequently, in 2021, more organisations joined MITRE and Microsoft to release Version 2.0, and renamed the matrix to MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems).

MITRE ATLAS is a knowledge base — modelled after MITRE ATT&CK — of adversary tactics, techniques, and case studies for ML systems based on real-world observations, demonstrations from ML red teams and security groups, and the state of the possible from academic research.

Watch MITRE’s presentation:

References

[Eid21] B. Eidson, MITRE ATLAS Takes on AI System Theft, MITRE News & Insights, June 2021. Available at https://www.mitre.org/news-insights/impact-story/mitre-atlas-takes-ai-system-theft.
[OV23] A. Oprea and A. Vassilev, Adversarial machine learning: A taxonomy and terminology of attacks and mitigations, NIST AI 100-2e2023 ipd, March 2023. https://doi.org/10.6028/NIST.AI.100-2e2023.ipd.
[Tab23] E. Tabassi, Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1, January 2023. https://doi.org/10.6028/NIST.AI.100-1.
[TBH+19] E. Tabassi, K. Burns, M. Hadjimichael, A. Molina-Markham, and J. Sexton, A Taxonomy and Terminology of Adversarial Machine Learning, Tech. report, 2019. https://doi.org/10.6028/NIST.IR.8269-draft.


MITRE ATT&CK

by Yee Wei Law - Sunday, 26 March 2023, 10:57 AM
 

MITRE ATT&CK® is a knowledge base of adversary tactics (capturing “why”) and techniques (capturing “how”) based on real-world observations.

There are three matrices [SAM+20]: 1️⃣ Enterprise (first published in 2015), 2️⃣ Mobile (first published in 2017), and 3️⃣ Industrial Control Systems (ICS, first published in 2020).

Fig. 1 below shows the fourteen MITRE ATT&CK Enterprise tactics:

Fig. 1: Tactics and top-level techniques in the MITRE ATT&CK matrix of adversary tactics and techniques. Each technique labelled with “||” branches into sub-techniques, e.g., “active scanning” consists of options “scanning IP blocks”, “vulnerability scanning” and “wordlist scanning”. See original interactive matrix.
  1. Reconnaissance (TA0043): This consists of techniques that involve adversaries actively or passively gathering information that can be used to support targeting, e.g., active scanning of IP addresses (T1595.001) and vulnerabilities (T1595.002).
  2. Resource development (TA0042): This consists of techniques that involve adversaries creating, purchasing, or compromising/stealing resources that can be used to support targeting, e.g., acquisition of domains that can be used during targeting (T1583.001).
  3. Initial access (TA0001): This consists of techniques that adversaries use to gain their initial foothold within a network via various entry vectors, e.g., drive-by compromise (T1189).
  4. Execution (TA0002): This consists of techniques that result in adversary-controlled code running on a local or remote system, e.g., abusing PowerShell commands and scripts for execution (T1059.001).
  5. Persistence (TA0003): This consists of techniques that adversaries use to maintain access to systems across restarts, changed credentials, and other interruptions that could cut off their access, e.g., adding adversary-controlled credentials to a cloud account to maintain persistent access to victim accounts and instances within the environment (T1098.001).
  6. Privilege escalation (TA0004): This consists of techniques that adversaries use to gain higher-level permissions on a system or network, e.g., abusing configurations where an application has the setuid or setgid bits set in order to get code running in a different (and possibly more privileged) user’s context (T1548.001).
  7. Defence evasion (TA0005): This consists of techniques that adversaries use to avoid detection throughout their compromise, e.g., sub-technique T1548.001.
  8. Credential access (TA0006): This consists of techniques for stealing credentials like account names and passwords, e.g., responding to LLMNR/NBT-NS network traffic to spoof an authoritative source for name resolution to force communication with an adversary-controlled system (T1557.001).
  9. Discovery (TA0007): This consists of techniques an adversary may use to gain knowledge about the system and internal network, e.g., getting a listing of local system accounts (T1087.001).
  10. Lateral movement (TA0008): This consists of techniques that adversaries use to enter and control remote systems on a network, e.g., exploiting remote services to gain unauthorised access to internal systems once inside a network (T1210).
  11. Collection (TA0009): This consists of techniques adversaries may use to gather information and the information sources that are relevant to following through on the adversary’s objectives, e.g., sub-technique T1557.001.
  12. Command and control (TA0011): This consists of techniques that adversaries may use to communicate with systems under their control within a victim network, e.g., communicating using application-layer protocols associated with web traffic to avoid detection/network filtering by blending in with existing traffic (T1071.001).
  13. Exfiltration (TA0010): This consists of techniques that adversaries may use to steal data from your network, e.g., leveraging traffic mirroring/duplication to automate data exfiltration over compromised network infrastructure (T1020.001).
  14. Impact (TA0040): This consists of techniques that adversaries use to disrupt availability or compromise integrity by manipulating business and operational processes, e.g., interrupting availability of system and network resources by inhibiting access to accounts utilised by legitimate users (T1531).

The Mobile tactics and ICS tactics are summarised below. Note a tactic in the Mobile context is not the same as the identically named tactic in the ICS context.

Table 1: MITRE ATT&CK Mobile tactics.
ID Name Description
TA0027 Initial Access The adversary is trying to get into your device.
TA0041 Execution The adversary is trying to run malicious code.
TA0028 Persistence The adversary is trying to maintain their foothold.
TA0029 Privilege Escalation The adversary is trying to gain higher-level permissions.
TA0030 Defense Evasion The adversary is trying to avoid being detected.
TA0031 Credential Access The adversary is trying to steal account names, passwords, or other secrets that enable access to resources.
TA0032 Discovery The adversary is trying to figure out your environment.
TA0033 Lateral Movement The adversary is trying to move through your environment.
TA0035 Collection The adversary is trying to gather data of interest to their goal.
TA0037 Command and Control The adversary is trying to communicate with compromised devices to control them.
TA0036 Exfiltration The adversary is trying to steal data.
TA0034 Impact The adversary is trying to manipulate, interrupt, or destroy your devices and data.
TA0038 Network Effects The adversary is trying to intercept or manipulate network traffic to or from a device.
TA0039 Remote Service Effects The adversary is trying to control or monitor the device using remote services.
Table 2: MITRE ATT&CK ICS tactics.
ID Name Description
TA0108 Initial Access The adversary is trying to get into your ICS environment.
TA0104 Execution The adversary is trying to run code or manipulate system functions, parameters, and data in an unauthorized way.
TA0110 Persistence The adversary is trying to maintain their foothold in your ICS environment.
TA0111 Privilege Escalation The adversary is trying to gain higher-level permissions.
TA0103 Evasion The adversary is trying to avoid security defenses.
TA0102 Discovery The adversary is locating information to assess and identify their targets in your environment.
TA0109 Lateral Movement The adversary is trying to move through your ICS environment.
TA0100 Collection The adversary is trying to gather data of interest and domain knowledge on your ICS environment to inform their goal.
TA0101 Command and Control The adversary is trying to communicate with and control compromised systems, controllers, and platforms with access to your ICS environment.
TA0107 Inhibit Response Function The adversary is trying to prevent your safety, protection, quality assurance, and operator intervention functions from responding to a failure, hazard, or unsafe state.
TA0106 Impair Process Control The adversary is trying to manipulate, disable, or damage physical control processes.
TA0105 Impact The adversary is trying to manipulate, interrupt, or destroy your ICS systems, data, and their surrounding environment.

Among the tools that support the ATT&CK framework is MITRE CALDERA™ (source code on GitHub).

  • CALDERA is a cyber security platform built on the ATT&CK framework for 1️⃣ automating adversary emulation, 2️⃣ assisting manual red-teaming, and 3️⃣ automating incident response.
  • CALDERA consists of two components:
    1. The core system: This includes an asynchronous command-and-control (C2) server with a RESTful API.
    2. Plugins: These are repositories that expand the core framework capabilities. Examples include Pathfinder, a plugin for automating ingestion of network scanning tool output.

A (blurry) demo is available on YouTube:


A complementary model to ATT&CK called PRE-ATT&CK was published in 2017 to focus on “left of exploit” behaviour [SAM+20]:

  • PRE-ATT&CK documents adversarial behavior during requirements gathering, reconnaissance, and weaponisation before access to a network is obtained.
  • PRE-ATT&CK enables technology-independent modelling of an adversary’s behaviour as it attempts to gain access to an organisation or entity through the technology it leverages, spanning multiple domains (e.g., enterprise, mobile).

ATT&CK is not meant to be an exhaustive enumeration; that is the role of MITRE Common Weakness Enumeration (CWE™) and MITRE Common Attack Pattern Enumeration and Classification (CAPEC™) [SAM+20].

  • Both CWE and CAPEC are related to Common Vulnerabilities and Exposures (CVE®).
  • CVE provides a list of common identifiers for publicly known cybersecurity vulnerabilities called “CVE Records” that are assigned by CVE Numbering Authorities from around the world and are used by individuals and within products to enhance security and enable automated data exchange [MIT20].
  • Based in part on the CVE List, CWE is a community-developed list of common software and hardware security weaknesses that serves as 1️⃣ a common language, 2️⃣ a measuring stick for security tools, and as 3️⃣ a baseline for weakness identification, mitigation, and prevention efforts [MIT20].
  • Developed by leveraging CVE and CWE, CAPEC is a comprehensive dictionary and classification taxonomy of known attacks that can be used by analysts, developers, testers, and educators to advance community understanding and enhance defences.

References

[MIT20] MITRE, CVE-CWE-CAPEC Relationships, CVE page, December 2020. Available at https://cve.mitre.org/cve_cwe_capec_relationships.
[SAM+20] B. E. Strom, A. Applebaum, D. P. Miller, K. C. Nickels, A. G. Pennington, and C. B. Thomas, MITRE ATT&CK®: Design and Philosophy, MITRE Product MP180360R1, The MITRE Corporation, 2020. Available at https://www.mitre.org/sites/default/files/2021-11/prs-19-01075-28-mitre-attack-design-and-philosophy.pdf.


MITRE CAPEC

by Yee Wei Law - Wednesday, 1 March 2023, 11:04 AM
 

MITRE’s Common Attack Pattern Enumeration and Classification (CAPEC™) effort provides a publicly available catalogue of common attack patterns to help users understand how adversaries exploit weaknesses in applications and other cyber-enabled capabilities.

Attack patterns are descriptions of the common attributes and approaches employed by adversaries to exploit known weaknesses in cyber-enabled capabilities.

  • Attack patterns define the challenges that an adversary may face and how they go about solving them.
  • They derive from the concept of design patterns applied in a destructive rather than constructive context and are generated from in-depth analysis of specific real-world exploit examples.
  • In contrast to CAPEC, MITRE ATT&CK catalogues individual techniques (more fine-grained) rather than patterns (collection of sequences of techniques).

At the time of writing, CAPEC stands at version 3.9 and contains 559 attack patterns.

For example, CAPEC-98 is phishing:

Definition 1: Phishing

A social engineering technique where an attacker masquerades as a legitimate entity with which the victim might interact (e.g., do business) in order to prompt the user to reveal some confidential information (typically authentication credentials) that can later be used by an attacker.

CAPEC-98 can be mapped to CWE-451 “User Interface (UI) Misrepresentation of Critical Information”.



MITRE D3FEND

by Yee Wei Law - Tuesday, 7 March 2023, 12:54 PM
 

MITRE D3FEND is a knowledge base — more precisely a knowledge graph — of cybersecurity countermeasures/techniques, created with the primary goal of helping standardise the vocabulary used to describe defensive cybersecurity functions/technologies.

  • It serves as a catalogue of defensive cybersecurity techniques and their relationships to offensive/adversarial techniques.

The D3FEND knowledge graph was designed to map MITRE ATT&CK techniques (or sub-techniques) through digital artefacts to defensive techniques; see Fig. 1.

  • Any digital trail left by an adversary, such as an Internet search, a software exploit or a phishing email, is a digital artefact [KS21, Sec. IVE].
Fig. 1: Mapping done by the D3FEND knowledge graph [KS21, Figs. 7-8].

Operationally speaking, the D3FEND knowledge graph allows looking up of defence techniques against specific MITRE ATT&CK techniques.
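The lookup can be illustrated with a miniature, hand-made mapping in the spirit of the knowledge graph. The entries below are simplified examples chosen for illustration, not an extract of the official graph, and the defensive technique names are assumptions:

```python
# Miniature illustration of a D3FEND-style lookup: follow the graph
# from an offensive technique, through the digital artefacts it
# touches, to candidate defensive techniques.
offensive_to_artifact = {
    "T1566 Phishing": ["Email Message"],
    "T1595 Active Scanning": ["Network Traffic"],
}
artifact_to_defensive = {
    "Email Message": ["Sender Reputation Analysis", "URL Analysis"],
    "Network Traffic": ["Network Traffic Filtering"],
}

def defences_for(attack_technique):
    # Offensive technique -> digital artefacts -> defensive techniques.
    defences = []
    for artifact in offensive_to_artifact.get(attack_technique, []):
        defences.extend(artifact_to_defensive.get(artifact, []))
    return defences

print(defences_for("T1566 Phishing"))
# prints ['Sender Reputation Analysis', 'URL Analysis']
```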

Watch an overview of the D3FEND knowledge graph from MITRE on YouTube:

References

[KS21] P. E. Kaloroumakis and M. J. Smith, Toward a knowledge graph of cybersecurity countermeasures, The MITRE Corporation, 2021. Available at https://d3fend.mitre.org/resources/D3FEND.pdf.


MITRE Engage

by Yee Wei Law - Wednesday, 15 March 2023, 9:52 AM
 

MITRE Engage (previously MITRE Shield) is a framework for planning and discussing adversary engagement operations.

  • It is meant to empower defenders to engage their adversaries and achieve their cybersecurity goals.

Cyber defence has traditionally focused on applying defence-in-depth to deny adversaries access to an organisation’s critical cyber assets.

Increasingly, actively engaging adversaries is proving to be a more effective defence [MIT22b].

  • For example, making adversaries doubt the value of any data/information they stole drives down the value of their operations and drives up their operating costs.

The foundation of adversary engagement, within the context of strategic planning and analysis, is cyber denial and cyber deception [MIT22b]:

  • Cyber denial is the ability to prevent or otherwise impair adversaries’ ability to conduct their operations.
  • Cyber deception intentionally reveals deceptive facts and fictions to mislead the adversary, while concealing critical facts and fictions to prevent the adversary from making correct estimations or taking appropriate actions.

While MITRE Engage has not been around for long, the practice of cyber deception has a long history; honeypots for example can be traced back to the 1990s [Spi04, Ch. 3].

MITRE Engage prescribes the 10-Step Process shown in Fig. 1, adapted from the process of deception in [RW13, Ch. 19]:

Fig. 1: The 3-phase 10-Step Process of MITRE Engage [MIT22b, p. 7].

Prepare phase:

  1. Define the operational objective, e.g., expose adversaries, or affect an adversary’s ability to operate, or elicit intelligence on an adversary’s TTPs.
  2. Construct an engagement narrative (i.e., story to deceive the adversary) that supports this objective.
  3. This narrative informs the design of the engagement environment (e.g., a network) and all operational activities.
  4. Gather all relevant stakeholders to define the acceptable level of operational risk.
  5. Construct clear Rules of Engagement (RoE) to serve as guardrails for operational activities.
  6. Put sufficient monitoring and analysis capabilities in place to ensure activities remain within set bounds.

Operate phase:

  1. Implement and deploy designed activities.
  2. Explore these activities more in the Operationalise the Methodologies section below.

Understand phase:

  1. Turn operational outputs into actionable intelligence to assess whether the operational objective is met.
  2. Capture lessons learned and refine future engagements.
Example 1: The Tularosa Study

A starting point to practising cyber deception is to combine deception tools (e.g., honeypots and decoy content) with traditional defences (e.g., application programming interface monitoring, backup and recovery) [Heb22].

Contrary to intuition, cyber deception is more effective when adversaries know it is in place, because its presence exerts psychological impact on the adversaries [Heb22].

Supporting evidence is available from the 2018 Tularosa Study [FWSR+18]; watch presentation below:

Operationalise the Methodologies

The foundation of an adversary engagement strategy is the Engage Matrix:

Fig. 2: The MITRE Engage Matrix. Click on image 👆 to navigate to https://engage.mitre.org/matrix/.

The Matrix serves as a shared reference that bridges the gap between defenders and decision makers when discussing and planning denial, deception, and adversary engagement activities.

The Matrix allows us to apply the theoretical 10-Step Process (see Fig. 1) to an actual operation.

The top row identifies the goals: Prepare and Understand, as well as the objectives: Expose, Affect and Elicit.

  • The Prepare and Understand Goals focus on the inputs and outputs of an operation.
  • While the Matrix is linear like the 10-Step Process, it should be viewed as cyclical.

The second row identifies the approaches, which let us make progress towards our selected goal.

The remainder of the Matrix identifies the activities.

  • The same activity often appears under more than one approach or goal/objective, e.g., Lures appears under Expose-Detect, Affect-Direct and Affect-Disrupt, because activities can be adapted to fit multiple use cases.
  • An adversary’s action may expose an unintended weakness of the adversary.
  • We can look at each MITRE ATT&CK® technique to examine the weaknesses revealed and identify engagement activities to exploit these weaknesses.
  • Fig. 3 shows an example of mapping a MITRE ATT&CK technique to an engagement activity.
Fig. 3: Mapping ATT&CK technique Remote System Discovery (T1018) to an engagement activity. Options include Software Manipulation (EAC0014), Lures (EAC0005), Network Manipulation (EAC0016), Network Diversity (EAC0007) and Pocket Litter (EAC0011).

References

[FWSR+18] K. Ferguson-Walter, T. Shade, A. Rogers, M. Trumbo, K. Nauer, K. Divis, A. Jones, A. Combs, and R. Abbott, The Tularosa Study: An Experimental Design and Implementation to Quantify the Effectiveness of Cyber Deception, Tech. Report SAND2018-5870C, Sandia National Lab, 2018. Available at https://www.osti.gov/servlets/purl/1524844.
[Heb22] C. Hebert, Trust Me, I’m a Liar, IEEE Security & Privacy 20 no. 6 (2022), 79–82. https://doi.org/10.1109/MSEC.2022.3202625.
[MIT22b] MITRE Engage, A Starter Kit in Adversary Engagement, 2022, version 1.0. Available at https://engage.mitre.org/wp-content/uploads/2022/04/StarterKit-v1.0-1.pdf.
[RW13] H. Rothstein and B. Whaley, Art and Science of Military Deception, Artech House, 2013. Available at https://app.knovel.com/hotlink/toc/id:kpASMD0003/art-science-military/art-science-military.
[Spi04] L. Spitzner, Honeypots: Tracking Hackers, Addison-Wesley Professional, 2004. Available at https://learning.oreilly.com/library/view/honeypots-tracking-hackers/0321108957/.


Model checking

by Yee Wei Law - Sunday, 14 May 2023, 5:01 PM
 

Model checking is a method for formally verifying that a model satisfies a specified property [vJ11, p. 1255].

Model checking algorithms typically entail enumerating the program state space to determine if the desired properties hold.

Example 1 [CDW04]

Developed by UC Berkeley, MOdelchecking Programs for Security properties (MOPS) is a static (compile-time) analysis tool, which, given a program and a security property (expressed as a finite-state automaton), checks whether the program can violate the security property.

The security properties that MOPS checks are temporal safety properties, i.e., properties requiring that programs perform certain security-relevant operations in certain orders.

An example of a temporal security property is whether a setuid-root program drops root privileges before executing an untrusted program; see Fig. 1.

Fig. 1: An example of a finite-state automaton specifying a temporal security property [CDW04, Figure 1(a)]
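The kind of check MOPS performs can be sketched by encoding a property like the one in Fig. 1 as a small automaton and replaying an abstracted trace of security-relevant operations through it. The operation names below are simplified stand-ins for the real system calls:

```python
# Minimal model-checking sketch: a finite-state automaton for the
# temporal safety property "drop root privileges before exec", run
# over an abstracted trace of security-relevant operations.
ERROR = "error"
TRANSITIONS = {
    ("privileged", "drop_privileges"): "unprivileged",
    ("privileged", "exec"): ERROR,        # exec while still root
    ("unprivileged", "exec"): "unprivileged",
}

def violates(trace, state="privileged"):
    # Replay the trace; reaching ERROR means the program can violate
    # the temporal safety property.
    for op in trace:
        state = TRANSITIONS.get((state, op), state)
        if state == ERROR:
            return True
    return False

print(violates(["drop_privileges", "exec"]))  # prints False (safe order)
print(violates(["exec"]))                     # prints True (violation)
```

A real model checker such as MOPS explores all feasible paths through the program's control-flow graph rather than a single trace, but the automaton-replay step is the same idea.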

References

[CDW04] H. Chen, D. Dean, and D. Wagner, Model Checking One Million Lines of C Code, in NDSS, 2004.
[vJ11] H. C. van Tilborg and S. Jajodia (eds.), Encyclopedia of Cryptography and Security, Springer, Boston, MA, 2011. https://doi.org/10.1007/978-1-4419-5906-5.