
M


Model checking

by Yee Wei Law - Sunday, 14 May 2023, 5:01 PM
 

Model checking is a method for formally verifying that a model satisfies a specified property [vJ11, p. 1255].

Model checking algorithms typically enumerate the program's state space to determine whether the desired properties hold.

Example 1 [CDW04]

Developed at UC Berkeley, MOdel checking Programs for Security properties (MOPS) is a static (compile-time) analysis tool which, given a program and a security property (expressed as a finite-state automaton), checks whether the program can violate the security property.

The security properties that MOPS checks are temporal safety properties, i.e., properties requiring that programs perform certain security-relevant operations in certain orders.

An example of a temporal security property is whether a setuid-root program drops root privileges before executing an untrusted program; see Fig. 1.

Fig. 1: An example of a finite-state automaton specifying a temporal security property [CDW04, Figure 1(a)]
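The drop-privileges-before-exec property of Fig. 1 can be sketched as a check of an operation trace against a small state machine. The state names, operation names and transitions below are illustrative, not taken from MOPS itself (which analyses C programs at compile time rather than traces at run time):

```python
# Minimal sketch: checking an operation trace against a finite-state
# automaton encoding "drop root privileges before executing an
# untrusted program". All names are illustrative.

TRANSITIONS = {
    # (current state, operation) -> next state
    ("privileged", "drop_privileges"): "unprivileged",
    ("privileged", "exec"): "violation",   # exec while still root
    ("unprivileged", "exec"): "unprivileged",
}

def violates_property(trace, start="privileged"):
    """Return True if the trace reaches the error state."""
    state = start
    for op in trace:
        state = TRANSITIONS.get((state, op), state)
        if state == "violation":
            return True
    return False

print(violates_property(["exec"]))                     # True
print(violates_property(["drop_privileges", "exec"]))  # False
```

A model checker like MOPS effectively performs this check over *all* feasible operation traces of the program, not just one.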

References

[CDW04] H. Chen, D. Dean, and D. Wagner, Model Checking One Million Lines of C Code, in NDSS, 2004.
[vJ11] H. C. van Tilborg and S. Jajodia (eds.), Encyclopedia of Cryptography and Security, Springer, Boston, MA, 2011. https://doi.org/10.1007/978-1-4419-5906-5.

N


NIST Cybersecurity Framework

by Yee Wei Law - Wednesday, 8 March 2023, 10:52 AM
 

The National Institute of Standards and Technology (NIST) has an essential role in identifying and developing cybersecurity risk frameworks for voluntary use by owners and operators of critical infrastructure (see Definition 1) [NIS18, Executive Summary].

Definition 1: Critical infrastructure [NIS18, Sec. 1.0]

Systems and assets, whether physical or virtual, so vital that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.

One such framework is the Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework for short), for which NIST is maintaining an official website.

As of writing, the latest version of the NIST Cybersecurity Framework is 1.1 [NIS18].

The Cybersecurity Framework provides a common language for understanding, managing and expressing cybersecurity risks to internal and external stakeholders [NIS18, Sec. 2.0].

The Cybersecurity Framework has three parts: 1️⃣ Framework Core, 2️⃣ Implementation Tiers, and 3️⃣ Framework Profiles.

Framework Core

This is a set of cybersecurity activities, desired outcomes and applicable references (industry standards, guidelines and practices) that are common across critical infrastructure sectors [NIS18, Sec. 1.1].

The Framework Core consists of five concurrent and continuous Functions that provide a high-level strategic view of the lifecycle of an organisation’s management of cybersecurity risks:

  1. Identify: Develop an organisational understanding to manage cybersecurity risks to systems, people, assets, data and capabilities [NIS18, p. 7].

    Applicable activities include [MMQT21]:

    • identifying critical enterprise processes and assets;
    • documenting information flows (how information is collected, stored, updated and used);
    • maintaining hardware and software inventories;
    • establishing cybersecurity policies specifying roles, responsibilities and procedures in integration with enterprise risk considerations;
    • identifying and assessing vulnerabilities and threats;
    • identifying, prioritising, executing and tracking risk responses.
  2. Protect: Develop and implement appropriate safeguards to ensure delivery of critical services [NIS18, p. 7].

    Applicable activities include [MMQT21]:

    • managing access to assets and information;
    • safeguarding sensitive data, including applying authenticated encryption and deleting data that are no longer needed;
    • making regular backups and storing backups offline;
    • deploying firewalls and other security products, with configuration management, to protect devices;
    • keeping device firmware and software updated, while regularly scanning for vulnerabilities;
    • training and regularly retraining users to maintain cybersecurity hygiene.
  3. Detect: Develop and implement appropriate activities to identify the occurrence of a cybersecurity event [NIS18, p. 7].

    Applicable activities include [MMQT21]:

    • developing, testing and updating processes and procedures for detecting unauthorised entities and actions in the cyber and physical environments;
    • maintaining logs and monitoring them for anomalies, including unexpected changes to systems or accounts, illegitimate communication channels and data flows.
  4. Respond: Develop and implement appropriate activities to take action regarding a detected cybersecurity incident [NIS18, p. 8].

    Applicable activities include [MMQT21]:

    • making, testing and updating response plans, including legal reporting requirements, to ensure all personnel are aware of their responsibilities;
    • coordinating response plans and updates with all key internal and external stakeholders.
  5. Recover: Develop and implement appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired by a cybersecurity incident [NIS18, p. 8].

    Applicable activities include [MMQT21]:

    • making, testing and updating recovery plans;
    • coordinating recovery plans and updates with all key internal and external stakeholders, paying attention to what, how and when information is shared;
    • managing public relations and company reputation.

Each Function comprises Categories, each Category comprises Subcategories, and for each Subcategory, Informative References are provided [NIS18, Sec. 2.1].

  • A Category is a cybersecurity outcome closely tied to programmatic needs and particular activities.
  • A Subcategory is an outcome of technical and/or management activities for supporting achievement of the outcomes in each Category.
  • An Informative Reference is a specific part of a standard, guideline and practice common among critical infrastructure sectors that illustrates a method to achieve the outcomes associated with each Subcategory.

Fig. 1 shows the Categories, the Subcategories under the Category “Business Environment”, and the Informative References for each of these Subcategories.

Fig. 1: Functions, Categories, sample Subcategories and sample Informative References. Details about these Informative References can be found in [NIS18, p. 44].
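The Function → Category → Subcategory → Informative Reference hierarchy maps naturally onto nested data. A minimal sketch with one path only, where the Subcategory outcome is paraphrased from the framework and the Informative References are elided (see [NIS18, p. 44] for the actual references):

```python
# Sketch of the Framework Core hierarchy: Function -> Category ->
# Subcategory -> Informative References. Only one illustrative path
# is shown; Informative References are elided.
core = {
    "Identify (ID)": {
        "Business Environment (ID.BE)": {
            "ID.BE-1": {
                "outcome": "The organisation's role in the supply chain "
                           "is identified and communicated",
                "informative_references": ["see [NIS18, p. 44]"],
            },
        },
    },
}

def subcategories(core, function, category):
    """List Subcategory identifiers under a given Function and Category."""
    return sorted(core[function][category])

print(subcategories(core, "Identify (ID)", "Business Environment (ID.BE)"))
# ['ID.BE-1']
```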
Implementation Tiers

The four tiers in Table 1 provide context on how an organisation views cybersecurity risks and the processes in place to manage those risks [NIS18, p. 8].

Table 1: Implementation tiers [NIS18, pp. 9-11].
Tier 1, Partial

  • Risk management process: Not formalised, ad hoc and reactive.
  • Integrated risk management program: Limited cybersecurity awareness. Risk management is irregular and case-by-case.
  • External participation: Organisation does not engage with other entities, and lacks awareness of cyber supply chain risks.

Tier 2, Risk-informed

  • Risk management process: Formalised but not organisation-wide. Prioritisation of cybersecurity objectives and activities is directly informed by organisational risks, business requirements, or the threat environment.
  • Integrated risk management program: Cybersecurity awareness exists at the organisational level, but risk management is not organisation-wide. Irregular risk assessment of assets.
  • External participation: Organisation receives information from other entities and generates some of its own, but may not share information with others. Organisation is aware of cyber supply chain risks, but does not respond formally to the risks.

Tier 3, Repeatable

  • Risk management process: Formalised and regularly updated based on the application of risk management processes to changes in business requirements and the threat landscape.
  • Integrated risk management program: Risk management is organisation-wide. Organisation accurately and consistently monitors cybersecurity risks of assets, and responds effectively and consistently to changes in risks. Cybersecurity is considered through all lines of operation.
  • External participation: Organisation receives information from other entities and shares its original information with others. Organisation is aware of cyber supply chain risks, and usually responds formally to the risks.

Tier 4, Adaptive

  • Risk management process: Formalised and adaptable to experience and forecast. Continuously improved by leveraging advanced cybersecurity technologies and practices, to respond to evolving, sophisticated threats in a timely and effective manner.
  • Integrated risk management program: Risk management is organisation-wide. Decision making is grounded in a clear understanding of the relationship between cybersecurity risks and financial risks / organisational objectives. Risk management is integral to organisational culture and is supported by continuous awareness of activities on systems and networks.
  • External participation: Organisation receives, generates and reviews prioritised information to inform continuous risk assessment. Organisation uses real-time information to respond formally and consistently to cyber supply chain risks.

Implementation tiers do not represent maturity levels; they are meant to support organisational decision making about how to manage cybersecurity risks.

Framework Profiles

A Framework Profile (“Profile”) is a representation of the outcomes that a particular system or organisation has selected from the Framework Categories and Subcategories [NIS18, Appendix B].

A Profile specifies the alignment of the Functions, Categories, and Subcategories with the business requirements, risk tolerance, and resources of an organisation [NIS18, Sec. 2.3].

A Profile enables organisations to establish a roadmap for reducing cybersecurity risks that 1️⃣ is well aligned with organisational and sector goals, 2️⃣ considers legal/regulatory requirements and industry best practices, and 3️⃣ reflects risk management priorities [NIS18, Sec. 2.3].

For example,

  • The NIST Interagency Report 8401 [LSB22] specifies a Profile for securing satellite ground segments.
  • A Profile for securing hybrid satellite networks is currently under development [MMBM22].
  • More examples of Profiles can be found here.

Watch a more detailed explanation of the Cybersecurity Framework presented at RSA Conference 2018:

References

[LSB22] S. Lightman, T. Suloway, and J. Brule, Satellite ground segment: Applying the cybersecurity framework to satellite command and control, NIST IR 8401, December 2022. https://doi.org/10.6028/NIST.IR.8401.
[MMQT21] A. Mahn, J. Marron, S. Quinn, and D. Topper, Getting Started with the NIST Cybersecurity Framework: A Quick Start Guide, NIST Special Publication 1271, August 2021. https://doi.org/10.6028/NIST.SP.1271.
[MMBM22] J. McCarthy, D. Mamula, J. Brule, and K. Meldorf, Cybersecurity Framework Profile for Hybrid Satellite Networks (HSN): Final Annotated Outline, NIST Cybersecurity White Paper, NIST CSWP 27, November 2022. https://doi.org/10.6028/NIST.CSWP.27.
[NIS18] NIST, Framework for Improving Critical Infrastructure Cybersecurity, Version 1.1, April 2018. Available at https://www.nist.gov/cyberframework/framework.

O


Open Systems Interconnection (OSI)

by Yee Wei Law - Monday, 31 March 2025, 2:25 PM
 

Imagine writing a piece of networking software.

  • It needs to enable two neighbouring (i.e., directly connected) devices to communicate.
  • It also needs to enable two devices separated by multiple hops to communicate.

The Open Systems Interconnection (sometimes Open Systems Interconnect or simply OSI) model 1️⃣ enables networking software to be built in a structured manner; 2️⃣ provides an interoperability framework for networking protocols.

History: The OSI model was introduced in 1983 by several major computer and telecom companies; and was adopted by ISO as an international standard in 1984 [Imp22].

  • The second and latest edition of the international standard is ISO/IEC 7498-1:1994 [ISO94].

Watch the following video for a quick overview:

Learning the seven layers from Networking Foundations: Networking Basics by Kevin Wallace

More details follow.

The OSI model is a logical (as opposed to physical) model that consists of seven nonoverlapping layers (going bottom-up, opposite to Fig. 1):

  1. Layer 1 (L1), physical layer [TW11, p. 43]: This layer transmits and/or receives raw bits (see Fig. 2) over a communication channel (e.g., coaxial cable, optical fibre, RF channel).

    This layer deals with mechanical, electrical, and timing interfaces, as well as the physical transmission medium, which lies below the physical layer.

    For example, the IEEE Standard for Ethernet [IEE22] specifies several variants of the physical layer and one data link layer.

  2. Layer 2 (L2), data link layer [TW11, p. 43]: This layer transforms a raw transmission facility into a line that appears free of undetected transmission errors, by masking the real errors so the network layer above does not see them.

    Input data is encapsulated in data frames (see Fig. 2) which are transmitted sequentially.

    For example, the Challenge-Handshake Authentication Protocol (CHAP, see RFC 1994 and RFC 2433) is a link-layer protocol.

Fig. 1: The OSI model in a nutshell [Clo22].
Fig. 2: The OSI model and associated terms [TW11, Figure 1-20]. While the bottom three layers are peer-to-peer, the upper layers are end-to-end.
  3. Layer 3 (L3), network layer [TW11, pp. 43-44]: This layer controls the operation of a subnet (see Fig. 2), by determining how packets (see Fig. 2) are routed from a source to a destination.

    For example, the Internet Protocol Security (IPsec) is a network-layer protocol.

  4. Layer 4 (L4), transport layer [TW11, p. 44]: This layer accepts data from the layer above, splits it up into smaller units called transport protocol data units (TPDUs, see Fig. 2) if need be, passes these to the network layer, and ensures that all pieces arrive correctly and efficiently at the other end.

    For example, the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are transport-layer protocols (see Fig. 1).

  5. Layer 5 (L5), session layer [TW11, pp. 44-45]: This layer enables users on different machines to establish sessions between them.

    Sessions offer various services, including dialog control (keeping track of whose turn it is to transmit), token management (preventing two parties from attempting the same critical operation simultaneously), and synchronisation (checkpointing long transmissions to allow them to pick up from where they left off in the event of a crash and subsequent recovery).

    For example, SOCKS5 (see RFC 1928) is a session-layer protocol.

  6. Layer 6 (L6), presentation layer [TW11, p. 45]: This layer manages the syntax and semantics of the information to be transmitted.

    For most protocols, however, the distinction between the presentation layer and the application layer is blurred. For example, the HyperText Transfer Protocol (HTTP) is commonly classified as an application-layer protocol although it has clear presentation-layer functions, such as encoding, decoding, and managing different content types [FNR22].

  7. Layer 7 (L7), application layer [TW11, p. 45]: This layer implements a suite of protocols for supporting end-user applications.

    For example, HTTP is a stateless application-layer protocol for distributed, collaborative, hypertext information systems [FNR22]. HTTP supports eight “methods”, including the GET method for requesting transfer of a target resource in a specified representation [FNR22, Sec. 9.3.1].
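The layering above amounts to each layer wrapping the payload it receives from the layer above with its own header, and the receiving side peeling the headers off in reverse order (cf. Fig. 2). A toy sketch with placeholder headers, not real protocol formats:

```python
# Sketch of layer-by-layer encapsulation and decapsulation.
# Header contents are placeholders, not real protocol formats.

LAYERS = ["application", "transport", "network", "data link"]

def encapsulate(payload: bytes) -> bytes:
    for layer in LAYERS[1:]:            # application data is the payload itself
        header = f"[{layer} hdr]".encode()
        payload = header + payload      # each layer prepends its own header
    return payload

def decapsulate(frame: bytes) -> bytes:
    for layer in reversed(LAYERS[1:]):  # peel headers in reverse order
        header = f"[{layer} hdr]".encode()
        assert frame.startswith(header)
        frame = frame[len(header):]
    return frame

frame = encapsulate(b"GET /")
print(frame)  # b'[data link hdr][network hdr][transport hdr]GET /'
print(decapsulate(frame) == b"GET /")   # True
```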

The OSI model is not the only networking model. The TCP/IP reference model plays an equally important role in the history of networking.

The ARPANET and its descendant — the Internet as we know it — are based on the TCP/IP model.

The TCP/IP model has only four layers, as shown in Fig. 3.

Fig. 3 also shows the different protocols occupying the different layers of the TCP/IP model.

Both models use 1️⃣ the transport and lower layers to provide an end-to-end, network-independent transport service; and 2️⃣ the layers above transport for applications leveraging the transport service.

Most real-world protocol stacks are developed based on a hybrid of the OSI and TCP/IP models, consisting of these layers (from bottom to top): physical, data link, network, transport, application [TW11, Sec. 1.4.3].

Fig. 3: Comparing the OSI model and TCP/IP model [Imp22].

The salient differences between the OSI and TCP/IP models are summarised in Table 1 below.

Table 1: Salient differences between the OSI and TCP/IP models [TW11, Secs. 1.4.2-1.4.5].
  • Timing: The OSI model was created before the protocols residing in the different layers were; the TCP/IP model was created after the protocols were.
  • Abstraction: The OSI model differentiates services (provided by each layer), interfaces (between adjacent layers) and protocols (implementing different layers) from each other, an abstraction consistent with object-oriented programming. The TCP/IP model appears more monolithic, and provides little help with engineering a non-TCP/IP stack.
  • Sublayering: The OSI data link layer originally only catered for point-to-point communications; for broadcast networks, a medium access control sublayer had to be grafted onto the model. No sublayering is needed within the TCP/IP network access layer.
  • Number of layers: The OSI model has too many layers, because the top three can often be collapsed into a single application layer in practical implementations. The TCP/IP model has too few, because the network access layer should really be split into physical and data link layers; for example, IEEE 802.3 (Ethernet) and IEEE 802.11 (Wi-Fi) have distinct specifications for the physical and data link layers.
  • Connection modes: The OSI model supports both connectionless and connection-oriented communication in the network layer, but only connection-oriented communication in the transport layer. The TCP/IP model supports only connectionless communication in the network layer, but both modes in the transport layer; having both choices in the transport layer is good for simple request-response protocols.

References

[CCI91] CCITT, ITU, Security architecture for Open Systems Interconnection for CCITT applications, Recommendation X.800 (03/91), 1991. Available at https://www.itu.int/rec/T-REC-X.800-199103-I/en.
[Clo22] Cloudflare, What is the OSI Model?, DDoS Glossary, 2022, accessed 28 Nov 2022. Available at https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/.
[FNR22] R. Fielding, M. Nottingham, and J. Reschke, HTTP Semantics, IETF RFC 9110, June 2022.
[IEE22] IEEE, IEEE Standard for Ethernet, IEEE Std 802.3-2022 (Revision of IEEE Std 802.3-2018), 2022. https://doi.org/10.1109/IEEESTD.2022.9844436.
[Imp22] Imperva, OSI Model, Learning Center, 2022, accessed 1 Dec 2022. Available at https://www.imperva.com/learn/application-security/osi-model/.
[ISO94] ISO/IEC, Information technology – open systems interconnection – basic reference model: The basic model, International Standard ISO/IEC 7498-1:1994 second edition, November 1994, corrected and reprinted 1996-06-15. Available at https://www.iso.org/standard/20269.html.
[TW11] A. S. Tanenbaum and D. J. Wetherall, Computer Networks, 5th ed., Prentice Hall, 2011.

P


Physical-layer security

by Yee Wei Law - Wednesday, 17 May 2023, 12:00 AM
 

References

[LFZZ20] B. Li, Z. Fei, C. Zhou, and Y. Zhang, Physical-layer security in space information networks: A survey, IEEE Internet of Things Journal 7 no. 1 (2020), 33–52. https://doi.org/10.1109/JIOT.2019.2943900.


Physical unclonable function (PUF)

by Yee Wei Law - Wednesday, 5 April 2023, 9:08 AM
 

Physical unclonable functions (PUFs, see Definition 1) serve as a physical and unclonable alternative to digital cryptographic keys.

Definition 1: Physical unclonable function (PUF) [GASA20]

A device that exploits inherent randomness introduced during manufacturing to give a physical entity a unique “fingerprint” or trust anchor.

Think of a PUF as a keyed hash function, where the key is built-in and unique due to manufacturing variations [GASA20].

  • Given an input, which we shall call a challenge, a PUF outputs a response. The challenge-response pair (CRP) is unique to the PUF.
  • Every CRP is used only once.
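The keyed-hash mental model can be sketched in a few lines, with a random per-device secret standing in for manufacturing variation. A real PUF derives its response from physical randomness and is noisy; this deterministic toy model is purely illustrative:

```python
# Toy model of a PUF as a keyed hash: the device-unique secret stands
# in for silicon manufacturing variation. Illustrative only.
import hashlib
import os

class ToyPUF:
    def __init__(self):
        self._device_secret = os.urandom(32)  # "manufacturing variation"
        self._used = set()                    # enforce one-time CRP use

    def respond(self, challenge: bytes) -> bytes:
        if challenge in self._used:
            raise ValueError("challenge-response pair already used")
        self._used.add(challenge)
        return hashlib.sha256(self._device_secret + challenge).digest()

puf_a, puf_b = ToyPUF(), ToyPUF()
r_a = puf_a.respond(b"challenge-1")
r_b = puf_b.respond(b"challenge-1")
print(r_a != r_b)   # True: same challenge, device-unique responses
```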

Types of PUFs include 1️⃣ optical PUFs, 2️⃣ arbiter PUFs, 3️⃣ memory-based intrinsic PUFs [GASA20].

  • An intrinsic PUF is a PUF that is already embedded within a device at the time of manufacturing.
  • The first intrinsic PUF was introduced in 2007 in the form of an SRAM PUF.
  • Flash memory PUFs and DRAM PUFs were subsequently introduced.
  • A memory-based PUF usually offers desired independence among response bits, so its primary application is on-demand derivation of volatile cryptographic keys.

Watch a high-level introduction to SRAM PUF:

References

[GASA20] Y. Gao, S. F. Al-Sarawi, and D. Abbott, Physical unclonable functions, Nat Electron 3 (2020), 81–91. https://doi.org/10.1038/s41928-020-0372-5.


Proximity-1 Space Link Protocol

by Yee Wei Law - Sunday, 10 March 2024, 7:37 PM
 

Proximity-1 covers the data link layer [CCS20d] and physical layer [CCS13b].

Proximity-1 enables communications among probes, landers, rovers, orbiting constellations, and orbiting relays in a proximate environment, up to about 100,000 km [CCS13c].

These scenarios are devoid of manual intervention from ground operators, and furthermore, resources such as computational power and storage are typically limited at both ends of the link.

In fact, Proximity-1 has been field-tested in the 2004-2005 Mars missions; see Figs. 1-2 for illustration.

Fig. 1: Proximity-1 relay link for telecommands [CCS13c, Figure 2-1a].
Fig. 2: Proximity-1 relay link for telemetry [CCS13c, Figure 2-1b].

In contrast, the AOS/TC/TM Space Data Link Protocols are meant for Earth-deep space links, over extremely long distances.

Proximity-1 supports symbol rates of up to 4,096,000 coded symbols per second.

Designed for the Mars environment, the physical layer of Proximity-1 only uses UHF frequencies [CCS13b, Sec. 1.2].

The frequency range spans the 60 MHz from 390 MHz to 450 MHz, with a 30 MHz guard band between the forward and return frequency bands: 435-450 MHz for the forward channel and 390-405 MHz for the return channel [CCS13b, Sec. 3.3.2].

References

[CCS13b] CCSDS, Proximity-1 Space Link Protocol—Physical Layer, Recommended Standard CCSDS 211.1-B-4, The Consultative Committee for Space Data Systems, December 2013. Available at https://public.ccsds.org/Pubs/211x1b4e1.pdf.
[CCS13c] CCSDS, Proximity-1 Space Link Protocol—Rationale, Architecture, and Scenarios, Informational Report CCSDS 210.0-G-2, The Consultative Committee for Space Data Systems, December 2013. Available at https://public.ccsds.org/Pubs/210x0g2e1.pdf.
[CCS20d] CCSDS, Proximity-1 Space Link Protocol—Data Link Layer, Recommended Standard CCSDS 211.0-B-6, The Consultative Committee for Space Data Systems, July 2020. Available at https://public.ccsds.org/Pubs/211x0b6.pdf.


Public-key cryptography

by Yee Wei Law - Wednesday, 4 June 2025, 11:09 PM
 

Also known as asymmetric-key cryptography, public-key cryptography (PKC) uses a pair of keys called a public key and a private key for 1️⃣ encryption and decryption, as well as 2️⃣ signing and verification.

Encryption and decryption

For 👩 Alice to send a confidential message to 🧔 Bob,

  • 👩 Alice uses 🧔 Bob’s public key to encrypt her secret plaintext and sends the ciphertext to Bob.
  • 🧔 Bob uses his private key to decrypt the ciphertext.
  • 👩 Alice’s keys are not involved unless someone wants to send confidential messages to Alice.

However,

  • PKC is not usually used for encryption because of the computational cost and the ciphertext length.
  • The more powerful quantum computers become, the longer the keys need to be, and hence the higher the computational cost and the longer the ciphertext.
  • Instead, a key establishment protocol is used to establish a symmetric key between two parties and the symmetric key is used for encryption instead.
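The asymmetry between the two keys can be illustrated with textbook RSA over toy primes. This sketch is insecure (tiny modulus, no padding) and exists only to show that Bob's public key encrypts while only Bob's private key decrypts; real systems use vetted libraries and, as noted above, hybrid schemes:

```python
# Textbook RSA with toy primes -- insecure, purely illustrative.
p, q = 61, 53                       # toy primes
n = p * q                           # modulus, part of both keys
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def encrypt(m: int) -> int:         # Alice uses Bob's public key (e, n)
    return pow(m, e, n)

def decrypt(c: int) -> int:         # Bob uses his private key (d, n)
    return pow(c, d, n)

c = encrypt(42)
print(c != 42)            # True: ciphertext differs from plaintext
print(decrypt(c) == 42)   # True: only the private key recovers it
```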

Signing and verification

For 👩 Alice to assure 🧔 Bob a message really originated from her (i.e., for Bob to authenticate her message),

  • 👩 Alice signs the message with her private key and sends the signed message to 🧔 Bob.
  • 🧔 Bob uses Alice’s public key to verify the signature attached to the message.
  • Successful verification assures 🧔 Bob that the message was signed by 👩 Alice.
  • Simultaneously, 👩 Alice cannot repudiate (see Definition 1) the fact that she signed the message.
Definition 1: Non-repudiation [NIS13]

A service that is used to provide assurance of the integrity and origin of data in such a way that the integrity and origin can be verified and validated by a third party as having originated from a specific entity in possession of the private key (i.e., the signatory).
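Signing reverses the roles of the keys: the private key produces the signature and anyone holding the public key can check it. Continuing the textbook-RSA idea with toy parameters (insecure, illustrative only), Alice signs a digest of her message:

```python
# Sketch of sign-then-verify with textbook RSA over toy primes.
# Insecure and illustrative only; real schemes use vetted libraries.
import hashlib

p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def digest(message: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:            # Alice, with her private key d
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:  # Bob, with Alice's e
    return pow(signature, e, n) == digest(message)

sig = sign(b"launch at dawn")
print(verify(b"launch at dawn", sig))   # True
print(verify(b"launch at noon", sig))   # almost surely False (tiny modulus)
```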

The ability of PKC to generate and verify signatures gives rise to 📜 digital certificates, an essential feature of PKC.

Digital certificates and public-key infrastructure (PKI)

Suppose 👩 Alice is somebody everybody trusts.

  • When 👩 Alice signs 🧔 Bob’s public key, anybody can verify Bob’s public key using Alice’s public key.
  • Successful verification means we can trust that the public key is Bob’s because we trust Alice.
  • Essentially, 🧔 Bob’s public key with 👩 Alice’s signature on it serves as a 📜 digital certificate (see Definition 2) certifying Bob’s identity.
Definition 2: Digital certificate [ENISA]

Also called a public-key certificate, a digital certificate is an electronic data structure that binds an entity (e.g., an institution, a person, a computer program, a web address) to its public key.

Watch a quick introduction to digital certificates on LinkedIn Learning:

Digital certificates and signing from Ethical Hacking: Cryptography by Stephanie Domas

Digital certificates are only useful if we can trust their signatories.

To ensure signatories and hence certificates can be trusted, PKC relies on a public-key infrastructure (PKI, see Definition 3) to work.

Definition 3: Public-key infrastructure (PKI)

In ENISA’s certificate-centric definition, a PKI is a combination of policies, procedures and technology needed to manage digital certificates in a PKC scheme.

In ITU-T’s [ITU19] key-centric definition, a PKI is an infrastructure able to support the management of public keys able to support authentication, encryption, integrity and non-repudiation services.

Watch a quick introduction to PKI from an operational viewpoint on LinkedIn Learning:

Cryptography: Public key infrastructure and certificates from CISA Cert Prep: 5 Information Asset Protection for IS Auditors by Human Element LLC and Michael Lester

A PKI, as specified in the ITU-T X.509 [ITU19] standard, consists of certification authorities (CAs).

  • One or more CAs are trusted to create and digitally sign public-key certificates in response to certificate signing requests (CSRs).

  • A CA may optionally create the subjects’ keys.
  • A CA certificate is a public-key certificate for one CA [ITU19, Sec. 7.4], either

    • issued by another CA, in which case the CA certificate is a cross-certificate;
    • issued by the same CA, in which case the CA certificate is a self-issued certificate.

      If the signing key is the private key associated with the public key signed, the self-issued certificate is a self-signed certificate.

  • Thus, CAs can clearly exist in a hierarchy, e.g., the two-tier hierarchy in Fig. 1, or the three-tier hierarchy in Fig. 2.
  • In a hierarchy, the root CA serves as the trust anchor [ITU19, Sec. 7.5].
  • Examples of CAs: IdenTrust, DigiCert Group, others.
  • An example of a software solution that implements CA functionality is Cloudflare’s CFSSL.

Fig. 1: A two-tier hierarchy of CAs [NCS20, p. 6].

In a two-tier hierarchy, a root CA issues certificates to intermediate CAs, and intermediate CAs issue certificates to end entities.

Intermediate CAs are often organised to issue certificates for certain functions, e.g., a technology use case, VPN, web application.

Alternatively, the CAs can be organised by organisational function, e.g., user / machine / service authentication.

Fig. 2: A three-tier hierarchy of CAs [NCS20, p. 6].

In a three-tier hierarchy, there is a root CA and two levels of intermediate CAs, in which the lowest layer issues certificates to end entities.

This setup is often used to give an extra layer of separation between the root CA and the intermediate CAs that issue certificates to end entities.

The number of tiers in a CA hierarchy is a balance between the level of separation required and the tolerable administrative overhead.
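Conceptually, validating an end-entity certificate means walking the chain of issuers up to a trust anchor. In the toy sketch below, cryptographic signature checking is elided (issuer names stand in for verified signatures), and all names are hypothetical; a real validation also checks each signature, validity periods and revocation status:

```python
# Toy walk of a certificate chain from end entity to a trust anchor.
# All names are hypothetical; signature verification is elided.

certs = {
    # subject: issuer
    "www.example.com": "Intermediate CA",
    "Intermediate CA": "Root CA",
    "Root CA": "Root CA",           # self-issued, self-signed
}

TRUST_ANCHORS = {"Root CA"}

def chain_to_root(subject, max_depth=5):
    """Return the chain of subjects up to a trust anchor, else None."""
    chain = [subject]
    for _ in range(max_depth):
        issuer = certs[subject]
        if issuer in TRUST_ANCHORS:
            return chain + [issuer] if issuer != subject else chain
        chain.append(issuer)
        subject = issuer
    return None                     # no anchor reached within max_depth

print(chain_to_root("www.example.com"))
# ['www.example.com', 'Intermediate CA', 'Root CA']
```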

A PKI also has registration authorities (RAs).

  • One or more RAs are responsible for those aspects of a CA’s responsibilities that are related to identification and authentication of the subject of a public-key certificate to be issued by that CA.
  • An RA may either be a separate entity or be an integrated part of the CA.
  • CAs typically play the role of RA as well.
  • An example of a software solution that implements RA functionality is PrimeKey’s EJBCA Registration Authority.

Although the X.509 standard does not specify any validation authority (VA), a VA allows an entity to check that a certificate has not been revoked [NCS20, p. 3].

  • The VA role is often carried out by an online facility hosted by the organisation that operates the PKI.
  • VAs often use the Online Certificate Status Protocol (OCSP, see RFC 6960) or certificate revocation lists (CRLs) to advertise revoked certificates.
  • Fig. 3 illustrates the interactions among an RA, a CA and a VA in a PKI.
  • An example of a software solution that implements VA functionality is PrimeKey’s EJBCA Validation Authority.
Fig. 3: The human representing an organisation registers their public key with an RA, which gets a CA to generate a digital certificate certifying the organisation’s key. The digital certificate enables website users to verify the organisation’s website. For the verification, a user can use a VA. Image from Wikipedia.

Public-key cryptosystems

Algorithmically speaking, there is more than one way of constructing a public-key cryptosystem.

Standard public-key cryptosystems: 1️⃣ Rivest-Shamir-Adleman (RSA) cryptosystem, 2️⃣ elliptic-curve cryptosystems.

These cryptosystems rely on the hardness of certain computational problems for their security.

The hardness of these computational problems has come under threat of quantum computers and quantum algorithms like Shor’s algorithm.

As a countermeasure, NIST has been searching for post-quantum cryptography (PQC, also called quantum-resistant cryptography).

As of writing, there are three PQC candidates.

References

[ITU19] ITU-T, Information technology – Open Systems Interconnection – The Directory: Public-key and attribute certificate frameworks, Recommendation ITU-T X.509 | ISO/IEC 9594-8, October 2019. Available at https://www.itu.int/rec/T-REC-X.509-201910-I/en.
[NCS20] NCSC, Design and build a privately hosted Public Key Infrastructure: Principles for the design and build of in-house Public Key Infrastructure (PKI), National Cyber Security Centre guidance, November 2020. Available at https://www.ncsc.gov.uk/collection/in-house-public-key-infrastructure/introduction-to-public-key-infrastructure/ca-hierarchy.
[NIS13] NIST, Digital Signature Standard (DSS), FIPS PUB 186-4, Information Technology Laboratory, National Institute of Standards and Technology, 2013. https://doi.org/10.6028/NIST.FIPS.186-4.
[SC16] J. J. Stapleton and W. Clay Epstein, Security without Obscurity: A Guide to PKI Operations, CRC Press, 2016. https://doi.org/10.1201/b19725.

R

Picture of Yee Wei Law

Rivest-Shamir-Adleman (RSA) cryptosystem

by Yee Wei Law - Saturday, 19 August 2023, 8:11 PM
 

See 👇 attachment (coming soon) or the latest source on Overleaf.


Picture of Yee Wei Law

Rowhammer

by Yee Wei Law - Saturday, 22 April 2023, 11:00 PM
 

Not ready for 2023 but see reference below.

References

[SD15] M. Seaborn and T. Dullien, Exploiting the DRAM rowhammer bug to gain kernel privileges, Black Hat, 2015. Available at https://www.blackhat.com/docs/us-15/materials/us-15-Seaborn-Exploiting-The-DRAM-Rowhammer-Bug-To-Gain-Kernel-Privileges.pdf.

S

Picture of Yee Wei Law

Safe programming languages

by Yee Wei Law - Saturday, 24 May 2025, 2:46 PM
 

A safe programming language is one that is memory-safe (see Definition 1), type-safe (see Definition 2) and thread-safe (see Definition 3).

Definition 1: Memory safety [WWK+21]

Assurance that adversaries cannot read or write to memory locations other than those intended by the programmer.

A significant percentage of software vulnerabilities have been attributed to memory safety issues [NSA22], hence memory safety is of critical importance.

Examples of violations of memory safety can be found in the discussions of common weaknesses CWE-787 and CWE-125.

Definition 2: Type safety [Fru07, Sec. 1.1]

Type safety is a formal guarantee that the execution of any program is free of type errors, which are undesirable program behaviours resulting from attempts to perform on some value an operation that is inappropriate to the type of the value.

For example, applying a factorial function to any value that is not an integer should result in a type error.

Type safety ⇒ memory safety, but the converse is not true [SM07, Sec. 6.5.2], hence type safety is commonly considered to be a central theme in language-based security [Fru07].
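The factorial example in Definition 2 can be made concrete. In a dynamically checked language such as Python, the inappropriate operation is rejected at run time with a TypeError; statically type-safe languages such as Rust or Java reject the equivalent call at compile time. Either way, no type error goes undetected, which is the guarantee Definition 2 formalises.

```python
# A type error: applying factorial to a value that is not an integer.
# Python detects the inappropriate operation and raises TypeError
# instead of computing a meaningless result.
import math

print(math.factorial(5))  # prints 120: the operation suits the type

try:
    math.factorial("five")  # a str is not an acceptable operand
except TypeError as exc:
    print("type error caught:", exc)
```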

Type-safe programming languages, e.g., Java, Ruby, C#, Go, Kotlin, Swift and Rust, have been around for a while. However, memory-unsafe languages are still being used because:

  • Type-safety features come at the expense of performance. There is, for example, overhead associated with checking the bounds on every array access [NSA22].

    Even Rust, the current speed champion among type-safe languages and the only type-safe language to have made it into the Linux kernel [VN22] and the Windows kernel [Thu23], is not efficient enough for all use cases [Iva22]. This is one of the reasons why Google is developing Carbon as a successor to C++.

  • Type-safety features also increase resource consumption. Most memory-safe languages use garbage collection for memory management [NSA22], which translates to higher memory usage.
  • Although most type-safe languages are supported on the mainstream computing platforms (e.g., Wintel), the same cannot be said of embedded platforms.

    It can be challenging to program a resource-constrained platform using a type-safe language.

  • There is already a vast amount of legacy code in C/C++ and other memory-unsafe languages.

    The cost to port legacy code, including the cost of training programmers, is often prohibitive.

    Depending on the language, interfacing memory-safe code with unsafe legacy code can be cumbersome.

  • Besides the risk of invoking unsafe code, it remains easy to write insecure software in a type-safe language, e.g., by not validating user input or not implementing access control.

    Programmers sometimes cite this as a reason to stay with the memory-unsafe languages they are already familiar with.

Nevertheless, adoption of type-safe languages, especially Rust, has been on the rise [Cla23].

Thread safety rounds out the desirable properties of type-safe languages.

Definition 3: Thread safety [Ora19, Ch. 7]

The avoidance of data races, which occur when data are set to either correct or incorrect values, depending upon the order in which multiple threads access and modify the data.
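A minimal sketch of data-race avoidance (thread counts and iteration counts are arbitrary): two threads perform a read-modify-write on a shared counter, and a lock serialises the updates so the final value is deterministic. Without the lock, the interleaved `counter += 1` operations could lose updates, i.e., the counter would be set to a correct or incorrect value depending on thread ordering.

```python
# Avoiding a data race: two threads increment a shared counter.
# The read-modify-write in `counter += 1` can interleave across
# threads; holding the lock serialises the updates.
import threading

counter = 0
lock = threading.Lock()

def increment(times: int) -> None:
    global counter
    for _ in range(times):
        with lock:          # only one thread updates at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # deterministically 200000 because updates are serialised
```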

Watch the following LinkedIn Learning video about thread safety:

Thread safety from IoT Foundations: Operating Systems Fundamentals by Ryan Hu

Rust is an example of a type-safe language that is also thread-safe.

References

[Cla23] T. Claburn, Memory safety is the new black, fashionable and fit for any occasion: Calls to avoid C/C++ and embrace Rust grow louder, The Register, January 2023. Available at https://www.theregister.com/2023/01/26/memory_safety_mainstream/.
[Fru07] N. G. Fruja, Type safety of C# and .Net CLR, Ph.D. thesis, ETH Zürich, 2007. https://doi.org/10.3929/ethz-a-005357653.
[Iva22] N. Ivanov, Is Rust C++-fast? Benchmarking System Languages on Everyday Routines, arXiv preprint arXiv:2209.09127, 2022. https://doi.org/10.48550/ARXIV.2209.09127.
[NSA22] NSA, Software memory safety, Cybersecurity Information Sheet, November 2022. Available at https://media.defense.gov/2022/Nov/10/2003112742/-1/-1/0/CSI_SOFTWARE_MEMORY_SAFETY.PDF.
[Ora19] Oracle, Multithreaded programming guide, Part No: E54803, March 2019. Available at https://docs.oracle.com/cd/E53394_01/pdf/E54803.pdf.
[SM07] S. Smith and J. Marchesini, The Craft of System Security, Addison-Wesley Professional, 2007. Available at https://learning.oreilly.com/library/view/the-craft-of/9780321434838/.
[Thu23] P. Thurrott, First Rust Code Shows Up in the Windows 11 Kernel, blog post, May 2023. Available at https://www.thurrott.com/windows/windows-11/282995/first-rust-code-shows-up-in-the-windows-11-kernel.
[VN22] S. Vaughan-Nichols, Linus Torvalds: Rust will go into Linux 6.1, ZDNET, September 2022. Available at https://www.zdnet.com/article/linus-torvalds-rust-will-go-into-linux-6-1/.
[WWK+21] D. Wagner, N. Weaver, P. Kao, F. Shakir, A. Law, and N. Ngai, Computer security, online textbook for CS 161 Computer Security at UC Berkeley, 2021. Available at https://textbook.cs161.org/.

