CCSDS File Delivery Protocol
The CCSDS File Delivery Protocol (CFDP) is standardised in [CCS20].
CFDP has existed for decades, and it is intended to enable file delivery services in space (space-to-ground, ground-to-space, and space-to-space) environments [CCS20, Sec. 1.1]. CFDP defines 1️⃣ a protocol suitable for the transmission of files to and from spacecraft data storage, and 2️⃣ file management services to allow control over the storage medium [CCS20, Sec. 2.1]. CFDP assumes a virtual filestore and associated services that an implementation must map to the capabilities of the actual underlying filestore [CCS20, Sec. 1.1]. File transfers can be either unreliable (class 1) or reliable (class 2):
Class 1 [CCS20, Sec. 7.2]: All file segments are transferred without the possibility of retransmission. End of file (EOF) is not acknowledged by the receiver. When the Closure Requested flag is set, the receiver is required to send a Finished PDU upon receiving all file segments (or when the request is cancelled), but the sender does not need to acknowledge the Finished PDU. The Closure Requested flag is useful when the underlying communication protocol is reliable.
Class 2 [CCS20, Sec. 7.3]: The receiver is required to acknowledge the EOF PDU, and the sender has to acknowledge the Finished PDU. Sending a PDU that requires acknowledgment starts a timer. If the timer expires before the acknowledgment has been received, the relevant PDU is resent. This repeats until the ACK Timer Expiration Counter [CCS21b, Sec. 4.4] reaches a predefined maximum value. If the counter has reached its maximum value and the acknowledgment has still not been received, a fault condition is triggered, which may cause the transfer to be abandoned, cancelled or suspended. The receiver can also indicate missing metadata or file data by sending NAK PDUs.
Fig. 2 shows a sample class-2 Copy File transaction between two entities. Fig. 3 summarises 1️⃣ the operation primitives and PDUs in CFDP, as well as 2️⃣ the relationships of these primitives and PDUs to the operational process from initiation through termination. Watch an introduction to CFDP on YouTube: There are several open-source implementations of CFDP.
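The class-2 retransmission logic described above can be sketched in C. Everything here (the names, the blocking transmit callback, the counter limit) is illustrative and is not taken from the CFDP specification or any implementation:

```c
#include <stdbool.h>

#define ACK_LIMIT 4   /* illustrative maximum for the ACK Timer Expiration Counter */

/* Transmit callback: sends a PDU and waits for the ACK timer;
 * returns true if the acknowledgment arrived before the timer expired. */
typedef bool (*send_fn)(const char *pdu);

typedef enum { DELIVERED, FAULT } outcome;

/* Send an acknowledgment-requiring PDU (e.g. the EOF PDU), re-sending it
 * on every timer expiry until it is acknowledged or the expiration
 * counter exceeds its predefined maximum, at which point a fault
 * condition is declared (the transfer may then be abandoned, cancelled
 * or suspended). */
outcome send_with_ack(send_fn transmit, const char *pdu) {
    int expirations = 0;
    while (!transmit(pdu)) {             /* timer expired without an ACK */
        if (++expirations > ACK_LIMIT)
            return FAULT;                /* counter exceeded its maximum */
    }
    return DELIVERED;
}
```

A real implementation would drive this from per-transaction timers and a PDU state machine rather than a blocking loop.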
References
CCSDS optical communications physical layer
CCSDS publications naming convention
Publications from the Consultative Committee for Space Data Systems (CCSDS) can be found here. Each publication has an identifier of the form MMM.MM-A-N, where MMM.MM is the document number, A is a letter indicating the book type (e.g., B for a Blue Book recommended standard, G for a Green Book informational report), and N is the issue number.
CCSDS RF communications physical layer
The International Telecommunication Union (ITU) has defined a series of regulations or recommendations for space research, space operations and Earth exploration-satellite services [ITU20, Recommendation ITU-R SA.1154-0], but CCSDS has tweaked these recommendations for their purposes [CCS21, Sec. 1.5]. CCSDS has defined two mission categories [CCS21, Sec. 1.5]:
Orthogonal to the preceding classification, CCSDS has also divided their recommendations into these six categories [CCS21, Sec. 2]:
For example, the recommendations for telemetry RF communications are summarised in Table 1. Note:
The Proximity-1 physical layer is separate from all the above. References
Complexity theory
The attachment covers the following topics:
Cryptography: introductory overview
See 👇 attachment or the latest source on Overleaf.
CWE-1037
This is a student-friendly explanation of the hardware weakness CWE-1037 “Processor Optimization Removal or Modification of Security-critical Code”.
While ever more security mechanisms are baked into software, processors optimise the execution of programs in ways that can render these mechanisms ineffective.
Example 1
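The best-known instance of this weakness is the Spectre family of attacks. In the classic Spectre-v1 pattern, sketched below, the bounds check is architecturally sound, yet a processor may speculatively execute the guarded loads with an out-of-bounds index before the check resolves, leaving a secret-dependent trace in the data cache (the array names and sizes are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];            /* the intended buffer                    */
uint8_t array2[256 * 64];      /* probe array: one cache line per value  */
size_t  array1_size = 16;

/* Architecturally, the check below prevents out-of-bounds reads, but a
 * processor that predicts the branch as taken may speculatively perform
 * both loads with a malicious x, caching a line of array2 whose index
 * depends on secret memory at array1 + x. */
uint8_t victim(size_t x) {
    if (x < array1_size)                 /* security-critical check     */
        return array2[array1[x] * 64];   /* may run ahead of the check  */
    return 0;
}
```

An attacker then measures which array2 cache line became hot to recover one byte of secret data; mitigations such as speculation barriers (e.g. lfence on x86) are placed between the check and the dependent loads.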
🛡 General mitigation
Software fixes exist but are only partial, because speculative execution remains a favoured way of increasing processor performance. Fortunately, the likelihood of successful exploitation is considered low.
References
CWE-1189
This is a student-friendly explanation of the hardware weakness CWE-1189 “Improper Isolation of Shared Resources on System-on-a-Chip (SoC)”.
A system-on-a-chip (SoC) may have many functions but a limited number of pins or pads. A pin can only perform one function at a time, but it can be configured to perform multiple functions; this technique is called pin multiplexing. Similarly, multiple resources on the chip may be shared to multiplex and support different features or functions. When such resources are shared between trusted and untrusted agents, untrusted agents may be able to access assets authorised only for trusted agents. Consider the generic SoC architecture in Fig. 1 below: The SRAM in the hardware root of trust (HRoT) is mapped to the core{0-N} address space accessible by the untrusted part of the system. The HRoT interface (hrot_iface in Fig. 1) mediates access to private memory ranges, allowing the SRAM to function as a mailbox for communication between the trusted and untrusted partitions.
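The mediation that hrot_iface performs can be pictured as a range check on every incoming access. The sketch below is purely illustrative; the addresses, names and partitioning are assumptions, not taken from any real SoC:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative hrot_iface-style filter: untrusted cores may reach the
 * mailbox window of the HRoT SRAM, but the private region (keys,
 * firmware) must be rejected.  All addresses below are made up. */

#define SRAM_BASE        0x40000000u
#define SRAM_PRIVATE_END 0x40007FFFu   /* private HRoT region    */
#define MAILBOX_BASE     0x40008000u   /* shared mailbox window  */
#define MAILBOX_END      0x40008FFFu

typedef enum { AGENT_TRUSTED, AGENT_UNTRUSTED } agent;

bool access_allowed(agent who, uint32_t addr) {
    if (addr < SRAM_BASE || addr > MAILBOX_END)
        return false;                  /* outside the HRoT SRAM  */
    if (who == AGENT_TRUSTED)
        return true;                   /* HRoT sees everything   */
    /* Untrusted agents are confined to the mailbox window. */
    return addr >= MAILBOX_BASE && addr <= MAILBOX_END;
}
```

The weakness arises when such a filter is missing or misconfigured, e.g. when the private range is also mapped into the untrusted cores' address space.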
Example 1
An example of CWE-1189 in the real world is CVE-2020-8698 with the description: “Improper isolation of shared resources in some Intel(R) Processors may allow an authenticated user to potentially enable information disclosure via local access”.
🛡 General mitigation
Untrusted agents should not share resources with trusted agents; when resources must be shared, avoid mixing agents of varying trust levels.
CWE-1191
This is a student-friendly explanation of the hardware weakness CWE-1191 “On-Chip Debug and Test Interface With Improper Access Control”.
The internal information of a device may be accessed through a scan chain of interconnected internal registers, typically through a Joint Test Action Group (JTAG) interface.
The JTAG interface is so important in the area of hardware security that you should make sure you read the knowledge base entry carefully. Sometimes, designers choose not to expose the debug pins on the motherboard.
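When the debug interface is retained but locked behind an unlock key, the comparison of a supplied key against the stored one must not leak, through its running time, how many leading bytes matched. A hedged sketch of a constant-time comparison (the function name is illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time comparison for a debug-unlock key: the run time depends
 * only on the length, never on where the first mismatching byte occurs,
 * so response latency reveals nothing about the key. */
int keys_equal(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);   /* accumulate, never branch */
    return diff == 0;
}
```

A naive memcmp-style loop that returns at the first mismatch lets an attacker recover the key byte by byte from timing.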
Example 1
Barco’s ClickShare family of products is designed to provide end users with wireless presenting capabilities, eliminating the need for wired connections such as HDMI [Wit19]. ClickShare Button R9861500D01 devices, before firmware version 1.9.0, were vulnerable to CVE-2019-18827.
🛡 General mitigation
Disable the JTAG interface or implement access control (at least debug authorisation). Authentication logic, if implemented, should resist timing attacks. Security-sensitive data stored in registers, such as keys, should be cleared when entering debug mode. References
CWE-125
This is a student-friendly explanation of the software weakness CWE-125 “Out-of-bounds Read”, where the vulnerable entity reads data past the end, or before the beginning, of the intended buffer.
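As a minimal, hedged illustration (not taken from the CWE entry): an off-by-one bounds check, using <= instead of <, lets a caller read one byte past the intended buffer, straight into adjacent secret data. The layout is contrived so that the stray read stays inside one allocation and the behaviour remains defined:

```c
#include <stddef.h>
#include <stdint.h>

/* One 8-byte object: the first 4 bytes are the intended (public)
 * buffer, the remaining 4 hold a secret that must never be exposed. */
static const uint8_t mem[8] = { 'p', 'u', 'b', '!', 'K', 'E', 'Y', '1' };
#define PUBLIC_LEN 4

/* Flawed accessor: idx <= PUBLIC_LEN admits idx == PUBLIC_LEN,
 * i.e. one element past the end of the intended buffer. */
int read_public(size_t idx) {
    if (idx <= PUBLIC_LEN)          /* BUG: should be idx < PUBLIC_LEN */
        return mem[idx];
    return -1;
}
```

Here read_public(4) returns 'K', the first byte of the secret; the corrected check rejects it.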
This and CWE-787 are two sides of the same coin. Typically, this weakness allows an attacker to read sensitive information from unexpected memory locations or cause a crash. Example 1
A high-profile vulnerability is CVE-2014-0160, infamously known as the Heartbleed bug. Watch an accessible explanation given by Computerphile: 🛡 General mitigation
CWE-1256
CWE-1260
This is a student-friendly explanation of the hardware weakness CWE-1260. Not ready for 2023.
CWE-1300
This is a student-friendly explanation of the hardware weakness CWE-1300 “Improper Protection of Physical Side Channels”. A hardware product with this weakness does not contain sufficient protection mechanisms to prevent physical side channels from exposing sensitive information through patterns in physically observable phenomena, such as variations in power consumption, electromagnetic emissions (EME), or acoustic emissions.
Example 1
Google’s Titan Security Key is a FIDO universal 2nd-factor (U2F) hardware device that became available in July 2018. Unfortunately, the security key is susceptible to side-channel attacks that observe the local electromagnetic radiation of its secure element (an NXP A700x chip, now discontinued) during an ECDSA signing operation [RLMI21]. The attack allows an adversary to clone the secret key stored in a Titan Security Key. Watch the USENIX Security ’21 presentation on the side-channel attack: More examples of side-channel attacks are available here.
🛡 General mitigation
The standard countermeasures are those that apply to CWE-1300 “Improper Protection of Physical Side Channels” and its parent weakness CWE-203 “Observable Discrepancy”. CWE-1300:
CWE-203:
References
CWE-1384
This is a student-friendly explanation of the hardware weakness CWE-1384 “Improper Handling of Physical or Environmental Conditions”, where a hardware product does not properly handle unexpected physical or environmental conditions that occur naturally or are artificially induced. The weaknesses CWE-1384 and CWE-1300 can be seen as two sides of the same coin: the latter is about leakage of sensitive information, while the former is about injection of malicious signals.
Example 1
The GhostTouch attack [WMY+22] generates electromagnetic interference (EMI) signals on the scan-driving-based capacitive touchscreen of a smartphone, which result in “ghostly” touches on the touchscreen; see Fig. 1. The EMI signals can be generated, for example, using ChipSHOUTER. These ghostly touches enable the attacker to actuate unauthorised taps and swipes on the victim’s touchscreen. Watch the authors’ presentation at USENIX Security ’22: PCspooF is another example of an attack exploiting weakness CWE-1384. 🛡 General mitigation
The product specification should state how the product is expected to behave when its physical or environmental boundary conditions are exceeded, e.g., by shutting down. Where possible, include independent components that can detect excess environmental conditions and are capable of shutting down the product. Where possible, use shielding or other materials that increase the adversary’s workload and reduce the likelihood of successfully triggering a security-related failure.
References
CWE-787
This is a student-friendly explanation of the software weakness CWE-787 “Out-of-bounds Write”, where the vulnerable entity writes data past the end, or before the beginning, of the intended buffer. This and CWE-125 are two sides of the same coin. This can be caused by incorrect pointer arithmetic (see Example 1), accessing invalid pointers due to incomplete initialisation or memory release (see Example 2), etc.
Example 1
In the adjacent C code (not reproduced here), a char pointer is advanced past the end of its buffer through incorrect pointer arithmetic. Reminder: in a POSIX environment, a C program can be compiled and linked into an executable using a command such as c99 -o prog prog.c.
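Since the original code is not reproduced above, here is a comparable, hedged sketch: an off-by-one loop bound makes a char pointer write one element past the intended buffer, silently clobbering adjacent data. The layout is contrived so the stray write stays inside one object and remains defined:

```c
/* One 8-byte object: the first 4 bytes are the intended buffer; the
 * 5th byte is an adjacent flag that the flawed loop overwrites. */
static char mem[8] = { 0, 0, 0, 0, 'F', 0, 0, 0 };
#define BUF_LEN 4

/* Flawed fill: i <= BUF_LEN performs BUF_LEN + 1 writes. */
void fill(char c) {
    char *p = mem;
    for (int i = 0; i <= BUF_LEN; i++)  /* BUG: should be i < BUF_LEN */
        *p++ = c;                       /* final pass clobbers mem[4] */
}
```

After fill('A'), the adjacent flag mem[4] has been silently overwritten.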
Example 2
In the adjacent C code (not reproduced here), the memory allocated to a pointer is released and then accessed again afterwards.
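Again the original code is not shown; below is a hedged sketch of the release-then-access pattern, together with the common null-after-free defence (all names are illustrative):

```c
#include <stdlib.h>

typedef struct { int value; } record;

record *rec_new(int v) {
    record *r = malloc(sizeof *r);
    if (r) r->value = v;
    return r;
}

/* Writing through a pointer after free() is a use-after-free; when the
 * access is a write, it falls under CWE-787.  Nulling the pointer at
 * the free site turns a later accidental use into a detectable null
 * dereference instead of silent memory corruption. */
void rec_free(record **rp) {
    free(*rp);
    *rp = NULL;
}
```

Callers pass the address of their pointer, so the dangling pointer cannot be reused unnoticed.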
Typically, the weakness CWE-787 can result in corruption of data, a crash, or code execution. 🛡 General mitigation
A long list of mitigation measures exists, so only a few are mentioned here:
Read about more measures here. Example 3
Using the
Enabling compilation of unsafe code in Visual Studio as per Fig. 1, the code above can be compiled. Once compiled and run, the code will not trigger any runtime error unless the out-of-bounds write is reached. Question: What runtime error would the same write trigger outside an unsafe context?
CWE-79
This is a student-friendly explanation of the software weakness CWE-79 “Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')”.
This weakness exists when user-controllable input is not neutralised or is incorrectly neutralised before it is placed in output that is used as a web page that is served to other users. In general, cross-site scripting (XSS) vulnerabilities occur when:
Once the malicious script is injected, a variety of attacks are achievable, e.g.,
The attacks above can usually be launched without alerting the victim. Even with careful users, URL encoding or Unicode can be used to obfuscate web requests and make them look less suspicious.
Watch an introduction to XSS on LinkedIn Learning: Understanding cross-site scripting from Security Testing: Vulnerability Management with Nessus by Mike Chapple
Watch a demonstration of XSS on LinkedIn Learning: Cross-site scripting attacks from Web Security: Same-Origin Policies by Sasha Vodnik
Example 1
The vulnerability CVE-2022-20916 caused the web-based management interface of Cisco IoT Control Center to allow an unauthenticated, remote attacker to conduct an XSS attack against a user of the interface. The vulnerability was due to the absence of proper validation of user-supplied input.
🛡 General mitigation
A long list of mitigation measures exists, so only a few are mentioned here:
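One of those measures is contextual output encoding: neutralise user-controlled input by entity-encoding it before it is placed in HTML output. A hedged C sketch of the idea (real applications should rely on a vetted encoding library rather than hand-rolled code):

```c
#include <string.h>

/* Minimal HTML entity encoder: neutralises the characters that let
 * injected input break out of an HTML text context.  Stops early if
 * the output buffer would overflow; outsz must be at least 1. */
void html_escape(const char *in, char *out, size_t outsz) {
    size_t used = 0;
    if (outsz == 0) return;
    for (; *in && used + 7 < outsz; in++) {
        char single[2] = { *in, '\0' };
        const char *rep;
        switch (*in) {
        case '&':  rep = "&amp;";  break;
        case '<':  rep = "&lt;";   break;
        case '>':  rep = "&gt;";   break;
        case '"':  rep = "&quot;"; break;
        case '\'': rep = "&#x27;"; break;
        default:   rep = single;   break;
        }
        strcpy(out + used, rep);
        used += strlen(rep);
    }
    out[used] = '\0';
}
```

With this encoding, an injected <script> tag is rendered as inert text instead of being executed by the victim's browser.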
Read about more measures here and also consult the OWASP Cross Site Scripting Prevention Cheat Sheet. References
CWE-917
This is a student-friendly explanation of the software weakness CWE-917 “Improper Neutralization of Special Elements used in an Expression Language Statement ('Expression Language Injection')”. The vulnerable entity constructs all or part of an expression language (EL) statement in a framework such as Jakarta Server Pages (JSP, formerly JavaServer Pages) using externally-influenced input from an upstream component, but it does not neutralise or it incorrectly neutralises special elements that could modify the intended EL statement before it is executed.
Example 1
The infamous vulnerability Log4Shell (CVE-2021-44228), which occupied headlines for months after its disclosure in December 2021, could not be a better example. Watch an explanation of Log4Shell on YouTube:
🛡 General mitigation
Avoid adding user-controlled data into an expression interpreter. If user-controlled data must be added to an expression interpreter, one or more of the following should be performed:
By default, disable the processing of EL expressions. In JSP, this can be done by setting the page directive attribute isELIgnored to true, i.e., <%@ page isELIgnored="true" %>.
References
Cyber Kill Chain
The Cyber Kill Chain® framework/model was developed by Lockheed Martin as part of their Intelligence Driven Defense® model for the identification and prevention of cyber intrusions. The model identifies what an adversary must complete in order to achieve their objectives. The seven steps of the Cyber Kill Chain (Reconnaissance, Weaponization, Delivery, Exploitation, Installation, Command and Control, and Actions on Objectives) shed light on an adversary’s tactics, techniques and procedures (TTPs).
Watch a quick overview of the Cyber Kill Chain on LinkedIn Learning: Overview of the cyber kill chain from Ethical Hacking with JavaScript by Emmanuel Henri
Example 1: Modelling Stuxnet with the Cyber Kill Chain
Stuxnet (W32.Stuxnet in Symantec’s naming scheme) was discovered in 2010, with some components being used as early as November 2008 [FMC11]. Stuxnet is a large and complex piece of malware that targets industrial control systems, leveraging multiple zero-day exploits, an advanced Windows rootkit, complex process injection and hooking code, network infection routines, peer-to-peer updates, and a command and control interface [FMC11]. Watch a brief discussion of modelling Stuxnet with the Cyber Kill Chain: Stuxnet and the kill chain from Practical Cybersecurity for IT Professionals by Malcolm Shore ⚠ Contrary to what the video above claims, Stuxnet does have a command and control routine/interface [FMC11]. References
