
C


CCSDS File Delivery Protocol

by Yee Wei Law - Monday, 2 June 2025, 7:53 PM
 

The CCSDS File Delivery Protocol (CFDP) is standardised in CCSDS 727.0-B-5 [CCS20].

CFDP has existed for decades; it is designed to provide file delivery over space links (space-to-ground, ground-to-space and space-to-space) [CCS20, Sec. 1.1].

CFDP defines 1️⃣ a protocol suitable for the transmission of files to and from spacecraft data storage, and 2️⃣ file management services to allow control over the storage medium [CCS20, Sec. 2.1].

CFDP assumes a virtual filestore and associated services that an implementation must map to the capabilities of the actual underlying filestore used [CCS20, Sec. 1.1].

File transfers can be either unreliable (class 1) or reliable (class 2):

Class 1 [CCS20, Sec. 7.2]

All file segments are transferred without the possibility of retransmission.

End of file (EOF) is not acknowledged by the receiver.

When the flag Closure Requested is set, the receiver is required to send a Finished PDU upon receiving all file segments (or when the request is cancelled), but the sender does not need to acknowledge the Finished PDU.

The Closure Requested flag is useful when the underlying communication protocol is reliable.

Class 2 [CCS20, Sec. 7.3]

The receiver is required to acknowledge the EOF PDU and the sender has to acknowledge the Finished PDU.

Sending a PDU that requires acknowledgment triggers a timer.

If the timer expires before the acknowledgment is received, the relevant PDU is resent.

This repeats until the ACK Timer Expiration Counter [CCS21b, Sec. 4.4] reaches a predefined maximum value.

Finally, if the counter has reached its maximum value and the acknowledgment has still not been received, a fault condition is triggered, which may cause the transfer to be abandoned, cancelled or suspended.

The receiver can also indicate missing metadata or data by sending NAK PDUs.
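The retransmission logic above can be sketched as a loop bounded by the ACK Timer Expiration Counter. The counter limit and the simulated acknowledgment below are hypothetical stand-ins for the configurable timer machinery of [CCS21b, Sec. 4.4]:

```c
#include <stdbool.h>

#define ACK_LIMIT 3  /* hypothetical maximum for the ACK Timer Expiration Counter */

/* Simulated send-and-wait: pretend the peer's acknowledgment arrives only
 * on or after a given attempt number. */
static bool send_and_wait_for_ack(int attempt, int acked_on_attempt) {
    return attempt >= acked_on_attempt;
}

/* Send a PDU that requires acknowledgment. Each timer expiry increments the
 * counter and triggers a retransmission. Returns the number of transmissions
 * used, or -1 if a fault condition is declared (the transfer may then be
 * abandoned, cancelled or suspended). */
int transmit_with_ack(int acked_on_attempt) {
    for (int counter = 0; counter <= ACK_LIMIT; counter++) {
        if (send_and_wait_for_ack(counter, acked_on_attempt))
            return counter + 1;  /* acknowledged */
        /* ACK timer expired: counter increments, PDU is resent */
    }
    return -1;  /* counter reached its maximum without an acknowledgment */
}
```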

Fig. 2 shows a sample class-2 Copy File transaction between two entities.

Fig. 2: A sample Copy File transaction where a NAK is sent once the (N+1)th File Data PDU is found missing [CCS20, Sec. 3.3]. Additionally, the source user replies with an ACK PDU upon receiving a Finished PDU from the destination user.

Fig. 3 summarises 1️⃣the operation primitives and PDUs in CFDP, as well as 2️⃣ the relationships of these primitives and PDUs to the operational process from initiation through termination.

Fig. 3: An operations view of CFDP: a summary of CFDP operation primitives, PDUs and their relationships to the operational process [CCS21b, Figure 2-1].

Watch an introduction to CFDP on YouTube:

There are several open-source implementations of CFDP, e.g.,

References

[CCS20] CCSDS, CCSDS File Delivery Protocol (CFDP), Recommended Standard CCSDS 727.0-B-5, The Consultative Committee for Space Data Systems, July 2020. Available at https://public.ccsds.org/Pubs/727x0b5.pdf.
[CCS21a] CCSDS, CCSDS File Delivery Protocol (CFDP) — Part 1: Introduction and Overview, Informational Report CCSDS 720.1-G-4, The Consultative Committee for Space Data Systems, May 2021. Available at https://public.ccsds.org/Pubs/720x1g4.pdf.
[CCS21b] CCSDS, CCSDS File Delivery Protocol (CFDP) — Part 2: Implementers Guide, Informational Report CCSDS 720.2-G-4, The Consultative Committee for Space Data Systems, May 2021. Available at https://public.ccsds.org/Pubs/720x2g4.pdf.


CCSDS optical communications physical layer

by Yee Wei Law - Saturday, 19 August 2023, 8:10 PM
 
Coming soon.



CCSDS publications naming convention

by Yee Wei Law - Wednesday, 11 January 2023, 2:51 PM
 

Publications from the Consultative Committee for Space Data Systems (CCSDS) can be found here.

Each publication has an identifier of the form MMM.MM-A-N, where

  • The letter A is “B” for a blue book (recommended standard), “M” for a magenta book (recommended practice), and “G” for a green book (informational report).
  • The suffix N is the issue number.
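As a sketch, an identifier such as 727.0-B-5 can be pulled apart with sscanf; the function name is illustrative, and the parser assumes the single-digit minor part seen in the publications cited elsewhere in this glossary:

```c
#include <stdio.h>

/* Parse a CCSDS publication identifier such as "727.0-B-5" into its numeric
 * parts, book letter and issue number. Returns 1 on success, 0 otherwise. */
int parse_ccsds_id(const char *id, int *major, int *minor, char *book, int *issue) {
    /* e.g. "727.0-B-5" -> major 727, minor 0, book 'B', issue 5 */
    return sscanf(id, "%d.%d-%c-%d", major, minor, book, issue) == 4;
}
```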


CCSDS RF communications physical layer

by Yee Wei Law - Monday, 2 June 2025, 7:54 PM
 

The International Telecommunication Union (ITU) has defined a series of regulations or recommendations for space research, space operations and Earth exploration-satellite services [ITU20, Recommendation ITU-R SA.1154-0], but CCSDS has tweaked these recommendations for their purposes [CCS21, Sec. 1.5].

CCSDS has defined two mission categories [CCS21, Sec. 1.5]:

  • Category A for missions at an altitude above the Earth of less than 2 million km.
  • Category B for missions at an altitude above the Earth of 2 million km or more.
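The two categories reduce to a one-line check; a sketch in C (function name is illustrative):

```c
/* Mission category per CCSDS: 'A' below 2 million km altitude above the
 * Earth, 'B' at or above that threshold. */
char mission_category(double altitude_km) {
    return altitude_km < 2e6 ? 'A' : 'B';
}
```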

Orthogonal to the preceding classification, CCSDS has also divided their recommendations into these six categories [CCS21, Sec. 2]:

  1. earth-to-space RF
  2. telecommand
  3. space-to-earth RF
  4. telemetry

    Here, “telemetry” encompasses spacecraft housekeeping data and mission data (e.g., video) transmitted from the spacecraft directly to an Earth station or via another spacecraft (space-to-space return link).
  5. radio metric
  6. spacecraft

For example, the recommendations for telemetry RF comm are summarised in Table 1.

Table 1: Telemetry recommendation summary [CCS13a, p. 2.0-5], where NRZ = non-return-to-zero, PCM = pulse code modulation, QPSK = quadrature phase-shift keying,  OQPSK = offset quadrature phase-shift keying, BPSK = binary phase-shift keying, GMSK = Gaussian minimum shift keying, APSK = amplitude and phase-shift keying.

Note:

  • For Category-A missions [CCS21, Sec. 2.4.12A], filtered OQPSK and GMSK modulations are recommended for high-rate telemetry in 1️⃣ the 2-GHz and 8-GHz Space Research bands, 2️⃣ the 8-GHz Earth Exploration-Satellite band, and 3️⃣ the 26-GHz Space Research band.

    Filtered 8PSK modulation is also recommended for the 8-GHz Earth Exploration-Satellite band.

    2 GHz, 8 GHz and 26 GHz are part of the L/S, X and Ka bands respectively.

    This knowledge base entry discusses usage of different frequency bands.

  • For Category-B missions [CCS21, Sec. 2.4.12B], GMSK is recommended for high-rate telemetry in the 2-GHz, 8-GHz and 32-GHz bands.
  • The 25.5–27.0 GHz band (part of K band) is already being used for high-rate transmission in many missions, and usage is expected to rise [CCS21, Sec. 2.4.23].

  • The 32-GHz band (part of Ka band) is planned to become the backbone for communications with high-rate Category-B missions [CCS21, Sec. 2.4.20B].

The Proximity-1 physical layer is separate from all the above.

References

[CCS13a] CCSDS, Proximity-1 Space Link Protocol—Physical Layer, Recommended Standard CCSDS 211.1-B-4, The Consultative Committee for Space Data Systems, December 2013. Available at https://public.ccsds.org/Pubs/211x1b4e1.pdf.
[CCS21] CCSDS, Radio Frequency and Modulation Systems—Part 1 Earth Stations and Spacecraft, Recommended Standard CCSDS 401.0-B-32, The Consultative Committee for Space Data Systems, October 2021. Available at https://public.ccsds.org/Pubs/401x0b32.pdf.
[ITU20] ITU, Radio regulations, 2020. Available at https://www.itu.int/hub/publication/r-reg-rr-2020/.
[McC09] D. McClure, Overview of satellite communications, slides, 2009. Available at https://olli.gmu.edu/docstore/800docs/0909-803-Satcom-course.pdf.


Complexity theory

by Yee Wei Law - Sunday, 14 July 2024, 2:05 PM
 

The attachment covers the following topics:

  • Complexity and asymptotic notation.
  • Complexity classes.


Cryptography: introductory overview

by Yee Wei Law - Thursday, 9 October 2025, 5:22 PM
 
See 👇 attachment or the latest source on Overleaf.


CWE-1037

by Yee Wei Law - Saturday, 28 June 2025, 4:04 PM
 

This is a student-friendly explanation of the hardware weakness CWE-1037 “Processor Optimization Removal or Modification of Security-critical Code”, which is susceptible to

  • CAPEC-663 “Exploitation of Transient Instruction Execution”

While more and more security mechanisms are baked into software, processors optimise the execution of programs in ways that can render these mechanisms ineffective.

Example 1

The most high-profile exploits are known as Meltdown and Spectre (🖱 click links for details).

🛡 General mitigation

Software fixes exist but are partial as the use of speculative execution remains a favourable way of increasing processor performance.

Fortunately, the likelihood of successful exploitation is considered to be low.



CWE-1189

by Yee Wei Law - Saturday, 28 June 2025, 4:04 PM
 

This is a student-friendly explanation of the hardware weakness CWE-1189 “Improper Isolation of Shared Resources on System-on-a-Chip (SoC)”, which is susceptible to

  • CAPEC-124 “Shared Resource Manipulation”.

A system-on-a-chip (SoC) may have many functions but a limited number of pins or pads.

A pin can only perform one function at a time, but it can be configured to perform multiple functions; this technique is called pin multiplexing.

Similarly, multiple resources on the chip may be shared to multiplex and support different features or functions.

When such resources are shared between trusted and untrusted agents, untrusted agents may be able to access assets authorised only for trusted agents.

Consider the generic SoC architecture in Fig. 1 below:

Fig. 1: A generic SoC architecture. Diagram from MITRE.

The SRAM in the hardware root of trust (HRoT) is mapped to the core{0-N} address space accessible by the untrusted part of the system.

The HRoT interface (hrot_iface in Fig. 1) mediates access to private memory ranges, allowing the SRAM to function as a mailbox for communication between the trusted and untrusted partitions.

  • Assume malware resides in the untrusted partition and has access to the core{0-N} memory map.
  • The malware can then read from and write to the mailbox region of the SRAM, which is access-controlled by hrot_iface.
  • Security requires that no information enter or exit the mailbox region of the SRAM through hrot_iface while the system is in secure or privileged mode.
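The kind of mediation an interface such as hrot_iface is meant to enforce can be sketched in C. The agent labels, address range and function name below are entirely hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

#define MAILBOX_BASE 0x2000u  /* hypothetical mailbox region inside the shared SRAM */
#define MAILBOX_END  0x2FFFu

typedef enum { AGENT_UNTRUSTED, AGENT_TRUSTED } agent_t;

/* Mediate an access request: an untrusted agent may reach the mailbox region
 * only, never the rest of the HRoT's private SRAM; a trusted agent may reach
 * the whole range. */
bool access_allowed(agent_t agent, uint32_t addr) {
    if (agent == AGENT_TRUSTED)
        return true;
    return addr >= MAILBOX_BASE && addr <= MAILBOX_END;
}
```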
Example 1

An example of CWE-1189 in the real world is CVE-2020-8698 with the description: “Improper isolation of shared resources in some Intel(R) Processors may allow an authenticated user to potentially enable information disclosure via local access”.

  • A list of the affected Intel processors is available here.
  • Industrial PCs and CNC devices using the affected Intel processors were naturally affected, e.g., some industrial controllers made by Siemens, which fortunately could be patched by updating the BIOS.
🛡 General mitigation

Untrusted agents should not share resources with trusted agents; when resources must be shared, avoid mixing agents of varying trust levels.



CWE-1191

by Yee Wei Law - Saturday, 7 June 2025, 9:09 AM
 

This is a student-friendly explanation of the hardware weakness CWE-1191 “On-Chip Debug and Test Interface With Improper Access Control”, which is susceptible to

  • CAPEC-1 “Accessing Functionality Not Properly Constrained by ACLs” and
  • CAPEC-180 “Exploiting Incorrectly Configured Access Control Security Levels”.

The internal information of a device may be accessed through a scan chain of interconnected internal registers, typically through a Joint Test Action Group (JTAG) interface.

  • The JTAG interface provides access to these registers in a serial fashion in the form of a scan chain for the purposes of debugging programs running on the device.
  • Since almost all information contained within a device may be accessed over this interface, device manufacturers typically implement some form of access control — debug authorisation being the simplest form — in addition to on-chip protections, to prevent unintended use of this sensitive information.
  • If access control is not implemented or not implemented correctly, a user may be able to bypass on-chip protection mechanisms through the debug interface.

The JTAG interface is so important in the area of hardware security that you should make sure you read the knowledge base entry carefully.

Sometimes, designers choose not to expose the debug pins on the motherboard.

  • Instead, they choose to hide these pins in the intermediate layers of the board, to work around the lack of debug authorisation inside the chip.
  • In this scenario (without debug authorisation), when the debug interface is exposed, chip internals become accessible to an attacker.
Example 1

Barco’s ClickShare family of products is designed to provide end users with wireless presenting capabilities, eliminating the need for wired connections such as HDMI [Wit19].

ClickShare Button R9861500D01 devices, before firmware version 1.9.0, were vulnerable to CVE-2019-18827.

  • These devices were equipped with an i.MX28 System-on-Chip (SoC), which in turn was equipped with a JTAG interface for debugging [Wit19].
  • JTAG access protection could be enabled by setting the JTAG_SHIELD bit in the HW_DIGCTRL_CTRL register, which had to be done by a user application; on system reset, JTAG access became enabled again.

  • One way of exploiting this was booting the SoC in the “wait for JTAG” boot mode, where the ROM code entered an infinite loop that could only be broken by manipulating certain registers via JTAG.
  • Therefore, on these devices, although JTAG access was disabled after ROM code execution, JTAG access was possible while the system was running code from ROM, before control was handed over to the embedded firmware.
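The protection in question amounts to a single register write that firmware must perform itself early in boot. The register and bit names come from the advisory [Wit19], but the bit position and function name below are placeholders, not the real i.MX28 values:

```c
#include <stdint.h>

#define JTAG_SHIELD (1u << 0)  /* hypothetical bit position within HW_DIGCTRL_CTRL */

/* Set the JTAG_SHIELD bit in (a pointer to) the HW_DIGCTRL_CTRL register.
 * On reset the bit is clear, so JTAG access stays enabled until user code
 * runs this; the attack window is everything that executes before it. */
void disable_jtag(volatile uint32_t *hw_digctrl_ctrl) {
    *hw_digctrl_ctrl |= JTAG_SHIELD;
}
```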
🛡 General mitigation

Disable the JTAG interface or implement access control (at least debug authorisation).

Authentication logic, if implemented, should resist timing attacks.

Security-sensitive data stored in registers, such as keys, should be cleared when entering debug mode.
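For the timing-attack point, the standard building block is a comparison routine whose running time does not depend on where the inputs first differ; a minimal sketch in C (function name is illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Compare two byte strings of length n in time independent of the position
 * of the first difference, so a debug-authorisation check does not leak the
 * secret through timing. Returns 1 if equal, 0 otherwise. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];   /* accumulate differences; never exit early */
    return diff == 0;
}
```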

References

[Wit19] WithSecure Labs, Multiple Vulnerabilities in Barco ClickShare, advisory, 2019. Available at https://labs.withsecure.com/advisories/multiple-vulnerabilities-in-barco-clickshare.


CWE-125

by Yee Wei Law - Monday, 2 June 2025, 7:51 PM
 

This is a student-friendly explanation of the software weakness CWE-125 “Out-of-bounds Read”, where the vulnerable entity reads data past the end, or before the beginning, of the intended buffer.

This and CWE-787 are two sides of the same coin.

Typically, this weakness allows an attacker to read sensitive information from unexpected memory locations or cause a crash.
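Out-of-bounds reads often stem from an off-by-one bounds check. The sketch below (names are illustrative) contrasts a flawed lookup, which reads one element past the buffer when the index equals the length, with a correct one:

```c
#define LEN 4
static const int table[LEN] = {10, 20, 30, 40};

/* Flawed: the check uses <= so idx == LEN slips through and reads one
 * element past the end of table (CWE-125). */
int lookup_buggy(int idx) {
    if (idx <= LEN)            /* off-by-one: should be idx < LEN */
        return table[idx];
    return -1;
}

/* Fixed: reject any index outside [0, LEN). */
int lookup_safe(int idx) {
    if (idx >= 0 && idx < LEN)
        return table[idx];
    return -1;
}
```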

Example 1

A high-profile vulnerability is CVE-2014-0160, infamously known as the Heartbleed bug.

Watch an accessible explanation given by Computerphile:

🛡 General mitigation
  • Use a safe programming language that provides appropriate memory abstractions.
  • Assume all inputs are malicious. Where possible, apply the “accept known good” input validation strategy, i.e., use a list of acceptable inputs that strictly conform to specifications; and reject any input that does not strictly conform to specifications, or transform it into something that does.

  • When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules.

    Example of business-rule logic: “John” may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colours such as “red” or “blue”.

  • Ensure correct calculation of variables related to length, size, dimension, offset, etc. Be especially cautious of relying on a “sentinel” (e.g., NUL or any other special character) in untrusted inputs.
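The “accept known good” strategy from the list above can be sketched as a strict allowlist check; the colour example mirrors the business-rule note, and the function name is illustrative:

```c
#include <string.h>

/* Accept only inputs that exactly match an allowlist entry; everything else
 * is rejected outright rather than "cleaned up". */
int is_valid_colour(const char *input) {
    static const char *allowed[] = {"red", "blue"};
    for (size_t i = 0; i < sizeof allowed / sizeof allowed[0]; i++)
        if (strcmp(input, allowed[i]) == 0)
            return 1;
    return 0;
}
```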


CWE-1256

by Yee Wei Law - Thursday, 9 October 2025, 5:22 PM
 

This is a student-friendly explanation of the hardware weakness CWE-1256 “Improper Restriction of Software Interfaces to Hardware Features”.

Not ready for 2023.



CWE-1260

by Yee Wei Law - Thursday, 9 October 2025, 5:22 PM
 

This is a student-friendly explanation of the hardware weakness CWE-1260.

Not ready for 2023.



CWE-1300

by Yee Wei Law - Monday, 2 June 2025, 7:55 PM
 

This is a student-friendly explanation of the hardware weakness CWE-1300 “Improper Protection of Physical Side Channels”, which is susceptible to

  • CAPEC-189 “Black Box Reverse Engineering”
  • CAPEC-699 “Eavesdropping on a Monitor”

A hardware product with this weakness does not contain sufficient protection mechanisms to prevent physical side channels from exposing sensitive information due to patterns in physically observable phenomena such as variations in power consumption, electromagnetic emissions (EME), or acoustic emissions.

Example 1

Google’s Titan Security Key is a FIDO universal 2nd-factor (U2F) hardware device that has been available since July 2018.

Unfortunately, the security key is susceptible to side-channel attacks observing the local electromagnetic radiations of its secure element — an NXP A700x chip which is now discontinued — during an ECDSA signing operation [RLMI21].

The side-channel attack can clone the secret key in a Titan Security Key.

Watch the USENIX Security ’21 presentation on the side-channel attack:

More examples of side-channel attacks are available here.

🛡 General mitigation

The standard countermeasures are those that apply to CWE-1300 “Improper Protection of Physical Side Channels” and its parent weakness CWE-203 “Observable Discrepancy”.

CWE-1300:

  • Apply blinding or masking techniques to implementations of cryptographic algorithms.
  • Add shielding or tamper-resistant protections to the device to increase the difficulty of obtaining measurements of the side channel.

CWE-203:

  • Compartmentalise the system to have “safe” areas where trust boundaries can be unambiguously drawn.

    Constrain sensitive data to within trust boundaries.

  • Ensure that appropriate compartmentalisation is built into the system design, and the compartmentalisation allows for and reinforces privilege separation functionality.

    Apply the principle of least privilege to decide the appropriate conditions to use or drop privileges.

  • Ensure that error messages only contain minimal details that are useful to the intended audience and no one else.

    The messages should achieve balance between being too cryptic (which can confuse users) and being too detailed (which may reveal more than intended).

    The messages should not reveal the methods used to determine the error because this information can enable attackers to refine or optimise their original attack, thereby increasing their chances of success.

  • If errors must be captured in some detail, record them in log messages not accessible by attackers.

    Highly sensitive information such as passwords should never be saved to log files.

  • Avoid inconsistent messaging that might unintentionally tip off an attacker about internal state, such as whether a user account exists or not.

References

[RLMI21] T. Roche, V. Lomné, C. Mutschler, and L. Imbert, A side journey to titan, in 30th USENIX Security Symposium (USENIX Security 21), USENIX Association, August 2021, pp. 231–248. Available at https://www.usenix.org/conference/usenixsecurity21/presentation/roche.


CWE-1384

by Yee Wei Law - Wednesday, 26 April 2023, 10:49 AM
 

This is a student-friendly explanation of the hardware weakness CWE-1384 “Improper Handling of Physical or Environmental Conditions”, where a hardware product does not properly handle unexpected physical or environmental conditions that occur naturally or are artificially induced.

CWE-1384 and CWE-1300 can be seen as two sides of the same coin: the latter concerns leakage of sensitive information, while the former concerns injection of malicious signals.

Example 1

The GhostTouch attack [WMY+22] generates electromagnetic interference (EMI) signals on the scan-driving-based capacitive touchscreen of a smartphone, which result in “ghostly” touches on the touchscreen; see Fig. 1.

The EMI signals can be generated, for example, using ChipSHOUTER.

Fig. 1: The GhostTouch attack scenario, where the attacker uses an electromagnetic interference (EMI) device under a table to remotely actuate the touchscreen of a smartphone placed face-down on the table [WMY+22, Figure 1].

These ghostly touches enable the attacker to actuate unauthorised taps and swipes on the victim’s touchscreen.

Watch the authors’ presentation at USENIX Security ’22:

PCspooF is another example of an attack exploiting weakness CWE-1384.

🛡 General mitigation

Product specification should include expectations for how the product will perform when it exceeds physical and environmental boundary conditions, e.g., by shutting down.

Where possible, include independent components that can detect excess environmental conditions and are capable of shutting down the product.

Where possible, use shielding or other materials that can increase the adversary’s workload and reduce the likelihood of being able to successfully trigger a security-related failure.

References

[WMY+22] K. Wang, R. Mitev, C. Yan, X. Ji, A.-R. Sadeghi, and W. Xu, GhostTouch: Targeted attacks on touchscreens without physical touch, in 31st USENIX Security Symposium (USENIX Security 22), USENIX Association, Boston, MA, August 2022, pp. 1543–1559. Available at https://www.usenix.org/conference/usenixsecurity22/presentation/wang-kai.


CWE-787

by Yee Wei Law - Saturday, 7 June 2025, 11:03 AM
 

This is a student-friendly explanation of the software weakness CWE-787 “Out-of-bounds write”, where the vulnerable entity writes data past the end, or before the beginning, of the intended buffer.

This and CWE-125 are two sides of the same coin.

This can be caused by incorrect pointer arithmetic (see Example 1), accessing invalid pointers due to incomplete initialisation or memory release (see Example 2), etc.

Example 1

In the C code below, the char pointer p is allocated only 1 byte, but a value of 1 is stored two bytes past p.

Reminder: In a POSIX environment, a C program in file prog.c can be compiled and linked into an executable named prog using the command gcc prog.c -o prog.

#include <stdlib.h>

int main() {
	char *p;
	p = (char *)malloc(1);	/* p is allocated only 1 byte */
	*(p+2) = 1;	/* out-of-bounds write, 2 bytes past p */
	return 0;
}
Example 2

In the C code below, after the memory allocated to p is freed, p points to a non-existent buffer, but a value is then stored in that non-existent buffer.

#include <stdlib.h>

int main() {
	char *p;
	p = (char *)malloc(1);
	free(p);	/* p now dangles: its buffer no longer exists */
	*p = 1;	/* write through the dangling pointer */
	return 0;
}

Typically, the weakness CWE-787 can result in corruption of data, a crash, or code execution.

🛡 General mitigation

A long list of mitigation measures exist, so only a few are mentioned here:

  • Use a safe programming language that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

    • For example, many languages that perform their own memory management, such as Java and Perl, are not subject to buffer overflows.
    • Other languages, such as Ada and C# (see Example 3), typically provide overflow protection, but the protection can be disabled by the programmer.
    • Nevertheless, a language’s interface to native code may still be subject to overflows, even if the language itself is theoretically safe.
  • Use a vetted library or framework that is not vulnerable to CWE-787, e.g., Intel’s Safe String Library, and Microsoft’s Strsafe.h library. 👈 These libraries provide safer versions of overflow-prone string-handling functions.
  • Use compilers or compiler extensions that support automatic detection of buffer overflows, e.g., Microsoft Visual Studio with its buffer security check (/GS) flag, the FORTIFY_SOURCE macro on Red Hat Linux platforms, GCC with a range of stack protection flags (that evolved from StackGuard and ProPolice).

Read about more measures here.
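The safer string-handling functions mentioned above share one idea: the destination size is always passed explicitly. A minimal illustration using standard snprintf (rather than the named libraries); the function name is illustrative:

```c
#include <stdio.h>

/* Copy src into a fixed-size buffer without ever writing past its end:
 * snprintf writes at most dstsize bytes and always NUL-terminates,
 * truncating src if necessary. */
void bounded_copy(char *dst, size_t dstsize, const char *src) {
    snprintf(dst, dstsize, "%s", src);
}
```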

Example 3

Using the unsafe keyword, we can write code involving pointers in C#, e.g.,

// compile with: -unsafe
class UnsafeTest
{
    unsafe static void Main()
    {
        int *p; int i = 0;
        p = &i;
        p[2] = 1;               // out-of-bounds write, two ints past i
        // Console.WriteLine(p[2]);
    }
}
Fig. 1: Compiling unsafe C# code in Visual Studio 2022.

After enabling compilation of unsafe code in Visual Studio as per Fig. 1, the code above can be compiled.

Once compiled and run, the code above will not trigger any runtime error, unless the line writing p[2] to the console is uncommented.

Question: What runtime error would the Console.WriteLine statement in the code above trigger?



CWE-79

by Yee Wei Law - Sunday, 23 November 2025, 9:52 PM
 

This is a student-friendly explanation of the software weakness CWE-79 “Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')”.

This weakness exists when user-controllable input is not neutralised, or is incorrectly neutralised, before it is placed in output used to generate a web page that is served to other users.

In general, cross-site scripting (XSS) vulnerabilities occur when:

  • Untrusted data enters a web application, typically through a web request.
  • The web application dynamically generates a web page that contains this untrusted data.
  • During page generation, the application does not prevent the data from containing content that is executable by a web browser, such as JavaScript, HTML tags, HTML attributes, mouse events, etc.
  • A victim visits the generated web page, which contains the malicious script injected via the untrusted data.
  • Since the script comes from a web page that was sent by the web server, the victim’s web browser executes the malicious script in the context of the web server’s domain.
  • This effectively violates the intention of the web browser’s same-origin policy, which states that scripts in one domain should not be able to access resources or run code in a different domain.

Once the malicious script is injected, a variety of attacks are achievable, e.g.,

  • The script could transfer private information (e.g., cookies containing session information) from the victim’s computer to the attacker.
  • The script could send malicious requests to a web site on behalf of the victim, which could be especially dangerous to the site if the victim has administrator privileges to manage that site. Phishing attacks could be used to emulate trusted web sites and trick the victim into entering a password, allowing the attacker to compromise the victim’s account on that web site.
  • The script could exploit a vulnerability of the web browser itself, potentially taking over the victim’s computer. This is known as a “drive-by” attack.

The attacks above can usually be launched without alerting the victim.

Even against careful users, URL encoding or Unicode can be used to obfuscate malicious requests and make them look less suspicious.

Watch an introduction to XSS on LinkedIn Learning:

Understanding cross-site scripting from Security Testing: Vulnerability Management with Nessus by Mike Chapple

Watch a demonstration of XSS on LinkedIn Learning:

Cross-site scripting attacks from Web Security: Same-Origin Policies by Sasha Vodnik

Example 1

The vulnerability CVE-2022-20916 caused the web-based management interface of Cisco IoT Control Center to allow an unauthenticated, remote attacker to conduct an XSS attack against a user of the interface.

The vulnerability was due to the absence of proper validation of user-supplied input.

🛡 General mitigation

A long list of mitigation measures exist, so only a few are mentioned here:

  • For any data that will be output to another web page, especially any data that was received from external inputs, use the appropriate encoding on all non-alphanumeric characters (e.g., the symbols “<” and “>”).
  • Use a vetted library or framework implementing defences against XSS, e.g., Microsoft’s Anti-XSS API, the OWASP Enterprise Security API, and Apache Wicket (for Java-based web applications).
  • For any security checks that are performed on the client side, duplicate these checks on the server side, because attackers can bypass client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely (e.g., using the Web Developer Tools in Firefox).

Read about more measures here and also consult the OWASP Cross Site Scripting Prevention Cheat Sheet.
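The output-encoding advice above can be sketched in C. This toy encoder (illustrative, not a vetted library) handles only &, < and >; a production encoder must also cover quotes and the various HTML contexts discussed in the OWASP cheat sheet:

```c
#include <string.h>

/* Write src into dst with the HTML-significant characters &, < and >
 * replaced by entities, so user input cannot introduce markup. Stops
 * early if dst is about to run out of room; always NUL-terminates. */
void html_escape(const char *src, char *dst, size_t dstsize) {
    size_t used = 0;
    for (; *src && used + 7 < dstsize; src++) {
        const char *rep = NULL;
        switch (*src) {
        case '&': rep = "&amp;"; break;
        case '<': rep = "&lt;";  break;
        case '>': rep = "&gt;";  break;
        }
        if (rep) { strcpy(dst + used, rep); used += strlen(rep); }
        else     dst[used++] = *src;
    }
    dst[used] = '\0';
}
```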



CWE-917

by Yee Wei Law - Monday, 2 June 2025, 8:01 PM
 

This is a student-friendly explanation of the software weakness CWE-917 “Improper Neutralization of Special Elements used in an Expression Language Statement ('Expression Language Injection')”.

The vulnerable entity constructs all or part of an expression language (EL) statement in a framework such as Jakarta Server Pages (JSP, formerly JavaServer Pages) using externally influenced input from an upstream component, but does not neutralise, or incorrectly neutralises, special elements that could modify the intended EL statement before it is executed.

  • In the context of JSP, the EL provides a mechanism for enabling the presentation layer (web pages) to communicate with the application logic (managed beans).
  • This weakness is a descendant of CWE-707 “Improper Neutralization”.
Example 1

The infamous vulnerability Log4Shell (CVE-2021-44228), which occupied headlines for months in 2022, is a prime example.

Watch an explanation of Log4Shell on YouTube:

🛡 General mitigation

Avoid adding user-controlled data into an expression interpreter.

If user-controlled data must be added to an expression interpreter, one or more of the following should be performed:

  • Ensure no user input will be evaluated as an expression.
  • Encode every user input in such a way that it is never evaluated as an expression.

By default, disable the processing of EL expressions. In JSP, set the attribute isELIgnored for a page to true.
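The last point corresponds to a one-line JSP page directive (shown here as a sketch of the setting the text describes):

```jsp
<%@ page isELIgnored="true" %>
```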



Cyber Kill Chain

by Yee Wei Law - Wednesday, 15 March 2023, 9:24 AM
 

The Cyber Kill Chain® framework/model was developed by Lockheed Martin as part of their Intelligence Driven Defense® model for identification and prevention of cyber intrusions.

The model identifies what an adversary must complete in order to achieve its objectives.

The seven steps of the Cyber Kill Chain shed light on an adversary’s tactics, techniques and procedures (TTPs):

Watch a quick overview of the Cyber Kill Chain on LinkedIn Learning:

Overview of the cyber kill chain from Ethical Hacking with JavaScript by Emmanuel Henri

Example 1: Modelling Stuxnet with the Cyber Kill Chain

Stuxnet (W32.Stuxnet in Symantec’s naming scheme) was discovered in 2010, with some components being used as early as November 2008 [FMC11].

Stuxnet is a large and complex piece of malware that targets industrial control systems, leveraging multiple zero-day exploits, an advanced Windows rootkit, complex process injection and hooking code, network infection routines, peer-to-peer updates, and a command and control interface [FMC11].

Watch a brief discussion of modelling Stuxnet with the Cyber Kill Chain:

Stuxnet and the kill chain from Practical Cybersecurity for IT Professionals by Malcolm Shore

⚠ Contrary to what the video above claims, Stuxnet does have a command and control routine/interface [FMC11].

References

[FMC11] N. Falliere, L. O. Murchu, and E. Chien, W32.Stuxnet Dossier, Symantec Security Response, February 2011, version 1.4. Available at http://media.kasperskycontenthub.com/wp-content/uploads/sites/43/2014/11/20082206/w32_stuxnet_dossier.pdf.

