
C


CWE-1260

by Yee Wei Law - Tuesday, 28 March 2023, 12:46 PM
 

This is a student-friendly explanation of the hardware weakness CWE-1260.

Not ready for 2023.



CWE-1300

by Yee Wei Law - Sunday, 23 April 2023, 3:02 PM
 

This is a student-friendly explanation of the hardware weakness CWE-1300 “Improper Protection of Physical Side Channels”, which is susceptible to

  • CAPEC-189 “Black Box Reverse Engineering”
  • CAPEC-699 “Eavesdropping on a Monitor”

A hardware product with this weakness does not contain sufficient protection mechanisms to prevent physical side channels from exposing sensitive information due to patterns in physically observable phenomena such as variations in power consumption, electromagnetic emissions (EME), or acoustic emissions.

Example 1

Google’s Titan Security Key is a FIDO universal 2nd-factor (U2F) hardware device that has been available since July 2018.

Unfortunately, the security key is susceptible to side-channel attacks that observe the local electromagnetic radiation of its secure element (a since-discontinued NXP A700x chip) during an ECDSA signing operation [RLMI21].

The side-channel attack can extract the ECDSA private key, effectively allowing a Titan Security Key to be cloned.

Watch the USENIX Security ’21 presentation on the side-channel attack:

More examples of side-channel attacks are available here.

🛡 General mitigation

The standard countermeasures are those that apply to CWE-1300 “Improper Protection of Physical Side Channels” and its parent weakness CWE-203 “Observable Discrepancy”.

CWE-1300:

  • Apply blinding or masking techniques to implementations of cryptographic algorithms.
  • Add shielding or tamper-resistant protections to the device to increase the difficulty of obtaining measurements of the side channel.

CWE-203:

  • Compartmentalise the system to have “safe” areas where trust boundaries can be unambiguously drawn.

    Constrain sensitive data to within trust boundaries.

  • Ensure that appropriate compartmentalisation is built into the system design, and the compartmentalisation allows for and reinforces privilege separation functionality.

    Apply the principle of least privilege to decide the appropriate conditions to use or drop privileges.

  • Ensure that error messages only contain minimal details that are useful to the intended audience and no one else.

    The messages should achieve balance between being too cryptic (which can confuse users) and being too detailed (which may reveal more than intended).

    The messages should not reveal the methods used to determine the error because this information can enable attackers to refine or optimise their original attack, thereby increasing their chances of success.

  • If errors must be captured in some detail, record them in log messages not accessible by attackers.

    Highly sensitive information such as passwords should never be saved to log files.

  • Avoid inconsistent messaging that might unintentionally tip off an attacker about internal state, such as whether a user account exists or not.

References

[RLMI21] T. Roche, V. Lomné, C. Mutschler, and L. Imbert, A side journey to titan, in 30th USENIX Security Symposium (USENIX Security 21), USENIX Association, August 2021, pp. 231–248. Available at https://www.usenix.org/conference/usenixsecurity21/presentation/roche.


CWE-1384

by Yee Wei Law - Wednesday, 26 April 2023, 10:49 AM
 

This is a student-friendly explanation of the hardware weakness CWE-1384 “Improper Handling of Physical or Environmental Conditions”, where a hardware product does not properly handle unexpected physical or environmental conditions that occur naturally or are artificially induced.

This weakness CWE-1384 and the weakness CWE-1300 can be seen as two sides of the same coin: the latter concerns leakage of sensitive information, whereas the former concerns injection of malicious signals.

Example 1

The GhostTouch attack [WMY+22] injects electromagnetic interference (EMI) signals into the capacitive touchscreen of a smartphone, resulting in “ghostly” touches on the touchscreen; see Fig. 1.

The EMI signals can be generated, for example, using ChipSHOUTER.

Fig. 1: The GhostTouch attack scenario, where the attacker uses an electromagnetic interference (EMI) device under a table to remotely actuate the touchscreen of a smartphone placed face-down on the table [WMY+22, Figure 1].

These ghostly touches enable the attacker to actuate unauthorised taps and swipes on the victim’s touchscreen.

Watch the authors’ presentation at USENIX Security ’22:

PCspooF is another example of an attack exploiting weakness CWE-1384.

🛡 General mitigation

The product specification should state how the product is expected to behave when physical or environmental boundary conditions are exceeded, e.g., by shutting down.

Where possible, include independent components that can detect excess environmental conditions and are capable of shutting down the product.

Where possible, use shielding or other materials that can increase the adversary’s workload and reduce the likelihood of being able to successfully trigger a security-related failure.

References

[WMY+22] K. Wang, R. Mitev, C. Yan, X. Ji, A.-R. Sadeghi, and W. Xu, GhostTouch: Targeted attacks on touchscreens without physical touch, in 31st USENIX Security Symposium (USENIX Security 22), USENIX Association, Boston, MA, August 2022, pp. 1543–1559. Available at https://www.usenix.org/conference/usenixsecurity22/presentation/wang-kai.


CWE-787

by Yee Wei Law - Wednesday, 3 May 2023, 9:58 AM
 

This is a student-friendly explanation of the software weakness CWE-787 “Out-of-bounds write”, where the vulnerable entity writes data past the end, or before the beginning, of the intended buffer.

This and CWE-125 are two sides of the same coin.

This can be caused by incorrect pointer arithmetic (see Example 1), accessing invalid pointers due to incomplete initialisation or memory release (see Example 2), etc.

Example 1

In the C code below, the char pointer p is allocated only 1 byte, yet a value of 1 is stored two bytes past p, i.e., beyond the end of the allocated buffer.

Reminder: In a POSIX environment, a C program can be compiled and linked into an executable using the command gcc cfilename -o exefilename, where cfilename is the C source file and exefilename is the desired name of the executable.

#include <stdlib.h>

int main() {
	char *p;
	p = (char *)malloc(1);  /* allocate a 1-byte buffer */
	*(p+2) = 1;             /* CWE-787: out-of-bounds write, 2 bytes past the start of a 1-byte buffer */
	return 0;
}
Example 2

In the C code below, after the memory allocated to p is freed, p becomes a dangling pointer to a non-existent buffer, yet a value is then stored through it.

#include <stdlib.h>

int main() {
	char *p;
	p = (char *)malloc(1);  /* allocate a 1-byte buffer */
	free(p);                /* p is now a dangling pointer */
	*p = 1;                 /* CWE-787: write through a dangling pointer (use after free) */
	return 0;
}

Typically, the weakness CWE-787 can result in corruption of data, a crash, or code execution.

🛡 General mitigation

A long list of mitigation measures exist, so only a few are mentioned here:

  • Use a safe programming language that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.

    • For example, many languages that perform their own memory management, such as Java and Perl, are not subject to buffer overflows.
    • Other languages, such as Ada and C# (see Example 3), typically provide overflow protection, but the protection can be disabled by the programmer.
    • Nevertheless, a language’s interface to native code may still be subject to overflows, even if the language itself is theoretically safe.
  • Use a vetted library or framework that is not vulnerable to CWE-787, e.g., Intel’s Safe String Library, and Microsoft’s Strsafe.h library. 👈 These libraries provide safer versions of overflow-prone string-handling functions.
  • Use compilers or compiler extensions that support automatic detection of buffer overflows, e.g., Microsoft Visual Studio with its buffer security check (/GS) flag, the FORTIFY_SOURCE macro on Red Hat Linux platforms, GCC with a range of stack protection flags (that evolved from StackGuard and ProPolice).

Read about more measures here.

Example 3

Using the unsafe keyword, we can write code involving pointers in C#, e.g.,

// compile with: -unsafe
class UnsafeTest
{
    unsafe static void Main()
    {
        int *p; int i = 0;
        p = &i;          // p points to the single int i
        p[2] = 1;        // out-of-bounds write: p[2] lies 2 ints past i
        // Console.WriteLine(p[2]);
    }
}
Fig. 1: Compiling unsafe C# code in Visual Studio 2022.

After enabling compilation of unsafe code in Visual Studio as per Fig. 1, the code above can be compiled.

Once compiled and run, the code above will not trigger any runtime error, unless the line writing p[2] to the console is uncommented.

Question: What runtime error would the Console.WriteLine statement in the code above trigger?



CWE-79

by Yee Wei Law - Wednesday, 3 May 2023, 10:02 AM
 

This is a student-friendly explanation of the software weakness CWE-79 “Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')”.

This weakness exists when user-controllable input is not neutralised, or is incorrectly neutralised, before it is placed in output used to build a web page that is served to other users.

In general, cross-site scripting (XSS) vulnerabilities occur when:

  • Untrusted data enters a web application, typically through a web request.
  • The web application dynamically generates a web page that contains this untrusted data.
  • During page generation, the application does not prevent the data from containing content that is executable by a web browser, such as JavaScript, HTML tags, HTML attributes, mouse events, etc.
  • A victim visits the generated web page through a web browser; the page contains the malicious script that was injected using the untrusted data.
  • Since the script comes from a web page that was sent by the web server, the victim’s web browser executes the malicious script in the context of the web server’s domain.
  • This effectively violates the intention of the web browser’s same-origin policy, which states that scripts in one domain should not be able to access resources or run code in a different domain.

Once the malicious script is injected, a variety of attacks are achievable, e.g.,

  • The script could transfer private information (e.g., cookies containing session information) from the victim’s computer to the attacker.
  • The script could send malicious requests to a web site on behalf of the victim, which could be especially dangerous to the site if the victim has administrator privileges to manage that site. Phishing attacks could be used to emulate trusted web sites and trick the victim into entering a password, allowing the attacker to compromise the victim’s account on that web site.
  • The script could exploit a vulnerability of the web browser itself, potentially taking over the victim’s computer. This is known as a “drive-by” attack.

The attacks above can usually be launched without alerting the victim.

Even against careful users, attackers can use URL encoding or Unicode to obfuscate web requests and make them look less suspicious.

Watch an introduction to XSS on LinkedIn Learning:

Understanding cross-site scripting from Security Testing: Vulnerability Management with Nessus by Mike Chapple

Watch a demonstration of XSS on LinkedIn Learning:

Cross-site scripting attacks from Web Security: Same-Origin Policies by Sasha Vodnik

Example 1

The vulnerability CVE-2022-20916 caused the web-based management interface of Cisco IoT Control Center to allow an unauthenticated, remote attacker to conduct an XSS attack against a user of the interface.

The vulnerability was due to the absence of proper validation of user-supplied input.

🛡 General mitigation

A long list of mitigation measures exist, so only a few are mentioned here:

  • For any data that will be output to another web page, especially any data that was received from external inputs, use the appropriate encoding on all non-alphanumeric characters (e.g., the symbols “<” and “>”).
  • Use a vetted library or framework implementing defences against XSS, e.g., Microsoft’s Anti-XSS API, the OWASP Enterprise Security API, and Apache Wicket (for Java-based web applications).
  • For any security checks that are performed on the client side, duplicate these checks on the server side, because attackers can bypass client-side checks by modifying values after the checks have been performed, or by changing the client to remove the client-side checks entirely (e.g., using the Web Developer Tools in Firefox).

Read about more measures here and also consult the OWASP Cross Site Scripting Prevention Cheat Sheet.



CWE-917

by Yee Wei Law - Wednesday, 10 May 2023, 9:24 AM
 

This is a student-friendly explanation of the software weakness CWE-917 “Improper Neutralization of Special Elements used in an Expression Language Statement ('Expression Language Injection')”.

The vulnerable entity constructs all or part of an expression language (EL) statement in a framework such as Jakarta Server Pages (JSP, formerly JavaServer Pages) using externally-influenced input from an upstream component, but it does not neutralise or it incorrectly neutralises special elements that could modify the intended EL statement before it is executed.

  • In the context of JSP, the EL provides a mechanism for enabling the presentation layer (web pages) to communicate with the application logic (managed beans).
  • This weakness is a descendant of CWE-707 “Improper Neutralization”.
Example 1

The infamous Log4Shell vulnerability (CVE-2021-44228), which occupied headlines for months after its disclosure in December 2021, is perhaps the best example [CWE21].

Watch an explanation of Log4Shell on YouTube:

🛡 General mitigation

Avoid adding user-controlled data into an expression interpreter.

If user-controlled data must be added to an expression interpreter, one or more of the following should be performed:

  • Ensure no user input will be evaluated as an expression.
  • Encode every user input in such a way that it is never evaluated as an expression.

By default, disable the processing of EL expressions. In JSP, set the page directive attribute isELIgnored to true, i.e., <%@ page isELIgnored="true" %>.

References

[CWE21] CWE/CAPEC, Neutralizing Your Inputs: A Log4Shell Weakness Story, Medium article, December 2021. Available at https://medium.com/@CWE_CAPEC/neutralizing-your-inputs-a-log4shell-weakness-story-89954c8b25c9.


Cyber Kill Chain

by Yee Wei Law - Wednesday, 15 March 2023, 9:24 AM
 

The Cyber Kill Chain® framework/model was developed by Lockheed Martin as part of their Intelligence Driven Defense® model for identification and prevention of cyber intrusions.

The model identifies what an adversary must complete in order to achieve its objectives.

The seven steps of the Cyber Kill Chain (reconnaissance, weaponisation, delivery, exploitation, installation, command and control, and actions on objectives) shed light on an adversary’s tactics, techniques and procedures (TTPs):

Watch a quick overview of the Cyber Kill Chain on LinkedIn Learning:

Overview of the cyber kill chain from Ethical Hacking with JavaScript by Emmanuel Henri

Example 1: Modelling Stuxnet with the Cyber Kill Chain

Stuxnet (W32.Stuxnet in Symantec’s naming scheme) was discovered in 2010, with some components being used as early as November 2008 [FMC11].

Stuxnet is a large and complex piece of malware that targets industrial control systems, leveraging multiple zero-day exploits, an advanced Windows rootkit, complex process injection and hooking code, network infection routines, peer-to-peer updates, and a command and control interface [FMC11].

Watch a brief discussion of modelling Stuxnet with the Cyber Kill Chain:

Stuxnet and the kill chain from Practical Cybersecurity for IT Professionals by Malcolm Shore

⚠ Contrary to what the video above claims, Stuxnet does have a command and control routine/interface [FMC11].

References

[FMC11] N. Falliere, L. O. Murchu, and E. Chien, W32.Stuxnet Dossier, Symantec Security Response, February 2011, version 1.4. Available at http://media.kasperskycontenthub.com/wp-content/uploads/sites/43/2014/11/20082206/w32_stuxnet_dossier.pdf.

D


Data flow analysis

by Yee Wei Law - Sunday, 14 May 2023, 1:19 PM
 

This continues from discussion of application security testing.

Data flow analysis is a static analysis technique for calculating facts of interest at each program point, based on the control flow graph representation of the program [vJ11, p. 1254].

A canonical data flow analysis is reaching definitions analysis [NNH99, Sec. 2.1.2].

For example, if statement S is x := y ◻ z, where ◻ is any binary operator, then x is said to be defined at statement S.

Reaching definitions analysis determines, for each statement that uses a variable x, which earlier definitions of x can reach that statement.

Example 1 [vJ11, pp. 1254-1255]
In the example below, the definition of x at line 1 reaches line 2, but does not reach beyond line 3 because x is reassigned at line 3.
1 x := y + z;
2 w := x + z;
3 x := w;

More formally, data flow analysis can be expressed in terms of lattice theory, where facts about a program are modelled as vertices in a lattice.

The lattice meet operator determines how two sets of facts are combined.

Given an analysis where the lattice meet operator is well-defined and the lattice is of finite height, a data flow analysis is guaranteed to terminate and converge to an answer for each statement in the program using a simple iterative algorithm.

A typical application of data flow analysis to security is to determine whether a particular program variable is derived from user input (tainted) or not (untainted).

Given an initial set of variables initialised by user input, data flow analysis can determine (typically an over-approximation of) the set of all variables in the program that are derived from user data.

References

[NNH99] F. Nielson, H. R. Nielson, and C. Hankin, Principles of Program Analysis, Springer Berlin, Heidelberg, 1999. https://doi.org/10.1007/978-3-662-03811-6.
[vJ11] H. C. van Tilborg and S. Jajodia (eds.), Encyclopedia of Cryptography and Security, Springer, Boston, MA, 2011. https://doi.org/10.1007/978-1-4419-5906-5.


Delay-tolerant networking (DTN)

by Yee Wei Law - Monday, 22 May 2023, 10:58 PM
 

In a mobile ad hoc network (MANET, see Definition 1), nodes move around causing connections to form and break over time.

Definition 1: Mobile ad hoc network [PBB+17]

A wireless network that allows easy connection establishment between mobile wireless client devices in the same physical area without the use of an infrastructure device, such as an access point or a base station.

Due to mobility, a node can sometimes find itself devoid of network neighbours. In this case, the node 1️⃣ stores the messages en route to their destination (which is not the node itself), and 2️⃣ forwards the messages to the next node on the route once a route to their destination is found. This networking paradigm is called store-and-forward.

A delay-tolerant networking (DTN) architecture [CBH+07] is a store-and-forward communications architecture in which source nodes send DTN bundles through a network to destination nodes. In a DTN architecture, nodes use the Bundle Protocol (BP) to deliver data across multiple links to the destination nodes.

Watch short animation from NASA:

Watch detailed lecture from NASA:

References

[CCS15] CCSDS, CCSDS Bundle Protocol Specification, Recommended Standard CCSDS 734.2-B-1, The Consultative Committee for Space Data Systems, September 2015.
[CBH+07] V. Cerf, S. Burleigh, A. Hooke, L. Torgerson, R. Durst, K. Scott, K. Fall, and H. Weiss, Delay-tolerant networking architecture, RFC 4838, April 2007.
[IEH+19] D. Israel, B. Edwards, J. Hayes, W. Knopf, A. Robles, and L. Braatz, The Benefits of Delay/Disruption Tolerant Networking (DTN) for Future NASA Science Missions, in 70th International Astronautical Congress (IAC), October 2019. Available at https://www.nasa.gov/sites/default/files/atoms/files/the_benefits_of_dtn_for_future_nasa_science_missions.pdf.
[PBB+17] J. Padgette, J. Bahr, M. Batra, M. Holtmann, R. Smithbey, L. Chen, and K. Scarfone, Guide to Bluetooth Security, NIST Special Publication 800-121 Revision 2 Update 1, May 2017. https://doi.org/10.6028/NIST.SP.800-121r2-upd1.
[SB07] K. Scott and S. Burleigh, Bundle protocol specification, RFC 5050, November 2007.


Differential power analysis

by Yee Wei Law - Wednesday, 26 April 2023, 9:49 AM
 

Kocher et al. [KJJ99] pioneered the method of differential power analysis (DPA).

A power trace is a set of power consumption measurements taken over a cryptographic operation; see Fig. 1 for an example.

Fig. 1: A sample power trace of a DES encryption [KJJ99, Figure 1], which is clearly indicative of the 16 rounds of the Feistel structure.

Let us define simple power analysis (SPA) before we get into DPA. SPA is the interpretation of direct power consumption measurements of cryptographic operations like Fig. 1.

Watch a demonstration of SPA:

Most hard-wired hardware implementations of symmetric cryptographic algorithms have sufficiently small power consumption variations that SPA cannot reveal any key bit.

Unlike SPA, DPA is the interpretation of the difference between two sets of power traces.

More precisely, this difference (the differential trace) is defined as

$$
\Delta_D[j] = \frac{\sum_{i=1}^{m} D(C_i, b, K_s)\, T_i[j]}{\sum_{i=1}^{m} D(C_i, b, K_s)} - \frac{\sum_{i=1}^{m} \bigl(1 - D(C_i, b, K_s)\bigr)\, T_i[j]}{\sum_{i=1}^{m} \bigl(1 - D(C_i, b, K_s)\bigr)},
$$

where

  • $m$ is the number of traces;
  • $j$ is the time index;
  • $D(C_i, b, K_s)$ is the selection function, which for the DES (see Figs. 2-3) is defined as the value of bit $b$ ($0 \le b < 32$) of the DES intermediate $L$ (see input to block E in Fig. 3) at the beginning of the 16th round for ciphertext $C_i$, when $K_s$ is the 6-bit subkey entering the S-box corresponding to bit $b$;
  • $T_i$ is the $i$th power trace (vector of power values).

Note each trace $T_i$ is associated with a different ciphertext $C_i$.

Fig. 2: The Feistel structure of DES, where F denotes the Feistel function (see Fig. 3).

Fig. 3: The Feistel function of DES, where E denotes the expansion permutation that expands a 32-bit input to 48 bits.

During decryption of ciphertext $C_i$, $L$ denotes the half block (32 bits).

If bit $b$ enters S-box S1, then $K_s$ is the 6-bit subkey that enters S-box S1.

DPA was originally devised for DES but it can be adapted to other cryptographic algorithms.

DPA uses power consumption measurements to determine whether a key block guess is correct.

  • There are only $2^6 = 64$ possible values of $K_s$.
  • When the attacker’s guess of $K_s$ is incorrect, the attacker’s value of $D(C_i, b, K_s)$ differs from the actual target bit for about half of the ciphertexts $C_i$; equivalently, the selection function is uncorrelated to what was actually computed by the target device, i.e., $\Delta_D[j] \to 0$ as $m \to \infty$.
  • When the attacker’s guess of $K_s$ is correct, $D(C_i, b, K_s)$ is correlated to the value of the bit manipulated in the 16th round, i.e.,

    • $\Delta_D[j^\star]$ approaches the effect of the target bit on the power consumption as $m \to \infty$, where $j^\star$ is the time index corresponding to when the target bit is involved in computation;
    • $\Delta_D[j]$ approaches zero for all the times $j$ when the target bit is not involved in computation.
  • Fig. 4 shows four sample power traces (1 simple, 3 differential).
Fig. 4: From top to bottom: a simple power trace, a differential trace with a spike indicating a correct guess, and two differential traces for incorrect guesses [KJJ99, Figure 4].

References

[KJJ99] P. Kocher, J. Jaffe, and B. Jun, Differential power analysis, in Advances in Cryptology — CRYPTO’ 99 (M. Wiener, ed.), Springer Berlin Heidelberg, Berlin, Heidelberg, 1999, pp. 388–397. https://doi.org/10.1007/3-540-48405-1_25.

