A

Activation function: contemporary options
This knowledge base entry follows the discussion of artificial neural networks and backpropagation. Contemporary options for the activation function are the non-saturating activation functions [Mur22, Sec. 13.4.3], although the term is not strictly accurate: ReLU, for example, saturates for negative inputs. Below, \(z\) should be understood as the output of the summing junction, i.e. the pre-activation value.
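For concreteness, here is a minimal NumPy sketch of a few widely used non-saturating activations; the selection and the α defaults are illustrative, not an exhaustive list from [Mur22].

```python
import numpy as np

def relu(z):
    # max(0, z): identity for positive z, zero (with zero gradient) otherwise
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    # a small negative slope alpha keeps gradients alive for z < 0
    return np.where(z >= 0.0, z, alpha * z)

def elu(z, alpha=1.0):
    # smooth for z < 0, tends to -alpha as z -> -infinity
    return np.where(z >= 0.0, z, alpha * (np.exp(z) - 1.0))

z = np.linspace(-3.0, 3.0, 7)
print(relu(z))
print(leaky_relu(z))
print(elu(z))
```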
References
Active learning
Adversarial machine learning
Adversarial machine learning (AML) as a field can be traced back to [HJN+11]. AML is the study of (1) the capabilities and goals of attackers, as well as the design of attack methods that exploit the vulnerabilities of ML during the ML life cycle; and (2) the design of ML algorithms that can withstand these security and privacy challenges [OV24]. The impact of adversarial examples on deep learning is well known within the computer vision community, and documented in a body of literature that has been growing exponentially since Szegedy et al.'s discovery [SZS+14]. The field is moving so fast that its taxonomy, terminology and threat models are still being standardised. See MITRE ATLAS. An illustrative attack is sketched below.
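To make the notion of an attack concrete, here is a minimal NumPy sketch in the spirit of the fast gradient sign method, one well-known attack from this literature; the logistic-regression model, the weights and ε are illustrative, not taken from [OV24].

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    # For the logistic loss, the gradient w.r.t. the input x is
    # (sigmoid(w.x + b) - y) * w; the attack moves x by eps in the
    # direction of the sign of that gradient, increasing the loss.
    grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1   # stand-in for a trained model
x, y = rng.normal(size=4), 1.0   # an input with label 1
x_adv = fgsm(x, y, w, b, eps=0.25)
print(sigmoid(np.dot(w, x) + b))      # confidence on the clean input
print(sigmoid(np.dot(w, x_adv) + b))  # lower confidence after the attack
```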
References
Apache MXNet
The deep learning library Apache MXNet had reached version 1.9.1 by the time it was retired in 2023. Despite its obsolescence, there are MXNet-based projects that have not yet been ported to other libraries. When porting such a project, it is useful to be able to evaluate its performance under MXNet, and hence to be able to set up MXNet. The problem is that MXNet's dependencies have not been updated for a while, so installation is not as straightforward as the installation guide makes it out to be. The installation guide here applies to Ubuntu 24.04 LTS on WSL2 and requires:
After setting up all of the above, run pip install mxnet-cu112.
Warnings like the following will appear but are inconsequential: "cuDNN lib mismatch: linked-against version 8907 != compiled-against version 8101. Set MXNET_CUDNN_LIB_CHECKING=0 to quiet this warning."
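Once installed, a quick smoke test along the following lines can confirm that MXNet sees the GPU; the environment variable is the one named in the warning above.

```python
import os
os.environ["MXNET_CUDNN_LIB_CHECKING"] = "0"  # silence the cuDNN mismatch warning

import mxnet as mx

print(mx.__version__)         # expect 1.9.1
print(mx.context.num_gpus())  # expect >= 1 on a working CUDA setup

a = mx.nd.ones((2, 3), ctx=mx.gpu(0))  # allocate on the first GPU
print((a * 2).asnumpy())               # trivial GPU computation
```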
Artificial neural networks and backpropagation
See 👇 attachment or the latest source on Overleaf.
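Since the attachment is not reproduced here, the following minimal NumPy sketch of one backpropagation step for a one-hidden-layer network may serve as a quick reminder; the shapes, the tanh hidden layer, the squared-error loss and the learning rate are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))   # input
t = rng.normal(size=(2, 1))   # target
W1 = rng.normal(size=(3, 4))  # input -> hidden weights
W2 = rng.normal(size=(2, 3))  # hidden -> output weights

# Forward pass: tanh hidden layer, linear output, squared-error loss.
z1 = W1 @ x
h = np.tanh(z1)
y = W2 @ h
loss = 0.5 * np.sum((y - t) ** 2)

# Backward pass: chain rule, layer by layer.
dy = y - t                           # dL/dy
dW2 = dy @ h.T                       # dL/dW2
dh = W2.T @ dy                       # dL/dh
dz1 = dh * (1.0 - np.tanh(z1) ** 2)  # through tanh'
dW1 = dz1 @ x.T                      # dL/dW1

# One gradient-descent step.
lr = 0.1
W1 -= lr * dW1
W2 -= lr * dW2
print(loss)
```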
B

Batch normalisation (BatchNorm)
Watch:

- a high-level explanation of BatchNorm;
- a more detailed explanation of BatchNorm by Prof Ng;
- the coverage of BatchNorm in the Stanford 2016 course CS231n, Lecture 5, Part 2.

A forward-pass sketch follows the list.
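As a companion to the videos, here is a minimal NumPy sketch of the training-time BatchNorm forward pass: normalise each feature by its batch statistics, then apply a learned scale γ and shift β. The ε value is illustrative, and the running statistics used at test time are omitted.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # x: (batch, features). Normalise each feature over the batch,
    # then scale by gamma and shift by beta (both learned in practice).
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = 5.0 + 2.0 * rng.normal(size=(8, 3))        # off-centre activations
y = batchnorm_forward(x, np.ones(3), np.zeros(3))
print(y.mean(axis=0), y.std(axis=0))           # ~0 and ~1 per feature
```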
References
C

Cross-entropy loss
[Cha19, pp. 11-14]
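As a pointer to what those pages cover: for a one-hot label y and a predicted distribution p over K classes, the cross-entropy loss is \(H(y, p) = -\sum_k y_k \log p_k\). A minimal NumPy sketch, with an illustrative clipping constant to avoid log(0):

```python
import numpy as np

def cross_entropy(y_onehot, p, eps=1e-12):
    # H(y, p) = -sum_k y_k log p_k, averaged over the batch.
    p = np.clip(p, eps, 1.0)
    return -np.mean(np.sum(y_onehot * np.log(p), axis=1))

y = np.array([[0, 0, 1], [1, 0, 0]], dtype=float)  # one-hot labels
p = np.array([[0.1, 0.2, 0.7], [0.8, 0.1, 0.1]])   # predicted probabilities
print(cross_entropy(y, p))  # -(log 0.7 + log 0.8) / 2
```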
References
D

Domain adaptation
Domain adaptation is the problem of learning a discriminative classifier or other predictor in the presence of a shift in data distribution between the source (training) domain and the target (test) domain [GUA+16].
References
Dropout
Deep neural networks (DNNs) employ a large number of parameters to learn complex dependencies between inputs and outputs, but this often results in overfitting. Large DNNs are also slow to converge. The dropout method implements the intuitive idea of randomly dropping units (along with their connections) from the network during training [SHK+14]. A sketch of the method follows.
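Here is a minimal NumPy sketch of the idea, in the "inverted dropout" formulation that rescales at training time so the layer is the identity at test time; the keep probability is illustrative.

```python
import numpy as np

def dropout_forward(h, p_keep, training, rng):
    # During training, zero each unit independently with probability
    # 1 - p_keep and rescale the survivors by 1 / p_keep, so the
    # expected activation is unchanged; at test time, do nothing.
    if not training:
        return h
    mask = rng.random(h.shape) < p_keep
    return h * mask / p_keep

rng = np.random.default_rng(0)
h = np.ones((2, 8))
print(dropout_forward(h, p_keep=0.5, training=True, rng=rng))
```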
References
F

Few-shot learning
References