Feature-robustness, flatness and generalization error for deep neural networks
Introduction: In this paper, the authors focus on the following open question: why does minimizing the empirical error during deep neural network training...
A Deep Neural Network's Loss Surface Contains Every Low-dimensional Pattern
Highlights: They prove two theorems stating that, independently of the data set: (1) every finite-dimensional pattern can be found as the loss landscape...
Similarity of Neural Network Representations Revisited
Highlights: A similarity measure is introduced to quantify the difference between the representations learned by trained neural networks. They compare it to ...
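The proposed measure is centered kernel alignment (CKA). A minimal numpy sketch of the linear variant (rows are examples, columns are features):

import numpy as np

def linear_cka(X, Y):
    # X: (n, p1) and Y: (n, p2) activations for the same n examples.
    X = X - X.mean(axis=0)   # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
print(linear_cka(X, X))            # 1.0 for identical representations
print(linear_cka(X, 3.0 * X @ Q))  # ~1.0: invariant to rotation and isotropic scaling

Note that CKA is invariant to orthogonal transformations and isotropic scaling, but not to arbitrary invertible linear maps, which is what lets it recover correspondences between layers.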
G-SGD: Optimizing ReLU neural networks in its positively scale-invariant space
Highlights: ReLU neural networks are optimized in a space that is invariant under positive rescaling. This space is also sufficient to represent ReLU networks...
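The invariance in question is easy to verify directly. A small numpy sketch (of the property that motivates G-SGD, not the algorithm itself): multiplying a hidden unit's incoming weights by c > 0 and dividing its outgoing weights by c leaves the network function unchanged.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 5))   # input -> hidden
W2 = rng.normal(size=(3, 8))   # hidden -> output
x = rng.normal(size=5)

def relu_net(x, W1, W2):
    return W2 @ np.maximum(W1 @ x, 0.0)

c = 7.3                        # any positive constant
W1s, W2s = W1.copy(), W2.copy()
W1s[2, :] *= c                 # scale weights into hidden unit 2 ...
W2s[:, 2] /= c                 # ... and inversely scale weights out of it

# ReLU is positively homogeneous, relu(c*z) = c*relu(z), so:
print(np.allclose(relu_net(x, W1, W2), relu_net(x, W1s, W2s)))  # True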
An exponential learning rate schedule for deep learning
Introduction: In this paper, they show that when using batch normalization and weight decay, one can use an exponentially increasing learning rate and still...
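A hedged PyTorch sketch of what such a schedule looks like (the model and constants are illustrative, not the paper's setup; the key ingredients are a normalization layer and weight decay in the optimizer):

import torch
from torch import nn, optim

model = nn.Sequential(nn.Linear(32, 64), nn.BatchNorm1d(64),
                      nn.ReLU(), nn.Linear(64, 10))
opt = optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                weight_decay=5e-4)                         # weight decay matters here
sched = optim.lr_scheduler.ExponentialLR(opt, gamma=1.05)  # gamma > 1: LR grows

for step in range(3):
    x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    sched.step()                                           # lr *= 1.05 each call
    print(step, sched.get_last_lr())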
Sharp Minima can Generalize for Deep Nets
Introduction: Generalization of neural networks has been linked to the flatness of the loss landscape around the found minimum. Therefore, it is reasonable...
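Their core construction rests on the positive homogeneity of ReLU: for a one-hidden-layer network and any \alpha > 0,

f_{(\alpha W_1,\, \alpha^{-1} W_2)}(x)
  = \alpha^{-1} W_2\, \mathrm{ReLU}(\alpha W_1 x)
  = W_2\, \mathrm{ReLU}(W_1 x)
  = f_{(W_1, W_2)}(x).

The loss is constant along this family of reparameterizations while the Hessian's eigenvalues scale with \alpha, so a flat minimum always has an equally accurate but arbitrarily sharp counterpart; eigenvalue-based sharpness alone cannot explain generalization.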
Unadversarial examples: designing objects for robust vision
Highlights: In this paper, the authors assume a setting where the users have access to both the model and the objects they want to detect with the model...
Deep Learning: A philosophical introduction
Introduction: This paper focuses on the not yet satisfactorily answered "why" question in deep learning: why do CNNs work so well? Before going into the...
Evasion attacks against machine learning at test time
Highlights: The authors propose an optimization method to generate adversarial examples for SVMs and neural nets. The method has a regularization meth...
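A minimal PyTorch sketch of the gradient-descent core of such an attack (the names and the projection step are illustrative; the paper additionally regularizes the search toward the density of real samples, which is omitted here): step against the classifier's discriminant g(x) while keeping the modified sample within a distance budget.

import torch

def evade(g, x, step=0.05, n_steps=100, max_dist=1.0):
    # g: differentiable discriminant returning a scalar; g(x) > 0 means "malicious".
    x0 = x.detach()
    x_adv = x0.clone().requires_grad_(True)
    for _ in range(n_steps):
        score = g(x_adv)
        grad, = torch.autograd.grad(score, x_adv)
        with torch.no_grad():
            x_adv -= step * grad / (grad.norm() + 1e-12)   # descend g
            delta = x_adv - x0
            if delta.norm() > max_dist:                    # stay within the budget
                x_adv.copy_(x0 + max_dist * delta / delta.norm())
    return x_adv.detach()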
Path-SGD: Path-Normalized Optimization in Deep Neural Networks
Highlights: They use the positive homogeneity (or positive scale-invariance) of ReLU networks to derive an optimization method that is invariant under rescaling...
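The rescaling-invariant quantity behind the method is the path norm: a sum over all input-to-output paths of the product of squared weights along the path. A small numpy sketch for one hidden layer (illustrative), showing that node-wise rescaling changes the weights but not the path norm:

import numpy as np

def path_norm_sq(W1, W2):
    # Sum over all input->hidden->output paths of the squared
    # product of the weights along the path.
    in_sq = (W1 ** 2).sum(axis=1)    # per hidden unit, incoming
    out_sq = (W2 ** 2).sum(axis=0)   # per hidden unit, outgoing
    return float(in_sq @ out_sq)

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 5)), rng.normal(size=(3, 8))
c = rng.uniform(0.5, 2.0, size=8)    # arbitrary per-node rescaling

print(np.isclose(path_norm_sq(W1, W2),
                 path_norm_sq(c[:, None] * W1, W2 / c[None, :])))  # True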
Intriguing properties of neural networks
Highlights: The authors identify two properties of neural networks. First, there is no distinction between individual high-level units and random linear combinations...