Loss

Functions
torchelie.loss.tempered_cross_entropy(x, y, t1, t2, n_iters=3, weight=None, reduction='mean')

The bi-tempered loss from https://arxiv.org/abs/1906.03361

Parameters:
- x (tensor) – a tensor of batched probabilities, as for cross_entropy
- y (tensor) – a tensor of labels
- t1 (float) – temperature 1
- t2 (float) – temperature 2
- n_iters (int) – number of iterations used to compute the normalization (default: 3)
- weight (tensor) – a tensor that associates a weight to each class
- reduction (str) – how to reduce the batch of losses: 'none', 'sum', or 'mean'

Returns: the loss
torchelie.loss.tempered_nll_loss(x, y, t1, t2, weight=None, reduction='mean')

Computes the tempered NLL loss.

Parameters:
- x (tensor) – log-softmax activations
- y (tensor) – labels
- t1 (float) – temperature 1
- t2 (float) – temperature 2
- weight (tensor) – a tensor that associates a weight to each class
- reduction (str) – how to reduce the batch of losses: 'none', 'sum', or 'mean'

Returns: the loss
torchelie.loss.tempered_softmax(x, t, n_iters=3)

Tempered softmax. Computes the softmax along dimension 1.

Parameters:
- x (tensor) – activations
- t (float) – temperature
- n_iters (int) – number of iterations to converge (default: 3)

Returns: the result of the tempered softmax
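To illustrate the iterative normalization that the n_iters parameter refers to, here is a minimal NumPy sketch of the tempered softmax for the t > 1 case, following the fixed-point algorithm of the bi-tempered loss paper. This is an illustration of the math, not torchelie's code, and details may differ from the library.

```python
import numpy as np

def exp_t(x, t):
    # Tempered exponential: exp for t == 1,
    # otherwise [1 + (1 - t) x]_+ ** (1 / (1 - t)).
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

def tempered_softmax(x, t, n_iters=3):
    # Tempered softmax of a 1-D array x via fixed-point iteration on
    # the normalization constant (valid for t >= 1).
    mu = x.max()
    a = x - mu                      # shift for numerical stability
    a_tilde = a
    for _ in range(n_iters):
        z = exp_t(a_tilde, t).sum() # current partition-function estimate
        a_tilde = z ** (1.0 - t) * a
    z = exp_t(a_tilde, t).sum()
    return exp_t(a_tilde, t) / z
```

At t = 1 the loop is a no-op and the function reduces to the ordinary softmax; for t > 1 a few iterations suffice for the normalization to converge.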
torchelie.loss.tempered_log_softmax(x, t, n_iters=3)

Tempered log softmax. Computes the log softmax along dimension 1.

Parameters:
- x (tensor) – activations
- t (float) – temperature
- n_iters (int) – number of iterations to converge (default: 3)

Returns: the result of the tempered log softmax
torchelie.loss.ortho(w)

Returns the orthogonal loss for weight matrix w, from BigGAN
(https://arxiv.org/abs/1809.11096):

\(R_{\beta}(W)= ||W^T W \odot (1 - I)||_F^2\)
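The penalty can be written directly from the formula above; a minimal NumPy sketch (illustrative, not torchelie's implementation):

```python
import numpy as np

def ortho_loss(w):
    # Gram matrix of the columns of w; the loss penalizes its
    # off-diagonal entries, pushing the columns toward orthogonality.
    gram = w.T @ w
    off_diag = gram * (1.0 - np.eye(gram.shape[0]))
    return (off_diag ** 2).sum()
```

A matrix with orthogonal columns (e.g. the identity) gives a loss of exactly zero.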
torchelie.loss.total_variation(i)

Returns the total variation loss for a batch of images i.
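One common formulation of total variation for an (N, C, H, W) batch, sketched in NumPy (torchelie's exact normalization may differ):

```python
import numpy as np

def total_variation(imgs):
    # Mean absolute difference between vertically and horizontally
    # adjacent pixels; imgs has shape (N, C, H, W).
    dh = np.abs(imgs[:, :, 1:, :] - imgs[:, :, :-1, :]).mean()
    dw = np.abs(imgs[:, :, :, 1:] - imgs[:, :, :, :-1]).mean()
    return dh + dw
```

A constant image has zero total variation; images with sharp local changes are penalized, which is why this term is often used as a smoothness regularizer in image generation.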
torchelie.loss.continuous_cross_entropy(pred, soft_targets)

Computes the cross entropy between the logits pred and a normalized distribution soft_targets. If soft_targets is a one-hot vector, this is equivalent to nn.functional.cross_entropy with the corresponding integer label.
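The definition is short enough to sketch; an illustrative NumPy version that reduces with a mean over the batch:

```python
import numpy as np

def continuous_cross_entropy(pred, soft_targets):
    # pred: logits of shape (N, C); soft_targets: distributions (N, C).
    # Numerically stable log-softmax, then the cross entropy
    # -sum_c q_c * log p_c, averaged over the batch.
    shifted = pred - pred.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -(soft_targets * log_probs).sum(axis=1).mean()
```

With a one-hot target this reduces to ordinary cross entropy on the labelled class; with a smoothed or mixed target it interpolates between classes.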
torchelie.loss.focal_loss(input, target, gamma=0)

Returns the focal loss between target and input:

\(\text{FL}(p_t)=-(1-p_t)^\gamma\log(p_t)\)
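To see the effect of gamma, here is the formula applied directly to p_t, the probability the model assigns to the true class (a sketch of the math, not torchelie's tensor interface):

```python
import numpy as np

def focal_term(p_t, gamma=0.0):
    # FL(p_t) = -(1 - p_t)^gamma * log(p_t); gamma = 0 recovers plain
    # cross entropy, larger gamma down-weights easy (high p_t) examples.
    return -((1.0 - p_t) ** gamma) * np.log(p_t)
```

For a well-classified example (p_t = 0.9), gamma = 2 scales the loss down by (1 - 0.9)^2 = 0.01 relative to cross entropy, concentrating training on hard examples.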
Modules
class torchelie.loss.TemperedCrossEntropyLoss(t1, t2, weight=None, reduction='mean')

The bi-tempered loss from https://arxiv.org/abs/1906.03361

Parameters:
- t1 (float) – temperature 1
- t2 (float) – temperature 2
- weight (tensor) – a tensor that associates a weight to each class
- reduction (str) – how to reduce the batch of losses: 'none', 'sum', or 'mean'

forward(x, y)

Forward pass.

Parameters:
- x (tensor) – a tensor of batched probabilities, as for cross_entropy
- y (tensor) – a tensor of labels

Returns: the loss
class torchelie.loss.OrthoLoss(*args, **kwargs)

Orthogonal loss. See torchelie.loss.ortho() for details.

forward(w)
class torchelie.loss.TotalVariationLoss(*args, **kwargs)

Total variation loss. See torchelie.loss.total_variation() for details.

forward(x)
class torchelie.loss.ContinuousCEWithLogits(*args, **kwargs)

Cross entropy loss accepting continuous target values. See torchelie.loss.continuous_cross_entropy() for details.

forward(pred, soft_targets)
class torchelie.loss.FocalLoss(gamma=0)

The focal loss from https://arxiv.org/abs/1708.02002. See torchelie.loss.focal_loss() for details.

forward(input, target)
class torchelie.loss.PerceptualLoss(l, rescale=False, loss_fn=mse_loss)

Perceptual loss: the distance between the deep representations of two images:

\(\text{Percept}(\text{input}, \text{target})=\sum_l^{layers} \text{loss\_fn}(\text{Vgg}(\text{input})_l, \text{Vgg}(\text{target})_l)\)

Parameters:
- l (list of str) – the layers on which to compare the representations
- rescale (bool) – whether to scale images to 224x224, as expected by the underlying VGG net
- loss_fn (distance function) – a distance function to compare the representations, such as mse_loss or l1_loss

forward(x, y)

Returns the perceptual loss between batches of images x and y.
class torchelie.loss.NeuralStyleLoss

Style transfer loss by Leon Gatys (https://arxiv.org/abs/1508.06576).
Set the style and content before performing a forward pass.

forward(input_img)

Computes the loss.

get_style_content_(img, detach)
set_content(content_img, content_layers=None)

Sets the content.

Parameters:
- content_img (3xHxW tensor) – an image tensor
- content_layers (str, optional) – the layer on which to compute the content representation, or None to keep it unchanged
set_style(style_img, style_ratio, style_layers=None)

Sets the style.

Parameters:
- style_img (3xHxW tensor) – an image tensor
- style_ratio (float) – a multiplier for the style loss to make it greater or smaller than the content loss
- style_layers (list of str, optional) – the layers on which to compute the style, or None to keep them unchanged
class torchelie.loss.DeepDreamLoss(model, dream_layer, max_reduction=3)

The Deep Dream loss.

Parameters:
- model (nn.Module) – a pretrained network on which to compute the activations
- dream_layer (str) – the name of the layer on which the activations are to be maximized
- max_reduction (int) – the maximum reduction factor of the image, for multiscale generation
forward(input_img)

Computes the Deep Dream loss on input_img.

get_acts_(img, detach)
GAN losses

Hinge loss from Spectral Normalization GAN (https://arxiv.org/abs/1802.05957):

\(L_D(x_r, x_f) = \text{max}(0, 1 - D(x_r)) + \text{max}(0, 1 + D(x_f))\)

\(L_G(x_f) = -D(x_f)\)
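The two equations above split into three helpers: real and fake for the discriminator, generated for the generator. A minimal NumPy sketch of the hinge terms with 'mean' reduction (illustrative, not torchelie's code):

```python
import numpy as np

def hinge_real(d_out):
    # Discriminator term on real samples: mean(max(0, 1 - D(x_r)))
    return np.maximum(0.0, 1.0 - d_out).mean()

def hinge_fake(d_out):
    # Discriminator term on fake samples: mean(max(0, 1 + D(x_f)))
    return np.maximum(0.0, 1.0 + d_out).mean()

def hinge_generated(d_out):
    # Generator loss: -mean(D(x_f))
    return -d_out.mean()
```

Note that the discriminator terms saturate at zero once real outputs exceed 1 and fake outputs fall below -1, so confident correct decisions contribute no gradient.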
torchelie.loss.gan.hinge.fake(x, reduction='mean')

torchelie.loss.gan.hinge.generated(x, reduction='mean')

torchelie.loss.gan.hinge.real(x, reduction='mean')
Standard, non-saturating GAN loss from the original GAN paper (https://arxiv.org/abs/1406.2661):

\(L_D(x_r, x_f) = - \log(1 - D(x_f)) - \log D(x_r)\)

\(L_G(x_f) = -\log D(x_f)\)
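Assuming the helpers take raw discriminator logits and apply a sigmoid internally (an assumption for illustration; check the implementation), the standard non-saturating terms can be sketched in NumPy as:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def std_real(d_logits):
    # -log D(x_r): discriminator term on real samples
    return -np.log(sigmoid(d_logits)).mean()

def std_fake(d_logits):
    # -log(1 - D(x_f)): discriminator term on fake samples
    return -np.log(1.0 - sigmoid(d_logits)).mean()

def std_generated(d_logits):
    # Non-saturating generator loss: -log D(x_f), which keeps a strong
    # gradient when the discriminator confidently rejects the fakes.
    return -np.log(sigmoid(d_logits)).mean()
```

The non-saturating generator loss -log D(x_f) is preferred over the minimax form log(1 - D(x_f)) because its gradient stays large early in training, when D easily rejects generated samples.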
torchelie.loss.gan.standard.fake(x)

torchelie.loss.gan.standard.generated(x)

torchelie.loss.gan.standard.real(x)