torchelie.loss¶
Functions¶
- torchelie.loss.tempered_cross_entropy(x: torch.Tensor, y: torch.Tensor, t1: float, t2: float, n_iters: int = 3, weight: Optional[torch.Tensor] = None, reduction: str = 'mean') → torch.Tensor¶
The bi-tempered loss from https://arxiv.org/abs/1906.03361
- Parameters
x (tensor) – a tensor of batched logits like for cross_entropy
y (tensor) – a tensor of labels
t1 (float) – temperature 1
t2 (float) – temperature 2
n_iters (int) – number of iterations for the tempered softmax normalization to converge (default: 3)
weight (tensor) – a tensor that associates a weight to each class
reduction (str) – how to reduce the batch of losses: ‘none’, ‘sum’, or ‘mean’
- Returns
the loss
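Example: a minimal usage sketch; the batch shape and the temperature values t1=0.7, t2=1.3 are illustrative choices, not defaults of the API.

```python
import torch
import torchelie.loss as tl

logits = torch.randn(8, 10, requires_grad=True)  # 8 samples, 10 classes
labels = torch.randint(0, 10, (8,))

# t1 < 1 bounds the loss for outliers, t2 > 1 gives a heavier-tailed softmax
loss = tl.tempered_cross_entropy(logits, labels, t1=0.7, t2=1.3)
loss.backward()
```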
- torchelie.loss.tempered_nll_loss(x: torch.Tensor, y: torch.Tensor, t1: float, t2: float, weight: Optional[torch.Tensor] = None, reduction: str = 'mean') → torch.Tensor¶
Compute the tempered negative log-likelihood (NLL) loss
- Parameters
x (tensor) – log-probabilities, e.g. the output of a (tempered) log softmax
y (tensor) – labels
t1 (float) – temperature 1
t2 (float) – temperature 2
weight (tensor) – a tensor that associates a weight to each class
reduction (str) – how to reduce the batch of losses: ‘none’, ‘sum’, or ‘mean’
- Returns
the loss
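Example: a sketch that, following the parameter description above, feeds the output of tempered_log_softmax into the loss; the pairing of t2 with the softmax and t1 with the loss, as well as the temperature values, are assumptions made for illustration.

```python
import torch
import torchelie.loss as tl

logits = torch.randn(8, 10, requires_grad=True)
labels = torch.randint(0, 10, (8,))

# tempered log-probabilities, then the tempered NLL
# (pairing assumed: t2 for the softmax, t1 for the loss)
log_probs = tl.tempered_log_softmax(logits, t=1.3)
loss = tl.tempered_nll_loss(log_probs, labels, t1=0.7, t2=1.3)
loss.backward()
```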
- torchelie.loss.tempered_softmax(x: torch.Tensor, t: float, n_iters: int = 3) → torch.Tensor¶
Tempered softmax. Computes the softmax along dimension 1.
- Parameters
x (tensor) – activations
t (float) – temperature
n_iters (int) – number of iterations to converge (default: 3)
- Returns
result of tempered softmax
- torchelie.loss.tempered_log_softmax(x: torch.Tensor, t: float, n_iters: int = 3) → torch.Tensor¶
Tempered log softmax. Computes the log softmax along dimension 1.
- Parameters
x (tensor) – activations
t (float) – temperature
n_iters (int) – number of iterations to converge (default: 3)
- Returns
result of tempered log softmax
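Example: a quick sanity check; the tempered softmax is normalized iteratively, so each row should sum to approximately 1, up to the tolerance controlled by n_iters (the temperature and shapes are illustrative).

```python
import torch
import torchelie.loss as tl

x = torch.randn(4, 10)

# for t > 1 the distribution has heavier tails than torch.softmax,
# but each row is still (approximately) normalized
p = tl.tempered_softmax(x, t=1.5, n_iters=5)
print(p.sum(dim=1))  # expected: values close to 1
```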
- torchelie.loss.ortho(w: torch.Tensor) → torch.Tensor¶
Returns the orthogonal loss for the weight matrix w, from BigGAN.
https://arxiv.org/abs/1809.11096
\(R_{\beta}(W)= ||W^T W \odot (1 - I)||_F^2\)
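Example: a sketch of using ortho() as an extra regularization term; the Linear layer, the stand-in task loss, and the 1e-4 coefficient are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchelie.loss as tl

layer = nn.Linear(64, 32)
task_loss = layer(torch.randn(8, 64)).pow(2).mean()  # stand-in for a real task loss

# penalize correlations between columns of the weight matrix
# (off-diagonal entries of W^T W), as in BigGAN
reg = tl.ortho(layer.weight)
(task_loss + 1e-4 * reg).backward()
```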
- torchelie.loss.total_variation(i: torch.Tensor) → torch.Tensor¶
Returns the total variation loss for a batch of images i
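Example: an illustrative call on a random batch; the NCHW layout and image size are assumptions, as the signature only specifies a batched image tensor.

```python
import torch
import torchelie.loss as tl

imgs = torch.rand(4, 3, 64, 64, requires_grad=True)  # assumed NCHW layout
tv = tl.total_variation(imgs)  # encourages spatially smooth images
tv.backward()
```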
- torchelie.loss.continuous_cross_entropy(pred: torch.Tensor, soft_targets: torch.Tensor, weights: Optional[torch.Tensor] = None, reduction: str = 'mean') → torch.Tensor¶
Compute the cross entropy between the logits pred and a normalized distribution soft_targets. If soft_targets is a one-hot vector, this is equivalent to nn.functional.cross_entropy with integer class labels.
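Example: a sketch with label-smoothed soft targets; the smoothing value 0.1 and the shapes are illustrative.

```python
import torch
import torchelie.loss as tl

logits = torch.randn(8, 10, requires_grad=True)
hard = torch.randint(0, 10, (8,))

# smoothed targets: 0.9 on the true class, the remaining 0.1 spread over the others
soft = torch.full((8, 10), 0.1 / 9)
soft.scatter_(1, hard[:, None], 0.9)

loss = tl.continuous_cross_entropy(logits, soft)
loss.backward()
```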
- torchelie.loss.focal_loss(input: torch.Tensor, target: torch.Tensor, gamma: float = 0, weight: Optional[torch.Tensor] = None) → torch.Tensor¶
Experimental: Returns the focal loss between target and input
\(\text{FL}(p_t)=-(1-p_t)^\gamma\log(p_t)\)
Warning: focal_loss() is experimental, and may change or be deleted soon if not already broken.
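Example: a sketch assuming input is a batch of logits and target holds integer class labels, mirroring cross_entropy; gamma=2 follows the focal loss paper but is only an illustrative choice here.

```python
import torch
import torchelie.loss as tl

logits = torch.randn(8, 10, requires_grad=True)
labels = torch.randint(0, 10, (8,))

# gamma > 0 down-weights easy, already well-classified examples
loss = tl.focal_loss(logits, labels, gamma=2.0)
loss.backward()
```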
Modules¶
- class torchelie.loss.TemperedCrossEntropyLoss(t1, t2, weight=None, reduction='mean')¶
The bi-tempered loss from https://arxiv.org/abs/1906.03361
- Parameters
t1 (float) – temperature 1
t2 (float) – temperature 2
weight (tensor) – a tensor that associates a weight to each class
reduction (str) – how to reduce the batch of losses: ‘none’, ‘sum’, or ‘mean’
- forward(x, y)¶
Forward pass
- Parameters
x (tensor) – a tensor of batched logits like for cross_entropy
y (tensor) – a tensor of labels
- Returns
the loss
- training: bool¶
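Example: the module form is used like nn.CrossEntropyLoss, constructed once and reused; the temperature values are illustrative.

```python
import torch
import torchelie.loss as tl

criterion = tl.TemperedCrossEntropyLoss(t1=0.7, t2=1.3)

logits = torch.randn(8, 10, requires_grad=True)
labels = torch.randint(0, 10, (8,))
loss = criterion(logits, labels)
loss.backward()
```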
- class torchelie.loss.OrthoLoss¶
Orthogonal loss
See torchelie.loss.ortho() for details.
- forward(w)¶
- training: bool¶
- class torchelie.loss.TotalVariationLoss¶
Total Variation loss
See torchelie.loss.total_variation() for details.
- forward(x)¶
- training: bool¶
- class torchelie.loss.ContinuousCEWithLogits¶
Cross Entropy loss accepting continuous target values
See torchelie.loss.continuous_cross_entropy() for details.
- forward(pred, soft_targets)¶
- training: bool¶
- class torchelie.loss.FocalLoss(gamma: float = 0)¶
The focal loss
https://arxiv.org/abs/1708.02002
See torchelie.loss.focal_loss() for details.
- forward(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶
- training: bool¶
- class torchelie.loss.PerceptualLoss(layers: Union[List[str], List[Tuple[str, float]]], rescale: bool = False, loss_fn: Callable[[torch.Tensor, torch.Tensor], torch.Tensor] = <function mse_loss>, use_avg_pool: bool = True, remove_unused_layers: bool = True)¶
Perceptual loss: the distance between the deep representations of two images
\(\text{Percept}(\text{input}, \text{target})=\sum_{l \in \text{layers}} \text{loss\_fn}(\text{Vgg}(\text{input})_l, \text{Vgg}(\text{target})_l)\)
- Parameters
layers (list of str) – the layers on which to compare the representations
rescale (bool) – whether to rescale the images' smaller side to 224, as expected by the underlying VGG network
loss_fn (distance function) – a distance function to compare the representations, like mse_loss or l1_loss
- forward(x: torch.Tensor, y: torch.Tensor) → torch.Tensor¶
Return the perceptual loss between the batches of images x and y
- training: bool¶
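Example: a sketch comparing mid-level VGG features; the layer name 'conv3_2' and the image size are assumptions made for illustration and must match the layer naming of the underlying VGG network.

```python
import torch
import torchelie.loss as tl

ploss = tl.PerceptualLoss(['conv3_2'], rescale=True)

x = torch.rand(4, 3, 128, 128, requires_grad=True)
y = torch.rand(4, 3, 128, 128)

loss = ploss(x, y)  # distance between the VGG representations of x and y
loss.backward()
```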
- class torchelie.loss.NeuralStyleLoss¶
Style Transfer loss by Leon Gatys
https://arxiv.org/abs/1508.06576
Set the style and content before performing a forward pass.
- forward(input_img: torch.Tensor) → Tuple[torch.Tensor, Dict[str, float]]¶
Actually compute the loss
- get_style_content_(img: torch.Tensor, detach: bool) → Dict[str, Dict[str, torch.Tensor]]¶
- set_content(content_img: torch.Tensor, content_layers: Optional[List[str]] = None) → None¶
Set the content.
- Parameters
content_img (3xHxW tensor) – an image tensor
content_layers (list of str, optional) – the layers on which to compute the content representation, or None to keep them unchanged
- set_style(style_img: torch.Tensor, style_ratio: float, style_layers: Optional[List[str]] = None) → None¶
Set the style.
- Parameters
style_img (3xHxW tensor) – an image tensor
style_ratio (float) – a multiplier for the style loss to make it greater or smaller than the content loss
style_layers (list of str, optional) – the layers on which to compute the style, or None to keep them unchanged
- net: torchelie.models.perceptualnet.PerceptualNet¶
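Example: a sketch of the documented workflow: set the style and content, then optimize the pixels of a canvas. The 3xHxW shapes follow set_style/set_content; passing the same shape to forward, as well as the placeholder images and style_ratio value, are assumptions.

```python
import torch
import torchelie.loss as tl

style_img = torch.rand(3, 256, 256)    # placeholder style image
content_img = torch.rand(3, 256, 256)  # placeholder content image

crit = tl.NeuralStyleLoss()
crit.set_style(style_img, style_ratio=1.0)  # style_ratio=1.0 is an illustrative weight
crit.set_content(content_img)

# optimize the pixels of a canvas initialized from the content image
canvas = content_img.clone().requires_grad_(True)
loss, losses_by_layer = crit(canvas)
loss.backward()
```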
- class torchelie.loss.DeepDreamLoss(model: torch.nn.modules.module.Module, dream_layer: str, max_reduction: int = 3)¶
The Deep Dream loss
- Parameters
model (nn.Module) – a pretrained network on which to compute the activations
dream_layer (str) – the name of the layer on which the activations are to be maximized
max_reduction (int) – the maximum factor of reduction of the image, for multiscale generation
- forward(input_img: torch.Tensor) → torch.Tensor¶
Compute the Deep Dream loss on input_img
- get_acts_(img: torch.Tensor, detach: bool) → torch.Tensor¶
- training: bool¶
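Example: a sketch using a pretrained torchvision GoogLeNet; the layer name 'inception4c' refers to torchvision's module naming and is only an illustrative assumption for dream_layer, as is the image size.

```python
import torch
import torchvision.models as tvm
import torchelie.loss as tl

model = tvm.googlenet(pretrained=True).eval()
dream = tl.DeepDreamLoss(model, dream_layer='inception4c')

img = torch.rand(1, 3, 224, 224, requires_grad=True)
loss = dream(img)
loss.backward()  # then step an optimizer on the image pixels and repeat
```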
GAN losses¶
Hinge loss from Spectral Normalization GAN.
https://arxiv.org/abs/1802.05957
\(L_D(x_r, x_f) = \text{max}(0, 1 - D(x_r)) + \text{max}(0, 1 + D(x_f))\)
\(L_G(x_f) = -D(x_f)\)
- torchelie.loss.gan.hinge.fake(x: torch.Tensor, reduction: str = 'mean') → torch.Tensor¶
- torchelie.loss.gan.hinge.generated(x: torch.Tensor, reduction: str = 'mean') → torch.Tensor¶
- torchelie.loss.gan.hinge.real(x: torch.Tensor, reduction: str = 'mean') → torch.Tensor¶
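Example: a sketch assuming, from the function names, that real() and fake() implement the two discriminator terms of \(L_D\) and generated() implements \(L_G\), each taking the discriminator's raw scores; the random score tensors stand in for real discriminator outputs.

```python
import torch
import torchelie.loss.gan.hinge as hinge

d_real_scores = torch.randn(8)  # D(x_r), raw scores on real samples
d_fake_scores = torch.randn(8)  # D(x_f), raw scores on generated samples

# discriminator step: push real scores above 1 and fake scores below -1
d_loss = hinge.real(d_real_scores) + hinge.fake(d_fake_scores)

# generator step: -D(x_f)
g_loss = hinge.generated(d_fake_scores)
```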
Standard, non-saturating GAN loss from the original GAN paper
https://arxiv.org/abs/1406.2661
\(L_D(x_r, x_f) = - \log(1 - D(x_f)) - \log D(x_r)\)
\(L_G(x_f) = -\log D(x_f)\)
- torchelie.loss.gan.standard.fake(x: torch.Tensor, reduce: str = 'mean') → torch.Tensor¶
- torchelie.loss.gan.standard.generated(x: torch.Tensor, reduce: str = 'mean') → torch.Tensor¶
- torchelie.loss.gan.standard.real(x: torch.Tensor, reduce: str = 'mean') → torch.Tensor¶
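Example: the same pattern with the non-saturating loss; it is assumed here that the functions take the discriminator's raw (pre-sigmoid) outputs, with the random tensors standing in for them.

```python
import torch
import torchelie.loss.gan.standard as standard

d_real_scores = torch.randn(8)  # discriminator outputs on real samples
d_fake_scores = torch.randn(8)  # discriminator outputs on generated samples

# discriminator step: classify real samples as real and generated samples as fake
d_loss = standard.real(d_real_scores) + standard.fake(d_fake_scores)

# generator step: non-saturating -log D(x_f)
g_loss = standard.generated(d_fake_scores)
```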