torchelie.nn

Convolutions

A Conv2d with ‘same’ padding

A 3x3 Conv2d with ‘same’ padding

A 1x1 Conv2d

A masked 2D convolution for PixelCNN

A 2D convolution for PixelCNN made of a convolution above the current pixel and another on the left.
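To make the ‘same’ padding rule concrete, here is a dependency-free sketch of the arithmetic (the function names are illustrative, not torchelie's API): for a stride-1 convolution, padding each side by (k - 1) // 2 keeps the spatial size unchanged.

```python
def same_padding(kernel_size: int) -> int:
    # An odd kernel of size k needs (k - 1) // 2 padding on each side
    # to preserve spatial size at stride 1.
    assert kernel_size % 2 == 1, "this sketch handles odd kernels only"
    return (kernel_size - 1) // 2

def out_size(in_size: int, kernel_size: int, padding: int, stride: int = 1) -> int:
    # Standard convolution output-size formula.
    return (in_size + 2 * padding - kernel_size) // stride + 1
```

With a 3x3 kernel, `out_size(32, 3, same_padding(3))` gives back 32, which is what the ‘same’ variants above guarantee.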
Normalization

Adaptive Instance Normalization from *Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization* (Huang et al., 2017)

Feature-wise Linear Modulation from https://distill.pub/2018/feature-wise-transformations/. The difference with AdaIN is that FiLM does not use the input’s mean and std in its calculations.

PixelNorm from ProgressiveGAN

Normalizes image channels as torchvision models expect, in a differentiable way.
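The AdaIN/FiLM distinction can be shown with a dependency-free sketch of the per-channel math (scalar lists stand in for feature maps; these helpers are illustrative, not torchelie's implementations):

```python
from statistics import mean, pstdev

def adain(channel, target_mean, target_std, eps=1e-5):
    # AdaIN first whitens the channel with ITS OWN statistics,
    # then applies the target (style) statistics.
    m, s = mean(channel), pstdev(channel)
    return [target_std * (v - m) / (s + eps) + target_mean for v in channel]

def film(channel, gamma, beta):
    # FiLM applies gamma/beta directly, ignoring the input's statistics.
    return [gamma * v + beta for v in channel]
```

`adain` always maps the channel to the requested mean/std, whereas `film` is a plain affine transform whose output statistics still depend on the input's.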
Misc

Quantization layer from Neural Discrete Representation Learning

Multi-codebook quantization layer from Neural Discrete Representation Learning

Adds Gaussian noise to the input, with a per-channel or global learnable std.

A pass-through layer that prints some debug info during the forward pass.

A pure pass-through layer

Applies a lambda function on forward()

Reshapes the input volume

A wrapper around

A wrapper around

Pools with AdaptiveMaxPool2d AND AdaptiveAvgPool2d and concatenates both results.

Self-attention as used in SAGAN or BigGAN.

Forces a representation to fit a unit Gaussian prior.

Experimental: Returns a constant learnable volume.

Experimental

Minibatch Stddev layer from Progressive GAN
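The core of the VQ-VAE quantization layers above is a nearest-neighbour codebook lookup. A minimal sketch, with illustrative names and the straight-through gradient estimator omitted:

```python
def quantize(vector, codebook):
    # Return the index and entry of the codebook vector closest to
    # `vector` in squared L2 distance.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda i: dist2(vector, codebook[i]))
    return idx, codebook[idx]
```

In the real layer the codebook entries are learnable and gradients flow through the quantization via the straight-through trick; the multi-codebook variant simply repeats this lookup over several codebooks.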
Blocks

A packed block with Conv-BatchNorm-ReLU and various operations to alter it.

Experimental: A packed block with Masked Conv-Norm-ReLU

Experimental: A packed block with Masked Conv-BN-ReLU

A SPADE ResBlock from Semantic Image Synthesis with Spatially-Adaptive Normalization

A block of the generator discovered by AutoGAN.

A preactivated resblock suited for discriminators: it features leaky ReLUs, no batchnorm, and an optional downsampling operator.

Experimental: An Upsample-(ModulatedConv-Noise-LeakyReLU)* block from StyleGAN2

A Squeeze-and-Excite block

A Preactivated Residual Block.

A Preactivated Residual Block.

A Residual Block.

A Residual Block.
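The Squeeze-and-Excite idea can be sketched without any framework: squeeze each channel to a scalar by global average pooling, excite through a small bottleneck, and gate the channels with a sigmoid. The scalar weights `w1`/`w2` below are a simplification of the real block's two fully-connected layers, which mix information across channels; everything here is illustrative.

```python
import math

def squeeze_excite(channels, w1, w2):
    # channels: list of per-channel feature maps, each a flat list of values.
    # Squeeze: global average pool, one scalar per channel.
    squeezed = [sum(c) / len(c) for c in channels]
    # Excite: ReLU then sigmoid (scalar stand-ins for the two FC layers).
    hidden = [max(0.0, w1 * s) for s in squeezed]
    gates = [1.0 / (1.0 + math.exp(-w2 * h)) for h in hidden]
    # Scale: reweight each channel by its learned gate.
    return [[g * v for v in c] for g, c in zip(gates, channels)]
```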
Sequential

Hook

An extension to torch’s Sequential that allows conditioning either as a second forward argument or via condition()

Allows description of networks as computation graphs.
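A conditional sequential can be illustrated in a few lines of plain Python (this is a sketch of the calling convention, not torchelie's CondSeq implementation): modules flagged as conditional receive the condition as a second argument, the rest are called on the input alone.

```python
class CondSeqSketch:
    # Illustrative only: each entry is a (fn, takes_condition) pair.
    def __init__(self, *modules):
        self.modules = modules

    def __call__(self, x, z=None):
        for fn, takes_condition in self.modules:
            # Conditional modules get the condition z, plain ones do not.
            x = fn(x, z) if takes_condition else fn(x)
        return x
```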
Activations

Hard Sigmoid

Hard Swish
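Both activations are cheap piecewise-linear approximations. A sketch using the common MobileNetV3 definitions (torchelie's exact constants may differ):

```python
def hard_sigmoid(x: float) -> float:
    # relu6(x + 3) / 6: a piecewise-linear approximation of sigmoid.
    return min(max(x + 3.0, 0.0), 6.0) / 6.0

def hard_swish(x: float) -> float:
    # x * hard_sigmoid(x): a cheap approximation of SiLU/Swish.
    return x * hard_sigmoid(x)
```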
torchelie.nn.utils

Compute the receptive field of
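The receptive-field computation follows a standard recurrence; here is a dependency-free sketch (not torchelie's implementation) for a chain of convolutions described by (kernel_size, stride) pairs:

```python
def receptive_field(layers):
    # layers: list of (kernel_size, stride) pairs, applied in order.
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field by (k-1) * jump
        jump *= s             # stride compounds the step between input pixels
    return rf
```

For example, two stacked 3x3 stride-1 convolutions see a 5x5 input patch.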
Model editing

Allows editing any part of a model by recursively editing its modules.

Insert module

Insert module

Changes all ReLUs into leaky ReLUs in the modules and submodules of net.

Removes BatchNorm in Sequentials / CondSeqs in a smart way, restoring biases in the preceding layer.
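Removing BatchNorm while restoring the preceding layer's bias amounts to folding the BN affine transform into that layer's weight and bias. A scalar sketch of the algebra (illustrative names, not torchelie's API):

```python
import math

def fold_batchnorm(w, b, gamma, beta, running_mean, running_var, eps=1e-5):
    # y = gamma * (w*x + b - mean) / sqrt(var + eps) + beta
    # folds into a single affine map w'*x + b'.
    scale = gamma / math.sqrt(running_var + eps)
    return w * scale, (b - running_mean) * scale + beta
```

After folding, the layer alone produces the same output the layer + BN pair did (in eval mode, using the running statistics).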
Lambda

Apply a lambda function as a hook to the weight matrix of a layer before a forward pass.

Apply

Remove the hook
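The weight-hook pattern can be shown with a minimal stand-in class (illustrative only; the real mechanism uses PyTorch forward pre-hooks on a module): a raw weight is stored, and the lambda transforms it right before each forward pass.

```python
class WeightHookSketch:
    # Illustrative: store a raw weight and apply `fn` to it just before
    # it is used, as a pre-forward hook on the weight would.
    def __init__(self, weight, fn):
        self.raw_weight = weight
        self.fn = fn

    def forward(self, x):
        w = self.fn(self.raw_weight)  # hook runs before the weight is used
        return w * x
```

Removing the hook then simply means using `raw_weight` directly again.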
Weight normalization / equalized learning rate

Set weight norm and equalized learning rate like demodulated convs in StyleGAN2 for module m.

Remove a weight_norm_and_equal_lr hook previously applied on

Remove a weight_scale hook previously applied on

Multiply

Set all Conv2d, ConvTranspose2d and Linear of
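Equalized learning rate, as popularized by Progressive GAN and StyleGAN2, stores weights at unit scale and rescales them at runtime. A sketch of the scale factor (He-style initialization gain; the helper name is illustrative):

```python
import math

def equalized_lr_scale(fan_in: int, gain: float = math.sqrt(2.0)) -> float:
    # Weights are stored ~N(0, 1) and multiplied by gain / sqrt(fan_in)
    # at every forward pass, so every layer sees gradients of comparable
    # magnitude regardless of its fan-in.
    return gain / math.sqrt(fan_in)
```

Applying this multiplier as a pre-forward hook on each Conv2d / ConvTranspose2d / Linear weight is what the helpers in this section automate.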