torchelie.nn

Convolutions

Conv2d

A Conv2d with ‘same’ padding

Conv3x3

A 3x3 Conv2d with ‘same’ padding

Conv1x1

A 1x1 Conv2d

MaskedConv2d

A masked 2D convolution for PixelCNN
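
The masking idea: the kernel is zeroed so that each output pixel only depends on pixels above it and to its left, making the convolution autoregressive. A minimal sketch of the concept (not torchelie's exact implementation; mask types 'A'/'B' follow the PixelCNN paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2dSketch(nn.Conv2d):
    """Conv2d whose kernel is masked so each output pixel only sees
    inputs above it, and to its left on the same row. Mask type 'A'
    also hides the center pixel (first layer); 'B' keeps it."""
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kh, kw = self.kernel_size
        mask = torch.zeros(kh, kw)
        mask[:kh // 2, :] = 1           # rows strictly above the center
        mask[kh // 2, :kw // 2] = 1     # left of the center on its row
        if mask_type == 'B':
            mask[kh // 2, kw // 2] = 1  # the center pixel itself
        self.register_buffer('mask', mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation,
                        self.groups)
```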

TopLeftConv2d

A 2D convolution for PixelCNN made of a convolution above the current pixel and another on the left.

Normalization

AdaIN2d

Adaptive Instance Normalization from *Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization* (Huang et al., 2017)
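
The core operation: instance-normalize the content features, then rescale them with per-channel statistics derived from the style (in torchelie these are predicted from a conditioning input). A sketch of the normalization itself:

```python
import torch.nn.functional as F

def adain_sketch(content, style_mean, style_std):
    """AdaIN: whiten each channel of `content` with instance norm, then
    re-color it with the style's per-channel mean and std.
    `style_mean` and `style_std` have shape (N, C, 1, 1)."""
    return F.instance_norm(content) * style_std + style_mean
```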

FiLM2d

Feature-wise Linear Modulation from https://distill.pub/2018/feature-wise-transformations/. The difference with AdaIN is that FiLM does not use the input’s mean and std in its calculations
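
A minimal sketch of the FiLM operation: a conditioning vector predicts a per-channel scale and shift that are applied directly to the feature map, with no renormalization of the input (names are illustrative, not torchelie's API):

```python
import torch.nn as nn

class FiLMSketch(nn.Module):
    def __init__(self, channels, cond_dim):
        super().__init__()
        self.proj = nn.Linear(cond_dim, 2 * channels)

    def forward(self, x, cond):
        # per-channel scale (gamma) and shift (beta) from the condition
        gamma, beta = self.proj(cond).chunk(2, dim=1)
        return gamma[:, :, None, None] * x + beta[:, :, None, None]
```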

PixelNorm

PixelNorm from ProgressiveGAN
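
The operation is a per-location normalization across channels, as introduced in ProgressiveGAN; a one-line sketch:

```python
def pixel_norm_sketch(x, eps=1e-8):
    # normalize each spatial position to unit RMS across channels
    return x * (x.pow(2).mean(dim=1, keepdim=True) + eps).rsqrt()
```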

ImageNetInputNorm

Normalize image channels as torchvision models expect, in a differentiable way
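
A sketch of the idea using the standard torchvision ImageNet statistics; storing them as buffers keeps the op inside the autograd graph, so gradients can flow back to the image:

```python
import torch
import torch.nn as nn

class ImageNetNormSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer(
            'mean', torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer(
            'std', torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))

    def forward(self, x):
        return (x - self.mean) / self.std
```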

ConditionalBN2d

Spade2d

AttenNorm2d

Attentive Normalization from https://arxiv.org/abs/1908.01259

GhostBatchNorm2d

Misc

VQ

Quantization layer from *Neural Discrete Representation Learning*
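
The core of VQ-VAE quantization: snap each vector to its nearest codebook entry, and copy gradients straight through the non-differentiable rounding. A sketch (the codebook and commitment losses are omitted):

```python
import torch

def vq_sketch(z, codebook):
    """`z`: (N, D) vectors, `codebook`: (K, D) learnable codes."""
    idx = torch.cdist(z, codebook).argmin(dim=1)  # nearest code per vector
    q = codebook[idx]
    # straight-through estimator: forward uses q, backward treats the
    # quantization as the identity so gradients reach z
    return z + (q - z).detach(), idx
```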

MultiVQ

Multi-codebook quantization layer from *Neural Discrete Representation Learning*

Noise

Add Gaussian noise to the input, with a per-channel or global learnable std.

Debug

A pass-through layer that prints some debug info during the forward pass.

Dummy

A pure pass-through layer

Lambda

Applies a lambda function on forward()

Reshape

Reshape the input volume

Interpolate2d

A wrapper around torch.nn.functional.interpolate()

InterpolateBilinear2d

A wrapper around torch.nn.functional.interpolate() with bilinear mode.

AdaptiveConcatPool2d

Pools with AdaptiveMaxPool2d AND AdaptiveAvgPool2d and concatenates both results.
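
A sketch of the layer; the output has twice the input channels:

```python
import torch
import torch.nn as nn

class AdaptiveConcatPool2dSketch(nn.Module):
    def __init__(self, output_size=1):
        super().__init__()
        self.max = nn.AdaptiveMaxPool2d(output_size)
        self.avg = nn.AdaptiveAvgPool2d(output_size)

    def forward(self, x):
        # (N, 2C, output_size, output_size)
        return torch.cat([self.max(x), self.avg(x)], dim=1)
```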

ModulatedConv

SelfAttention2d

Self-attention as used in SAGAN or BigGAN.
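
A sketch of the SAGAN formulation: 1x1 convolutions produce queries, keys and values; the attention map mixes spatial positions; a learnable gate `gamma`, initialized at 0, blends the result back into the input (channel reduction of 8 as in the paper; details may differ from torchelie's):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2dSketch(nn.Module):
    def __init__(self, ch):  # assumes ch >= 8
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)  # (N, HW, C//8)
        k = self.k(x).flatten(2)                  # (N, C//8, HW)
        v = self.v(x).flatten(2)                  # (N, C, HW)
        attn = F.softmax(q @ k, dim=-1)           # (N, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(n, c, h, w)
        return x + self.gamma * out
```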

GaussianPriorFunc

UnitGaussianPrior

Force a representation to fit a unit Gaussian prior.

InformationBottleneck

Const

Experimental: Return a constant learnable volume.

SinePositionEncoding2d

Experimental

MinibatchStddev

Minibatch Stddev layer from Progressive GAN
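
The layer appends a channel holding a batch-wide statistic, letting a GAN discriminator spot a collapsed generator. ProgressiveGAN computes the statistic over groups; this simplified sketch uses the whole batch:

```python
import torch
import torch.nn as nn

class MinibatchStddevSketch(nn.Module):
    def forward(self, x):  # needs batch size > 1
        # std over the batch for each (channel, y, x) position,
        # averaged to a single scalar, broadcast as one extra channel
        std = x.std(dim=0).mean()
        n, _, h, w = x.shape
        return torch.cat([x, std.expand(n, 1, h, w)], dim=1)
```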

Blocks

ConvBlock

A packed block with Conv-BatchNorm-ReLU and various operations to alter it.

MConvNormReLU

Experimental: A packed block with Masked Conv-Norm-ReLU

MConvBNReLU

Experimental: A packed block with Masked Conv-BN-ReLU

SpadeResBlock

A SPADE ResBlock from *Semantic Image Synthesis with Spatially-Adaptive Normalization*

AutoGANGenBlock

A block of the generator discovered by AutoGAN.

ResidualDiscrBlock

A preactivated resblock suited for discriminators: it features leaky ReLUs, no batchnorm, and an optional downsampling operator.

StyleGAN2Block

Experimental: An Upsample-(ModulatedConv-Noise-LeakyReLU)* block from StyleGAN2

SEBlock

A Squeeze-And-Excite block
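
A sketch of a Squeeze-and-Excite gate (reduction ratio 16 as in the paper; torchelie's defaults may differ): global average pooling "squeezes" each channel to a scalar, and a small bottleneck MLP "excites" the channels with sigmoid gates:

```python
import torch.nn as nn

class SEBlockSketch(nn.Module):
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # per-channel gating, broadcast over H, W
```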

PreactResBlock

A Preactivated Residual Block.

PreactResBlockBottleneck

A Preactivated Residual Block with a bottleneck.

ResBlock

A Residual Block.

ResBlockBottleneck

A Residual Block with a bottleneck.

ConvDeconvBlock

UBlock

Sequential

WithSavedActivations

Hook a model in order to get intermediate activations.

CondSeq

An extension of torch’s Sequential that allows conditioning, either as a second argument to forward() or via condition()
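
A hypothetical usage sketch based on the description above; the exact constructor signatures of Conv3x3 and AdaIN2d are assumptions:

```python
import torch
import torchelie.nn as tnn

net = tnn.CondSeq(
    tnn.Conv3x3(3, 64),   # assumed (in_channels, out_channels)
    tnn.AdaIN2d(64, 32),  # assumed (channels, cond_channels)
)
x = torch.randn(4, 3, 32, 32)
z = torch.randn(4, 32)

net.condition(z)          # set the condition once, then call normally
y = net(x)
y = net(x, z)             # or pass it as the second forward argument
```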

ModuleGraph

Allows description of networks as computation graphs.

Activations

HardSigmoid

Hard Sigmoid

HardSwish

Hard Swish
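
Both are cheap piecewise-linear approximations of their smooth counterparts; a sketch with the usual slopes and clipping points (as in MobileNetV3; torchelie's constants may differ):

```python
import torch

def hard_sigmoid_sketch(x):
    # linear ramp between -3 and 3, clipped to [0, 1]
    return torch.clamp(x / 6 + 0.5, 0, 1)

def hard_swish_sketch(x):
    # x * hard_sigmoid(x)
    return x * hard_sigmoid_sketch(x)
```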

torchelie.nn.utils

receptive_field_for

Compute the receptive field of net using a backward pass.
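
One common way to do this, and a plausible reading of the description (not necessarily torchelie's exact implementation): backprop from a single output location and measure the extent of the nonzero input gradient:

```python
import torch

def receptive_field_sketch(net, input_shape=(1, 3, 128, 128)):
    # assumes net is in eval mode, fully convolutional, (N, C, H, W) out
    x = torch.randn(input_shape, requires_grad=True)
    out = net(x)
    h, w = out.shape[2:]
    out[0, :, h // 2, w // 2].sum().backward()  # one output pixel
    influence = x.grad[0].abs().sum(dim=0)      # (H, W) influence map
    ys, xs = torch.nonzero(influence, as_tuple=True)
    return int(ys.max() - ys.min() + 1), int(xs.max() - xs.min() + 1)
```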

Model editing

edit_model

Allows editing any part of a model by recursively editing its modules.

insert_after

Insert module *new*, with name *name*, after element *key* in Sequential *base*, and return the new sequence.

insert_before

Insert module *new*, with name *name*, before element *key* in Sequential *base*, and return the new sequence.
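
A hypothetical usage sketch; the argument order (base, key, new, name) is assumed from the description above:

```python
import torch.nn as nn
import torchelie.nn.utils as tnnu

base = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())
# children of a positional nn.Sequential are named '0', '1', ...
base = tnnu.insert_after(base, '0', nn.BatchNorm2d(16), 'bn')
# now: Conv2d ('0') -> BatchNorm2d ('bn') -> ReLU ('1')
```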

make_leaky

Change all ReLUs into leaky ReLUs in the modules and submodules of net.

remove_batchnorm

Remove BatchNorm in Sequentials / CondSeqs in a smart way, restoring biases in the preceding layer.
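
"Restoring biases" suggests folding the BatchNorm affine transform into the previous layer; here is a sketch of that standard fusion for a Conv2d followed by a BatchNorm2d (not necessarily torchelie's exact procedure; uses the BN's running statistics):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fold_bn_sketch(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    # BN(y) = y * scale + (beta - mean * scale), folded into the conv
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding,
                      conv.dilation, conv.groups, bias=True)
    fused.weight.copy_(conv.weight * scale.view(-1, 1, 1, 1))
    bias = (conv.bias if conv.bias is not None
            else torch.zeros_like(bn.running_mean))
    fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused
```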

Lambda

WeightLambda

Apply a lambda function as a hook to the weight matrix of a layer before a forward pass.

weight_lambda

Apply function() to getattr(module, name) on each forward pass.
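
A sketch of the mechanism, using the same forward pre-hook pattern as PyTorch's own weight_norm (the helper name is illustrative, not torchelie's API):

```python
import torch.nn as nn

def weight_lambda_sketch(module, name, function):
    """Recompute module.<name> as function(raw weight) before every
    forward pass."""
    raw = getattr(module, name)
    del module._parameters[name]          # keep the raw tensor aside
    module.register_parameter(name + '_raw', raw)

    def hook(module, inputs):
        setattr(module, name, function(getattr(module, name + '_raw')))

    hook(module, None)                    # initialize module.<name> now
    module.register_forward_pre_hook(hook)
    return module

# e.g. clamp a Linear layer's weights at every forward pass
lin = weight_lambda_sketch(nn.Linear(4, 4), 'weight',
                           lambda w: w.clamp(-0.1, 0.1))
```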

remove_weight_lambda

Remove the hook *hook_name* applied on member *name* of *module*.

Weight normalization / equalized learning rate

weight_norm_and_equal_lr

Set weight norm and equalized learning rate like demodulated convs in StyleGAN2 for module m.

remove_weight_norm_and_equal_lr

Remove a weight_norm_and_equal_lr hook previously applied on getattr(module, name).

remove_weight_scale

Remove a weight_scale hook previously applied on getattr(module, name).

weight_scale

Multiply getattr(module, name) by scale on forward pass as a hook.

net_to_equal_lr

Set all Conv2d, ConvTranspose2d and Linear of net to equalized learning rate, initialized with torchelie.utils.kaiming() and dynamic=True.
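
A sketch of what equalized learning rate means in the ProgressiveGAN/StyleGAN sense: weights are stored with unit variance and the Kaiming fan-in constant is applied dynamically at every forward pass, so all layers receive gradients of comparable magnitude (torchelie implements this with hooks; the subclass below only illustrates the concept):

```python
import math
import torch.nn as nn
import torch.nn.functional as F

class EqualLRConvSketch(nn.Conv2d):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        nn.init.normal_(self.weight)          # N(0, 1), no fan-in scaling
        kh, kw = self.kernel_size             # fan-in (assuming groups=1)
        self.scale = math.sqrt(2.0 / (self.in_channels * kh * kw))

    def forward(self, x):
        # the Kaiming constant is applied at runtime instead of at init
        return F.conv2d(x, self.weight * self.scale, self.bias,
                        self.stride, self.padding, self.dilation,
                        self.groups)
```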