ConvBlock

class torchelie.nn.ConvBlock(in_channels: int, out_channels: int, kernel_size: int, stride: int = 1)

A packed block with Conv-BatchNorm-ReLU and various operations to alter it.

Parameters
  • in_channels (int) – input channels

  • out_channels (int) – output channels

  • kernel_size (int) – kernel size

  • stride (int) – stride of the conv

Returns

A packed block with Conv-Norm-ReLU as a CondSeq.
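
A minimal usage sketch (assuming torchelie is installed; the output spatial size assumes the block's default padding preserves resolution at stride 1):

import torch
import torchelie.nn as tnn

block = tnn.ConvBlock(3, 64, kernel_size=3)  # Conv2d -> BatchNorm2d -> ReLU
y = block(torch.randn(1, 3, 32, 32))
print(y.shape)  # expected: torch.Size([1, 64, 32, 32])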

add_upsampling() → torchelie.nn.conv.ConvBlock

Add a bilinear upsampling layer before the conv that doubles the spatial size.
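
A sketch (the doubled output size also assumes the conv itself preserves resolution):

import torch
import torchelie.nn as tnn

block = tnn.ConvBlock(3, 16, 3).add_upsampling()
y = block(torch.randn(1, 3, 32, 32))
print(y.shape)  # expected: torch.Size([1, 16, 64, 64])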

leaky(leak: float = 0.2) → torchelie.nn.conv.ConvBlock

Change the ReLU to a LeakyReLU, also rescaling the weights in the conv to preserve the variance.

Returns

self
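
Because the method returns self, it chains off the constructor; a sketch:

import torchelie.nn as tnn

block = tnn.ConvBlock(3, 64, 3).leaky(leak=0.2)
print(block.relu)  # expected: LeakyReLU(negative_slope=0.2)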

no_bias() → torchelie.nn.conv.ConvBlock

Remove the bias term.

Returns

self

no_relu() → torchelie.nn.conv.ConvBlock

Remove the ReLU.

remove_batchnorm() → torchelie.nn.conv.ConvBlock

Remove the BatchNorm and restore the bias term in the conv.

Returns

self
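
These mutators all return self and therefore compose; a sketch, assuming the norm and relu attributes read None once their modules are removed:

import torchelie.nn as tnn

# reduce the block to a plain convolution with its bias restored
block = tnn.ConvBlock(3, 64, 3).remove_batchnorm().no_relu()
print(block.norm, block.relu)       # expected: None None
print(block.conv.bias is not None)  # expected: True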

reset() → None

Recreate the block as a simple Conv-BatchNorm-ReLU.

restore_batchnorm() → torchelie.nn.conv.ConvBlock

Restore the BatchNorm if it was deleted.

to_input_specs(in_channels: int) → torchelie.nn.conv.ConvBlock

Recreate the convolution with in_channels input channels.
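
For instance, to rebuild a block for a wider input; a sketch:

import torchelie.nn as tnn

block = tnn.ConvBlock(3, 64, 3)
block.to_input_specs(16)       # rebuild the conv for 16 input channels
print(block.conv.in_channels)  # expected: 16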

to_preact() → torchelie.nn.conv.ConvBlock

Place the normalization and ReLU before the convolution.
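
A sketch of the reordering, as used in pre-activation ResNet-style architectures:

import torchelie.nn as tnn

block = tnn.ConvBlock(64, 64, 3).to_preact()
# the block now computes BatchNorm -> ReLU -> Conv
# instead of Conv -> BatchNorm -> ReLU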

to_transposed_conv() → torchelie.nn.conv.ConvBlock

Experimental: transform the convolution into a hopefully equivalent transposed convolution.

Warning

ConvBlock.to_transposed_conv() is experimental, and may change or be deleted soon if not already broken.

conv: Union[torch.nn.modules.conv.Conv2d, torch.nn.modules.conv.ConvTranspose2d]

in_channels: int

kernel_size: Tuple[int, int]

norm: Optional[torch.nn.modules.module.Module]

out_channels: int

relu: Optional[torch.nn.modules.module.Module]

stride: Tuple[int, int]
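
The attributes above expose the underlying modules for inspection or surgery; a sketch (the tuple values assume scalar arguments are normalized to pairs, as the type hints suggest):

import torchelie.nn as tnn

block = tnn.ConvBlock(3, 64, 3)
print(type(block.conv).__name__)        # expected: Conv2d
print(block.kernel_size, block.stride)  # expected: (3, 3) (1, 1)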