
Deep learning for NeuroImaging in Python.

Note

This page is reference documentation: it only describes the class signature, not how to use it. Please refer to the gallery for the big picture.

class surfify.models.vae.HemiFusionDecoder(output_shape, before_latent_dim, latent_dim, conv_flts=(64, 128, 128, 256, 256), fusion_level=1, activation='LeakyReLU', batch_norm=False)[source]

Initialize the class.

Parameters:

output_shape : int

the number of output channels of the reconstruction.

before_latent_dim : int

the size of the squared input to the convnet, i.e. after the dense layer that transforms the input from the latent space.

latent_dim : int

the size of the latent space it decodes from.

conv_flts : tuple of int, default (64, 128, 128, 256, 256)

the sizes of the convolutional filters, given in reverse order: the first filter in the list is the last one applied by the network.

fusion_level : int, default 1

the max pooling level at which the left and right hemisphere data are concatenated.

activation : str, default 'LeakyReLU'

the class name, in PyTorch's nn module, of the activation function applied after each convolution.

batch_norm : bool, default False

if True, applies batch normalization after each convolution.
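As a quick illustration of the `conv_flts` convention above, here is a minimal pure-Python sketch. The helper name is hypothetical and the logic is only the reversal rule stated in the docs; the real layer construction lives in surfify:

```python
# Hypothetical helper illustrating the conv_flts convention: filters are
# given in reverse order, so the first entry in the tuple is the last
# filter applied by the network.

def network_order(conv_flts=(64, 128, 128, 256, 256)):
    """Return the filter sizes in the order the decoder applies them."""
    return list(reversed(conv_flts))

print(network_order())  # [256, 256, 128, 128, 64]
```

With the default filters, the decoder thus starts with its widest layers (256 channels) and narrows toward the output (64 channels).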

forward(z)[source]

Run the decoder: map a latent sample to left and right hemisphere reconstructions.

Parameters:

z : Tensor (samples, <latent_dim>)

the stochastic latent state z.

Returns:

left_recon_x : Tensor (samples, <output_channels>, azimuth, elevation)

reconstructed left cortical texture.

right_recon_x : Tensor (samples, <output_channels>, azimuth, elevation)

reconstructed right cortical texture.
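The shape contract above can be sketched as plain-Python bookkeeping. The helper name, the number of output channels, and the azimuth/elevation grid size below are illustrative assumptions, not values taken from surfify:

```python
def expected_output_shapes(z_shape, output_channels=1, grid=(192, 192)):
    """Given a latent batch shape (samples, latent_dim), return the shapes
    of the two reconstructions forward(z) would produce.

    output_channels and grid are illustrative assumptions; the actual
    sizes depend on how the decoder was configured.
    """
    samples, _latent_dim = z_shape
    recon = (samples, output_channels, *grid)
    # forward(z) returns one tensor per hemisphere, both the same shape.
    return recon, recon  # (left_recon_x shape, right_recon_x shape)

left, right = expected_output_shapes((8, 64))
print(left)  # (8, 1, 192, 192)
```

Note that both hemispheres share a single shape: the decoder reconstructs the left and right cortical textures on the same azimuth/elevation grid.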


© 2025, nidl developers