
Deep learning for NeuroImaging in Python.

Note

This page is reference documentation. It describes only the class signature, not how to use it. Please refer to the gallery for the big picture.

class surfify.models.vae.SphericalHemiFusionEncoder(input_channels, input_order, latent_dim, conv_flts=(64, 128, 128, 256, 256), fusion_level=1, activation='LeakyReLU', batch_norm=False, conv_mode='DiNe', dine_size=1, repa_size=5, repa_zoom=5, dynamic_repa_zoom=False, standard_ico=False, cachedir=None)[source]

Initialize the class.

Parameters:

input_channels : int

the number of input channels.

input_order : int

the icosahedron order of the input data.

latent_dim : int

the dimension of the latent space it encodes to.

conv_flts : list of int, default (64, 128, 128, 256, 256)

the number of filters in each convolutional layer.

fusion_level : int, default 1

the max pooling level at which the left and right hemisphere data are concatenated.

activation : str, default ‘LeakyReLU’

the class name, in PyTorch’s nn module, of the activation function applied after each convolution.

batch_norm : bool, default False

if True, apply batch normalization after each convolution.
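Since the encoder operates on icosahedral meshes indexed by order, the relation between input_order and the number of input vertices can be sketched. The formula below is a standard property of icosphere subdivision (each subdivision splits every triangular face into four); it is an illustrative helper, not part of surfify's API:

```python
def ico_vertices(order):
    """Number of vertices of an icosahedron subdivided `order` times.

    Each subdivision splits every triangular face into 4, giving
    10 * 4**order + 2 vertices (a standard icosphere property).
    """
    return 10 * 4 ** order + 2

# Order 0 is the raw icosahedron with 12 vertices; higher orders are
# common cortical mesh resolutions.
print([ico_vertices(o) for o in range(7)])
# → [12, 42, 162, 642, 2562, 10242, 40962]
```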

forward(x)[source]

Encode the left and right hemisphere cortical textures into the latent space.

Parameters:

x : tuple of Tensor

the pair (left_x, right_x) of left and right input cortical textures, each of shape (batch_size, <input_channels>, n_vertices).

Returns:

x : Tensor (batch_size, <latent_dim>)

the latent representations.
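The fusion step named by fusion_level concatenates the per-vertex feature maps of both hemispheres along the channel axis. The pure-Python sketch below illustrates that operation on toy nested lists; it is a conceptual stand-in, not surfify's actual implementation:

```python
def fuse_hemispheres(left_feats, right_feats):
    """Concatenate left and right hemisphere features channel-wise.

    Each argument is a list of channels, where each channel is a list of
    per-vertex values. Fusion stacks the channels of both hemispheres:
    the channel count doubles, the vertex count is unchanged
    (illustrative sketch only, not surfify code).
    """
    assert len(left_feats[0]) == len(right_feats[0])  # same n_vertices
    return left_feats + right_feats

left = [[0.1, 0.2, 0.3]] * 2   # 2 channels, 3 vertices (toy numbers)
right = [[0.4, 0.5, 0.6]] * 2
fused = fuse_hemispheres(left, right)
print(len(fused), len(fused[0]))
# → 4 3
```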


© 2025, nidl developers