Deep learning for NeuroImaging in Python.

Note

This page is reference documentation: it describes the class signature only, not how to use it. Please refer to the gallery for the big picture.

class surfify.models.vgg.SphericalGVGG(input_channels, cfg, n_classes, input_dim=194, hidden_dim=4096, batch_norm=False, fusion_level=1, init_weights=True)[source]

Spherical Gridded VGG architecture.

See also

SphericalVGG

Notes

Debugging messages can be displayed by changing the log level using setup_logging(level='debug').

Examples

>>> import torch
>>> from surfify.models import SphericalGVGG11
>>> x = torch.zeros((1, 2, 192, 192))
>>> model = SphericalGVGG11(
...     input_channels=2, n_classes=10, input_dim=194, hidden_dim=512,
...     fusion_level=2, init_weights=True)
>>> print(model)
>>> out = model(x, x)
>>> print(out.shape)

Init class.

Parameters:

input_channels : int

the number of input channels.

cfg : list

the layer configuration, where 'M' stands for a max pooling layer.

n_classes : int

the number of classes in the classification problem.

input_dim : int, default 194

the size of the 2-D grid onto which the 3-D surface is projected.

hidden_dim : int, default 4096

the number of hidden units in the 2-layer classification MLP.

batch_norm : bool, default False

whether or not to apply batch normalization after each convolution layer.

fusion_level : int, default 1

the max pooling level at which the left and right hemisphere data are concatenated.

init_weights : bool, default True

whether to initialize the network weights.
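To illustrate how a VGG-style `cfg` list is typically read, here is a minimal sketch. This is not surfify's actual layer-building code, and the `describe_cfg` helper and the VGG11-style `cfg11` list are assumptions for illustration: integers are interpreted as the output channel count of a convolution, and 'M' marks a max pooling stage.

```python
def describe_cfg(cfg, input_channels):
    """Sketch (hypothetical helper): map a VGG-style cfg list to layer
    descriptions. Integers become convolutions with that many output
    channels; 'M' inserts a max pooling layer."""
    layers = []
    in_ch = input_channels
    for v in cfg:
        if v == "M":
            layers.append("MaxPool2d(kernel_size=2, stride=2)")
        else:
            layers.append(f"Conv2d({in_ch}, {v}, kernel_size=3, padding=1)")
            in_ch = v  # next convolution takes this many input channels
    return layers

# VGG11-style configuration (assumed; see surfify.models.vgg for the real one).
cfg11 = [64, "M", 128, "M", 256, 256, "M", 512, 512, "M", 512, 512, "M"]
print(describe_cfg(cfg11, input_channels=2)[:3])
```

Each integer entry therefore also fixes the input channel count of the following convolution, which is why the configuration can be expressed as a flat list.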

forward(left_x, right_x)[source]

Forward method.

Parameters:

left_x : Tensor (samples, <input_channels>, azimuth, elevation)

input left cortical texture.

right_x : Tensor (samples, <input_channels>, azimuth, elevation)

input right cortical texture.

Returns:

out : torch.Tensor

the prediction.
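The two hemisphere inputs are processed separately until the fusion level, where their feature maps are merged. The following NumPy sketch only illustrates the shape arithmetic of a channel-wise concatenation; the feature shapes and the concatenation axis are assumptions, not surfify's internal code.

```python
import numpy as np

# Sketch (assumption): after ``fusion_level`` pooling stages, the left and
# right hemisphere feature maps are concatenated along the channel axis
# before the shared trunk of the network continues.
left_feats = np.zeros((1, 64, 48, 48))   # (samples, channels, azimuth, elevation)
right_feats = np.zeros((1, 64, 48, 48))
fused = np.concatenate([left_feats, right_feats], axis=1)
print(fused.shape)  # (1, 128, 48, 48)
```

Whatever the exact mechanism, the sample and spatial dimensions of both hemispheres must agree at the fusion point, so `left_x` and `right_x` are expected to share the same shape.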

© 2025, nidl developers