
Note

This page is reference documentation: it only explains the class signature, not how to use it. Please refer to the gallery for the big picture.

class nidl.volume.transforms.preprocessing.intensity.z_normalization.ZNormalization(masking_fn: Callable | None = None, eps: float = 1e-08, **kwargs)[source]

Bases: VolumeTransform

Normalize a 3d volume by removing the mean and scaling to unit variance.

Applies the following normalization to each channel separately:

x_i' = \frac{x_i - \mu(x)}{\sigma(x)+\epsilon}

where x_i is the original voxel intensity, \mu(x) is the data mean, \sigma(x) is the data std, and \epsilon is a small constant added for numerical stability.
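
As a plain-NumPy sketch of the formula above for a single (H, W, D) channel (illustrative only, not the library's implementation):

import numpy as np

def z_normalize(x: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # (x - mean) / (std + eps), matching the formula above
    return (x - x.mean()) / (x.std() + eps)

rng = np.random.default_rng(0)
volume = rng.normal(loc=100.0, scale=15.0, size=(32, 32, 32))
normalized = z_normalize(volume)
print(normalized.mean(), normalized.std())  # approximately 0 and 1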

The transform accepts a np.ndarray or a torch.Tensor as input and returns an output of the same type and shape. The input shape must be (C, H, W, D) or (H, W, D) (spatial dimensions only).

Parameters:

masking_fn : Callable or None, default=None

If a Callable, a masking function applied to the input data for each channel separately. It must return a boolean mask that selects the voxels used to compute the data statistics (mean and std). If None, the statistics are computed over the whole volume. See the usage sketch after the parameter list.

eps : float, default=1e-8

Small float added to the standard deviation to avoid numerical errors.

kwargs : dict

Keyword arguments given to nidl.transforms.Transform.
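
A minimal construction sketch, assuming the class is imported from the module path shown in the signature above; the non-background threshold used as masking_fn is purely illustrative:

from nidl.volume.transforms.preprocessing.intensity.z_normalization import ZNormalization

# Default: mean and std are computed over the whole volume.
znorm = ZNormalization()

# With a masking function: mean and std are computed, per channel, only over
# voxels where the mask is True (here a simple non-background threshold).
znorm_masked = ZNormalization(masking_fn=lambda x: x > 0, eps=1e-8)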

Notes

If the input volume has constant values, its standard deviation is zero, so the output is dominated by eps and floating-point rounding: it will contain almost constant values that should not be considered meaningful.

apply_transform(data: ndarray | Tensor) ndarray | Tensor[source]

Apply the z-normalization.

Parameters:

data : np.ndarray or torch.Tensor

The input data with shape (C, H, W, D) or (H, W, D).

Returns:

np.ndarray or torch.Tensor

The z-normalized data (per channel), with the same type and shape as the input.
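
A minimal call sketch, assuming a single-channel torch.Tensor volume; per the description above, the output keeps the input type and shape:

import torch
from nidl.volume.transforms.preprocessing.intensity.z_normalization import ZNormalization

volume = 100.0 + 15.0 * torch.randn(1, 64, 64, 64)  # (C, H, W, D)
znorm = ZNormalization()
normalized = znorm.apply_transform(volume)
print(type(normalized), normalized.shape)  # torch.Tensor, same shape as input
print(normalized.mean().item(), normalized.std().item())  # approximately 0 and 1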
