Deep learning for NeuroImaging in Python.

Note

This page is reference documentation. It explains only the class signature, not how to use it. Please refer to the gallery for the big picture.

class nidl.volume.transforms.preprocessing.intensity.rescale.RobustRescaling(out_min_max: tuple[float, float] = (0, 1), percentiles: tuple[float, float] = (1, 99), masking_fn: Callable | None = None, **kwargs)[source]

Bases: VolumeTransform

Rescale intensities of a 3D volume to a given range.

It is robust to outliers: the volume is clipped to a given inter-percentile range before rescaling. It applies the following percentile-based min-max transformation per channel:

x_i' = \frac{\min\left(\max\left(x_i, p_l\right), p_u\right) - p_l}{p_u - p_l} \, (o_{max} - o_{min}) + o_{min}

p_l = \text{percentile}(x, p_{min}), \quad p_u = \text{percentile}(x, p_{max})

where :math:`x_i` is the original voxel intensity, (p_{min}, p_{max}) defines the input percentile range used for clipping, and (o_{min}, o_{max}) defines the target output intensity range.
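The transformation above can be sketched in plain NumPy. This is a minimal illustration of the formula, not the library's implementation, and the function name `robust_rescale` is hypothetical:

```python
import numpy as np

def robust_rescale(x, out_min_max=(0.0, 1.0), percentiles=(1, 99)):
    # Cutoffs p_l, p_u: lower and upper percentiles of the input intensities
    p_l, p_u = np.percentile(x, percentiles)
    o_min, o_max = out_min_max
    # Clip to [p_l, p_u], then min-max rescale to [o_min, o_max]
    return (np.clip(x, p_l, p_u) - p_l) / (p_u - p_l) * (o_max - o_min) + o_min

volume = np.random.default_rng(0).normal(loc=100, scale=20, size=(16, 16, 16))
rescaled = robust_rescale(volume)
```

Because the clipping maps values at or below p_l exactly to p_l (and similarly for p_u), the rescaled output spans exactly [o_min, o_max] whenever both cutoffs are attained.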

It accepts an np.ndarray or torch.Tensor as input and returns an output of the same type and shape. The input shape must be (C, H, W, D) or (H, W, D) (spatial dimensions only).

Parameters:

out_min_max : (float, float), default=(0, 1)

Range of output intensities.

percentiles : (float, float), default=(1, 99)

Percentiles of the input volume used as cutoff values when clipping the data. For example, SynthSeg [R9] uses (1, 99) while nnU-Net [R10] uses (0.5, 99.5).

masking_fn : Callable or None, default=None

If a Callable, a masking function returning a boolean mask applied to the input volume, for each channel separately. Only voxels inside the mask are used to compute the cutoff values when clipping the data. If None, the whole volume is used to compute the cutoffs.

kwargs : dict

Keyword arguments given to nidl.transforms.Transform.
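For illustration, the effect of a masking function on the cutoff computation can be sketched in plain NumPy. The helper `masked_robust_rescale` below is hypothetical, not part of nidl: the cutoffs are estimated from voxels inside the mask only, while the rescaling itself is applied to the whole volume:

```python
import numpy as np

def masked_robust_rescale(x, masking_fn, out_min_max=(0.0, 1.0), percentiles=(1, 99)):
    # Cutoffs are computed from the masked voxels only
    mask = masking_fn(x)
    p_l, p_u = np.percentile(x[mask], percentiles)
    o_min, o_max = out_min_max
    # The whole volume is still clipped and rescaled
    return (np.clip(x, p_l, p_u) - p_l) / (p_u - p_l) * (o_max - o_min) + o_min

# e.g. ignore background zeros when estimating the cutoffs
vol = np.zeros((8, 8, 8))
vol[2:6, 2:6, 2:6] = np.random.default_rng(1).normal(loc=100, scale=20, size=(4, 4, 4))
out = masked_robust_rescale(vol, masking_fn=lambda v: v > 0)
```

Without the mask, the many background zeros would drag the lower cutoff down to 0 and compress the foreground intensities into the upper part of the output range.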

Notes

If the input volume has constant values, the normalized output is set to its minimum value by convention.
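This convention can be reproduced in a sketch, assuming the degenerate case is detected when the two cutoffs coincide (the guard and function name are hypothetical, not nidl's code):

```python
import numpy as np

def robust_rescale_safe(x, out_min_max=(0.0, 1.0), percentiles=(1, 99)):
    p_l, p_u = np.percentile(x, percentiles)
    o_min, o_max = out_min_max
    if p_u == p_l:
        # Constant volume: map every voxel to the output minimum by convention
        return np.full_like(x, o_min, dtype=float)
    return (np.clip(x, p_l, p_u) - p_l) / (p_u - p_l) * (o_max - o_min) + o_min

constant = np.full((4, 4, 4), 7.0)
out = robust_rescale_safe(constant)
```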

References

[R9]

Billot, B. et al., (2023). “SynthSeg: Segmentation of brain MRI scans of any contrast and resolution without retraining.” Medical Image Analysis, 86, 102789.

[R10]

Isensee, F. et al., (2021). “nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation.” Nature Methods, 18, 203-211.

Examples

>>> import numpy as np
>>> from nidl.volume.transforms import RobustRescaling
>>> # Create a random 3d volume with shape (64, 64, 64)
>>> volume = np.random.normal(loc=100, scale=20, size=(64, 64, 64))
>>> # Define the transform
>>> transform = RobustRescaling(out_min_max=(0, 1), percentiles=(1, 99))
>>> # Apply the transform
>>> rescaled = transform(volume)
>>> rescaled.shape
(64, 64, 64)
>>> # Values are now in the range [0, 1]
>>> rescaled.min(), rescaled.max()
(0.0, 1.0)
apply_transform(data: ndarray | Tensor) → ndarray | Tensor[source]

Apply the intensity rescaling.

Parameters:

data : np.ndarray or torch.Tensor

The input data, with shape (C, H, W, D) or (H, W, D).

Returns:

np.ndarray or torch.Tensor

The rescaled data, with the same type and shape as the input.

© 2025, nidl developers