Note
This page is reference documentation: it only describes the class signature, not how to use it. Please refer to the gallery for the big picture.
- class nidl.volume.transforms.preprocessing.spatial.crop_or_pad.CropOrPad(target_shape: int | tuple[int, int, int], padding_mode: str = 'constant', constant_values: float | tuple[float, float, float] = 0.0, **kwargs)[source]
Bases: VolumeTransform
Crop and/or pad a 3D volume to match the target shape.
It handles np.ndarray or torch.Tensor as input and returns a consistent output (same type).
- Parameters:
target_shape : int or tuple[int, int, int]
Expected output shape. If an int, the same size is applied to all dimensions.
padding_mode : str in {‘edge’, ‘maximum’, ‘constant’, ‘mean’, ‘median’, ‘minimum’, ‘reflect’, ‘symmetric’}
Possible padding modes. See the NumPy documentation for more information.
constant_values : float or tuple[float, float, float]
The value(s) used to fill padded voxels along each axis when padding_mode is 'constant'.
kwargs : dict
Keyword arguments passed to nidl.transforms.Transform.
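A minimal usage sketch (not from the reference itself): it assumes the import path shown in the class signature above and a channel-first input layout, and uses only the constructor parameters and the apply_transform method documented on this page.

```python
import numpy as np

from nidl.volume.transforms.preprocessing.spatial.crop_or_pad import CropOrPad

# Crop and/or pad every volume to 128 voxels per spatial dimension,
# filling any padded region with zeros.
transform = CropOrPad(target_shape=128, padding_mode="constant", constant_values=0.0)

# Toy channel-first volume (the channel-first layout is an assumption here);
# a torch.Tensor input would be handled the same way and return a torch.Tensor.
volume = np.random.rand(1, 91, 109, 91).astype(np.float32)
output = transform.apply_transform(volume)  # np.ndarray, spatial shape (128, 128, 128)
```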
- apply_transform(data: ndarray | Tensor) → ndarray | Tensor [source]
Crop and/or pad the input data to match target shape.
- Parameters:
data : np.ndarray or torch.Tensor
The input data. The transformation is applied across all channels.
- Returns:
data : np.ndarray or torch.Tensor
Cropped and/or padded data with the same type as the input and shape target_shape.
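For illustration, a short sketch of mixed cropping and padding, again assuming a channel-first layout; the shapes and padding mode below are illustrative choices, not prescribed by the API:

```python
import torch

from nidl.volume.transforms.preprocessing.spatial.crop_or_pad import CropOrPad

transform = CropOrPad(target_shape=(64, 64, 64), padding_mode="edge")

# Axes larger than the target are cropped and smaller axes are padded;
# the input type is preserved (torch.Tensor in, torch.Tensor out).
tensor_volume = torch.rand(2, 80, 48, 64)  # 2 channels, mixed larger/smaller axes
result = transform.apply_transform(tensor_volume)
print(type(result), result.shape)  # expected spatial dims: (64, 64, 64)
```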