Note
This page is reference documentation. It only explains the class signature, not how to use it. Please refer to the user guide for the big picture.
nidl.volume.transforms.preprocessing.CropOrPad
- class nidl.volume.transforms.preprocessing.CropOrPad(target_shape, padding_mode='constant', constant_values=0.0, **kwargs)
Bases: VolumeTransform

Crop and/or pad a 3D volume to match the target shape. It handles numpy.ndarray or torch.Tensor as input and returns a consistent output (same type).

- Parameters:
- target_shape: int or tuple[int, int, int]
Expected output shape. If an int is given, the same size is used across all dimensions.
- padding_mode: str in {‘edge’, ‘maximum’, ‘constant’, ‘mean’, ‘median’, ‘minimum’, ‘reflect’, ‘symmetric’}
Possible modes for padding. See the NumPy documentation for more information.
- constant_values: float or tuple[float, float, float]
The value(s) used to fill the padded region for each axis when the padding mode is ‘constant’.
- kwargs: dict
Keyword arguments given to nidl.transforms.Transform.
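The core idea of the transform can be sketched in plain NumPy, independently of nidl (the function name and the even split of the size difference between both sides are illustrative assumptions, not nidl's actual implementation):

```python
import numpy as np

def crop_or_pad(data, target_shape, padding_mode="constant", constant_values=0.0):
    """Illustrative crop-or-pad for a 3D volume (not nidl's code).

    For each axis, pad when the volume is smaller than the target and crop
    when it is larger, splitting the difference evenly between both sides.
    """
    if isinstance(target_shape, int):
        target_shape = (target_shape,) * 3
    # Per-axis padding amounts (only where input < target).
    pad_width = []
    for size, target in zip(data.shape, target_shape):
        diff = max(target - size, 0)
        pad_width.append((diff // 2, diff - diff // 2))
    # np.pad only accepts constant_values in 'constant' mode.
    kwargs = {"constant_values": constant_values} if padding_mode == "constant" else {}
    data = np.pad(data, pad_width, mode=padding_mode, **kwargs)
    # Per-axis centered crop (only where input > target).
    slices = []
    for size, target in zip(data.shape, target_shape):
        start = max(size - target, 0) // 2
        slices.append(slice(start, start + target))
    return data[tuple(slices)]

volume = np.random.rand(60, 70, 80).astype(np.float32)
out = crop_or_pad(volume, 64)
print(out.shape)  # (64, 64, 64): axis 0 was padded, axes 1 and 2 were cropped
```

Note that a single axis is never both padded and cropped: the difference to the target determines one operation per axis.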
- apply_transform(data)
Crop and/or pad the input data to match target shape.
- Parameters:
- data: np.ndarray or torch.Tensor
The input data, given either with or without a channel dimension. The transformation is applied across all channels.
- Returns:
- data: np.ndarray or torch.Tensor
Cropped and/or padded data with the same type as the input and shape target_shape.
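The channel-wise behavior can be illustrated with a self-contained NumPy sketch (the helper name and the zero-constant padding are assumptions for brevity): each channel of a hypothetical multi-channel input is cropped/padded to the same spatial target, and the dtype is preserved.

```python
import numpy as np

def crop_or_pad_3d(volume, target_shape):
    # Minimal crop-or-pad for one 3D volume (zero-constant padding only;
    # see the class parameters for the other padding modes).
    pad, slices = [], []
    for size, target in zip(volume.shape, target_shape):
        diff = max(target - size, 0)
        pad.append((diff // 2, diff - diff // 2))
    volume = np.pad(volume, pad)
    for size, target in zip(volume.shape, target_shape):
        start = max(size - target, 0) // 2
        slices.append(slice(start, start + target))
    return volume[tuple(slices)]

# Hypothetical 2-channel input: the same spatial crop/pad is applied to
# every channel, and the output keeps the input dtype.
multi = np.random.rand(2, 50, 70, 64).astype(np.float32)
out = np.stack([crop_or_pad_3d(c, (64, 64, 64)) for c in multi])
print(out.shape)   # (2, 64, 64, 64)
print(out.dtype)   # float32
```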