Note
This page is reference documentation: it only explains the class signature, not how to use it. Please refer to the gallery for the big picture.
- class nidl.callbacks.model_probing.ModelProbing(train_dataloader: DataLoader, test_dataloader: DataLoader, probe: BaseEstimator, every_n_train_epochs: int | None = 1, every_n_val_epochs: int | None = None, on_test_epoch_start: bool = False, on_test_epoch_end: bool = False, prog_bar: bool = True)
Bases: ABC, Callback
Callback to probe the representation of an embedding estimator on a dataset.
It has the following logic:
1. Embeds the input data (training and test) through the estimator using the transform_step method (handles the distributed multi-GPU forward pass).
2. Trains the probe on the training embeddings (handles multi-CPU training).
3. Tests the probe on the test embeddings and logs the metrics.
This callback is abstract and must be subclassed to implement the log_metrics method, as in the sketch below.
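A minimal sketch of such a subclass (the AccuracyProbing name is hypothetical, and it assumes pl_module exposes a Lightning-style log method):

    from sklearn.metrics import accuracy_score

    from nidl.callbacks.model_probing import ModelProbing


    class AccuracyProbing(ModelProbing):
        """Hypothetical subclass logging the test accuracy of the probe."""

        def log_metrics(self, pl_module, y_pred, y_true):
            # Assumption: pl_module provides Lightning-style self.log.
            pl_module.log("probe/test_accuracy", accuracy_score(y_true, y_pred))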
- Parameters:
train_dataloader : torch.utils.data.DataLoader
Training dataloader yielding batches of the form (X, y), used to embed the data and train the probe.
test_dataloader : torch.utils.data.DataLoader
Test dataloader yielding batches of the form (X, y), used to embed the data and evaluate the probe.
probe : sklearn.base.BaseEstimator
The probe model to be trained on the embeddings. It must implement fit and predict methods operating on numpy arrays.
every_n_train_epochs : int or None, default=1
Number of training epochs after which to run the probing. Disabled if None.
every_n_val_epochs : int or None, default=None
Number of validation epochs after which to run the probing. Disabled if None.
on_test_epoch_start : bool, default=False
Whether to run the probing at the start of the test epoch.
on_test_epoch_end : bool, default=False
Whether to run the probing at the end of the test epoch.
prog_bar : bool, default=True
Whether to display the metrics in the progress bar.
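For illustration, a hedged sketch of wiring these parameters together (the loader names are placeholders; AccuracyProbing is the hypothetical subclass sketched above):

    from sklearn.linear_model import LogisticRegression

    probing_cb = AccuracyProbing(
        train_dataloader=train_loader,  # yields (X, y) batches
        test_dataloader=test_loader,    # yields (X, y) batches
        probe=LogisticRegression(max_iter=1000),  # any fit/predict estimator
        every_n_train_epochs=5,  # probe every 5 training epochs
        prog_bar=True,  # show metrics in the progress bar
    )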
- static adapt_dataloader_for_ddp(dataloader, trainer)
Wrap user dataloader with DistributedSampler if in DDP mode.
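The gist of the wrapping, as a sketch under stated assumptions (not the actual implementation; it assumes the trainer exposes world_size, as Lightning trainers do):

    from torch.utils.data import DataLoader
    from torch.utils.data.distributed import DistributedSampler


    def wrap_for_ddp(dataloader, trainer):
        # Rewrap only when more than one process participates in training.
        if trainer.world_size <= 1:
            return dataloader
        sampler = DistributedSampler(dataloader.dataset, shuffle=False)
        return DataLoader(
            dataloader.dataset,
            batch_size=dataloader.batch_size,
            sampler=sampler,
            num_workers=dataloader.num_workers,
        )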
- extract_features(pl_module, dataloader)
Extract features from a dataloader with the BaseEstimator.
By default, it applies the transform_step logic to each batch to obtain the embeddings along with the labels. The input dataloader should yield batches of the form (X, y), where X is the input data and y is the label.
- Parameters:
pl_module : BaseEstimator
The BaseEstimator module that implements transform_step.
dataloader : torch.utils.data.DataLoader
The dataloader to extract features from. It should yield batches of the form (X, y) where X is the input data and y is the label.
- Returns:
tuple of (z, y)
Tuple of numpy arrays (z, y) where z are the extracted features and y are the corresponding labels.
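For illustration, a hedged sketch of how the returned arrays are typically consumed, mirroring what probing does internally (it assumes the constructor stores the probe as a probe attribute):

    # Embed both splits, then fit and evaluate the scikit-learn probe on
    # plain numpy arrays.
    z_train, y_train = probing_cb.extract_features(pl_module, train_loader)
    z_test, y_test = probing_cb.extract_features(pl_module, test_loader)
    probing_cb.probe.fit(z_train, y_train)
    y_pred = probing_cb.probe.predict(z_test)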
- abstract log_metrics(pl_module, y_pred, y_true)
Log the metrics given the predictions and the true labels.
- on_train_epoch_end(trainer, pl_module)
Called when the train epoch ends.
To access all batch outputs at the end of the epoch, you can cache step outputs as an attribute of the pytorch_lightning.core.LightningModule and access them in this hook:

    class MyLightningModule(L.LightningModule):
        def __init__(self):
            super().__init__()
            self.training_step_outputs = []

        def training_step(self):
            loss = ...
            self.training_step_outputs.append(loss)
            return loss


    class MyCallback(L.Callback):
        def on_train_epoch_end(self, trainer, pl_module):
            # do something with all training_step outputs, for example:
            epoch_mean = torch.stack(pl_module.training_step_outputs).mean()
            pl_module.log("training_epoch_mean", epoch_mean)
            # free up the memory
            pl_module.training_step_outputs.clear()
- probing(trainer, pl_module: BaseEstimator)
Perform the probing on the given estimator.
This method performs the following steps:
1) Extracts the features from the training and test dataloaders.
2) Fits the probe on the training features and labels.
3) Makes predictions on the test features.
4) Computes and logs the metrics.
- Parameters:
pl_module : BaseEstimator
The BaseEstimator module that implements the transform_step.
- Raises:
ValueError : If the pl_module does not inherit from BaseEstimator or TransformerMixin.
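In normal use, probing is not called by hand: the callback triggers it on the configured hooks. A hedged sketch of the overall wiring with a Lightning Trainer (the model and loader names are placeholders):

    import pytorch_lightning as pl

    trainer = pl.Trainer(max_epochs=50, callbacks=[probing_cb])
    trainer.fit(model, train_dataloaders=train_loader)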