
Deep learning for NeuroImaging in Python.

Note

This page is reference documentation. It only explains the class signature, not how to use it. Please refer to the gallery for the big picture.

class nidl.estimators.linear.LogisticRegression(model: Module, num_classes: int, lr: float, weight_decay: float, random_state: int | None = None, **kwargs)[source]

Bases: ClassifierMixin, BaseEstimator

LogisticRegression implementation.

This class can also be used in self-supervised settings. After the encoder has been trained via self-supervised learning, we can deploy it on downstream tasks and see how well it performs with little data. A common setup, which also verifies whether the model has learned generalized representations, is to perform logistic regression on the features. In other words, we learn an MLP that maps the representations to a class prediction. If very little data is available, it might be beneficial to dynamically encode the images during training so that we can also apply data augmentations. To freeze the input encoder, consider using LogisticRegression.freeze_encoder(); it assumes the MLP layer is named fc (see the Examples section below).

Parameters:

model : nn.Module

the encoder f(.) architecture.

num_classes : int

the number of classes to predict.

lr : float

the learning rate.

weight_decay : float

the AdamW optimizer weight decay parameter.

max_epochs : int, default=None

optionally, use a MultiStepLR scheduler.

random_state : int, default=None

setting a seed for reproducibility.

kwargs : dict

Trainer parameters.

Notes

A batch of data must contain two elements: a tensor with images and a tensor with the variable to predict.
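
As a rough illustration of that contract, each batch should unpack into an image tensor and a target tensor. The toy dataset and shapes below are made up for the sketch:

>>> import torch
>>> from torch.utils.data import DataLoader, TensorDataset
>>>
>>> images = torch.randn(16, 1, 32, 32, 32)  # e.g. 16 single-channel 3D volumes
>>> labels = torch.randint(0, 2, (16,))      # the variable to predict
>>> loader = DataLoader(TensorDataset(images, labels), batch_size=4)
>>> batch = next(iter(loader))
>>> len(batch)  # (images, labels)
2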

Examples

>>> from collections import OrderedDict
>>> import torch.nn as nn
>>>
>>> model = nn.Sequential(OrderedDict([
...     ("encoder", encoder),
...     ("fc", nn.Linear(latent_size, num_classes)),
... ]))
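
Continuing that example, here is a hedged sketch of how the estimator itself might be set up: the constructor arguments and freeze_encoder() come from this reference, while the concrete values and the max_epochs keyword (assumed to be forwarded to the Trainer through **kwargs) are illustrative.

>>> from nidl.estimators.linear import LogisticRegression
>>>
>>> estimator = LogisticRegression(
...     model=model,
...     num_classes=num_classes,
...     lr=1e-3,
...     weight_decay=1e-5,
...     max_epochs=10,  # assumed Trainer parameter passed through **kwargs
... )
>>> estimator.freeze_encoder()  # train only the 'fc' head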

Attributes

model

a Module containing the prediction model.

validation_step_outputs

a dictionary with the validation predictions and associated labels in the ‘pred’ and ‘label’ keys, respectively.

configure_optimizers()[source]

Declare an AdamW optimizer and, optionally (if max_epochs is defined), a MultiStepLR learning-rate scheduler.
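
A minimal sketch of what such a configuration typically looks like in Lightning; the attribute names (self.model, self.lr, self.weight_decay, self.max_epochs) and the milestone choice are assumptions for illustration, not the actual nidl source:

>>> import torch
>>>
>>> def configure_optimizers(self):
...     optimizer = torch.optim.AdamW(
...         self.model.parameters(), lr=self.lr, weight_decay=self.weight_decay)
...     if self.max_epochs is None:
...         return optimizer
...     # optional MultiStepLR schedule, e.g. decaying at 60% and 80% of training
...     scheduler = torch.optim.lr_scheduler.MultiStepLR(
...         optimizer,
...         milestones=[int(0.6 * self.max_epochs), int(0.8 * self.max_epochs)])
...     return [optimizer], [scheduler]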

cross_entropy_loss(batch: tuple[Tensor, Sequence[Tensor]], mode: str)[source]

Compute and log the cross-entropy loss using cross_entropy().
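
The behaviour can be pictured roughly as below; cross_entropy() is the standard torch.nn.functional call, while the logging keys and the exact return value are assumptions for illustration:

>>> import torch.nn.functional as F
>>>
>>> def cross_entropy_loss(self, batch, mode):
...     images, labels = batch
...     logits = self.model(images)
...     loss = F.cross_entropy(logits, labels)
...     acc = (logits.argmax(dim=-1) == labels).float().mean()
...     self.log(f"{mode}_loss", loss)  # hypothetical logging keys
...     self.log(f"{mode}_acc", acc)
...     return loss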

freeze_encoder()[source]

Freeze the input encoder. Useful in self-supervised settings.
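
Since the classification head is assumed to be named fc, the freezing logic can be sketched as below (an assumption of how it may be implemented, not the actual nidl source):

>>> def freeze_encoder(self):
...     for name, param in self.model.named_parameters():
...         if not name.startswith("fc"):
...             param.requires_grad = False  # freeze everything but the 'fc' head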

on_validation_epoch_end()[source]

Clean the validation cache at the end of each validation epoch.
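
Given the validation_step_outputs attribute documented above, the clean-up can be pictured as follows (a sketch, assuming the cache is a dict of lists):

>>> def on_validation_epoch_end(self):
...     # reset the cached predictions and labels before the next epoch
...     self.validation_step_outputs = {"pred": [], "label": []}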

predict_step(batch: Tensor, batch_idx: int, dataloader_idx: int | None = 0)[source]

Step function called during predict(). By default, it calls forward(). Override to add any processing logic.

predict_step() is used to scale inference on multiple devices.

To prevent an OOM error, it is possible to use the BasePredictionWriter callback to write the predictions to disk or a database after each batch or at the end of the epoch.

The BasePredictionWriter should be used when running with a spawn-based accelerator, e.g. the strategy="ddp_spawn" training strategy or training on 8 TPU cores with accelerator="tpu", devices=8, as predictions won’t be returned otherwise.
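
A hedged sketch of such a callback, following the usual Lightning pattern; the class name, output directory and file naming are arbitrary, and depending on your Lightning version the import path may be lightning.pytorch.callbacks instead of pytorch_lightning.callbacks:

>>> import os
>>> import torch
>>> from pytorch_lightning.callbacks import BasePredictionWriter
>>>
>>> class PredictionWriter(BasePredictionWriter):
...     def __init__(self, output_dir, write_interval="epoch"):
...         super().__init__(write_interval)
...         self.output_dir = output_dir
...
...     def write_on_epoch_end(self, trainer, pl_module, predictions, batch_indices):
...         # persist all predictions once predict() has gone through every batch
...         torch.save(predictions, os.path.join(self.output_dir, "predictions.pt"))

Such a callback would then be passed to the Trainer, here presumably through the estimator's **kwargs, e.g. callbacks=[PredictionWriter("out/")].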

Parameters:

batch : iterable

the output of your data iterable, normally a DataLoader.

batch_idx : int

the index of this batch.

dataloader_idx : int, default=0

the index of the dataloader that produced this batch (only if multiple dataloaders are used).

Returns:

out : Any

the predicted output.

training_step(batch: tuple[Tensor, Tensor], batch_idx: int, dataloader_idx: int | None = 0)[source]

Here you compute and return the training loss and some additional metrics, e.g. for the progress bar or logger.

Parameters:

batch : iterable

the output of your data iterable, normally a DataLoader.

batch_idx : int

the index of this batch.

dataloader_idx : int, default=0

the index of the dataloader that produced this batch (only if multiple dataloaders are used).

Returns:

loss : STEP_OUTPUT

the computed loss:

  • Tensor - the loss tensor.

  • dict - a dictionary which can include any keys, but must include the key 'loss' in the case of automatic optimization (see the sketch after this list).

  • None - in automatic optimization, this will skip to the next batch (but is not supported for multi-GPU, TPU, or DeepSpeed). For manual optimization, this has no special meaning, as returning the loss is not required.
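
For instance, under automatic optimization a dict return could look like the sketch below; the shared_step() helper and the 'acc' key are purely illustrative:

>>> def training_step(self, batch, batch_idx):
...     loss, acc = self.shared_step(batch)  # hypothetical helper
...     return {"loss": loss, "acc": acc}    # the 'loss' key is mandatory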

To use multiple optimizers, you can switch to ‘manual optimization’ and control their stepping yourself, as shown in the example below.

Notes

When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.

Examples

>>> def __init__(self):
...     super().__init__()
...     self.automatic_optimization = False
>>>
>>> # Multiple optimizers (e.g.: GANs)
>>> def training_step(self, batch, batch_idx):
...     opt1, opt2 = self.optimizers()
...
...     # do training_step with encoder
...     ...
...     opt1.step()
...     # do training_step with decoder
...     ...
...     opt2.step()

validation_step(batch: tuple[Tensor, Tensor], batch_idx: int, dataloader_idx: int | None = 0)[source]

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest, like accuracy.

Parameters:

batch : iterable

the output of your data iterable, normally a DataLoader.

batch_idx : int

the index of this batch.

dataloader_idx : int, default=0

the index of the dataloader that produced this batch (only if multiple dataloaders are used).

Returns:

loss : STEP_OUTPUT

the computed loss:

  • Tensor - the loss tensor.

  • dict - a dictionary which can include any keys, but must include the key 'loss'.

  • None - skip to the next batch.

Notes

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.
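
A rough sketch of a validation step consistent with the description above and with the validation_step_outputs cache; the metric, logging key and return value are assumptions for illustration:

>>> import torch.nn.functional as F
>>>
>>> def validation_step(self, batch, batch_idx, dataloader_idx=0):
...     images, labels = batch
...     logits = self.model(images)
...     loss = F.cross_entropy(logits, labels)
...     self.log("val_loss", loss)
...     # cache predictions and labels; cleared in on_validation_epoch_end()
...     self.validation_step_outputs["pred"].append(logits.argmax(dim=-1))
...     self.validation_step_outputs["label"].append(labels)
...     return loss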

Examples

Self-Supervised Contrastive Learning with SimCLR
