Note

This page is reference documentation. It describes only the class signature, not how to use it. Please refer to the user guide for the big picture.

nidl.losses.BarlowTwinsLoss

class nidl.losses.BarlowTwinsLoss(lambd=0.005)[source]

Bases: Module

Implementation of the Barlow Twins loss [1].

Compute the Barlow Twins loss, which reduces redundancy between the components of the outputs.

Given a mini-batch of size n and, for each sample b, two embeddings z^{(1)}_b and z^{(2)}_b of dimension D (one per view of the same sample):

\mathcal{L}_{BT} =
\underbrace{\sum_{i} \left( 1 - C_{ii} \right)^{2}
}_{\text{invariance term}}
+ \lambda
\underbrace{\sum_{i} \sum_{j \neq i} C_{ij}^{2}
}_{\text{redundancy reduction term}}

where \lambda is a positive constant trading off the importance of the first and second terms of the loss, and where C is the cross-correlation matrix computed between the outputs of the two identical networks along the batch dimension:

C_{ij} \triangleq
\frac{\sum_{b} z^{(1)}_{b,i} \, z^{(2)}_{b,j}}
{\sqrt{\sum_{b} \left(z^{(1)}_{b,i}\right)^{2}}
\; \sqrt{\sum_{b} \left(z^{(2)}_{b,j}\right)^{2}} }

where b indexes batch samples and i, j index the dimensions of the networks' output vectors.
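The two formulas above can be sketched directly in a few lines of PyTorch. This is a minimal illustration of the math, not the library's own implementation, which may differ in details such as how the embeddings are normalized:

```python
import torch


def barlow_twins_loss(z1, z2, lambd=5e-3):
    """Sketch of the Barlow Twins loss as written above (illustrative only)."""
    # Cross-correlation matrix C: the numerator sums z1[b, i] * z2[b, j]
    # over the batch dimension b.
    numerator = z1.T @ z2                      # shape (D, D)
    norms1 = z1.pow(2).sum(dim=0).sqrt()       # per-dimension norms of z1
    norms2 = z2.pow(2).sum(dim=0).sqrt()       # per-dimension norms of z2
    c = numerator / (norms1[:, None] * norms2[None, :])

    # Invariance term: pull the diagonal of C toward 1.
    on_diag = (1.0 - c.diagonal()).pow(2).sum()
    # Redundancy reduction term: push off-diagonal entries toward 0.
    off_diag = (c - c.diagonal().diag()).pow(2).sum()
    return on_diag + lambd * off_diag
```

With perfectly decorrelated, identical views (e.g. `z1 == z2 == torch.eye(D)`), C is the identity matrix and the loss is zero.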

Parameters:
lambd: float, default=5e-3

Positive constant trading off the importance of the redundancy reduction term against the invariance term.

References

[1]

Zbontar, J., et al., “Barlow Twins: Self-Supervised Learning via Redundancy Reduction.” PMLR, 2021. https://proceedings.mlr.press/v139/zbontar21a

__init__(lambd=0.005)[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(z1, z2)[source]
Parameters:
z1: torch.Tensor of shape (batch_size, n_features)

First embedded view.

z2: torch.Tensor of shape (batch_size, n_features)

Second embedded view.

Returns:
loss: torch.Tensor

The BarlowTwins loss computed between z1 and z2.