Note

This page is reference documentation: it only explains the function signature, not how to use it. Please refer to the user guide for the big picture.

nidl.metrics.contrastive_accuracy_score

nidl.metrics.contrastive_accuracy_score(z1, z2, normalize=True, topk=1, eps=1e-12)

Compute the top-k contrastive accuracy between two embeddings.

This metric measures how often the true positive pair is among the top-k most similar candidates in the opposite view, in both directions:

  • For each i, treat z1[i] as a query and all rows of z2 as a retrieval database. Check whether the matching element z2[i] is within the top-k most similar vectors to z1[i].

  • Symmetrically, treat z2[i] as a query and all rows of z1 as the database, and check whether z1[i] is within the top-k neighbors.

The final score is the average of the two directional accuracies:

\[
\text{Acc}_{k}(z_1, z_2)
= \tfrac{1}{2} \left(
    \text{Acc}_{k}(z_1 \to z_2)
  + \text{Acc}_{k}(z_2 \to z_1)
\right),
\]

where each directional accuracy is the fraction of queries whose true pair is in the top-k most similar candidates.

Similarities are computed as dot products between the embeddings (equivalent to cosine similarity when normalize=True). The score lies in the range [0, 1], where higher is better.
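The bidirectional top-k retrieval described above can be sketched as follows. This is an illustrative NumPy re-implementation under the documented semantics, not the library's actual code:

```python
import numpy as np

def contrastive_topk_accuracy(z1, z2, normalize=True, topk=1, eps=1e-12):
    """Illustrative sketch of the bidirectional top-k contrastive accuracy
    described above (not the nidl implementation)."""
    if normalize:
        # L2-normalize each row so dot products become cosine similarities.
        z1 = z1 / np.maximum(np.linalg.norm(z1, axis=1, keepdims=True), eps)
        z2 = z2 / np.maximum(np.linalg.norm(z2, axis=1, keepdims=True), eps)
    n = z1.shape[0]
    k = min(topk, n)  # clip topk to the number of samples
    sim = z1 @ z2.T   # sim[i, j] = similarity of query z1[i] and candidate z2[j]
    # Direction z1 -> z2: is the true index i among the top-k columns of row i?
    topk_12 = np.argsort(-sim, axis=1)[:, :k]
    acc_12 = np.mean([i in topk_12[i] for i in range(n)])
    # Direction z2 -> z1: same check on the transposed similarity matrix.
    topk_21 = np.argsort(-sim.T, axis=1)[:, :k]
    acc_21 = np.mean([i in topk_21[i] for i in range(n)])
    # Average of the two directional accuracies.
    return 0.5 * (acc_12 + acc_21)
```

With identical (or nearly identical) views, each query's true pair is its own nearest neighbor, so the score approaches 1; with topk equal to the number of samples, the score is trivially 1.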

Parameters:
z1 : torch.Tensor or np.ndarray, shape (n_samples, n_features)

Embeddings from the first view / augmentation.

z2 : torch.Tensor or np.ndarray, shape (n_samples, n_features)

Embeddings from the second view / augmentation. Must have the same shape as z1.

normalize : bool, default=True

If True, each embedding vector is L2-normalized along the feature dimension before computing similarities. This makes the metric equivalent to using cosine similarity. If False, raw dot products are used.
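As a quick sanity check of the equivalence claimed above, the following sketch (hypothetical names, not library code) verifies that L2-normalizing before taking dot products yields exactly cosine similarity:

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.normal(size=(4, 8))
b = rng.normal(size=(4, 8))

# Dot products after L2-normalizing each row ...
a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
normalized_dot = a_n @ b_n.T

# ... match cosine similarity computed directly from the raw vectors.
cosine = (a @ b.T) / np.outer(np.linalg.norm(a, axis=1),
                              np.linalg.norm(b, axis=1))
assert np.allclose(normalized_dot, cosine)
```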

topk : int, default=1

The “k” in “top-k”. For each query, we check whether the true counterpart index i is contained in the indices of the top-k most similar candidates. If topk is greater than the number of samples, it is automatically clipped to n_samples.

eps : float, default=1e-12

Small constant added to the denominator to avoid division by zero when normalizing (near-)zero vectors.

Returns:
score : torch.Tensor or numpy scalar

The contrastive top-k accuracy:

  • If inputs are torch.Tensor → returns a 0-dim torch.Tensor.

  • If inputs are np.ndarray → returns a NumPy scalar.

Raises:
TypeError

If z1 and z2 are not both torch tensors or both NumPy arrays.

ValueError

If shapes of z1 and z2 do not match, or if they are not 2-dimensional, or if topk < 1.

Examples

>>> import torch
>>> from nidl.metrics import contrastive_accuracy_score
>>> z1 = torch.randn(8, 128)
>>> z2 = z1 + 0.1 * torch.randn(8, 128)  # slightly perturbed positives
>>> contrastive_accuracy_score(z1, z2, topk=1)  # often close to 1 here
tensor(1.)

Examples using nidl.metrics.contrastive_accuracy_score

Visualization of metrics during training of PyTorch-Lightning models
