nidl.metrics: Available metrics

Introduction

A metric is an object (most likely a function) for computing standard scores that are usually not natively available in sklearn.metrics or torchmetrics. Most metrics in nidl accept both numpy.ndarray and torch.Tensor inputs and return an output consistent with the input type.
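One common way to support both numpy arrays and torch tensors without making torch a hard dependency is duck-typed dispatch. The sketch below illustrates that convention with a hypothetical helper `_to_numpy` and a toy mean-absolute-error metric; it is not nidl's actual implementation.

```python
import numpy as np

def _to_numpy(x):
    """Hypothetical helper: convert a torch.Tensor or array-like to numpy.

    Duck-typed on ``detach`` so torch need not be installed for this sketch.
    """
    if hasattr(x, "detach"):  # looks like a torch.Tensor
        return np.asarray(x.detach().cpu().numpy())
    return np.asarray(x)

def demo_metric(y_true, y_pred):
    """Toy metric (mean absolute error): compute in numpy, return a float."""
    a = _to_numpy(y_true)
    b = _to_numpy(y_pred)
    return float(np.abs(a - b).mean())
```

A metric written this way behaves identically whether it receives lists, numpy arrays, or torch tensors.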

Regression metrics

Functions for all regression metrics.

pearson_r(y_true, y_pred[, sample_weight, ...])

Pearson correlation coefficient between two arrays y_true and y_pred.
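As an illustration of what this metric computes, here is a minimal self-contained sketch of a (optionally sample-weighted) Pearson correlation in numpy. The function name and weighting scheme are assumptions for demonstration, not nidl's implementation.

```python
import numpy as np

def pearson_r_sketch(y_true, y_pred, sample_weight=None):
    """Weighted Pearson correlation between two 1-D arrays (illustrative)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    w = np.ones_like(y_true) if sample_weight is None else np.asarray(sample_weight, dtype=float)
    w = w / w.sum()
    # weighted means, covariance and variances
    mt, mp = w @ y_true, w @ y_pred
    cov = w @ ((y_true - mt) * (y_pred - mp))
    var_t = w @ (y_true - mt) ** 2
    var_p = w @ (y_pred - mp) ** 2
    return float(cov / np.sqrt(var_t * var_p))
```

With uniform weights this reduces to the usual Pearson r, i.e. it matches `np.corrcoef(y_true, y_pred)[0, 1]`.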

Self-supervised metrics

Functions for all self-supervised metrics.

alignment_score(z1, z2[, normalize, alpha, eps])

Compute the alignment score between two embeddings [1].
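The alignment score is commonly defined as the mean alpha-powered Euclidean distance between paired, L2-normalized embeddings; lower is better (0 for perfectly aligned pairs). A minimal sketch under that assumption (not nidl's implementation):

```python
import numpy as np

def alignment_sketch(z1, z2, normalize=True, alpha=2, eps=1e-12):
    """Mean ||z1_i - z2_i||^alpha over paired rows (illustrative)."""
    z1 = np.asarray(z1, dtype=float)
    z2 = np.asarray(z2, dtype=float)
    if normalize:
        # project each embedding onto the unit sphere
        z1 = z1 / np.maximum(np.linalg.norm(z1, axis=1, keepdims=True), eps)
        z2 = z2 / np.maximum(np.linalg.norm(z2, axis=1, keepdims=True), eps)
    return float(np.mean(np.linalg.norm(z1 - z2, axis=1) ** alpha))
```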

uniformity_score(z[, normalize, t, eps])

Compute the uniformity score of an embedding [1].
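Uniformity is commonly measured as the log of the mean Gaussian potential over all pairs of normalized embeddings; more negative values indicate embeddings spread more uniformly on the sphere. A sketch under that assumption (not nidl's implementation):

```python
import numpy as np

def uniformity_sketch(z, normalize=True, t=2, eps=1e-12):
    """log E[exp(-t * ||z_i - z_j||^2)] over pairs i < j (illustrative)."""
    z = np.asarray(z, dtype=float)
    if normalize:
        z = z / np.maximum(np.linalg.norm(z, axis=1, keepdims=True), eps)
    # pairwise squared Euclidean distances, upper triangle only
    diff = z[:, None, :] - z[None, :, :]
    sq = (diff ** 2).sum(axis=-1)
    iu = np.triu_indices(len(z), k=1)
    return float(np.log(np.mean(np.exp(-t * sq[iu]))))
```

For two antipodal unit vectors the only pairwise squared distance is 4, so with t=2 the score is log(exp(-8)) = -8.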

contrastive_accuracy_score(z1, z2[, ...])

Compute the top-k contrastive accuracy between two embeddings.
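Top-k contrastive accuracy typically asks: for each embedding in the first view, is its positive pair in the second view among the k nearest neighbors by cosine similarity? The sketch below implements that reading; the exact tie-breaking and extra keyword arguments in nidl may differ.

```python
import numpy as np

def contrastive_accuracy_sketch(z1, z2, k=1):
    """Fraction of rows i whose match z2[i] is in z1[i]'s top-k by cosine sim."""
    z1 = np.asarray(z1, dtype=float)
    z2 = np.asarray(z2, dtype=float)
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T                      # cosine similarity matrix
    topk = np.argsort(-sim, axis=1)[:, :k]  # indices of k most similar columns
    return float(np.mean([i in topk[i] for i in range(len(z1))]))
```

Identical, well-separated views give accuracy 1.0; a deranged pairing gives 0.0 at k=1.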

procrustes_similarity(X, Y)

Procrustes similarity between two point clouds / embeddings in the Euclidean case, with global scale invariance.

procrustes_r2(X, Y)

Procrustes similarity between two point clouds / embeddings in the Euclidean case, when scale matters.
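One standard way to realize these two variants: after centering, the scale-invariant similarity is the sum of singular values of the cross-covariance divided by the product of Frobenius norms, while the scale-sensitive R² fits the best rotation only and reports the explained variance. The sketch below follows those definitions; they are illustrative, not nidl's code.

```python
import numpy as np

def procrustes_similarity_sketch(X, Y):
    """Scale-invariant Procrustes similarity in [0, 1] (illustrative)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # sum of singular values of the cross-covariance matrix
    s = np.linalg.svd(Xc.T @ Yc, compute_uv=False).sum()
    return float(s / (np.linalg.norm(Xc) * np.linalg.norm(Yc)))

def procrustes_r2_sketch(X, Y):
    """R^2 of the best rotation-only fit of X onto Y; scale matters."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)
    R = U @ Vt                            # optimal orthogonal map
    resid = np.linalg.norm(Yc - Xc @ R) ** 2
    return float(1.0 - resid / np.linalg.norm(Yc) ** 2)
```

A rotated and uniformly rescaled copy of X scores 1 under the similarity but less than 1 under R², which is the intended distinction between the two functions.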

kruskal_similarity(X, Y[, spherical])

Kruskal similarity between two point clouds / embeddings.
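A Kruskal-style comparison is usually built on stress-1: compare the pairwise distance matrices of the two point clouds after an optimal global rescaling, then report one minus the stress. The sketch below (Euclidean distances only, ignoring the `spherical` option) is an illustrative reading, not nidl's implementation.

```python
import numpy as np

def kruskal_similarity_sketch(X, Y):
    """1 - Kruskal stress-1 between the distance profiles of X and Y."""
    def pdists(Z):
        # condensed vector of pairwise Euclidean distances, i < j
        d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        return d[np.triu_indices(len(Z), k=1)]
    dx, dy = pdists(np.asarray(X, float)), pdists(np.asarray(Y, float))
    s = (dx @ dy) / (dx @ dx)             # least-squares scale aligning dx to dy
    stress = np.sqrt(((s * dx - dy) ** 2).sum() / (dy ** 2).sum())
    return float(1.0 - stress)
```

Because only inter-point distances enter, any rigid transform (rotation plus translation) of X scores exactly 1 against X.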