Loss

class metatensor.models.utils.loss.TensorMapLoss(reduction: str = 'mean', weight: float = 1.0, gradient_weights: Dict[str, float] | None = None)[source]

Bases: object

A loss function that operates on two metatensor.torch.TensorMap objects.

The loss is computed as the sum of the loss on the block values and the loss on the gradients, with the weights specified at initialization.

At the moment, this loss function assumes that all the gradients declared at initialization are present in both TensorMaps.

Parameters:
  • reduction (str) – The reduction to apply to the loss. See torch.nn.MSELoss.

  • weight (float) – The weight to apply to the loss on the block values.

  • gradient_weights (Dict[str, float] | None) – The weights to apply to the loss on the gradients.

Returns:

The loss as a zero-dimensional (scalar) torch.Tensor.
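To illustrate how the block-value and gradient terms combine, here is a minimal sketch of the weighted sum computed by TensorMapLoss. It uses plain Python lists and a hypothetical `tensor_map_loss` helper in place of metatensor.torch.TensorMap objects and torch tensors; the real class applies a torch loss (see torch.nn.MSELoss) block by block.

```python
def mse(a, b):
    """Mean-squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)


def tensor_map_loss(pred, target, weight=1.0, gradient_weights=None):
    """Weighted sum of the loss on block values and on gradients.

    ``pred`` and ``target`` are dicts with a "values" entry and one
    entry per declared gradient (e.g. "positions").
    """
    gradient_weights = gradient_weights or {}
    # Loss on the block values, scaled by ``weight``.
    loss = weight * mse(pred["values"], target["values"])
    for name, g_weight in gradient_weights.items():
        # Every gradient declared at initialization is assumed to be
        # present in both maps, matching the documented behaviour.
        loss += g_weight * mse(pred[name], target[name])
    return loss


pred = {"values": [1.0, 2.0], "positions": [0.5, 0.5]}
target = {"values": [1.0, 3.0], "positions": [0.0, 1.0]}
loss = tensor_map_loss(pred, target, weight=1.0,
                       gradient_weights={"positions": 0.1})
```

Here the value term contributes 1.0 × 0.5 and the "positions" gradient term 0.1 × 0.25, giving 0.525.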

class metatensor.models.utils.loss.TensorMapDictLoss(weights: Dict[str, Dict[str, float]], reduction: str = 'mean')[source]

Bases: object

A loss function that operates on two dictionaries of the form Dict[str, metatensor.torch.TensorMap].

At initialization, the user specifies the keys to include in the loss, along with a weight for each key's block values and for each of its gradients.

The loss is then computed as a weighted sum over these keys. Keys that are not present in the dictionaries are ignored.

Parameters:
  • weights (Dict[str, Dict[str, float]]) – A dictionary mapping keys to weights. Each weight is itself a dictionary mapping “values” to the weight to apply to the loss on the block values, and gradient names to the weights to apply to the loss on the gradients.

  • reduction (str) – The reduction to apply to the loss. See torch.nn.MSELoss.

Returns:

The loss as a zero-dimensional (scalar) torch.Tensor.
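The weighted sum over dictionary keys can be sketched as follows. This is a simplified stand-in using plain Python containers and a hypothetical `tensor_map_dict_loss` helper, not the real implementation, which delegates each per-key term to a TensorMapLoss over metatensor.torch.TensorMap objects.

```python
def mse(a, b):
    """Mean-squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)


def tensor_map_dict_loss(pred, target, weights):
    """Weighted sum of per-key losses over two dictionaries.

    ``weights`` maps each key to an inner dict with a "values" weight
    and one weight per gradient name, mirroring the ``weights``
    parameter described above.
    """
    loss = 0.0
    for key, key_weights in weights.items():
        if key not in pred or key not in target:
            continue  # keys absent from the dictionaries are ignored
        for field, w in key_weights.items():  # "values" or a gradient name
            loss += w * mse(pred[key][field], target[key][field])
    return loss


weights = {
    "energy": {"values": 1.0, "positions": 0.1},
    "dipole": {"values": 0.5},  # skipped: "dipole" is missing below
}
pred = {"energy": {"values": [1.0], "positions": [0.0, 1.0]}}
target = {"energy": {"values": [2.0], "positions": [0.0, 0.0]}}
loss = tensor_map_dict_loss(pred, target, weights)
```

In this example the "energy" values contribute 1.0 × 1.0 and its "positions" gradient 0.1 × 0.5, giving 1.05, while the absent "dipole" key adds nothing.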