Override Architecture’s Default Parameters

In our initial tutorial, we used default parameters to train a model employing the SOAP-BPNN architecture, as shown in the following config:

# architecture used to train the model
architecture:
  name: experimental.soap_bpnn

# Mandatory section defining the parameters for system and target data of the
# training set
training_set:
  systems: "qm9_reduced_100.xyz" # file where the positions are stored
  targets:
    energy:
      key: "U0" # name of the target value

test_set: 0.1  # 10 % of the training set is randomly split off and used as the test set
validation_set: 0.1 # 10 % of the training set is randomly split off and used for validation

While the default parameters are often a good starting point, you may need to adjust the architecture's parameters depending on your training target and dataset.

First, familiarize yourself with the specific parameters of the architecture you intend to use. We provide a list of all architectures and their parameters in the Available Architectures section. For example, the parameters of the SOAP-BPNN models are detailed at SOAP-BPNN.

Modifying Parameters (YAML)

As an example, let’s increase the number of epochs (num_epochs) and the cutoff radius of the SOAP descriptor. To do this, create a new section named architecture in options.yaml. Within this section, you can override the architecture’s hyperparameters. The adjustments for num_epochs and cutoff look like this:

architecture:
   name: "soap_bpnn"
   model:
      soap:
         cutoff: 7.0
   training:
      num_epochs: 200

training_set:
   systems: "qm9_reduced_100.xyz"
   targets:
      energy:
         key: "U0"

test_set: 0.1
validation_set: 0.1
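
With these overrides saved in options.yaml, training is started exactly as in the initial tutorial:

metatensor-models train options.yaml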

Modifying Parameters (Command Line Overrides)

For quick adjustments or additions to an options file, command-line overrides are also possible. The changes above can be achieved by typing:

metatensor-models train options.yaml \
   -r architecture.model.soap.cutoff=7.0 architecture.training.num_epochs=200

Here, the -r flag (or its long form, --override) is used to pass the override values. The syntax follows a dotlist-style string format in which each level of the options is separated by a dot (.). For example, to use single precision as the base precision for your training, use -r base_precision=32.
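
As a complete command, the base_precision override from the previous sentence reads:

metatensor-models train options.yaml -r base_precision=32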

Note

Command-line overrides can add new keys to your training options as well as override existing values, both within the architecture section and in the other parameters of your provided options file.
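
For instance, assuming the SOAP-BPNN training section exposes a learning_rate hyperparameter (verify the exact name in the SOAP-BPNN reference), a sketch of adding such a value on the command line, even if it does not appear in options.yaml, would look like this:

# learning_rate is an assumed hyperparameter name; check the SOAP-BPNN reference
metatensor-models train options.yaml -r architecture.training.learning_rate=0.001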