Preparing and loading your data

This tutorial introduces how SchNetPack stores and loads data. Before we can start training neural networks with SchNetPack, we need to prepare our data. This is because SchNetPack has to stream the reference data from disk during training in order to be able to handle large datasets. It is therefore crucial to use a data format that allows for fast random read access. We found that the ASE database format fulfills this criterion perfectly. To further improve the performance, we internally encode properties in binary. However, as long as you only access the ASE database via the provided SchNetPack ASEAtomsData class, you don't have to worry about that.

[1]:
from schnetpack.data import ASEAtomsData
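
If you already have a database in this format on disk, it can be opened directly for fast random access. A minimal sketch ('./existing.db' is a placeholder path for an existing SchNetPack database):

[ ]:
dataset = ASEAtomsData('./existing.db')
print('Number of systems:', len(dataset))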

Predefined datasets

SchNetPack supports several benchmark datasets that can be used without preparation. Each one can be accessed using a corresponding class that inherits from AtomsDataModule (a specialized PyTorch Lightning DataModule), which supports automatic download, conversion and partitioning. Here, we show how to use these datasets, taking the QM9 benchmark as an example.

First, we have to import the dataset class and instantiate it. This will automatically download the data to the specified location.

[2]:
from schnetpack.datasets import QM9
from schnetpack.transform import ASENeighborList

qm9data = QM9(
    './qm9.db',
    batch_size=10,
    num_train=110000,
    num_val=10000,
    split_file='./split_qm9.npz',
    transforms=[ASENeighborList(cutoff=5.)]
)
qm9data.prepare_data()
qm9data.setup()

Neighbors are collected using neighbor lists that can be passed to the AtomsDataModule as a preprocessing transform. These are applied to the molecules before they are batched in the data loader. SchNetPack supplies different neighbor list implementations parametrized by a cutoff (e.g., ASENeighborList, TorchNeighborList) that are able to handle larger molecules and periodic boundary conditions.
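
For example, swapping in a different neighbor list implementation only requires changing the transform. A sketch, assuming TorchNeighborList is available in your SchNetPack version:

[ ]:
from schnetpack.transform import TorchNeighborList

# drop-in alternative to ASENeighborList with the same cutoff semantics
neighbor_transform = TorchNeighborList(cutoff=5.)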

Let’s have a closer look at this dataset. We can find out how large it is and which properties it supports:

[3]:
print('Number of reference calculations:', len(qm9data.dataset))
print('Number of train data:', len(qm9data.train_dataset))
print('Number of validation data:', len(qm9data.val_dataset))
print('Number of test data:', len(qm9data.test_dataset))
print('Available properties:')

for p in qm9data.dataset.available_properties:
    print('-', p)
Number of reference calculations: 133885
Number of train data: 110000
Number of validation data: 10000
Number of test data: 13885
Available properties:
- rotational_constant_A
- rotational_constant_B
- rotational_constant_C
- dipole_moment
- isotropic_polarizability
- homo
- lumo
- gap
- electronic_spatial_extent
- zpve
- energy_U0
- energy_U
- enthalpy_H
- free_energy
- heat_capacity

We can load data points using zero-based indexing. The result is a dictionary containing the geometry and properties:

[4]:
example = qm9data.dataset[0]
print('Properties:')

for k, v in example.items():
    print('-', k, ':', v.shape)
Properties:
- _idx : torch.Size([1])
- rotational_constant_A : torch.Size([1])
- rotational_constant_B : torch.Size([1])
- rotational_constant_C : torch.Size([1])
- dipole_moment : torch.Size([1])
- isotropic_polarizability : torch.Size([1])
- homo : torch.Size([1])
- lumo : torch.Size([1])
- gap : torch.Size([1])
- electronic_spatial_extent : torch.Size([1])
- zpve : torch.Size([1])
- energy_U0 : torch.Size([1])
- energy_U : torch.Size([1])
- enthalpy_H : torch.Size([1])
- free_energy : torch.Size([1])
- heat_capacity : torch.Size([1])
- _n_atoms : torch.Size([1])
- _atomic_numbers : torch.Size([5])
- _positions : torch.Size([5, 3])
- _cell : torch.Size([1, 3, 3])
- _pbc : torch.Size([3])

We see that all available properties have been loaded as torch tensors with the given shapes. Keys starting with an underscore indicate that these names are reserved for internal use. This includes the geometry (_n_atoms, _atomic_numbers, _positions), the index within the dataset (_idx), as well as information about periodic boundary conditions (_cell, _pbc).
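
Instead of hard-coding these internal key strings, you can use the constants defined in schnetpack.properties, assuming your version provides them (e.g., properties.Z for '_atomic_numbers' and properties.R for '_positions'):

[ ]:
import schnetpack.properties as properties

# the constants resolve to the reserved key strings shown above
print(example[properties.Z])
print(example[properties.R].shape)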

We can iterate over the dataset partitions as follows:

[5]:
for batch in qm9data.val_dataloader():
    print(batch.keys())
    break
dict_keys(['_idx', 'rotational_constant_A', 'rotational_constant_B', 'rotational_constant_C', 'dipole_moment', 'isotropic_polarizability', 'homo', 'lumo', 'gap', 'electronic_spatial_extent', 'zpve', 'energy_U0', 'energy_U', 'enthalpy_H', 'free_energy', 'heat_capacity', '_n_atoms', '_atomic_numbers', '_positions', '_cell', '_pbc', '_idx_i_local', '_idx_j_local', '_offsets', '_idx_m', '_idx_i', '_idx_j'])

We see that additional keys have been added by the neighbor list transform defined above. These are the neighbor indices (_idx_i, _idx_j) together with the corresponding cell offsets (_offsets). Since different systems can have different numbers of atoms, we don't use separate dimensions for systems and atoms (i.e. shape [n_systems, n_atoms, …]), but store the atoms of all systems in a single dimension (i.e. shape [n_all_atoms, …]). Therefore, we additionally need to store the index of the corresponding system for each atom in a batch (_idx_m). This avoids the padding and masking that was required in previous versions of SchNetPack. The indices look as follows:

[6]:
print('System index:', batch['_idx_m'])
print('Center atom index:', batch['_idx_i'])
print('Neighbor atom index:', batch['_idx_j'])
System index: tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3,
        3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
        4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6,
        6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7,
        7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9,
        9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9])
Center atom index: tensor([  0,   0,   0,  ..., 155, 155, 155])
Neighbor atom index: tensor([  1,  14,  12,  ..., 141, 146, 154])
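
Since every atom carries its system index in _idx_m, per-system quantities can be recovered with simple index operations. For example, the number of atoms in each of the ten systems of this batch:

[ ]:
import torch

# count how often each system index occurs, i.e. the atoms per molecule
n_atoms_per_system = torch.bincount(batch['_idx_m'])
print(n_atoms_per_system)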

All property names are predefined as class variables for convenient access:

[7]:
print('Total energy at 0K:', batch[QM9.U0])
print('HOMO:', batch[QM9.homo])
Total energy at 0K: tensor([-450.2210, -437.8808, -333.4494, -403.1605, -387.1264, -472.7010,
        -385.8485, -470.0007, -434.0736, -453.9216], dtype=torch.float64)
HOMO: tensor([-0.2451, -0.2660, -0.2690, -0.2507, -0.2318, -0.2676, -0.2343, -0.2327,
        -0.2768, -0.2193], dtype=torch.float64)

Preparing your own data

In the following, we will create an ASE database from our own data. For this tutorial, we will use a dataset containing a molecular dynamics (MD) trajectory of uracil from the MD17 dataset, which can be downloaded from quantum-machine.org as shown below.

[8]:
import os
if not os.path.exists('./md17_uracil.npz'):
    !wget http://quantum-machine.org/gdml/data/npz/md17_uracil.npz

The dataset is stored in NumPy format. In the following, we show how this data can be parsed and converted for use in SchNetPack, so that you can apply the same procedure to any other data format.

First, we need to parse our data. For this we use NumPy together with the Atoms class supplied by ASE. In order to create a SchNetPack DB, we require a list of ASE Atoms objects as well as a corresponding list of dictionaries [{property_name1: value_molecule1, ...}, {property_name1: value_molecule2, ...}, ...] containing the mapping from property names to values, one dictionary per molecule.

[9]:
from ase import Atoms
import numpy as np

# load the dataset from the npz file
data = np.load('./md17_uracil.npz')

numbers = data["z"]
atoms_list = []
property_list = []
for positions, energies, forces in zip(data["R"], data["E"], data["F"]):
    ats = Atoms(positions=positions, numbers=numbers)
    # all properties need to be stored as numpy arrays
    properties = {'energy': energies, 'forces': forces}
    property_list.append(properties)
    atoms_list.append(ats)

print('Properties:', property_list[0])
Properties: {'energy': array([-260120.11049893]), 'forces': array([[ 8.58762909e-01,  7.93783439e+00, -6.10513446e-01],
       [-1.59587602e+01, -1.38921103e+01,  1.46274031e+00],
       [ 2.38826967e+01,  6.32292299e+01, -5.38152591e+00],
       [-2.39499309e+01, -5.43241035e+00,  1.04116149e+00],
       [ 6.26392709e+01, -6.78667364e+01,  3.52832279e+00],
       [-2.38302859e+01,  8.54756612e+00, -3.88276505e-02],
       [ 2.46907632e+00,  3.17072923e-01, -9.76762146e-02],
       [ 1.47304240e+01,  1.43791066e+01, -1.46191820e+00],
       [-6.34016421e+00,  2.75814146e+00, -5.13516521e-02],
       [-1.13403416e+01, -1.98585305e+01,  1.77899100e+00],
       [-1.58345920e+01,  1.17016462e+01, -4.86097667e-01],
       [-7.32615606e+00, -1.82081008e+00,  3.16695146e-01]])}

Once we have our data in this format, it is straightforward to create a new SchNetPack DB and store it.

[10]:
%rm './new_dataset.db'
new_dataset = ASEAtomsData.create(
    './new_dataset.db',
    distance_unit='Ang',
    property_unit_dict={'energy':'kcal/mol', 'forces':'kcal/mol/Ang'}
)
new_dataset.add_systems(property_list, atoms_list)

To get a better initialization of the network and avoid numerical issues, we often want to make use of simple statistics of our target properties. The simplest approach is to subtract the mean value of the target property from the labels before training, so that the neural network only has to learn the difference from the mean prediction. A more sophisticated approach is to use so-called atomic reference values, which provide basic statistics of the target property based on the atom types in a structure. This is especially useful for extensive properties such as the energy, where the single-atom energies contribute a major part to the overall value. If your data comes with atomic reference values, you can add them to the metadata of your ASE database. The statistics have to be stored in a dictionary with the property names as keys and the atomic reference values as lists, where the list indices match the atomic numbers. For further explanation please have a look at the QM9 tutorial.

Here is an example:

[2]:
# calculate this at the same level of theory as your data
atomref = {
    'energy': [0.0, 314.0, 0.0, 0.0]  # atomref value for hydrogen (atomic number 1): 314.0
}

# the supplied list is indexed by atomic number, e.g.:
atomref_hydrogen = atomref['energy'][1]

# dataset = ASEAtomsData.create(
#     './new_dataset.db',
#     distance_unit='Ang',
#     property_unit_dict={'energy':'kcal/mol'},
#     atomref=atomref
# )

In our concrete case, we only have an MD trajectory of a single system. Therefore, we don't need to specify an atomref, since removing the average energy (e.g., with the RemoveOffsets transform shown below) works just as well.

Now we can have a look at the data in the same way we did before for QM9:

[11]:
print('Number of reference calculations:', len(new_dataset))
print('Available properties:')

for p in new_dataset.available_properties:
    print('-', p)
print()

example = new_dataset[0]
print('Properties of molecule with id 0:')

for k, v in example.items():
    print('-', k, ':', v.shape)
Number of reference calculations: 133770
Available properties:
- energy
- forces

Properties of molecule with id 0:
- _idx : torch.Size([1])
- energy : torch.Size([1])
- forces : torch.Size([12, 3])
- _n_atoms : torch.Size([1])
- _atomic_numbers : torch.Size([12])
- _positions : torch.Size([12, 3])
- _cell : torch.Size([1, 3, 3])
- _pbc : torch.Size([3])

In the same way, we can store multiple properties, including atomic properties such as forces or tensorial properties such as polarizability tensors.
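
As a sketch, a database with an additional tensorial property could be created as follows (the property name and unit are illustrative, not taken from a real dataset):

[ ]:
# dataset = ASEAtomsData.create(
#     './tensor_dataset.db',
#     distance_unit='Ang',
#     property_unit_dict={
#         'energy': 'kcal/mol',
#         'forces': 'kcal/mol/Ang',
#         'polarizability': 'a.u.',  # e.g., stored as a 3x3 array per molecule
#     }
# )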

Using your data for training

We have now used the class ASEAtomsData to create a new ASE database for our custom data. schnetpack.data.ASEAtomsData is a subclass of torch.utils.data.Dataset and could be used directly for training models with PyTorch. However, we use PyTorch Lightning to conveniently handle the training procedure for us, which requires wrapping the dataset in a LightningDataModule. We provide a general-purpose AtomsDataModule for atomic systems in schnetpack.data.datamodule.AtomsDataModule. The data module handles unit conversion, splitting, batching and the preprocessing of the data with transforms. We can instantiate the data module for our custom dataset with:

[ ]:
import schnetpack as spk
import schnetpack.transform as trn

custom_data = spk.data.AtomsDataModule(
    './new_dataset.db',
    batch_size=10,
    distance_unit='Ang',
    property_units={'energy':'kcal/mol', 'forces':'kcal/mol/Ang'},
    num_train=1000,
    num_val=100,
    transforms=[
        trn.ASENeighborList(cutoff=5.),
        trn.RemoveOffsets("energy", remove_mean=True, remove_atomrefs=False),
        trn.CastTo32()
    ],
    num_workers=1,
    pin_memory=True, # set to False when not using a GPU
)
custom_data.prepare_data()
custom_data.setup()
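
As with QM9 above, we can now draw batches from the data module, e.g. from the training split:

[ ]:
for batch in custom_data.train_dataloader():
    print(batch['energy'].shape, batch['forces'].shape)
    break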

Please note that for the general case it makes sense to use your dataset via the command line interface. For some benchmark datasets, we provide data modules with download functions and further utilities in schnetpack.data.datasets. Further examples of how to use the data modules are provided in the following sections.