Commit ea03bf7b authored by peastman

Adapted more tutorials to new sequence

parent b83820ff
+0 −707

File deleted.


+39 −2
%% Cell type:markdown id: tags:

# Tutorial Part 5: Creating Models with TensorFlow and PyTorch

In the tutorials so far, we have used standard models provided by DeepChem.  This is fine for many applications, but sooner or later you will want to create an entirely new model with an architecture you define yourself.  DeepChem provides integration with both TensorFlow (Keras) and PyTorch, so you can use it with models from either of these frameworks.

## Colab

This tutorial and the rest in this sequence are designed to be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/05_Creating_Models_with_TensorFlow_and_PyTorch.ipynb)

## Setup

To run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and will set up your environment. You can of course run this tutorial locally if you prefer. In that case, don't run these cells, since they will download and install Anaconda on your local machine.

%% Cell type:code id: tags:

``` python
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
```

%% Cell type:code id: tags:

``` python
!pip install --pre deepchem
```

%% Cell type:markdown id: tags:

There are actually two different approaches you can take to using TensorFlow or PyTorch models with DeepChem.  It depends on whether you want to use TensorFlow/PyTorch APIs or DeepChem APIs for training and evaluating your model.  For the former case, DeepChem's `Dataset` class has methods for easily adapting it to use with other frameworks.  `make_tf_dataset()` returns a `tensorflow.data.Dataset` object that iterates over the data.  `make_pytorch_dataset()` returns a `torch.utils.data.IterableDataset` that iterates over the data.  This lets you use DeepChem's datasets, loaders, featurizers, transformers, splitters, etc. and easily integrate them into your existing TensorFlow or PyTorch code.
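
As a quick illustration of the first approach, here is a minimal sketch that pulls batches out of a DeepChem `Dataset` with `make_tf_dataset()` and feeds them to ordinary TensorFlow code.  The `batch_size` and `epochs` keyword arguments shown are assumptions about the method's signature; check the API documentation for your installed DeepChem version.

%% Cell type:code id: tags:

``` python
import deepchem as dc

# Load a dataset with DeepChem's loaders and featurizers as usual.
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='ECFP', splitter='random')
train_dataset, valid_dataset, test_dataset = datasets

# Expose it as a tensorflow.data.Dataset that yields (X, y, w) batches.
tf_dataset = train_dataset.make_tf_dataset(batch_size=100, epochs=1)
for X, y, w in tf_dataset.take(1):
    print(X.shape, y.shape, w.shape)

# The PyTorch counterpart, make_pytorch_dataset(), returns a
# torch.utils.data.IterableDataset you can pass to a DataLoader.
```

%% Cell type:markdown id: tags: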

But DeepChem also provides many other useful features.  The other approach, which lets you use those features, is to wrap your model in a DeepChem `Model` object.  Let's look at how to do that.

## KerasModel

`KerasModel` is a subclass of DeepChem's `Model` class.  It acts as a wrapper around a `tensorflow.keras.Model`.  Let's see an example of using it.  For this example, we create a simple sequential model consisting of two dense layers.

%% Cell type:code id: tags:

``` python
import deepchem as dc
import tensorflow as tf

keras_model = tf.keras.Sequential([
    tf.keras.layers.Dense(1000, activation='relu'),
    tf.keras.layers.Dropout(rate=0.5),
    tf.keras.layers.Dense(1)
])
model = dc.models.KerasModel(keras_model, dc.models.losses.L2Loss())
```

%% Cell type:markdown id: tags:

For this example, we used the Keras `Sequential` class.  Our model consists of a dense layer with ReLU activation, 50% dropout to provide regularization, and a final layer that produces a scalar output.  We also need to specify the loss function to use when training the model, in this case L<sub>2</sub> loss.  We can now train and evaluate the model exactly as we would with any other DeepChem model.  For example, let's load the Delaney solubility dataset.  How does our model do at predicting the solubilities of molecules based on their extended-connectivity fingerprints (ECFPs)?

%% Cell type:code id: tags:

``` python
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='ECFP', splitter='random')
train_dataset, valid_dataset, test_dataset = datasets
model.fit(train_dataset, nb_epoch=50)
metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
print('training set score:', model.evaluate(train_dataset, [metric]))
print('test set score:', model.evaluate(test_dataset, [metric]))
```

%% Output

    training set score: {'pearson_r2_score': 0.9787445597470444}
    test set score: {'pearson_r2_score': 0.736905850092889}

%% Cell type:markdown id: tags:

## TorchModel

`TorchModel` works just like `KerasModel`, except it wraps a `torch.nn.Module`.  Let's use PyTorch to create another model just like the previous one and train it on the same data.

%% Cell type:code id: tags:

``` python
import torch

pytorch_model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1000),
    torch.nn.ReLU(),
    torch.nn.Dropout(0.5),
    torch.nn.Linear(1000, 1)
)
model = dc.models.TorchModel(pytorch_model, dc.models.losses.L2Loss())

model.fit(train_dataset, nb_epoch=50)
print('training set score:', model.evaluate(train_dataset, [metric]))
print('test set score:', model.evaluate(test_dataset, [metric]))
```

%% Output

    training set score: {'pearson_r2_score': 0.9798256761766225}
    test set score: {'pearson_r2_score': 0.7256745385608444}

%% Cell type:markdown id: tags:

## Computing Losses

Now let's see a more advanced example.  In the above models, the loss was computed directly from the model's output.  Often that is fine, but not always.  Consider a classification model that outputs a probability distribution.  While it is possible to compute the loss from the probabilities, it is more numerically stable to compute it from the logits.

To do this, we create a model that returns multiple outputs, both probabilities and logits.  `KerasModel` and `TorchModel` let you specify a list of "output types".  If a particular output has type `'prediction'`, that means it is a normal output that should be returned when you call `predict()`.  If it has type `'loss'`, that means it should be passed to the loss function in place of the normal outputs.

`Sequential` models do not allow multiple outputs, so instead we define the model by subclassing `tf.keras.Model`.

%% Cell type:code id: tags:

``` python
class ClassificationModel(tf.keras.Model):

    def __init__(self):
        super(ClassificationModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(1000, activation='relu')
        self.dense2 = tf.keras.layers.Dense(1)

    def call(self, inputs, training=False):
        y = self.dense1(inputs)
        if training:
            y = tf.nn.dropout(y, 0.5)
        logits = self.dense2(y)
        output = tf.nn.sigmoid(logits)
        return output, logits

keras_model = ClassificationModel()
output_types = ['prediction', 'loss']
model = dc.models.KerasModel(keras_model, dc.models.losses.SigmoidCrossEntropy(), output_types=output_types)
```

%% Cell type:markdown id: tags:

We can train our model on the BACE dataset.  This is a binary classification task that tries to predict whether a molecule will inhibit the enzyme BACE-1.

%% Cell type:code id: tags:

``` python
tasks, datasets, transformers = dc.molnet.load_bace_classification(featurizer='ECFP', splitter='scaffold')
train_dataset, valid_dataset, test_dataset = datasets
model.fit(train_dataset, nb_epoch=100)
metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
print('training set score:', model.evaluate(train_dataset, [metric]))
print('test set score:', model.evaluate(test_dataset, [metric]))
```

%% Output

    training set score: {'roc_auc_score': 0.9996116504854369}
    test set score: {'roc_auc_score': 0.7701992753623188}

%% Cell type:markdown id: tags:

## Other Features

`KerasModel` and `TorchModel` have lots of other features.  Here are some of the more important ones.

- Automatically saving checkpoints during training.
- Logging progress to the console, to [TensorBoard](https://www.tensorflow.org/tensorboard), or to [Weights & Biases](https://docs.wandb.com/).
- Custom loss functions that you define with a function of the form `f(outputs, labels, weights)` (see the sketch below).
- Early stopping using the `ValidationCallback` class.
- Loading parameters from pre-trained models.
- Estimating uncertainty in model outputs.
- Identifying important features through saliency mapping.

By wrapping your own models in a `KerasModel` or `TorchModel`, you get immediate access to all these features.  See the API documentation for full details on them.
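
As an example of custom losses, here is a minimal sketch assuming the `f(outputs, labels, weights)` form described above, where each argument is a list of tensors (one per model output, label array, and weight array).  The weighted L1 loss here is our own illustration, not a DeepChem built-in.

%% Cell type:code id: tags:

``` python
import deepchem as dc
import tensorflow as tf

# Hypothetical custom loss: a weighted mean absolute error.
# outputs, labels, and weights are lists of tensors, one per model output.
def weighted_l1_loss(outputs, labels, weights):
    return tf.reduce_mean(tf.abs(outputs[0] - labels[0]) * weights[0])

# Wrap any Keras model, passing the function in place of a built-in
# Loss object such as L2Loss.
keras_model = tf.keras.Sequential([
    tf.keras.layers.Dense(1000, activation='relu'),
    tf.keras.layers.Dense(1)
])
model = dc.models.KerasModel(keras_model, loss=weighted_l1_loss)
```

%% Cell type:markdown id: tags: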

# Congratulations! Time to join the Community!

Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:

## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.

## Join the DeepChem Gitter
The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
+245 −0
%% Cell type:markdown id: tags:

# Tutorial Part 6: Introduction to Graph Convolutions

In this tutorial we will learn more about "graph convolutions." These are one of the most powerful deep learning tools for working with molecular data. The reason for this is that molecules can be naturally viewed as graphs.

![Molecular Graph](https://github.com/deepchem/deepchem/blob/master/examples/tutorials/basic_graphs.gif?raw=1)

Note how standard chemical diagrams of the sort we're used to from high school lend themselves naturally to visualizing molecules as graphs. In the remainder of this tutorial, we'll dig into this relationship in significantly more detail. This will let us get a deeper understanding of how these systems work.

## Colab

This tutorial and the rest in this sequence are designed to be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/06_Introduction_to_Graph_Convolutions.ipynb)

## Setup

To run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and will set up your environment. You can of course run this tutorial locally if you prefer. In that case, don't run these cells, since they will download and install Anaconda on your local machine.

%% Cell type:code id: tags:

``` python
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
```

%% Cell type:code id: tags:

``` python
!pip install --pre deepchem
```

%% Cell type:markdown id: tags:

# What are Graph Convolutions?

Consider a standard convolutional neural network (CNN) of the sort commonly used to process images.  The input is a grid of pixels.  There is a vector of data values for each pixel, for example the red, green, and blue color channels.  The data passes through a series of convolutional layers.  Each layer combines the data from a pixel and its neighbors to produce a new data vector for the pixel.  Early layers detect small scale local patterns, while later layers detect larger, more abstract patterns.  Often the convolutional layers alternate with pooling layers that perform some operation such as max or min over local regions.

Graph convolutions are similar, but they operate on a graph.  They begin with a data vector for each node of the graph (for example, the chemical properties of the atom that node represents).  Convolutional and pooling layers combine information from connected nodes (for example, atoms that are bonded to each other) to produce a new data vector for each node.
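
To make this concrete, here is a purely conceptual NumPy sketch of one graph-convolution step (not DeepChem's actual implementation): each node's new feature vector mixes its own features with the mean of its neighbors' features, followed by a nonlinearity.

%% Cell type:code id: tags:

``` python
import numpy as np

node_feats = np.random.rand(5, 8)        # 5 atoms, 8 features per atom
adjacency = np.array([[0, 1, 0, 0, 1],   # a 5-atom ring: each atom is
                      [1, 0, 1, 0, 0],   # bonded to its two neighbors
                      [0, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [1, 0, 0, 1, 0]], dtype=float)

# Average the feature vectors of each node's neighbors.
degrees = adjacency.sum(axis=1, keepdims=True)
neighbor_mean = adjacency @ node_feats / degrees

# Learnable weight matrices (random here) mix self and neighbor information.
W_self = np.random.rand(8, 8)
W_neigh = np.random.rand(8, 8)
new_feats = np.tanh(node_feats @ W_self + neighbor_mean @ W_neigh)
print(new_feats.shape)  # (5, 8): one updated feature vector per atom
```

%% Cell type:markdown id: tags: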

# Training a GraphConvModel

Let's use the MoleculeNet suite to load the Tox21 dataset. To featurize the data in a way that graph convolutional networks can use, we set the featurizer option to `'GraphConv'`. The MoleculeNet call returns a training set, a validation set, and a test set for us to use. It also returns `tasks`, a list of the task names, and `transformers`, a list of data transformations that were applied to preprocess the dataset. (Most deep networks are quite finicky and require a set of data transformations to ensure that training proceeds stably.)

%% Cell type:code id: tags:

``` python
import deepchem as dc

tasks, datasets, transformers = dc.molnet.load_tox21(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = datasets
```

%% Cell type:markdown id: tags:

Let's now train a graph convolutional network on this dataset. DeepChem has the class `GraphConvModel` that wraps a standard graph convolutional architecture under the hood for user convenience. Let's instantiate an object of this class and train it on our dataset.

%% Cell type:code id: tags:

``` python
n_tasks = len(tasks)
model = dc.models.GraphConvModel(n_tasks, mode='classification')
model.fit(train_dataset, nb_epoch=50)
```

%% Output

    0.28185401916503905

%% Cell type:markdown id: tags:

Let's try to evaluate the performance of the model we've trained. For this, we need to define a metric, a measure of model performance. `dc.metrics` holds a collection of metrics already. For this dataset, it is standard to use the ROC-AUC score, the area under the receiver operating characteristic curve (which measures the tradeoff between the true positive rate and the false positive rate). Luckily, the ROC-AUC score is already available in DeepChem.

To measure the performance of the model under this metric, we can use the convenience function `model.evaluate()`.

%% Cell type:code id: tags:

``` python
metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
print('Training set score:', model.evaluate(train_dataset, [metric], transformers))
print('Test set score:', model.evaluate(test_dataset, [metric], transformers))
```

%% Output

    Training set score: {'roc_auc_score': 0.96959686893055}
    Test set score: {'roc_auc_score': 0.795793783300876}

%% Cell type:markdown id: tags:

The results are pretty good, and `GraphConvModel` is very easy to use. But what's going on under the hood? Could we build `GraphConvModel` ourselves? Of course! DeepChem provides Keras layers for all the calculations involved in a graph convolution. We are going to apply the following layers from DeepChem.

- `GraphConv` layer: This layer implements the graph convolution. The graph convolution combines per-node feature vectors in a nonlinear fashion with the feature vectors for neighboring nodes.  This "blends" information in local neighborhoods of a graph.

- `GraphPool` layer: This layer does a max-pooling over the feature vectors of atoms in a neighborhood. You can think of this layer as analogous to a max-pooling layer for 2D convolutions but which operates on graphs instead.

- `GraphGather`: Many graph convolutional networks manipulate feature vectors per graph-node. For a molecule for example, each node might represent an atom, and the network would manipulate atomic feature vectors that summarize the local chemistry of the atom. However, at the end of the application, we will likely want to work with a molecule level feature representation. This layer creates a graph level feature vector by combining all the node-level feature vectors.

Apart from these, we are going to apply standard neural network layers such as [Dense](https://keras.io/api/layers/core_layers/dense/), [BatchNormalization](https://keras.io/api/layers/normalization_layers/batch_normalization/), and [Softmax](https://keras.io/api/layers/activation_layers/softmax/).

%% Cell type:code id: tags:

``` python
from deepchem.models.layers import GraphConv, GraphPool, GraphGather
import tensorflow as tf
import tensorflow.keras.layers as layers

batch_size = 100

class MyGraphConvModel(tf.keras.Model):

  def __init__(self):
    super(MyGraphConvModel, self).__init__()
    self.gc1 = GraphConv(128, activation_fn=tf.nn.tanh)
    self.batch_norm1 = layers.BatchNormalization()
    self.gp1 = GraphPool()

    self.gc2 = GraphConv(128, activation_fn=tf.nn.tanh)
    self.batch_norm2 = layers.BatchNormalization()
    self.gp2 = GraphPool()

    self.dense1 = layers.Dense(256, activation=tf.nn.tanh)
    self.batch_norm3 = layers.BatchNormalization()
    self.readout = GraphGather(batch_size=batch_size, activation_fn=tf.nn.tanh)

    self.dense2 = layers.Dense(n_tasks*2)
    self.logits = layers.Reshape((n_tasks, 2))
    self.softmax = layers.Softmax()

  def call(self, inputs):
    gc1_output = self.gc1(inputs)
    batch_norm1_output = self.batch_norm1(gc1_output)
    gp1_output = self.gp1([batch_norm1_output] + inputs[1:])

    gc2_output = self.gc2([gp1_output] + inputs[1:])
    batch_norm2_output = self.batch_norm2(gc2_output)
    gp2_output = self.gp2([batch_norm2_output] + inputs[1:])

    dense1_output = self.dense1(gp2_output)
    batch_norm3_output = self.batch_norm3(dense1_output)
    readout_output = self.readout([batch_norm3_output] + inputs[1:])

    logits_output = self.logits(self.dense2(readout_output))
    return self.softmax(logits_output)
```

%% Cell type:markdown id: tags:

We can now see more clearly what is happening.  There are two convolutional blocks, each consisting of a `GraphConv`, followed by batch normalization, followed by a `GraphPool` to do max pooling.  We finish up with a dense layer, another batch normalization, a `GraphGather` to combine the data from all the different nodes, and a final dense layer to produce the global output.

Let's now create the DeepChem model, which will be a wrapper around the Keras model that we just created. We also specify the loss function so the model knows the objective to minimize.

%% Cell type:code id: tags:

``` python
model = dc.models.KerasModel(MyGraphConvModel(), loss=dc.models.losses.CategoricalCrossEntropy())
```

%% Cell type:markdown id: tags:

What are the inputs to this model?  A graph convolution requires a complete description of each molecule, including the list of nodes (atoms) and a description of which ones are bonded to each other.  In fact, if we inspect the dataset we see that the feature array contains Python objects of type `ConvMol`.

%% Cell type:code id: tags:

``` python
test_dataset.X[0]
```

%% Output

    <deepchem.feat.mol_graphs.ConvMol at 0x14d0b1650>

%% Cell type:markdown id: tags:

Models expect arrays of numbers as their inputs, not Python objects.  We must convert the `ConvMol` objects into the particular set of arrays expected by the `GraphConv`, `GraphPool`, and `GraphGather` layers.  Fortunately, the `ConvMol` class includes the code to do this, as well as to combine all the molecules in a batch to create a single set of arrays.

The following code creates a Python generator that, given a dataset, produces the lists of inputs, labels, and weights for each batch as NumPy arrays. `atom_features` holds a feature vector of length 75 for each atom. The other inputs are required to support minibatching in TensorFlow. `degree_slice` is an indexing convenience that makes it easy to locate atoms from all molecules with a given degree. `membership` determines the membership of atoms in molecules (atom `i` belongs to molecule `membership[i]`). `deg_adjs` is a list that contains adjacency lists grouped by atom degree. For more details, check out the [code](https://github.com/deepchem/deepchem/blob/master/deepchem/feat/mol_graphs.py).

%% Cell type:code id: tags:

``` python
from deepchem.metrics import to_one_hot
from deepchem.feat.mol_graphs import ConvMol
import numpy as np

def data_generator(dataset, epochs=1):
  for ind, (X_b, y_b, w_b, ids_b) in enumerate(dataset.iterbatches(batch_size, epochs,
                                                                   deterministic=False, pad_batches=True)):
    multiConvMol = ConvMol.agglomerate_mols(X_b)
    inputs = [multiConvMol.get_atom_features(), multiConvMol.deg_slice, np.array(multiConvMol.membership)]
    for i in range(1, len(multiConvMol.get_deg_adjacency_lists())):
      inputs.append(multiConvMol.get_deg_adjacency_lists()[i])
    labels = [to_one_hot(y_b.flatten(), 2).reshape(-1, n_tasks, 2)]
    weights = [w_b]
    yield (inputs, labels, weights)
```

%% Cell type:markdown id: tags:

Now we can train the model using `fit_generator(generator)`, which draws its training batches from the generator we've defined.

%% Cell type:code id: tags:

``` python
model.fit_generator(data_generator(train_dataset, epochs=50))
```

%% Output

    0.21941944122314452

%% Cell type:markdown id: tags:

Now that we have trained our graph convolutional network, let's evaluate its performance. We again use the generator we defined to supply the data for evaluation.

%% Cell type:code id: tags:

``` python
print('Training set score:', model.evaluate_generator(data_generator(train_dataset), [metric], transformers))
print('Test set score:', model.evaluate_generator(data_generator(test_dataset), [metric], transformers))
```

%% Output

    Training set score: {'roc_auc_score': 0.8425638289185731}
    Test set score: {'roc_auc_score': 0.7378436684114341}

%% Cell type:markdown id: tags:

Success! The model we've constructed behaves nearly identically to `GraphConvModel`. If you're looking to build your own custom models, you can follow the example we've provided here to do so. We hope to see exciting constructions from your end soon!

%% Cell type:markdown id: tags:

# Congratulations! Time to join the Community!

Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:

## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.

## Join the DeepChem Gitter
The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!