Commit e61c5de5 authored by nitinprakash96

update content
%% Cell type:markdown id: tags:

### TensorGraph Layers and TensorFlow eager

%% Cell type:markdown id: tags:

In this tutorial we will look at how a TensorGraph layer works with TensorFlow eager execution.
But before that, let's see what exactly TensorFlow eager is.

%% Cell type:markdown id: tags:

Eager execution is an imperative, define-by-run interface where operations are executed immediately as they are called from Python. In other words, eager execution is a feature that makes TensorFlow execute operations immediately: concrete values are returned instead of a computational graph to be executed later.
As a result:
- It allows an imperative, NumPy-like coding style
- It provides fast debugging with immediate run-time errors and integration with Python tools
- It has strong support for higher-order gradients

%% Cell type:code id: tags:

``` python
import tensorflow as tf
import tensorflow.contrib.eager as tfe
```

%% Output

    /home/nitin/anaconda3/envs/deepchem/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
      from ._conv import register_converters as _register_converters

%% Cell type:markdown id: tags:

After importing the necessary modules, we invoke `enable_eager_execution()` at program startup.

%% Cell type:code id: tags:

``` python
tfe.enable_eager_execution()
```

%% Cell type:markdown id: tags:

Enabling eager execution changes how TensorFlow functions behave: `Tensor` objects return concrete values instead of being symbolic references to nodes in a static computational graph (non-eager mode). As a result, eager execution should be enabled at the beginning of a program.

%% Cell type:markdown id: tags:

Note that with eager execution enabled, these operations consume and return multi-dimensional arrays as `Tensor` objects, similar to NumPy `ndarrays`.

%% Cell type:markdown id: tags:

### Dense layer

%% Cell type:code id: tags:

``` python
import numpy as np
import deepchem as dc
from deepchem.models.tensorgraph.layers import Dense
from deepchem.models.tensorgraph import layers
```

%% Cell type:markdown id: tags:

In the following snippet we describe how to create a `Dense` layer in eager mode. The good thing about calling a layer as a function is that we don't have to call `create_tensor()` directly. This is identical to the TensorFlow API and causes no conflict. And since eager mode is enabled, it returns concrete tensors right away.

%% Cell type:code id: tags:

``` python
# Initialize parameters
in_dim = 2
out_dim = 3
batch_size = 10

inputs = np.random.rand(batch_size, in_dim).astype(np.float32)  # input

layer = layers.Dense(out_dim)  # provide the number of output values as a parameter; this creates a Dense layer
result = layer(inputs)  # get the output tensor
print(result)
```

%% Output

    tf.Tensor(
    [[-0.77381194  0.3746004  -0.40403765]
     [-0.15738854  0.22684044 -0.15630853]
     [-0.6225959   0.36593717 -0.35684004]
     [-1.1207535   0.5789795  -0.60311335]
     [-0.8380906   0.46433336 -0.46644312]
     [-0.5517248   0.38127697 -0.34426582]
     [-0.2866712   0.15383685 -0.1570929 ]
     [-0.7261716   0.52749324 -0.46574503]
     [-0.31120938  0.29527357 -0.2336567 ]
     [-0.29397526  0.16268769 -0.16352196]], shape=(10, 3), dtype=float32)
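
%% Cell type:markdown id: tags:

For intuition, a `Dense` layer (before any activation) computes an affine transform `y = x·W + b`, where `W` has shape `(in_dim, out_dim)`. The following is a minimal NumPy sketch of that computation, not DeepChem's implementation; `W` and `b` here are hypothetical, randomly initialized stand-ins for the layer's internal variables.

%% Cell type:code id: tags:

``` python
import numpy as np

rng = np.random.RandomState(0)

in_dim, out_dim, batch_size = 2, 3, 10
x = rng.rand(batch_size, in_dim).astype(np.float32)

# Hypothetical stand-ins for the layer's variables
W = rng.randn(in_dim, out_dim).astype(np.float32)
b = np.zeros(out_dim, dtype=np.float32)

y = x @ W + b  # the affine transform a Dense layer applies
print(y.shape)  # (10, 3)
```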

%% Cell type:markdown id: tags:

Calling the layer as a function works because `create_tensor()` is invoked by `__call__()`. `create_tensor()` itself performs no tensor operation on its own; it is a method of `Dense` that takes a list of input tensors as a parameter, along with a boolean `reshape` which, when `True`, tries to reshape the inputs so that they all have the same shape. This gives us the advantage of directly passing the tensor as a parameter while constructing a TensorGraph layer. Creating a second `Dense` layer should produce different results, since its variables are initialized independently.

%% Cell type:code id: tags:

``` python
layer2 = layers.Dense(out_dim)
result2 = layer2(inputs)

print(result2)
```

%% Output

    tf.Tensor(
    [[ 0.512575   -0.61529964  0.037683  ]
     [ 1.0153202  -1.3131894   0.14326955]
     [ 0.6359642  -0.8698902   0.12416548]
     [ 1.1896548  -1.4594902   0.11030222]
     [ 0.48279178 -0.6796962   0.1083065 ]
     [ 0.2902518  -0.39702374  0.05667521]
     [ 0.74205726 -0.9366434   0.08790456]
     [ 0.39368296 -0.5724373   0.1015433 ]
     [ 0.2737925  -0.36055726  0.04331717]
     [ 0.10049869 -0.14127527  0.02239158]], shape=(10, 3), dtype=float32)

%% Cell type:markdown id: tags:

### Example
We can also execute the layer in eager mode to compute its output as a function of inputs. If the layer defines any variables, they are created the first time it is invoked. This happens in exactly the same way that we would create a single layer in non-eager mode.

%% Cell type:markdown id: tags:

The following code snippet shows the basic architecture of how a TensorGraph layer can be created in TensorFlow eager mode.

%% Cell type:code id: tags:

``` python
x = layers.Dense(out_dim)(inputs)
print(x)
```

%% Output

    tf.Tensor(
    [[ 0.6063089  -0.33124802 -0.6670617 ]
     [-0.79654586 -0.5481317  -1.3282237 ]
     [ 0.09374315 -0.4724797  -1.0476099 ]
     [ 0.65573484 -0.5960071  -1.2544882 ]
     [ 0.29876447 -0.54582137 -1.1864793 ]
     [-0.26494026 -0.6005815  -1.3795348 ]
     [ 0.13266012 -0.17077649 -0.3663401 ]
     [-0.5054053  -0.8723711  -2.0188677 ]
     [-0.63919556 -0.5947314  -1.4130814 ]
     [ 0.10592984 -0.1908645  -0.41471013]], shape=(10, 3), dtype=float32)

%% Cell type:markdown id: tags:

### Conv1D layer

%% Cell type:markdown id: tags:

`Dense` is one of the layers defined in DeepChem. Along with it there are several others like `Conv1D`, `Conv2D`, `Conv3D`, etc. We show how to construct a `Conv1D` layer below.

%% Cell type:markdown id: tags:

This layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a tensor of outputs.
When using this layer as the first layer in a model, provide an `input_shape` argument (a tuple of integers or `None`).

%% Cell type:markdown id: tags:

When the argument `input_shape` is passed as a tuple of integers, e.g. `(2, 3)`, it means we are passing sequences of 2 three-dimensional vectors.
And when it is passed as `(None, 3)`, it means we want variable-length sequences of 3-dimensional vectors.
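
%% Cell type:markdown id: tags:

To make the shape bookkeeping concrete: with stride 1 and no padding, convolving a width-`w` sequence with a size-`k` kernel yields `w - k + 1` output positions. Below is a naive NumPy sketch of that computation; it is purely illustrative, not DeepChem's implementation, and the kernel is a hypothetical random stand-in for the layer's variables.

%% Cell type:code id: tags:

``` python
import numpy as np

rng = np.random.RandomState(0)

width, in_channels, filters, kernel_size, batch_size = 5, 2, 3, 2, 10
x = rng.rand(batch_size, width, in_channels).astype(np.float32)
kernel = rng.randn(kernel_size, in_channels, filters).astype(np.float32)

out_width = width - kernel_size + 1  # "valid" convolution, stride 1
out = np.zeros((batch_size, out_width, filters), dtype=np.float32)
for i in range(out_width):
    # Contract each (kernel_size, in_channels) window with the kernel
    window = x[:, i:i + kernel_size, :]
    out[:, i, :] = np.tensordot(window, kernel, axes=([1, 2], [0, 1]))

print(out.shape)  # (10, 4, 3)
```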

%% Cell type:code id: tags:

``` python
from deepchem.models.tensorgraph.layers import Conv1D
```

%% Cell type:code id: tags:

``` python
# Initialize parameters
width = 5
in_channels = 2
filters = 3
kernel_size = 2
batch_size = 10

inputs = np.random.rand(batch_size, width, in_channels).astype(
    np.float32)

layer = layers.Conv1D(filters, kernel_size)

result = layer(inputs)
print(result)
```

%% Output

    tf.Tensor(
    [[[ 0.16718751  0.41467714  0.53308594]
      [ 0.8072748   0.4133654  -0.087111  ]
      [ 1.0664566   0.0294899  -0.45529532]
      [ 0.664975    0.44647017  0.48668623]]
    
     [[ 0.7809639   0.05260526 -0.52216375]
      [ 0.38356018  0.32880962  0.38497856]
      [ 0.8670641   0.06841787 -0.4043518 ]
      [ 0.31549102  0.27886432  0.15908808]]
    
     [[ 0.73353016  0.09838264 -0.04410204]
      [ 0.8891646   0.32349965  0.32174024]
      [ 0.8444354   0.22711298 -0.46212494]
      [ 0.51807076  0.14952874 -0.00364152]]
    
     [[ 0.9953808   0.23604284 -0.10303459]
      [ 1.107084    0.23554802 -0.07045144]
      [ 0.6639515   0.49818707  0.06784821]
      [ 1.0647542   0.2207219  -0.25918287]]
    
     [[ 0.7775067   0.3673722   0.29001912]
      [ 0.8676886   0.2504439  -0.40269825]
      [ 0.79185313  0.44505388  0.3931317 ]
      [ 1.2621312   0.19907051 -0.502853  ]]
    
     [[ 0.3722831   0.3902715   0.44195828]
      [ 0.8906034   0.302661   -0.19120318]
      [ 1.0449145   0.12299128 -0.17524081]
      [ 1.0373979   0.3089181   0.20360798]]
    
     [[ 0.6527485   0.10342199 -0.6394097 ]
      [ 0.4306368   0.16461325  0.24278429]
      [ 0.6019457   0.22662912 -0.0750919 ]
      [ 0.61568844  0.29688197  0.09590039]]
    
     [[ 0.50753975 -0.01569382 -0.27454144]
      [ 0.29632485  0.2529507   0.3574062 ]
      [ 0.6842948   0.49452925  0.23067072]
      [ 1.2439585   0.06419355 -0.54859996]]
    
     [[ 0.8870867   0.06665488 -0.61154896]
      [ 0.74835956 -0.00460633  0.10247512]
      [ 0.6352314   0.3038729   0.17522144]
      [ 0.5640377   0.3927437  -0.0408814 ]]
    
     [[ 0.77545214  0.3243935   0.3093651 ]
      [ 0.63182473  0.51197904 -0.05849534]
      [ 0.79702044 -0.00464861 -0.6104475 ]
      [ 0.16453324  0.20325702  0.2101992 ]]], shape=(10, 4, 3), dtype=float32)

%% Cell type:markdown id: tags:

Again, it should be noted that creating a second `Conv1D` layer would produce different results.

%% Cell type:markdown id: tags:

So that's how we invoke different DeepChem layers in eager mode.

Another interesting point is that we can mix TensorFlow layers and DeepChem layers. Since they all take tensors as inputs and return tensors as outputs, you can take the output from one kind of layer and pass it as input to a different kind of layer. It should be noted, however, that TensorFlow layers can't be added to a TensorGraph.

%% Cell type:markdown id: tags:

### Gradients

%% Cell type:markdown id: tags:

Finding gradients in eager mode is much like the `autograd` API, and the computational flow is clean and logical.
Because different operations can occur during each call, all forward operations are recorded to a tape, which is then played backwards when computing gradients. After the gradients have been computed, the tape is discarded.
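
%% Cell type:markdown id: tags:

To make the tape idea concrete, here is a toy reverse-mode sketch in plain Python; it is purely illustrative, not TensorFlow's implementation. Each forward operation appends a backward step to a tape, and playing the tape in reverse accumulates gradients.

%% Cell type:code id: tags:

``` python
class Var:
    """A scalar value that participates in taped operations."""
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

tape = []  # backward closures recorded during the forward pass

def mul(a, b):
    out = Var(a.value * b.value)
    def backward():
        a.grad += b.value * out.grad  # d(ab)/da = b
        b.grad += a.value * out.grad  # d(ab)/db = a
    tape.append(backward)
    return out

def add(a, b):
    out = Var(a.value + b.value)
    def backward():
        a.grad += out.grad
        b.grad += out.grad
    tape.append(backward)
    return out

# f(x) = x * x + x  ->  df/dx = 2x + 1
x = Var(3.0)
y = add(mul(x, x), x)

# Play the tape backwards to compute gradients, then discard it
y.grad = 1.0
for backward in reversed(tape):
    backward()
tape.clear()

print(y.value, x.grad)  # 12.0 7.0
```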

%% Cell type:code id: tags:

``` python
def dense_squared(x):
    # Note: `x` is unused here; the layers consume the global `inputs`,
    # so the gradient with respect to `x` comes back as [None] below.
    return Dense(1)(Dense(1)(inputs))

grad = tfe.gradients_function(dense_squared)

print(dense_squared(3.0))
print(grad(3.0))
```

%% Output

    tf.Tensor(
    [[-0.0588641 ]
     [-0.19230397]], shape=(2, 1), dtype=float32)
    [None]

%% Cell type:markdown id: tags:

In the above example, the `gradients_function` call takes a Python function, `dense_squared()`, as an argument and returns a Python callable that computes the partial derivatives of `dense_squared()` with respect to its inputs.
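
%% Cell type:markdown id: tags:

A gradient function like the one `gradients_function` returns can always be sanity-checked numerically. Below is a central-difference sketch in plain Python, independent of TensorFlow; `numeric_grad` is a hypothetical helper, not part of any library.

%% Cell type:code id: tags:

``` python
def numeric_grad(f, x, eps=1e-5):
    """Central-difference approximation of df/dx at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# d/dx x^2 at x = 3 is 6
print(numeric_grad(lambda x: x * x, 3.0))  # ≈ 6.0
```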