numdifftools package

Submodules

numdifftools.core module

numerical differentiation functions:

Derivative, Gradient, Jacobian, and Hessian

Author: Per A. Brodtkorb

Created: 01.08.2008
Copyright: (c) pab 2008
Licence: New BSD

Based on the MATLAB functions derivest.m, gradest.m, hessdiag.m, hessian.m and jacobianest.m, version 1.0 released 12/27/2006 by John D’Errico (e-mail: woodchips@rochester.rr.com)

Also based on the Python functions approx_fprime, approx_fprime_cs, approx_hess_cs, approx_hess1, approx_hess2 and approx_hess3 in the statsmodels.tools.numdiff module, released in 2014 and written by Josef Perktold.

numdifftools.core.dea3(v0, v1, v2, symmetric=False)[source]

Extrapolate a slowly convergent sequence

Parameters:

v0, v1, v2 : array-like

3 values of a convergent sequence to extrapolate

Returns:

result : array-like

extrapolated value

abserr : array-like

absolute error estimate
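For illustration, a minimal doctest sketch (not part of the original docstring), assuming dea3 accepts scalar inputs; it extrapolates three partial sums of the geometric series sum(0.5**k) = 2:

>>> import numpy as np
>>> from numdifftools.core import dea3
>>> s = lambda n: sum(0.5 ** np.arange(n + 1))   # partial sums converging to 2
>>> result, abserr = dea3(s(5), s(6), s(7))
>>> np.allclose(result, 2)
True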

class numdifftools.core.Derivative(f, step=None, method='central', order=2, n=1, full_output=False)[source]

Bases: numdifftools.core._Derivative

Calculate n-th derivative with finite difference approximation

Parameters:

f : function

function of one array f(x, *args, **kwds)

step : float, array-like or StepGenerator object, optional

Defines the spacing used in the approximation. Default is MinStepGenerator(base_step=step, step_ratio=None) if step is given or method is in [‘complex’, ‘multicomplex’]; otherwise MaxStepGenerator(step_ratio=None, num_extrap=14). The results are extrapolated if the StepGenerator generates more than 3 steps.

method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}

defines the method used in the approximation

order : int, optional

defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.

n : int, optional

Order of the derivative.

full_output : bool, optional

If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.

Returns:

der : ndarray

array of derivatives

See also

Gradient, Hessian

Notes

Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for larger n, so the truncation error can be eliminated by choosing a very small step. In particular, the first-order complex-step derivative avoids the round-off error associated with small steps because there is no subtraction. However, this method fails if f(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes one can only allow evaluation in the forward or backward direction.

For all methods, one should be careful not to decrease the step size too much, due to round-off errors.

Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.
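For illustration, a minimal sketch (not from the original docstring) of a higher-order derivative with the default central method; the second derivative of sin is -sin:

>>> import numpy as np
>>> import numdifftools as nd
>>> d2_sin = nd.Derivative(np.sin, n=2)
>>> np.allclose(d2_sin(np.pi / 2), -1)
True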

n

Order of the derivative.

class numdifftools.core.Jacobian(f, step=None, method='central', order=2, full_output=False)[source]

Bases: numdifftools.core.Gradient

Calculate Jacobian with finite difference approximation

Parameters:

f : function

function of one array f(x, *args, **kwds)

step : float, array-like or StepGenerator object, optional

Defines the spacing used in the approximation. Default is MinStepGenerator(base_step=step, step_ratio=None) if step is given or method is in [‘complex’, ‘multicomplex’]; otherwise MaxStepGenerator(step_ratio=None, num_extrap=14). The results are extrapolated if the StepGenerator generates more than 3 steps.

method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}

defines the method used in the approximation

order : int, optional

defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.

full_output : bool, optional

If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.

Returns:

jacob : array

Jacobian

Notes

Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for larger n, so the truncation error can be eliminated by choosing a very small step. In particular, the first-order complex-step derivative avoids the round-off error associated with small steps because there is no subtraction. However, this method fails if f(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes one can only allow evaluation in the forward or backward direction.

For all methods, one should be careful not to decrease the step size too much, due to round-off errors.

Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.

If f returns a 1d array, it returns a Jacobian. If f returns a 2d array (e.g., with a value for each observation), it returns a 3d array with the Jacobian of each observation, with shape xk x nobs x xk; i.e., the Jacobian of the first observation is [:, 0, :]. A minimal example for the 1d case is shown below.
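A minimal sketch (not from the original docstring) for a 1d-output function, where row i of the result holds the partial derivatives of output i:

>>> import numpy as np
>>> import numdifftools as nd
>>> fun = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
>>> Jfun = nd.Jacobian(fun)
>>> np.allclose(Jfun([2.0, 3.0]), [[4.0, 0.0], [3.0, 2.0]])
True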

class numdifftools.core.Gradient(f, step=None, method='central', order=2, full_output=False)[source]

Bases: numdifftools.core.Derivative

Calculate Gradient with finite difference approximation

Parameters:

f : function

function of one array f(x, *args, **kwds)

step : float, array-like or StepGenerator object, optional

Defines the spacing used in the approximation. Default is MinStepGenerator(base_step=step, step_ratio=None) if step is given or method is in [‘complex’, ‘multicomplex’]; otherwise MaxStepGenerator(step_ratio=None, num_extrap=14). The results are extrapolated if the StepGenerator generates more than 3 steps.

method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}

defines the method used in the approximation

order : int, optional

defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.

full_output : bool, optional

If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.

Returns:

grad : array

gradient

Notes

Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for larger n, so the truncation error can be eliminated by choosing a very small step. In particular, the first-order complex-step derivative avoids the round-off error associated with small steps because there is no subtraction. However, this method fails if f(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes one can only allow evaluation in the forward or backward direction.

For all methods, one should be careful not to decrease the step size too much, due to round-off errors.

Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.
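A minimal sketch (not from the original docstring); the gradient of sin(x0) + x1**2 at (0, 1) is (cos(0), 2*1) = (1, 2):

>>> import numpy as np
>>> import numdifftools as nd
>>> dfun = nd.Gradient(lambda x: np.sin(x[0]) + x[1] ** 2)
>>> np.allclose(dfun([0.0, 1.0]), [1.0, 2.0])
True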

class numdifftools.core.Hessian(f, step=None, method='central', full_output=False)[source]

Bases: numdifftools.core._Derivative

Calculate Hessian with finite difference approximation

Parameters:

f : function

function of one array f(x, *args, **kwds)

step : float, array-like or StepGenerator object, optional

Defines the spacing used in the approximation. Default is MinStepGenerator(base_step=step, step_ratio=None) if step is given or method is in [‘complex’, ‘multicomplex’]; otherwise MaxStepGenerator(step_ratio=None, num_extrap=14). The results are extrapolated if the StepGenerator generates more than 3 steps.

method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}

defines the method used in the approximation

full_output : bool, optional

If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.

Returns:

hess : ndarray

array of partial second derivatives, Hessian

See also

Derivative, Hessdiag

Notes

Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for larger n, so the truncation error can be eliminated by choosing a very small step. In particular, the first-order complex-step derivative avoids the round-off error associated with small steps because there is no subtraction. However, this method fails if f(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes one can only allow evaluation in the forward or backward direction.

For all methods, one should be careful not to decrease the step size too much, due to round-off errors.

Computes the Hessian according to method as: ‘forward’ (1), ‘central’ (2) and ‘complex’ (3):

(1) (f(x + d_j e_j + d_k e_k) - f(x + d_j e_j) - f(x + d_k e_k) + f(x)) / (d_j d_k)

(2) ((f(x + d_j e_j + d_k e_k) - f(x + d_j e_j - d_k e_k)) - (f(x - d_j e_j + d_k e_k) - f(x - d_j e_j - d_k e_k))) / (4 d_j d_k)

(3) imag(f(x + i d_j e_j + d_k e_k) - f(x + i d_j e_j - d_k e_k)) / (2 d_j d_k)

where e_j is a unit vector with element j equal to one and all other elements zero, and d_j is a scalar spacing steps_j.
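A minimal sketch (not from the original docstring); for the quadratic form below the Hessian is constant:

>>> import numpy as np
>>> import numdifftools as nd
>>> fun = lambda x: x[0] ** 2 + 3 * x[0] * x[1] + x[1] ** 2
>>> Hfun = nd.Hessian(fun)
>>> np.allclose(Hfun([1.0, 1.0]), [[2.0, 3.0], [3.0, 2.0]])
True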

class numdifftools.core.Hessdiag(f, step=None, method='central', order=2, full_output=False)[source]

Bases: numdifftools.core.Derivative

Calculate Hessian diagonal with finite difference approximation

Parameters:

f : function

function of one array f(x, *args, **kwds)

step : float, array-like or StepGenerator object, optional

Defines the spacing used in the approximation. Default is MinStepGenerator(base_step=step, step_ratio=None) if step is given or method is in [‘complex’, ‘multicomplex’]; otherwise MaxStepGenerator(step_ratio=None, num_extrap=14). The results are extrapolated if the StepGenerator generates more than 3 steps.

method : {‘central’, ‘complex’, ‘multicomplex’, ‘forward’, ‘backward’}

defines the method used in the approximation

order : int, optional

defines the order of the error term in the Taylor approximation used. For ‘central’ and ‘complex’ methods, it must be an even number.

full_output : bool, optional

If full_output is False, only the derivative is returned. If full_output is True, then (der, r) is returned, where der is the derivative and r is a Results object.

Returns:

hessdiag : array

hessian diagonal

Notes

Complex methods are usually the most accurate, provided the function to differentiate is analytic. The complex-step methods also require fewer steps than the other methods and can work very close to the support of a function. The complex-step derivative has truncation error O(steps**2) for n=1 and O(steps**4) for larger n, so the truncation error can be eliminated by choosing a very small step. In particular, the first-order complex-step derivative avoids the round-off error associated with small steps because there is no subtraction. However, this method fails if f(x) does not support complex numbers or involves non-analytic functions such as abs, max or min. Central difference methods are almost as accurate and have no restriction on the type of function. For this reason the ‘central’ method is the default, but sometimes one can only allow evaluation in the forward or backward direction.

For all methods, one should be careful not to decrease the step size too much, due to round-off errors.

Higher-order approximation methods will generally be more accurate, but may also suffer more from numerical problems. First-order methods are usually not recommended.
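A minimal sketch (not from the original docstring); only the diagonal second derivatives (6*x0, 2) are computed:

>>> import numpy as np
>>> import numdifftools as nd
>>> Hd = nd.Hessdiag(lambda x: x[0] ** 3 + x[1] ** 2)
>>> np.allclose(Hd([1.0, 1.0]), [6.0, 2.0])
True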

class numdifftools.core.MinStepGenerator(base_step=None, step_ratio=2, num_steps=None, offset=0, scale=None, num_extrap=0, use_exact_steps=True, check_num_steps=True)[source]

Bases: object

Generates a sequence of steps

where steps = base_step * step_ratio ** (np.arange(num_steps) + offset)

Parameters:

base_step : float, array-like, optional

Defines the base step. If None, then base_step is set to EPS**(1/scale)*max(log(1+|x|), 1), where x is supplied at runtime through the __call__ method.

step_ratio : real scalar, optional, default 2

Ratio between sequential steps generated. Note: ratio > 1. If None, then step_ratio is 2 for n=1; otherwise step_ratio is 1.6.

num_steps : scalar integer, optional, default n + order - 1 + num_extrap

defines the number of steps generated. It should be larger than n + order - 1

offset : real scalar, optional, default 0

offset to the base step

scale : real scalar, optional

scale used in base step. If not None it will override the default computed with the default_scale function.
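A minimal sketch (not from the original docstring) of passing a step generator to Derivative; it assumes MinStepGenerator is re-exported at the package level as it is in numdifftools.core:

>>> import numpy as np
>>> import numdifftools as nd
>>> step_gen = nd.MinStepGenerator(base_step=0.1, num_steps=10)
>>> d_exp = nd.Derivative(np.exp, step=step_gen)
>>> np.allclose(d_exp(0.0), 1.0)
True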

class numdifftools.core.MaxStepGenerator(step_max=2.0, step_ratio=2.0, num_steps=15, step_nom=None, offset=0, num_extrap=0, use_exact_steps=False, check_num_steps=True)[source]

Bases: numdifftools.core.MinStepGenerator

Generates a sequence of steps

where

steps = base_step * step_ratio ** (-np.arange(num_steps) + offset)
base_step = step_max * step_nom

Parameters:

step_max : float, array-like, optional, default 2.0

Defines the maximum step

step_ratio : real scalar, optional, default 2

Ratio between sequential steps generated. Note: Ratio > 1

num_steps : scalar integer, optional, default n + order - 1 + num_extrap

defines the number of steps generated. It should be larger than n + order - 1

step_nom : default maximum(log1p(abs(x)), 1)

Nominal step.

offset : real scalar, optional, default 0

offset to the base step: step_max * step_nom

class numdifftools.core.Richardson(step_ratio=2.0, step=1, order=1, num_terms=2)[source]

Bases: object

Extrapolates a sequence with Richardson’s method

Notes

Suppose you have a series expansion that goes like this:

L = f(h) + a0 * h^p_0 + a1 * h^p_1 + a2 * h^p_2 + ...

where p_i = order + step * i and f(h) -> L as h -> 0, but f(0) != L.

If we evaluate the right-hand side for different step sizes h, we can fit a polynomial to that sequence of approximations. This is exactly what this class does.
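To make this concrete, here is a hand-worked single Richardson step in plain NumPy (a sketch of the idea, not this class’s API): with order=1 and step=1 the leading error is a0 * h, so the combination 2*f(h/2) - f(h) cancels it:

>>> import numpy as np
>>> f = lambda h: (np.exp(h) - 1.0) / h    # forward-difference estimate of exp'(0) = 1
>>> f1, f2 = f(0.1), f(0.05)               # errors roughly a0*h and a0*h/2
>>> r = 2.0 * f2 - f1                      # leading O(h) error term cancels
>>> abs(r - 1.0) < abs(f1 - 1.0)
True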

extrapolate(sequence, steps)[source]

numdifftools.extrapolation module

Created on 28. aug. 2015

@author: pab

class numdifftools.extrapolation.Dea(limexp=3)[source]

Bases: object

LIMEXP is the maximum number of elements the epsilon table data can contain. The epsilon table is stored in the first (LIMEXP+2) entries of EPSTAB.

class numdifftools.extrapolation.Richardson(step_ratio=2.0, step=1, order=1, num_terms=2)[source]

Bases: object

Extrapolates a sequence with Richardson’s method

Notes

Suppose you have a series expansion that goes like this:

L = f(h) + a0 * h^p_0 + a1 * h^p_1 + a2 * h^p_2 + ...

where p_i = order + step * i and f(h) -> L as h -> 0, but f(0) != L.

If we evaluate the right-hand side for different step sizes h, we can fit a polynomial to that sequence of approximations. This is exactly what this class does.

extrapolate(sequence, steps)[source]
numdifftools.extrapolation.convolve(sequence, rule, **kwds)[source]

Wrapper around scipy.ndimage.convolve1d that allows complex input.

numdifftools.extrapolation.dea3(v0, v1, v2, symmetric=False)[source]

Extrapolate a slowly convergent sequence

Parameters:

v0, v1, v2 : array-like

3 values of a convergent sequence to extrapolate

Returns:

result : array-like

extrapolated value

abserr : array-like

absolute error estimate

numdifftools.extrapolation.test_dea()[source]

numdifftools.info module

Introduction to Numdifftools

Numdifftools is a suite of tools written in Python to solve automatic numerical differentiation problems in one or more variables. Finite differences are used in an adaptive manner, coupled with a Richardson extrapolation methodology, to provide a maximally accurate result. The user can configure many options, such as changing the order of the method or of the extrapolation, and can specify whether complex, multicomplex, central, forward or backward differences are used. The methods provided are:

Derivative:
Computes the derivative of order 1 through 10 of any scalar function.
Gradient:
Computes the gradient vector of a scalar function of one or more variables.
Jacobian:
Computes the Jacobian matrix of a vector-valued function of one or more variables.
Hessian:
Computes the Hessian matrix of all 2nd partial derivatives of a scalar function of one or more variables.
Hessdiag:
Computes only the diagonal elements of the Hessian matrix.

All of these methods also produce error estimates on the result.

Numdifftools also provides an easy-to-use interface to derivatives calculated with AlgoPy. AlgoPy stands for Algorithmic Differentiation in Python. The purpose of AlgoPy is the evaluation of higher-order derivatives in the forward and reverse modes of Algorithmic Differentiation (AD) of functions that are implemented as Python programs.

Documentation is at: http://numdifftools.readthedocs.org/

Code and issue tracker is at https://github.com/pbrod/numdifftools.

Latest stable release is at http://pypi.python.org/pypi/Numdifftools.

To test that the toolbox is working, paste the following into an interactive Python session:

import numdifftools as nd
nd.test(coverage=True, doctests=True)

Getting Started

Compute the 1st and 2nd derivative of exp(x) at x == 1:

>>> import numpy as np
>>> import numdifftools as nd
>>> fd = nd.Derivative(np.exp)        # 1st derivative
>>> fdd = nd.Derivative(np.exp, n=2)  # 2nd derivative
>>> np.allclose(fd(1), 2.7182818284590424)
True
>>> np.allclose(fdd(1), 2.7182818284590424)
True

Nonlinear least squares:

>>> xdata = np.reshape(np.arange(0,1,0.1),(-1,1))
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> Jfun = nd.Jacobian(fun)
>>> np.allclose(np.abs(Jfun([1,2,0.75])), 0) # should be numerically zero
True

Compute gradient of sum(x**2):

>>> fun = lambda x: np.sum(x**2)
>>> dfun = nd.Gradient(fun)
>>> dfun([1,2,3])
array([ 2.,  4.,  6.])

Compute the same with the easy-to-use interface to AlgoPy:

>>> import numdifftools.nd_algopy as nda
>>> import numpy as np
>>> fd = nda.Derivative(np.exp)        # 1st derivative
>>> fdd = nda.Derivative(np.exp, n=2)  # 2nd derivative
>>> np.allclose(fd(1), 2.7182818284590424)
True
>>> np.allclose(fdd(1), 2.7182818284590424)
True

Nonlinear least squares:

>>> xdata = np.reshape(np.arange(0,1,0.1),(-1,1))
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> Jfun = nda.Jacobian(fun, method='reverse')
>>> np.allclose(np.abs(Jfun([1,2,0.75])), 0) # should be numerically zero
True

Compute gradient of sum(x**2):

>>> fun = lambda x: np.sum(x**2)
>>> dfun = nda.Gradient(fun)
>>> dfun([1,2,3])
array([ 2.,  4.,  6.])

See also

scipy.misc.derivative

numdifftools.info.test_docstrings()[source]

numdifftools.limits module

Created on 27. aug. 2015

@author: pab

Author: John D’Errico, e-mail: woodchips@rochester.rr.com
Release: 1.0, Release date: 5/23/2008

class numdifftools.limits.Limit(f, step=None, method='above', order=4, full_output=False)[source]

Bases: object

Compute limit of a function at a given point

Parameters:

f : callable

function of one array f(z, *args, **kwds) to compute the limit for. The function, f, is assumed to return a result of the same shape and size as its input, z.

step: float, complex, array-like or StepGenerator object, optional

Defines the spacing used in the approximation. Default is MinStepGenerator(base_step=step, step_ratio=4)

method : {‘above’, ‘below’}

defines if the limit is taken from above or below

order: positive scalar integer, optional.

defines the order of the approximation used to find the specified limit. The order must be a member of [1, 2, 3, 4, 5, 6, 7, 8]; 4 is a good compromise.

Returns:

limit_fz : array-like

estimated limit of f(z) as z -> z0

info :

Only given if full_output is True and contains the following:

error estimate : ndarray

95% uncertainty estimate around the limit, such that abs(limit_fz - f(z0)*(z-z0)) < error_estimate

final_step: ndarray

final step used in approximation
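A minimal sketch (not from the original docstring), using the limit method documented below, on a function that is undefined at the point itself:

>>> import numpy as np
>>> from numdifftools.limits import Limit
>>> f = lambda x: np.sin(x) / x            # 0/0 at x == 0, but the limit is 1
>>> np.allclose(Limit(f).limit(0), 1)
True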

class info

Bases: tuple

error_estimate

Alias for field number 0

final_step

Alias for field number 1

index

Alias for field number 2

Limit.limit(x, *args, **kwds)[source]
class numdifftools.limits.MinStepGenerator(base_step=None, step_ratio=4.0, num_steps=None, offset=0, scale=1.2, use_exact_steps=True)[source]

Bases: object

Generates a sequence of steps

where steps = base_step * step_ratio ** (np.arange(num_steps) + offset)

Parameters:

base_step : float, array-like, optional

Defines the base step. If None, then base_step is set to EPS**(1/scale)*max(log(1+|x|), 1), where x is supplied at runtime through the __call__ method.

step_ratio : real scalar, optional, default 4

Ratio between sequential steps generated.

num_steps : scalar integer, optional, default n + order - 1 + num_extrap

defines the number of steps generated. It should be larger than n + order - 1

offset : real scalar, optional, default 0

offset to the base step

scale : real scalar, optional

scale used in base step. If not None it will override the default computed with the default_scale function.

numdifftools.limits.nom_step(x=None)[source]

Return nominal step

numdifftools.limits.test_docstrings()[source]
numdifftools.limits.valarray(shape, value=nan, typecode=None)[source]

Return an array of the given shape, filled with value.

numdifftools.multicomplex module

Created on 22. apr. 2015

@author: pab

References

Gregory Lantoine (2010), “A methodology for robust optimization of low-thrust trajectories in multi-body environments”, PhD thesis, Georgia Institute of Technology.

Gregory Lantoine, Ryan P. Russell and Thierry Dargent (2012), “Using multicomplex variables for automatic computation of high-order derivatives”, ACM Transactions on Mathematical Software, Vol. 38, No. 3, Article 16, April 2012, 21 pages.

M.E. Luna-Elizarraras, M. Shapiro, D.C. Struppa, A. Vajiac (2012), CUBO A Mathematical Journal, Vol. 14, No. 2, pp. 61-80, June 2012.

Adriaen Verheyleweghen (2014), “Computation of higher-order derivatives using the multi-complex step method”, project report, NTNU.

class numdifftools.multicomplex.bicomplex(z1, z2)[source]

Bases: object

Creates an instance of a bicomplex object. zeta = z1 + j*z2, where z1 and z2 are complex numbers.
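A minimal sketch (not from the original docstring) of the multicomplex-step idea behind this class, assuming bicomplex accepts scalar arguments, supports the * operator, and that imag12 is the ij-component: the ij-component of f(x + i*h + j*h), divided by h**2, approximates f''(x):

>>> import numpy as np
>>> from numdifftools.multicomplex import bicomplex
>>> h = 1e-5
>>> z = bicomplex(1.0 + 1j * h, h)      # zeta = x + i*h + j*h at x = 1
>>> d2 = (z * z * z).imag12 / h ** 2    # for f(x) = x**3, f''(1) = 6
>>> np.allclose(d2, 6.0)
True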

arccos()[source]
arccosh()[source]
arcsin()[source]
arcsinh()[source]
arctan()[source]
arctanh()[source]
arg_c()[source]
arg_c1p()[source]
static asarray(other)[source]
conjugate()[source]
cos()[source]
cosh()[source]
cot()[source]
coth()[source]
csc()[source]
csch()[source]
dot(other)[source]
exp()[source]
exp2()[source]
expm1()[source]
flat(index)[source]
imag
imag1
imag12
imag2
log()[source]
log10()[source]
log1p()[source]
log2()[source]
logaddexp(other)[source]
logaddexp2(other)[source]
static mat2bicomp(arr)[source]
mod_c()[source]

Complex modulus

norm()[source]
real
sec()[source]
sech()[source]
shape
sin()[source]
sinh()[source]
size
sqrt()[source]
tan()[source]
tanh()[source]
numdifftools.multicomplex.c_abs(z)[source]
numdifftools.multicomplex.c_atan2(x, y)[source]
numdifftools.multicomplex.c_max(x, y)[source]
numdifftools.multicomplex.c_min(x, y)[source]

numdifftools.nd_algopy module

Numdifftools.nd_algopy

This module provides an easy-to-use interface to derivatives calculated with AlgoPy. AlgoPy stands for Algorithmic Differentiation in Python.

The purpose of AlgoPy is the evaluation of higher-order derivatives in the forward and reverse modes of Algorithmic Differentiation (AD) of functions that are implemented as Python programs. A particular focus is on functions that contain numerical linear algebra operations, as they often appear in statistically motivated computations. The intended use of AlgoPy is easy prototyping at reasonable execution speeds. More precisely, for a typical program a directional derivative takes about 10 times as much time as the function evaluation. This is approximately also true for the gradient.

Algorithmic differentiation

Algorithmic differentiation (AD) is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

Algorithmic differentiation is not:

symbolic differentiation, nor numerical differentiation (the method of finite differences). These classical methods run into problems: symbolic differentiation leads to inefficient code (unless carefully done) and faces the difficulty of converting a computer program into a single expression, while numerical differentiation can introduce round-off errors in the discretization process and cancellation. Both classical methods have problems with calculating higher derivatives, where the complexity and errors increase. Finally, both classical methods are slow at computing the partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Algorithmic differentiation solves all of these problems.

Reference

Sebastian F. Walter and Lutz Lehmann (2013), “Algorithmic differentiation in Python with AlgoPy”, Journal of Computational Science, vol. 4, no. 5, pp. 334-344, http://www.sciencedirect.com/science/article/pii/S1877750311001013

https://en.wikipedia.org/wiki/Automatic_differentiation

https://pythonhosted.org/algopy/index.html

class numdifftools.nd_algopy.Derivative(f, n=1, method='forward')[source]

Bases: numdifftools.nd_algopy._Common

Calculate n-th derivative with Algorithmic Differentiation method

Parameters:

f : function

function of one array f(x, *args, **kwds)

n : int, optional

Order of the derivative.

method : string, optional {‘forward’, ‘reverse’}

defines method used in the approximation

Returns:

der : ndarray

array of derivatives

Notes

Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
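A minimal sketch (not from the original docstring), mirroring the finite-difference Derivative example above; AD gives the second derivative of sin to working precision:

>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
>>> d2_sin = nda.Derivative(np.sin, n=2)
>>> np.allclose(d2_sin(np.pi / 2), -1)
True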

class numdifftools.nd_algopy.Gradient(f, method='forward')[source]

Bases: numdifftools.nd_algopy._Common

Calculate Gradient with Algorithmic Differentiation method

Parameters:

f : function

function of one array f(x, *args, **kwds)

method : string, optional {‘forward’, ‘reverse’}

defines method used in the approximation

Returns:

grad : array

gradient

Notes

Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

class numdifftools.nd_algopy.Hessdiag(f, method='forward')[source]

Bases: numdifftools.nd_algopy.Hessian

Calculate Hessian diagonal with Algorithmic Differentiation method

Parameters:

f : function

function of one array f(x, *args, **kwds)

method : string, optional {‘forward’, ‘reverse’}

defines method used in the approximation

Returns:

hessdiag : ndarray

Hessian diagonal array of partial second order derivatives.

Notes

Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

class numdifftools.nd_algopy.Hessian(f, method='forward')[source]

Bases: numdifftools.nd_algopy._Common

Calculate Hessian with Algorithmic Differentiation method

Parameters:

f : function

function of one array f(x, *args, **kwds)

method : string, optional {‘forward’, ‘reverse’}

defines method used in the approximation

Returns:

hess : ndarray

array of partial second derivatives, Hessian

Notes

Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
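A minimal sketch (not from the original docstring), assuming the traced function may index its argument as the finite-difference examples do:

>>> import numpy as np
>>> import numdifftools.nd_algopy as nda
>>> fun = lambda x: x[0] * x[1] + x[0] ** 2
>>> Hfun = nda.Hessian(fun)
>>> np.allclose(Hfun([1.0, 2.0]), [[2.0, 1.0], [1.0, 0.0]])
True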

class numdifftools.nd_algopy.Jacobian(f, method='forward')[source]

Bases: numdifftools.nd_algopy._Common

Calculate Jacobian with Algorithmic Differentiation method

Parameters:

f : function

function of one array f(x, *args, **kwds)

method : string, optional {‘forward’, ‘reverse’}

defines method used in the approximation

Returns:

jacob : array

Jacobian

Notes

Algorithmic differentiation is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

numdifftools.nd_algopy.test_docstrings()[source]

numdifftools.run_benchmark module

numdifftools.test_functions module

Created on 17. mai 2015

@author: pab

numdifftools.test_functions.dcos(x)[source]
numdifftools.test_functions.ddcos(x)[source]
numdifftools.test_functions.get_function(fun_name, n=1)[source]

Module contents

The package docstring is identical to the Introduction to Numdifftools and Getting Started sections documented under the numdifftools.info module above.