Fast Evaluation

To achieve fast evaluation we need to precompute some values and preallocate intermediate storage. For this there are GradientConfig and JacobianConfig:

GradientConfig(f::Polynomial{T}, [x::AbstractVector{S}])

A data structure with which the gradient of a Polynomial f can be evaluated efficiently. Note that x is only used to determine the output type of f(x).

GradientConfig(f::Polynomial{T}, [S])

Instead of a vector x, a type S can also be given directly.

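A minimal usage sketch (how a Polynomial is constructed is package-specific, so here we assume a polynomial g in two variables is already available):

```julia
# Assuming g::Polynomial is already constructed.
# Precompute and preallocate once:
cfg = GradientConfig(g, Float64)  # give the input element type directly ...
x = rand(2)
cfg2 = GradientConfig(g, x)       # ... or let a sample vector determine it

# cfg can then be reused across many evaluations at different points.
```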
JacobianConfig(F::Vector{Polynomial{T}}, [x::AbstractVector{S}])

A data structure with which the Jacobian of a vector F of Polynomials can be evaluated efficiently. Note that x is only used to determine the output type of F(x).

JacobianConfig(F::Vector{Polynomial{T}}, [S])

Instead of a vector x, a type S can also be given directly.

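Analogously for systems of polynomials, a sketch (again assuming a vector F of Polynomials in two variables is already constructed):

```julia
# Assuming F::Vector{<:Polynomial} is already constructed.
cfg = JacobianConfig(F, Float64)  # precompute + preallocate once
x = rand(2)
evaluate(F, x, cfg)               # values of all polynomials in F at x
```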

Evaluation

evaluate(p::Polynomial{T}, x::AbstractVector{T})

Evaluate p at x, i.e. compute $p(x)$. A Polynomial is also callable, so you can evaluate it via p(x) as well.

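For example (a sketch; the construction of p is package-specific and omitted here):

```julia
# Assuming p::Polynomial is already constructed:
x = [3.0, 1.0]
evaluate(p, x) == p(x)  # both compute p(x)
```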
evaluate(g, x, cfg::GradientConfig [, precomputed=false])

Evaluate g at x using the precomputed values in cfg. Note that this is usually significantly faster than evaluate(g, x).

Example

cfg = GradientConfig(g)
evaluate(g, x, cfg)

With precomputed=true we rely on the previous intermediate results in cfg. The result is therefore only correct if you previously called evaluate or gradient with the same x.

evaluate(F, x, cfg::JacobianConfig [, precomputed=false])

Evaluate the system F at x using the precomputed values in cfg. Note that this is usually significantly faster than map(f -> evaluate(f, x), F).

Example

cfg = JacobianConfig(F)
evaluate(F, x, cfg)

With precomputed=true we rely on the previous intermediate results in cfg. The result is therefore only correct if you previously called evaluate or jacobian with the same x.

evaluate!(u, F, x, cfg::JacobianConfig [, precomputed=false])

Evaluate the system F at x using the precomputed values in cfg and store the result in u. Note that this is usually significantly faster than map!(f -> evaluate(f, x), u, F).

Example

cfg = JacobianConfig(F)
evaluate!(u, F, x, cfg)

With precomputed=true we rely on the previous intermediate results in cfg. The result is therefore only correct if you previously called evaluate or jacobian with the same x.


Derivatives

gradient(g, x, cfg::GradientConfig[, precomputed=false])

Compute the gradient of g at x using the precomputed values in cfg.

Example

cfg = GradientConfig(g)
gradient(g, x, cfg)

With precomputed=true we rely on the previous intermediate results in cfg. The result is therefore only correct if you previously called evaluate or gradient with the same x.

gradient(r::GradientDiffResult)

Get the currently stored gradient in r.

gradient!(u, g, x, cfg::GradientConfig [, precomputed=false])

Compute the gradient of g at x using the precomputed values in cfg and store the result in u.

Example

cfg = GradientConfig(g)
gradient!(u, g, x, cfg)

With precomputed=true we rely on the previous intermediate results in cfg. The result is therefore only correct if you previously called evaluate or gradient with the same x.

gradient!(r::GradientDiffResult, g, x, cfg::GradientConfig)

Compute $g(x)$ and the gradient of g at x at once, using the precomputed values in cfg, and store the result in r. This is faster than computing both values separately.

Example

cfg = GradientConfig(g)
r = GradientDiffResult(cfg)
gradient!(r, g, x, cfg)

value(r) == g(x)
gradient(r) == gradient(g, x, cfg)
jacobian(F, x, cfg::JacobianConfig [, precomputed=false])

Compute the Jacobian of F at x using the precomputed values in cfg.

Example

cfg = JacobianConfig(F)
jacobian(F, x, cfg)

With precomputed=true we rely on the previous intermediate results in cfg. The result is therefore only correct if you previously called evaluate or jacobian with the same x.

jacobian!(u, F, x, cfg::JacobianConfig [, precomputed=false])

Compute the Jacobian of F at x using the precomputed values in cfg and store the result in u.

Example

cfg = JacobianConfig(F)
jacobian!(u, F, x, cfg)

With precomputed=true we rely on the previous intermediate results in cfg. The result is therefore only correct if you previously called evaluate or jacobian with the same x.

jacobian!(r::JacobianDiffResult, F, x, cfg::JacobianConfig)

Compute $F(x)$ and the Jacobian of F at x at once, using the precomputed values in cfg, and store the result in r. This is faster than computing both values separately.

Example

cfg = JacobianConfig(F)
r = JacobianDiffResult(cfg)
jacobian!(r, F, x, cfg)

value(r) == map(f -> f(x), F)
jacobian(r) == jacobian(F, x, cfg)

DiffResults

GradientDiffResult(cfg::GradientConfig)

During the computation of $∇g(x)$ we compute nearly everything we need for the evaluation of $g(x)$. GradientDiffResult allocates memory to hold both values. This structure also signals gradient! to store $g(x)$ and $∇g(x)$.

Example

cfg = GradientConfig(g, x)
r = GradientDiffResult(cfg)
gradient!(r, g, x, cfg)

value(r) == g(x)
gradient(r) == gradient(g, x, cfg)
GradientDiffResult(grad::AbstractVector)

Allocate the memory to hold the gradient yourself.

JacobianDiffResult(cfg::JacobianConfig)

During the computation of the Jacobian $J_F(x)$ we compute nearly everything we need for the evaluation of $F(x)$. JacobianDiffResult allocates memory to hold both values. This structure also signals jacobian! to store $F(x)$ and $J_F(x)$.

Example

cfg = JacobianConfig(F, x)
r = JacobianDiffResult(cfg)
jacobian!(r, F, x, cfg)

value(r) == map(f -> f(x), F)
jacobian(r) == jacobian(F, x, cfg)
JacobianDiffResult(value::AbstractVector, jacobian::AbstractMatrix)

Allocate the memory to hold the value and the Jacobian yourself.

value(r::GradientDiffResult)

Get the currently stored value in r.
