/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.commons.math3.ode.nonstiff;
import org.apache.commons.math3.Field;
import org.apache.commons.math3.RealFieldElement;
import org.apache.commons.math3.exception.DimensionMismatchException;
import org.apache.commons.math3.exception.MaxCountExceededException;
import org.apache.commons.math3.exception.NoBracketingException;
import org.apache.commons.math3.exception.NumberIsTooSmallException;
import org.apache.commons.math3.linear.Array2DRowFieldMatrix;
import org.apache.commons.math3.linear.FieldMatrix;
import org.apache.commons.math3.ode.FieldExpandableODE;
import org.apache.commons.math3.ode.FieldODEState;
import org.apache.commons.math3.ode.FieldODEStateAndDerivative;
import org.apache.commons.math3.util.MathArrays;

/**
* This class implements explicit Adams-Bashforth integrators for Ordinary
* Differential Equations.
*
* <p>Adams-Bashforth methods (in fact due to Adams alone) are explicit
* multistep ODE solvers. This implementation is a variation of the classical
* one: it uses adaptive stepsize to implement error control, whereas
 * classical implementations use a fixed step size. The value of the state vector
* at step n+1 is a simple combination of the value at step n and of the
* derivatives at steps n, n-1, n-2 ... Depending on the number k of previous
* steps one wants to use for computing the next value, different formulas
* are available:</p>
* <ul>
* <li>k = 1: y<sub>n+1</sub> = y<sub>n</sub> + h y'<sub>n</sub></li>
* <li>k = 2: y<sub>n+1</sub> = y<sub>n</sub> + h (3y'<sub>n</sub>-y'<sub>n-1</sub>)/2</li>
* <li>k = 3: y<sub>n+1</sub> = y<sub>n</sub> + h (23y'<sub>n</sub>-16y'<sub>n-1</sub>+5y'<sub>n-2</sub>)/12</li>
* <li>k = 4: y<sub>n+1</sub> = y<sub>n</sub> + h (55y'<sub>n</sub>-59y'<sub>n-1</sub>+37y'<sub>n-2</sub>-9y'<sub>n-3</sub>)/24</li>
* <li>...</li>
* </ul>
*
 * <p>A k-step Adams-Bashforth method is of order k.</p>
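 *
 * <p>For instance, the k = 2 formula above can be applied directly in a fixed-step setting.
 * The fragment below integrates the hypothetical scalar ODE y' = -y with plain doubles,
 * purely as an illustration of that formula (this class itself works differently, as
 * described below):</p>
 * <pre>{@code
 * double h = 0.1;                       // fixed step size, for illustration only
 * double y = 1.0;                       // y_n, initial value
 * double fPrev = -y;                    // y'_{n-1}, bootstrapped so the first step reduces to Euler
 * for (int n = 0; n < 100; ++n) {
 *     double f = -y;                    // y'_n for the assumed ODE y' = -y
 *     y += h * (3 * f - fPrev) / 2;     // y_{n+1} = y_n + h (3 y'_n - y'_{n-1}) / 2
 *     fPrev = f;
 * }
 * }</pre>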
*
* <h3>Implementation details</h3>
*
* <p>We define scaled derivatives s<sub>i</sub>(n) at step n as:
* <pre>
* s<sub>1</sub>(n) = h y'<sub>n</sub> for first derivative
* s<sub>2</sub>(n) = h<sup>2</sup>/2 y''<sub>n</sub> for second derivative
* s<sub>3</sub>(n) = h<sup>3</sup>/6 y'''<sub>n</sub> for third derivative
* ...
* s<sub>k</sub>(n) = h<sup>k</sup>/k! y<sup>(k)</sup><sub>n</sub> for k<sup>th</sup> derivative
* </pre></p>
*
* <p>The definitions above use the classical representation with several previous first
 * derivatives. Let's define
* <pre>
* q<sub>n</sub> = [ s<sub>1</sub>(n-1) s<sub>1</sub>(n-2) ... s<sub>1</sub>(n-(k-1)) ]<sup>T</sup>
* </pre>
* (we omit the k index in the notation for clarity). With these definitions,
* Adams-Bashforth methods can be written:
* <ul>
* <li>k = 1: y<sub>n+1</sub> = y<sub>n</sub> + s<sub>1</sub>(n)</li>
* <li>k = 2: y<sub>n+1</sub> = y<sub>n</sub> + 3/2 s<sub>1</sub>(n) + [ -1/2 ] q<sub>n</sub></li>
* <li>k = 3: y<sub>n+1</sub> = y<sub>n</sub> + 23/12 s<sub>1</sub>(n) + [ -16/12 5/12 ] q<sub>n</sub></li>
* <li>k = 4: y<sub>n+1</sub> = y<sub>n</sub> + 55/24 s<sub>1</sub>(n) + [ -59/24 37/24 -9/24 ] q<sub>n</sub></li>
* <li>...</li>
* </ul></p>
*
* <p>Instead of using the classical representation with first derivatives only (y<sub>n</sub>,
* s<sub>1</sub>(n) and q<sub>n</sub>), our implementation uses the Nordsieck vector with
* higher degrees scaled derivatives all taken at the same step (y<sub>n</sub>, s<sub>1</sub>(n)
* and r<sub>n</sub>) where r<sub>n</sub> is defined as:
* <pre>
* r<sub>n</sub> = [ s<sub>2</sub>(n), s<sub>3</sub>(n) ... s<sub>k</sub>(n) ]<sup>T</sup>
* </pre>
* (here again we omit the k index in the notation for clarity)
* </p>
*
* <p>Taylor series formulas show that for any index offset i, s<sub>1</sub>(n-i) can be
* computed from s<sub>1</sub>(n), s<sub>2</sub>(n) ... s<sub>k</sub>(n), the formula being exact
* for degree k polynomials.
* <pre>
* s<sub>1</sub>(n-i) = s<sub>1</sub>(n) + ∑<sub>j>0</sub> (j+1) (-i)<sup>j</sup> s<sub>j+1</sub>(n)
* </pre>
* The previous formula can be used with several values for i to compute the transform between
* classical representation and Nordsieck vector. The transform between r<sub>n</sub>
* and q<sub>n</sub> resulting from the Taylor series formulas above is:
* <pre>
* q<sub>n</sub> = s<sub>1</sub>(n) u + P r<sub>n</sub>
* </pre>
* where u is the [ 1 1 ... 1 ]<sup>T</sup> vector and P is the (k-1)×(k-1) matrix built
* with the (j+1) (-i)<sup>j</sup> terms with i being the row number starting from 1 and j being
* the column number starting from 1:
* <pre>
 *        [  -2   3   -4    5  ... ]
 *        [  -4  12  -32   80  ... ]
 *   P =  [  -6  27 -108  405  ... ]
 *        [  -8  48 -256 1280  ... ]
 *        [  ...                   ]
* </pre></p>
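 *
 * <p>As an illustration only (not part of this class's API), the entries of P can be
 * generated directly from the (j+1) (-i)<sup>j</sup> rule above; the sketch below uses
 * plain doubles and an assumed step count k purely for demonstration:</p>
 * <pre>{@code
 * int k = 5;                                // assumed number of steps, for illustration only
 * double[][] p = new double[k - 1][k - 1];
 * for (int i = 1; i <= k - 1; ++i) {        // row index, starting from 1
 *     for (int j = 1; j <= k - 1; ++j) {    // column index, starting from 1
 *         p[i - 1][j - 1] = (j + 1) * Math.pow(-i, j);
 *     }
 * }
 * // the first row is then [ -2  3  -4  5 ], matching the matrix shown above
 * }</pre>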
*
* <p>Using the Nordsieck vector has several advantages:
* <ul>
* <li>it greatly simplifies step interpolation as the interpolator mainly applies
* Taylor series formulas,</li>
* <li>it simplifies step changes that occur when discrete events that truncate
* the step are triggered,</li>
 * <li>it allows extending the methods to support adaptive stepsize.</li>
* </ul></p>
*
* <p>The Nordsieck vector at step n+1 is computed from the Nordsieck vector at step n as follows:
* <ul>
* <li>y<sub>n+1</sub> = y<sub>n</sub> + s<sub>1</sub>(n) + u<sup>T</sup> r<sub>n</sub></li>
* <li>s<sub>1</sub>(n+1) = h f(t<sub>n+1</sub>, y<sub>n+1</sub>)</li>
* <li>r<sub>n+1</sub> = (s<sub>1</sub>(n) - s<sub>1</sub>(n+1)) P<sup>-1</sup> u + P<sup>-1</sup> A P r<sub>n</sub></li>
* </ul>
 * where A is a row-shifting matrix (the lower left part is an identity matrix):
* <pre>
 *        [ 0 0   ...  0 0 | 0 ]
 *        [ ---------------+---]
 *        [ 1 0   ...  0 0 | 0 ]
 *   A =  [ 0 1   ...  0 0 | 0 ]
 *        [       ...      | 0 ]
 *        [ 0 0   ...  1 0 | 0 ]
 *        [ 0 0   ...  0 1 | 0 ]
* </pre></p>
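 *
 * <p>For illustration, the update above can be sketched for a one-dimensional state with
 * plain doubles (all names below, including the hypothetical derivative function f and the
 * precomputed arrays pInvU for P<sup>-1</sup>u and pInvAP for P<sup>-1</sup> A P, are
 * assumptions of this sketch, not fields of this class):</p>
 * <pre>{@code
 * double yNext = y + s1;                   // y_{n+1} = y_n + s_1(n) + u^T r_n
 * for (double ri : r) {
 *     yNext += ri;                         // u is all ones, so u^T r_n is the sum of r_n components
 * }
 * double s1Next = h * f(tNext, yNext);     // s_1(n+1) = h f(t_{n+1}, y_{n+1})
 * double[] rNext = new double[r.length];
 * for (int i = 0; i < r.length; ++i) {     // r_{n+1} = (s_1(n) - s_1(n+1)) P^-1 u + P^-1 A P r_n
 *     rNext[i] = (s1 - s1Next) * pInvU[i];
 *     for (int j = 0; j < r.length; ++j) {
 *         rNext[i] += pInvAP[i][j] * r[j];
 *     }
 * }
 * }</pre>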
*
 * <p>The P<sup>-1</sup>u vector and the P<sup>-1</sup> A P matrix do not depend on the state;
 * they depend only on k and are therefore precomputed once and for all.</p>
*
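 * <p>A minimal usage sketch (the field type, tolerances, time range and the user-supplied
 * problem below are illustrative assumptions, not requirements of this class):</p>
 * <pre>{@code
 * FirstOrderFieldDifferentialEquations<Decimal64> ode = ...; // user-supplied problem
 * Decimal64[] y0 = ...;                                      // user-supplied initial state
 * AdamsBashforthFieldIntegrator<Decimal64> integrator =
 *     new AdamsBashforthFieldIntegrator<Decimal64>(Decimal64Field.getInstance(), 4,
 *                                                  1.0e-6, 100.0, 1.0e-10, 1.0e-10);
 * FieldODEStateAndDerivative<Decimal64> finalState =
 *     integrator.integrate(new FieldExpandableODE<Decimal64>(ode),
 *                          new FieldODEState<Decimal64>(new Decimal64(0.0), y0),
 *                          new Decimal64(16.0));
 * }</pre>
 *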
* @param <T> the type of the field elements
* @since 3.6
*/
public class AdamsBashforthFieldIntegrator<T extends RealFieldElement<T>> extends AdamsFieldIntegrator<T> {

    /** Integrator method name. */
    private static final String METHOD_NAME = "Adams-Bashforth";

    /**
     * Build an Adams-Bashforth integrator with the given order and step control parameters.
     * @param field field to which the time and state vector elements belong
     * @param nSteps number of steps of the method excluding the one being computed
     * @param minStep minimal step (sign is irrelevant, regardless of
     * integration direction, forward or backward), the last step can
     * be smaller than this
     * @param maxStep maximal step (sign is irrelevant, regardless of
     * integration direction, forward or backward), the last step can
     * be smaller than this
     * @param scalAbsoluteTolerance allowed absolute error
     * @param scalRelativeTolerance allowed relative error
     * @exception NumberIsTooSmallException if order is 1 or less
     */
    public AdamsBashforthFieldIntegrator(final Field<T> field, final int nSteps,
                                         final double minStep, final double maxStep,
                                         final double scalAbsoluteTolerance,
                                         final double scalRelativeTolerance)
        throws NumberIsTooSmallException {
        super(field, METHOD_NAME, nSteps, nSteps, minStep, maxStep,
              scalAbsoluteTolerance, scalRelativeTolerance);
    }

    /**
     * Build an Adams-Bashforth integrator with the given order and step control parameters.
     * @param field field to which the time and state vector elements belong
     * @param nSteps number of steps of the method excluding the one being computed
     * @param minStep minimal step (sign is irrelevant, regardless of
     * integration direction, forward or backward), the last step can
     * be smaller than this
     * @param maxStep maximal step (sign is irrelevant, regardless of
     * integration direction, forward or backward), the last step can
     * be smaller than this
     * @param vecAbsoluteTolerance allowed absolute error
     * @param vecRelativeTolerance allowed relative error
     * @exception IllegalArgumentException if order is 1 or less
     */
    public AdamsBashforthFieldIntegrator(final Field<T> field, final int nSteps,
                                         final double minStep, final double maxStep,
                                         final double[] vecAbsoluteTolerance,
                                         final double[] vecRelativeTolerance)
        throws IllegalArgumentException {
        super(field, METHOD_NAME, nSteps, nSteps, minStep, maxStep,
              vecAbsoluteTolerance, vecRelativeTolerance);
    }

    /** Estimate error.
     * <p>
     * Error is estimated by interpolating back to previous state using
     * the state Taylor expansion and comparing to real previous state.
     * </p>
     * @param previousState state vector at step start
     * @param predictedState predicted state vector at step end
     * @param predictedScaled predicted value of the scaled derivatives at step end
     * @param predictedNordsieck predicted value of the Nordsieck vector at step end
     * @return estimated normalized local discretization error
     */
    private T errorEstimation(final T[] previousState,
                              final T[] predictedState,
                              final T[] predictedScaled,
                              final FieldMatrix<T> predictedNordsieck) {
        T error = getField().getZero();
        for (int i = 0; i < mainSetDimension; ++i) {
            final T yScale = predictedState[i].abs();
            final T tol = (vecAbsoluteTolerance == null) ?
                          yScale.multiply(scalRelativeTolerance).add(scalAbsoluteTolerance) :
                          yScale.multiply(vecRelativeTolerance[i]).add(vecAbsoluteTolerance[i]);
            // apply Taylor formula from high order to low order,
            // for the sake of numerical accuracy
            T variation = getField().getZero();
            int sign = predictedNordsieck.getRowDimension() % 2 == 0 ? -1 : 1;
            for (int k = predictedNordsieck.getRowDimension() - 1; k >= 0; --k) {
                variation = variation.add(predictedNordsieck.getEntry(k, i).multiply(sign));
                sign = -sign;
            }
            variation = variation.subtract(predictedScaled[i]);
            final T ratio = predictedState[i].subtract(previousState[i]).add(variation).divide(tol);
            error = error.add(ratio.multiply(ratio));
        }
        return error.divide(mainSetDimension).sqrt();
    }

    /** {@inheritDoc} */
    @Override
    public FieldODEStateAndDerivative<T> integrate(final FieldExpandableODE<T> equations,
                                                   final FieldODEState<T> initialState,
                                                   final T finalTime)
        throws NumberIsTooSmallException, DimensionMismatchException,
               MaxCountExceededException, NoBracketingException {
        sanityChecks(initialState, finalTime);
        final T t0 = initialState.getTime();
        final T[] y = equations.getMapper().mapState(initialState);
        setStepStart(initIntegration(equations, t0, y, finalTime));
        final boolean forward = finalTime.subtract(initialState.getTime()).getReal() > 0;
        // compute the initial Nordsieck vector using the configured starter integrator
        start(equations, getStepStart(), finalTime);
        // reuse the step that was chosen by the starter integrator
        FieldODEStateAndDerivative<T> stepStart = getStepStart();
        FieldODEStateAndDerivative<T> stepEnd =
            AdamsFieldStepInterpolator.taylor(stepStart,
                                              stepStart.getTime().add(getStepSize()),
                                              getStepSize(), scaled, nordsieck);
        // main integration loop
        setIsLastStep(false);
        do {
            T[] predictedY = null;
            final T[] predictedScaled = MathArrays.buildArray(getField(), y.length);
            Array2DRowFieldMatrix<T> predictedNordsieck = null;
            T error = getField().getZero().add(10);
            while (error.subtract(1.0).getReal() >= 0.0) {
                // predict a first estimate of the state at step end
                predictedY = stepEnd.getState();
                // evaluate the derivative
                final T[] yDot = computeDerivatives(stepEnd.getTime(), predictedY);
                // predict Nordsieck vector at step end
                for (int j = 0; j < predictedScaled.length; ++j) {
                    predictedScaled[j] = getStepSize().multiply(yDot[j]);
                }
                predictedNordsieck = updateHighOrderDerivativesPhase1(nordsieck);
                updateHighOrderDerivativesPhase2(scaled, predictedScaled, predictedNordsieck);
                // evaluate error
                error = errorEstimation(y, predictedY, predictedScaled, predictedNordsieck);
                if (error.subtract(1.0).getReal() >= 0.0) {
                    // reject the step and attempt to reduce error by stepsize control
                    final T factor = computeStepGrowShrinkFactor(error);
                    rescale(filterStep(getStepSize().multiply(factor), forward, false));
                    stepEnd = AdamsFieldStepInterpolator.taylor(getStepStart(),
                                                                getStepStart().getTime().add(getStepSize()),
                                                                getStepSize(),
                                                                scaled,
                                                                nordsieck);
                }
            }
            // discrete events handling
            setStepStart(acceptStep(new AdamsFieldStepInterpolator<T>(getStepSize(), stepEnd,
                                                                      predictedScaled, predictedNordsieck, forward,
                                                                      getStepStart(), stepEnd,
                                                                      equations.getMapper()),
                                    finalTime));
            scaled = predictedScaled;
            nordsieck = predictedNordsieck;
            if (!isLastStep()) {
                System.arraycopy(predictedY, 0, y, 0, y.length);
                if (resetOccurred()) {
                    // some events handler has triggered changes that
                    // invalidate the derivatives, we need to restart from scratch
                    start(equations, getStepStart(), finalTime);
                }
                // stepsize control for next step
                final T factor = computeStepGrowShrinkFactor(error);
                final T scaledH = getStepSize().multiply(factor);
                final T nextT = getStepStart().getTime().add(scaledH);
                final boolean nextIsLast = forward ?
                    nextT.subtract(finalTime).getReal() >= 0 :
                    nextT.subtract(finalTime).getReal() <= 0;
                T hNew = filterStep(scaledH, forward, nextIsLast);
                final T filteredNextT = getStepStart().getTime().add(hNew);
                final boolean filteredNextIsLast = forward ?
                    filteredNextT.subtract(finalTime).getReal() >= 0 :
                    filteredNextT.subtract(finalTime).getReal() <= 0;
                if (filteredNextIsLast) {
                    hNew = finalTime.subtract(getStepStart().getTime());
                }
                rescale(hNew);
                stepEnd = AdamsFieldStepInterpolator.taylor(getStepStart(), getStepStart().getTime().add(getStepSize()),
                                                            getStepSize(), scaled, nordsieck);
            }
        } while (!isLastStep());
        final FieldODEStateAndDerivative<T> finalState = getStepStart();
        setStepStart(null);
        setStepSize(null);
        return finalState;
    }
}