Numerical methods. Note 3
Integration of Ordinary Differential Equations

Many physical problems can be reformulated in terms of a system of ordinary differential equations, y'(x) = f(x,y), with an initial condition y(0) = y_0, where y and f(x,y) are generally understood as (column) vectors. The standard libraries for solving such systems are netlib.org/ode and netlib.org/odepack.

Runge-Kutta methods are one-step methods where the solution y is advanced by one step h as y_1 = y_0 + h·k, where k is a cleverly chosen constant. The first order Runge-Kutta method is simply Euler's method, k = k_0, where k_0 = f(x_0,y_0). Second order Runge-Kutta methods advance the solution with one auxiliary evaluation of the derivative, e.g. the half-step method

k = k_{1/2} : k_0 = f(x_0,y_0), y_{1/2} = y_0 + (h/2)k_0, k_{1/2} = f(x_{1/2}, y_{1/2}), where x_{1/2} = x_0 + h/2; then y_1 = y_0 + h·k_{1/2},
or two-point method
k = (k_0 + k_1)/2 : k_1 = f(x_1, y_0 + h·k_0), y_1 = y_0 + (h/2)(k_0 + k_1).
These two methods can be combined into a third order method
k = (1/6)k_0 + (4/6)k_{1/2} + (1/6)k_1.
Higher order Runge-Kutta methods have been devised, the most noted being the Runge-Kutta-Fehlberg fourth-fifth order method implemented in the famous rkf45 routine (now superseded by "ode/rksuite").
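The third order combination above can be sketched as a stepper routine (a sketch for a single first order ODE; the name rk3step is this note's choice, not a library routine):

```javascript
// Third order Runge-Kutta step: combine the slopes of the Euler,
// half-step and two-point methods as k = (1/6)k0 + (4/6)k_half + (1/6)k1.
function rk3step(f, x0, y0, h) {
    var k0    = f(x0, y0);                 // Euler slope at the start
    var khalf = f(x0 + h/2, y0 + k0*h/2);  // slope at the midpoint
    var k1    = f(x0 + h, y0 + k0*h);      // slope at the end (two-point method)
    var k     = (k0 + 4*khalf + k1) / 6;   // weighted combination
    return y0 + h*k;
}
```

For y' = x (exact solution y = x^2/2) a single step reproduces the quadratic exactly, as a third order method should.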

Bulirsch-Stoer methods. The idea is to evaluate y_1 with several (ever smaller) step-sizes and then extrapolate to step-size zero.
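A minimal sketch of the extrapolation idea, using plain Euler sub-steps and a single Richardson extrapolation (a real Bulirsch-Stoer implementation uses the modified midpoint method and polynomial extrapolation over a whole sequence of step-sizes; the function names here are illustrative):

```javascript
// Advance y over the interval H using n Euler sub-steps of size H/n.
function eulerN(f, x0, y0, H, n) {
    var x = x0, y = y0, h = H/n;
    for (var i = 0; i < n; i++) { y += h*f(x, y); x += h; }
    return y;
}

// Euler's method is first order, so the error of one pass is ~h;
// combining the results for n=1 and n=2 sub-steps cancels the
// leading error term: y(h->0) ≈ 2*y_{n=2} - y_{n=1}.
function extrapStep(f, x0, y0, H) {
    var y1 = eulerN(f, x0, y0, H, 1);
    var y2 = eulerN(f, x0, y0, H, 2);
    return 2*y2 - y1;   // extrapolation to step-size zero
}
```

For y' = y the extrapolated step is markedly more accurate than either of the Euler passes it was built from.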

Multi-step and predictor-corrector methods. Multi-step methods use the information about the function gathered at the previous steps. For example, the sought function y can be approximated as

y_p(x) = y_1 + y'_1·(x - x_1) + c·(x - x_1)^2,
where the coefficient c can be found from the condition y_p(x_0) = y_0, which gives
c = [y_0 - y_1 - y'_1·(x_0 - x_1)] / (x_0 - x_1)^2.
We can now make a prediction for y_2 as y_p(x_2) and calculate f_2 = f(x_2, y_p(x_2)). Having the new information, f_2, we can improve the approximation, namely
y_c(x) = y_1 + y'_1·(x - x_1) + c·(x - x_1)^2 + d·(x - x_1)^2·(x - x_0).
The coefficient d is found from the condition y_c'(x_2) = f_2, which gives
d = [f_2 - y'_1 - 2c·(x_2 - x_1)] / [2(x_2 - x_1)(x_2 - x_0) + (x_2 - x_1)^2],
from which we get a corrected estimate y_2 = y_c(x_2). The correction can serve as an estimate of the error.
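The derivation above can be sketched as a stepper (the name pcstep is illustrative): given the previous point (x_0,y_0) and the current point (x_1,y_1), it returns the corrected estimate of y(x_2) together with the size of the correction as the error estimate.

```javascript
// One predictor-corrector step following the formulas above.
function pcstep(f, x0, y0, x1, y1, x2) {
    var dy1 = f(x1, y1);                                    // y'_1
    var c   = (y0 - y1 - dy1*(x0 - x1)) / Math.pow(x0 - x1, 2);
    var yp  = y1 + dy1*(x2 - x1) + c*Math.pow(x2 - x1, 2);  // prediction
    var f2  = f(x2, yp);                                    // new information
    var d   = (f2 - dy1 - 2*c*(x2 - x1)) /
              (2*(x2 - x1)*(x2 - x0) + Math.pow(x2 - x1, 2));
    var corr = d*Math.pow(x2 - x1, 2)*(x2 - x0);            // correction term
    return [yp + corr, Math.abs(corr)];                     // [y_2, error estimate]
}
```

Since the predictor is already a quadratic, for y' = x (exact solution y = x^2/2) the prediction is exact and the correction vanishes.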

Step-size control. The tolerance tol is the maximal accepted error, consistent with the required absolute, acc, and relative, eps, accuracies,

tol = (eps·|y| + acc)·√(h/(b-a)).
The error err can be estimated e.g. by comparing the solutions from one full step and two half-steps (the Runge principle):
err = |y_full-step − y_two-half-steps| / (2^n − 1),
where n is the order of the algorithm used. The next step h_next is then estimated according to the prescription
h_next = h_previous·(tol/err)^power·Safety,
where power ≈ 0.25 and Safety ≈ 0.95. One should pick a formula such that the full-step and two-half-step calculations use (almost) the same evaluations of the function f(x,y)!
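The Runge-principle error estimate can be sketched as follows (the names are illustrative, and the stepper here is the second order half-step method, so n = 2; note that this naive sketch does not reuse function evaluations between the full-step and half-step calculations, as the text advises one should):

```javascript
// Second order half-step (midpoint) method: one step of size h.
function midpointStep(f, x0, y0, h) {
    var k12 = f(x0 + h/2, y0 + f(x0, y0)*h/2);  // slope at the midpoint
    return y0 + h*k12;
}

// Runge principle: compare one full step with two half-steps;
// err = |y_full - y_two_half| / (2^n - 1), n = order of the stepper.
function rungeErr(f, x0, y0, h, n) {
    var yFull = midpointStep(f, x0, y0, h);
    var yHalf = midpointStep(f, x0 + h/2,
                             midpointStep(f, x0, y0, h/2), h/2);
    return Math.abs(yFull - yHalf) / (Math.pow(2, n) - 1);
}
```

For y' = y with h = 0.2 the estimate comes out a few times 10^-4, the same magnitude as the true error of the full step.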

Problems

  1. Implement a Runge-Kutta "stepper"-routine rkstep which advances the solution by a step h and estimates the error (for a single first order ODE):
    function rkstep(f,x0,y0,h){
    	var k0=f(x0,y0);                  // derivative at the start of the step
    	var x12=x0+h/2, y12=y0+k0*h/2;    // Euler half-step to the midpoint
    	var k12=f(x12,y12);               // derivative at the midpoint
    	var y1=y0+k12*h;                  // full step with the midpoint derivative
    	var err=Math.abs((k0-k12)*h/2);   // error estimate: midpoint vs Euler step
    	return [y1, err];
    }
    
    Here is a Haskell example.
  2. Implement an adaptive-step-size Runge-Kutta "driver"-routine which advances the solution from a to b (by calling rkstep with appropriate step-sizes) keeping the specified relative, eps, and absolute, acc, precision:
    function driver(f,a,b,y0,acc,eps,h){
       if(a>=b) return y0;                // we are done
       if(a+h>b) h=b-a;                   // do not step past b
       var trial=rkstep(f,a,y0,h), y1=trial[0], err=trial[1];
       var tol=(eps*Math.abs(y1)+acc)*Math.sqrt(h/(b-a));
       var newh = err>0 ? h*Math.pow(tol/err,0.2)*0.95 : 2*h;
       if(err<tol)                        // accept the step and continue from a+h
          return driver(f,a+h,b,y1,acc,eps,newh);
       else                               // reject and retry with a smaller step
          return driver(f,a,b,y0,acc,eps,newh);
    }
    
    Here is a Haskell example.
  3. Implement a predictor-corrector method with adaptive step-size for a single first order differential equation. Here is a Fortran and a JavaScript example.
  4. Generalize your routine for systems of ordinary differential equations. Here is a Haskell example.
  5. Make your routine remember the points (x,y) it calculated along the way. Here is a Haskell example.
  6. For a system y'=f(x) (note that the right-hand side does not depend on y) compare the effectiveness of your adapt routine with your Runge-Kutta and predictor-corrector routines.