# Ranges for numerical problems

published at 12.02.2015 17:06 by Karsten Ahnert

A guest blog post by Dr. Karsten Ahnert. This is a follow-up on his talk "Ranges and Iterators for Numerical Problems" at Meeting C++ 2014:

In this blog post I am going to show some ideas of how one can implement numerical algorithms with ranges. Examples are the classical Newton algorithm for finding the root of a function and solvers for ordinary differential equations. The main idea is to put the main loop of the algorithm into a range, so that the user can

- control how long the main loop is executed and how it should be stopped,
- customize what happens inside the loop (without touching the original main logic).

Let's start with a simple example - the Newton algorithm. The Newton algorithm finds a root of a one-dimensional function, that is, a point where f(x)=0. It works by guessing an initial root x_{0} and then iterating x_{n+1} = x_{n} - f(x_{n}) / f'(x_{n}). (In higher dimensions the algorithm has to be generalized to use the Jacobian instead of the derivative f'(x).) The iteration is performed until f(x) is sufficiently close to 0. In C++ a very basic implementation of the Newton algorithm might look like

```cpp
template< typename T , typename F , typename DF >
T newton( T x , F f , DF df )
{
    using std::abs;
    T y = f( x );
    while( abs( y ) > static_cast< T >( 1.0e-12 ) )
    {
        x = x - y / df( x );
        y = f( x );
    }
    return x;
}
```

(Note that this algorithm might not converge if the function f is ill-conditioned, but that is another story.) One drawback of this implementation is that you cannot look inside the loop. This might be interesting if you want to know exactly how the Newton method approaches the root, or if you want to compute convergence rates. And this is the place where ranges enter the game.

A good introduction to modern ranges in C++ is given by Eric Niebler in a series of blog posts. Eric has also proposed his ideas for the C++ standard. Here we rely completely on his ideas and even on his current implementation, which can be found here.

## Newton method

A simple implementation of the Newton algorithm as a function template has been given above. Now we are going to convert this function into a range. But before we start, let's have a look at how the final range implementation might be used:

```cpp
auto r = make_newton_range( x , f , df );
auto iter = r.begin();
++iter;
```

The very first line constructs the Newton algorithm, which is now embedded in a range. To perform one Newton step we get an iterator from the range and increment it. Note that incrementing the iterator actually performs the step. Maybe this is the most important point of the whole story: the substantial algorithmic work is done in operator++.

The range is infinite, that is, the iteration will never stop on its own. (You might wonder whether this is useful at all, but in a second you will see that ranges can be decorated, and a decorated range can be stopped.) A second point is that the range models the SinglePassRange concept, meaning that it can be iterated only once.

Nicely, we do not need to deal with all the details of iterators and ranges directly and can use some high-level functions from Eric's range library.

```cpp
auto r2 = ranges::view::take_while( r , [f]( auto x ) {
    using std::abs;
    return abs( f(x) ) > 1.0e-12; } );
ranges::for_each( r2 , [f]( auto x ) {
    cout << x << " " << f(x) << "\n"; } );
cout << "Solution: " << r.x() << " " << r.y() << endl;
```

Here, take_while creates a decorated Newton range that iterates until the absolute function value drops below 1.0e-12. Then the range is iterated, intermediate steps are written to the standard output stream, and finally the result is printed.

Now, let's step into the implementation of newton_range. Luckily, there already exists a range_facade which eases the implementation of custom ranges and provides most of the interface of a range:

```cpp
template< typename T , typename F , typename DF >
class newton_range : public ranges::range_facade< newton_range< T , F , DF > >
{
    friend ranges::range_access;

public:

    newton_range( T x , F f , DF df )
        : m_x( std::move( x ) ) , m_f( std::move( f ) ) , m_df( std::move( df ) )
    { m_y = m_f( m_x ); }

    T const& x( void ) const noexcept { return m_x; }
    T const& y( void ) const noexcept { return m_y; }

private:

    // ...

    T m_x , m_y;
    F m_f;
    DF m_df;
};
```

The range has four members: m_f is the function whose root should be found, m_df is its derivative, m_x is the current approximation of the root, and m_y is the current function value. To finalize the range we need to implement a cursor as a member class

```cpp
template< typename T , typename F , typename DF >
class newton_range : public ranges::range_facade< newton_range< T , F , DF > >
{
    // ...

private:

    struct cursor
    {
        newton_range* m_rng;

        cursor() = default;

        explicit cursor( newton_range& rng )
            : m_rng( &rng ) {}

        decltype( auto ) current() const
        {
            return m_rng->m_x;
        }

        bool done() const
        {
            return false;
        }

        void next()
        {
            m_rng->m_x -= m_rng->m_y / m_rng->m_df( m_rng->m_x );
            m_rng->m_y = m_rng->m_f( m_rng->m_x );
        }
    };

    cursor begin_cursor()
    {
        return cursor { *this };
    }
};
```

and provide access to it via begin_cursor. The cursor holds a pointer to the range and provides three methods: current returns the current approximation of the root, and done returns a Boolean indicating whether the range has reached its end. For our case of an infinite loop, done always returns false. The cursor also provides next, which performs one Newton step. These methods are called from the high-level interface provided by range_facade. And basically this is all that is needed to implement a working range.

## Ordinary differential equations

The same technique can be applied to solve ordinary differential equations (ODEs) numerically. Mathematically, an ODE has the form dx/dt = f(x), and its solution is a function x(t) which obeys this equation. Solvers for ODEs typically work according to the following scheme:

- Define the initial conditions, that is, an initial time point t_{0} and an initial value x(t_{0}) = x_{0}.
- Calculate the next value of the ODE, x(t_{0}+dt) = S(x(t_{0}),t_{0}), at the time point t_{0}+dt. S is the solver; it calculates the next value of the ODE only by knowing its current value.
- Apply step 2 repeatedly to obtain a complete solution of the ODE on the desired time interval [t_{0},T].

The iterative scheme can now be expressed via a custom range. An implementation can be found here. It is again an infinite range, and it uses the odeint library, which is part of Boost. In fact, the ranges shown here will be included in odeint.

To solve ODEs with ranges consider the following code:

```cpp
// classical chaotic parameter set of the Lorenz system
const double sigma = 10.0 , R = 28.0 , b = 8.0 / 3.0;

struct lorenz
{
    template< typename State , typename Deriv , typename Time >
    void operator()( const State& x , Deriv& dxdt , Time t ) const
    {
        dxdt[0] = sigma * ( x[1] - x[0] );
        dxdt[1] = R * x[0] - x[1] - x[0] * x[2];
        dxdt[2] = -b * x[2] + x[0] * x[1];
    }
};

using state_type = std::array< double , 3 >;

auto printer = []( auto const& x ) {
    std::cout << x[0] << " " << x[1] << " " << x[2] << "\n"; };
```

```cpp
boost::numeric::odeint::runge_kutta4< state_type > stepper;
state_type x {{ 10.0 , 10.0 , 10.0 }};
auto r = make_const_step_range( stepper , lorenz {} , x , 0.0 , 0.01 );
auto first = r.begin();
printer( *first );
++first;
printer( *first );
```

As in the previous example, the main work is done inside the increment operator. In fact, the implementation is very similar, so I will not show it here. operator++ calls the do_step method of the underlying solver and updates the current state and time of the ODE. Of course, using the iterators manually is cumbersome. But you can easily use them with the algorithms from the range library, for example

```cpp
auto r = make_const_step_range( stepper , lorenz {} , x , 0.0 , 0.01 );
auto r2 = ranges::view::take( r , 1000 );
ranges::for_each( r2 , printer );
```

The snippet above computes the first 1000 values of the solution of the ODE (in fact it computes an approximation of the solution).

Another use case is to solve the ODE until a break condition is fulfilled:

```cpp
auto r = make_const_step_range( stepper , lorenz {} , x , 0.0 , 0.01 );
auto r2 = ranges::view::take_while( r ,
    []( auto const& x ) { return x[0] < 15.0; } );
ranges::for_each( r2 , printer );
```

## Conclusion

We have seen how ranges can be used to solve numerical problems, in particular how they can be used to implement the Newton method and to solve ordinary differential equations. They have the advantage that the user gains control over the inner loop, allowing one to stop the iteration with custom conditions and to observe what happens inside the loop.

The same technique can be applied to several other problems, like

- Iterative solvers of linear systems
- Optimization methods, like genetic algorithms, simulated annealing, BFGS, etc.

And of course, there are open questions:

- How can one express nested loops?
- Can parallelization techniques be applied to such kind of problems?