This is gsl-ref.info, produced by makeinfo version 4.0 from gsl-ref.texi.

INFO-DIR-SECTION Scientific software
START-INFO-DIR-ENTRY
* gsl-ref: (gsl-ref).          GNU Scientific Library - Reference
END-INFO-DIR-ENTRY

   This file documents the GNU Scientific Library.

   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001 The GSL Team.

   Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.  A copy of the license is included in the section entitled "GNU Free Documentation License".


File: gsl-ref.info,  Node: Minimization Stopping Parameters,  Next: Minimization Algorithms,  Prev: Minimization Iteration,  Up: One dimensional Minimization

Stopping Parameters
===================

   A minimization procedure should stop when one of the following conditions is true:

   * A minimum has been found to within the user-specified precision.

   * A user-specified maximum number of iterations has been reached.

   * An error has occurred.

   The handling of these conditions is under user control.  The function below allows the user to test the precision of the current result.

 - Function: int gsl_min_test_interval (double X_LOWER, double X_UPPER, double EPSABS, double EPSREL)
     This function tests for the convergence of the interval [X_LOWER, X_UPPER] with absolute error EPSABS and relative error EPSREL.  The test returns `GSL_SUCCESS' if the following condition is achieved,

          |a - b| < epsabs + epsrel min(|a|,|b|)

     when the interval x = [a,b] does not include the origin.  If the interval includes the origin then \min(|a|,|b|) is replaced by zero (which is the minimum value of |x| over the interval).  This ensures that the relative error is accurately estimated for minima close to the origin.

     This condition on the interval also implies that any estimate of the minimum x_m in the interval satisfies the same condition with respect to the true minimum x_m^*,

          |x_m - x_m^*| < epsabs + epsrel x_m^*

     assuming that the true minimum x_m^* is contained within the interval.


File: gsl-ref.info,  Node: Minimization Algorithms,  Next: Minimization Examples,  Prev: Minimization Stopping Parameters,  Up: One dimensional Minimization

Minimization Algorithms
=======================

   The minimization algorithms described in this section require an initial interval which is guaranteed to contain a minimum -- if a and b are the endpoints of the interval and x is an estimate of the minimum then f(a) > f(x) < f(b).  This ensures that the function has at least one minimum somewhere in the interval.  If a valid initial interval is used then these algorithms cannot fail, provided the function is well-behaved.

 - Minimizer: gsl_min_fminimizer_goldensection
     The "golden section algorithm" is the simplest method of bracketing the minimum of a function.  It is the slowest algorithm provided by the library, with linear convergence.

     On each iteration, the algorithm first compares the subintervals from the endpoints to the current minimum.  The larger subinterval is divided in a golden section (using the famous ratio (3-\sqrt 5)/2 = 0.3819660...) and the value of the function at this new point is calculated.  The new value is used with the constraint f(a') > f(x') < f(b') to select a new interval containing the minimum, by discarding the least useful point.  This procedure can be continued indefinitely until the interval is sufficiently small.  Choosing the golden section as the bisection ratio can be shown to provide the fastest convergence for this type of algorithm.
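     As a concrete illustration, one bracketing update of this kind can be written in a few lines of C.  This is a schematic sketch only, not the library's internal code, and the helper name `golden_step' is hypothetical:

          #include <math.h>

          /* One golden-section bracketing step (sketch).  On entry
             [a,b] brackets a minimum at x, i.e. f(a) > f(x) < f(b);
             on exit the bracket is smaller and still contains a
             minimum. */
          static void
          golden_step (double *a, double *x, double *b,
                       double (*f) (double))
          {
            const double k = (3.0 - sqrt (5.0)) / 2.0; /* 0.3819660... */
            int right = (*b - *x) > (*x - *a); /* larger subinterval */
            double u = right ? *x + k * (*b - *x)
                             : *x - k * (*x - *a);

            if (f (u) < f (*x))       /* u becomes the new estimate */
              {
                if (right) *a = *x; else *b = *x;
                *x = u;
              }
            else                      /* keep x, discard far endpoint */
              {
                if (right) *b = u; else *a = u;
              }
          }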
 - Minimizer: gsl_min_fminimizer_brent
     The "Brent minimization algorithm" combines a parabolic interpolation with the golden section algorithm.  This produces a fast algorithm which is still robust.

     The outline of the algorithm can be summarized as follows: on each iteration Brent's method approximates the function using an interpolating parabola through three existing points.  The minimum of the parabola is taken as a guess for the minimum.  If it lies within the bounds of the current interval then the interpolating point is accepted, and used to generate a smaller interval.  If the interpolating point is not accepted then the algorithm falls back to an ordinary golden section step.  The full details of Brent's method include some additional checks to improve convergence.


File: gsl-ref.info,  Node: Minimization Examples,  Next: Minimization References and Further Reading,  Prev: Minimization Algorithms,  Up: One dimensional Minimization

Examples
========

   The following program uses the Brent algorithm to find the minimum of the function f(x) = \cos(x) + 1, which occurs at x = \pi.  The starting interval is (0,6), with an initial guess for the minimum of 2.

     #include <stdio.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_min.h>

     double
     fn1 (double x, void * params)
     {
       return cos(x) + 1.0;
     }

     int
     main (void)
     {
       int status;
       int iter = 0, max_iter = 100;
       const gsl_min_fminimizer_type *T;
       gsl_min_fminimizer *s;
       double m = 2.0, m_expected = M_PI;
       double a = 0.0, b = 6.0;
       gsl_function F;

       F.function = &fn1;
       F.params = 0;

       T = gsl_min_fminimizer_brent;
       s = gsl_min_fminimizer_alloc (T);
       gsl_min_fminimizer_set (s, &F, m, a, b);

       printf ("using %s method\n",
               gsl_min_fminimizer_name (s));

       printf ("%5s [%9s, %9s] %9s %10s %9s\n",
               "iter", "lower", "upper", "min", "err", "err(est)");

       printf ("%5d [%.7f, %.7f] %.7f %+.7f %.7f\n",
               iter, a, b, m, m - m_expected, b - a);

       do
         {
           iter++;
           status = gsl_min_fminimizer_iterate (s);

           m = gsl_min_fminimizer_minimum (s);
           a = gsl_min_fminimizer_x_lower (s);
           b = gsl_min_fminimizer_x_upper (s);

           status = gsl_min_test_interval (a, b, 0.001, 0.0);

           if (status == GSL_SUCCESS)
             printf ("Converged:\n");

           printf ("%5d [%.7f, %.7f] "
                   "%.7f %+.7f %.7f\n",
                   iter, a, b,
                   m, m - m_expected, b - a);
         }
       while (status == GSL_CONTINUE && iter < max_iter);

       return status;
     }
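   Assuming the library has been installed in a standard location, a program like this is typically compiled and linked against the main library, the supplied CBLAS library and the system math library with a command along the lines of,

     $ gcc -Wall example.c -lgsl -lgslcblas -lm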
   Here are the results of the minimization procedure.

     bash$ ./a.out
         0 [0.0000000, 6.0000000] 2.0000000 -1.1415927 6.0000000
         1 [2.0000000, 6.0000000] 3.2758640 +0.1342713 4.0000000
         2 [2.0000000, 3.2831929] 3.2758640 +0.1342713 1.2831929
         3 [2.8689068, 3.2831929] 3.2758640 +0.1342713 0.4142862
         4 [2.8689068, 3.2831929] 3.2758640 +0.1342713 0.4142862
         5 [2.8689068, 3.2758640] 3.1460585 +0.0044658 0.4069572
         6 [3.1346075, 3.2758640] 3.1460585 +0.0044658 0.1412565
         7 [3.1346075, 3.1874620] 3.1460585 +0.0044658 0.0528545
         8 [3.1346075, 3.1460585] 3.1460585 +0.0044658 0.0114510
         9 [3.1346075, 3.1460585] 3.1424060 +0.0008133 0.0114510
        10 [3.1346075, 3.1424060] 3.1415885 -0.0000041 0.0077985
     Converged:
        11 [3.1415885, 3.1424060] 3.1415927 -0.0000000 0.0008175


File: gsl-ref.info,  Node: Minimization References and Further Reading,  Prev: Minimization Examples,  Up: One dimensional Minimization

References and Further Reading
==============================

   Further information on Brent's algorithm is available in the following book,

     Richard Brent, `Algorithms for minimization without derivatives', Prentice-Hall (1973), republished by Dover in paperback (2002), ISBN 0-486-41998-3.


File: gsl-ref.info,  Node: Multidimensional Root-Finding,  Next: Multidimensional Minimization,  Prev: One dimensional Minimization,  Up: Top

Multidimensional Root-Finding
*****************************

   This chapter describes functions for multidimensional root-finding (solving nonlinear systems with n equations in n unknowns).  The library provides low level components for a variety of iterative solvers and convergence tests.  These can be combined by the user to achieve the desired solution, with full access to the intermediate steps of the iteration.  Each class of methods uses the same framework, so that you can switch between solvers at runtime without needing to recompile your program.  Each instance of a solver keeps track of its own state, allowing the solvers to be used in multi-threaded programs.  The solvers are based on the original Fortran library MINPACK.

   The header file `gsl_multiroots.h' contains prototypes for the multidimensional root finding functions and related declarations.

* Menu:

* Overview of Multidimensional Root Finding::
* Initializing the Multidimensional Solver::
* Providing the multidimensional system of equations to solve::
* Iteration of the multidimensional solver::
* Search Stopping Parameters for the multidimensional solver::
* Algorithms using Derivatives::
* Algorithms without Derivatives::
* Example programs for Multidimensional Root finding::
* References and Further Reading for Multidimensional Root Finding::


File: gsl-ref.info,  Node: Overview of Multidimensional Root Finding,  Next: Initializing the Multidimensional Solver,  Up: Multidimensional Root-Finding

Overview
========

   The problem of multidimensional root finding requires the simultaneous solution of n equations, f_i, in n variables, x_i,

          f_i (x_1, ..., x_n) = 0    for i = 1 ... n.

   In general there are no bracketing methods available for n dimensional systems, and no way of knowing whether any solutions exist.  All algorithms proceed from an initial guess using a variant of the Newton iteration,

          x -> x' = x - J^{-1} f(x)

   where x, f are vector quantities and J is the Jacobian matrix J_{ij} = d f_i / d x_j.  Additional strategies can be used to enlarge the region of convergence.  These include requiring a decrease in the norm |f| on each step proposed by Newton's method, or taking steepest-descent steps in the direction of the negative gradient of |f|.
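   To make the Newton update concrete, the step can be computed by solving the linear system J dx = f and subtracting dx from x.  The following is a minimal sketch of this, not the library's internal implementation, and the helper name `newton_step' is hypothetical,

     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_linalg.h>

     /* One Newton update x -> x - J^{-1} f(x), formed by solving
        J dx = f with an LU decomposition.  J is overwritten. */
     static int
     newton_step (gsl_matrix * J, const gsl_vector * f, gsl_vector * x)
     {
       size_t n = x->size;
       int signum;
       gsl_permutation * p = gsl_permutation_alloc (n);
       gsl_vector * dx = gsl_vector_alloc (n);

       gsl_linalg_LU_decomp (J, p, &signum);
       gsl_linalg_LU_solve (J, p, f, dx);   /* dx = J^{-1} f */
       gsl_vector_sub (x, dx);              /* x' = x - dx */

       gsl_vector_free (dx);
       gsl_permutation_free (p);
       return GSL_SUCCESS;
     }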
   Several root-finding algorithms are available within a single framework.  The user provides a high-level driver for the algorithms, and the library provides the individual functions necessary for each of the steps.  There are three main phases of the iteration.  The steps are,

   * initialize solver state, S, for algorithm T

   * update S using the iteration T

   * test S for convergence, and repeat iteration if necessary

   The evaluation of the Jacobian matrix can be problematic, either because programming the derivatives is intractable or because computation of the n^2 terms of the matrix becomes too expensive.  For these reasons the algorithms provided by the library are divided into two classes according to whether the derivatives are available or not.

   The state for solvers with an analytic Jacobian matrix is held in a `gsl_multiroot_fdfsolver' struct.  The updating procedure requires both the function and its derivatives to be supplied by the user.

   The state for solvers which do not use an analytic Jacobian matrix is held in a `gsl_multiroot_fsolver' struct.  The updating procedure uses only function evaluations (not derivatives).  The algorithms estimate the matrix J or J^{-1} by approximate methods.


File: gsl-ref.info,  Node: Initializing the Multidimensional Solver,  Next: Providing the multidimensional system of equations to solve,  Prev: Overview of Multidimensional Root Finding,  Up: Multidimensional Root-Finding

Initializing the Solver
=======================

   The following functions initialize a multidimensional solver, either with or without derivatives.  The solver itself depends only on the dimension of the problem and the algorithm and can be reused for different problems.

 - Function: gsl_multiroot_fsolver * gsl_multiroot_fsolver_alloc (const gsl_multiroot_fsolver_type * T, size_t N)
     This function returns a pointer to a newly allocated instance of a solver of type T for a system of N dimensions.  For example, the following code creates an instance of a hybrid solver, to solve a 3-dimensional system of equations.

          const gsl_multiroot_fsolver_type * T
              = gsl_multiroot_fsolver_hybrid;
          gsl_multiroot_fsolver * s
              = gsl_multiroot_fsolver_alloc (T, 3);

     If there is insufficient memory to create the solver then the function returns a null pointer and the error handler is invoked with an error code of `GSL_ENOMEM'.

 - Function: gsl_multiroot_fdfsolver * gsl_multiroot_fdfsolver_alloc (const gsl_multiroot_fdfsolver_type * T, size_t N)
     This function returns a pointer to a newly allocated instance of a derivative solver of type T for a system of N dimensions.  For example, the following code creates an instance of a Newton-Raphson solver, for a 2-dimensional system of equations.

          const gsl_multiroot_fdfsolver_type * T
              = gsl_multiroot_fdfsolver_newton;
          gsl_multiroot_fdfsolver * s
              = gsl_multiroot_fdfsolver_alloc (T, 2);

     If there is insufficient memory to create the solver then the function returns a null pointer and the error handler is invoked with an error code of `GSL_ENOMEM'.

 - Function: int gsl_multiroot_fsolver_set (gsl_multiroot_fsolver * S, gsl_multiroot_function * F, gsl_vector * X)
     This function sets, or resets, an existing solver S to use the function F and the initial guess X.

 - Function: int gsl_multiroot_fdfsolver_set (gsl_multiroot_fdfsolver * S, gsl_multiroot_function_fdf * FDF, gsl_vector * X)
     This function sets, or resets, an existing solver S to use the function and derivative FDF and the initial guess X.
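   Because a solver depends only on the dimension and the algorithm, one instance can be recycled across problems by calling the `_set' function again.  The fragment below illustrates this; the function objects `F1', `F2' and the starting vectors `x1', `x2' are hypothetical placeholders,

     gsl_multiroot_fsolver * s
         = gsl_multiroot_fsolver_alloc (gsl_multiroot_fsolver_hybrids, 2);

     gsl_multiroot_fsolver_set (s, &F1, x1);  /* first 2-d system */
     /* ... iterate to convergence ... */

     gsl_multiroot_fsolver_set (s, &F2, x2);  /* reuse for a second system */
     /* ... iterate ... */

     gsl_multiroot_fsolver_free (s);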
 - Function: void gsl_multiroot_fsolver_free (gsl_multiroot_fsolver * S)
 - Function: void gsl_multiroot_fdfsolver_free (gsl_multiroot_fdfsolver * S)
     These functions free all the memory associated with the solver S.

 - Function: const char * gsl_multiroot_fsolver_name (const gsl_multiroot_fsolver * S)
 - Function: const char * gsl_multiroot_fdfsolver_name (const gsl_multiroot_fdfsolver * S)
     These functions return a pointer to the name of the solver.  For example,

          printf ("s is a '%s' solver\n",
                  gsl_multiroot_fdfsolver_name (s));

     would print something like `s is a 'newton' solver'.


File: gsl-ref.info,  Node: Providing the multidimensional system of equations to solve,  Next: Iteration of the multidimensional solver,  Prev: Initializing the Multidimensional Solver,  Up: Multidimensional Root-Finding

Providing the function to solve
===============================

   You must provide n functions of n variables for the root finders to operate on.  In order to allow for general parameters the functions are defined by the following data types:

 - Data Type: gsl_multiroot_function
     This data type defines a general system of functions with parameters.

    `int (* f) (const gsl_vector * X, void * PARAMS, gsl_vector * F)'
          this function should store the vector result f(x,params) in F for argument X and parameters PARAMS, returning an appropriate error code if the function cannot be computed.

    `size_t N'
          the dimension of the system, i.e. the number of components of the vectors X and F.

    `void * PARAMS'
          a pointer to the parameters of the function.

   Here is an example using Powell's test function,

          f_1(x) = A x_0 x_1 - 1,
          f_2(x) = exp(-x_0) + exp(-x_1) - (1 + 1/A)

   with A = 10^4.  The following code defines a `gsl_multiroot_function' system `F' which you could pass to a solver:

     struct powell_params { double A; };

     int
     powell (const gsl_vector * x, void * p, gsl_vector * f)
     {
       struct powell_params * params = (struct powell_params *) p;
       double A = (params->A);
       double x0 = gsl_vector_get (x, 0);
       double x1 = gsl_vector_get (x, 1);

       gsl_vector_set (f, 0, A * x0 * x1 - 1);
       gsl_vector_set (f, 1, (exp(-x0) + exp(-x1) - (1.0 + 1.0/A)));
       return GSL_SUCCESS;
     }

     gsl_multiroot_function F;
     struct powell_params params = { 10000.0 };

     F.f = &powell;
     F.n = 2;
     F.params = &params;

 - Data Type: gsl_multiroot_function_fdf
     This data type defines a general system of functions with parameters and the corresponding Jacobian matrix of derivatives,

    `int (* f) (const gsl_vector * X, void * PARAMS, gsl_vector * F)'
          this function should store the vector result f(x,params) in F for argument X and parameters PARAMS, returning an appropriate error code if the function cannot be computed.

    `int (* df) (const gsl_vector * X, void * PARAMS, gsl_matrix * J)'
          this function should store the N-by-N matrix result J_ij = d f_i(x,params) / d x_j in J for argument X and parameters PARAMS, returning an appropriate error code if the function cannot be computed.

    `int (* fdf) (const gsl_vector * X, void * PARAMS, gsl_vector * F, gsl_matrix * J)'
          This function should set the values of F and J as above, for arguments X and parameters PARAMS.  This function provides an optimization of the separate functions for f(x) and J(x) - it is always faster to compute the function and its derivative at the same time.

    `size_t N'
          the dimension of the system, i.e. the number of components of the vectors X and F.

    `void * PARAMS'
          a pointer to the parameters of the function.
   The example of Powell's test function defined above can be extended to include analytic derivatives using the following code,

     int
     powell_df (const gsl_vector * x, void * p, gsl_matrix * J)
     {
       struct powell_params * params = (struct powell_params *) p;
       double A = (params->A);
       double x0 = gsl_vector_get (x, 0);
       double x1 = gsl_vector_get (x, 1);

       gsl_matrix_set (J, 0, 0, A * x1);
       gsl_matrix_set (J, 0, 1, A * x0);
       gsl_matrix_set (J, 1, 0, -exp(-x0));
       gsl_matrix_set (J, 1, 1, -exp(-x1));
       return GSL_SUCCESS;
     }

     int
     powell_fdf (const gsl_vector * x, void * p,
                 gsl_vector * f, gsl_matrix * J)
     {
       struct powell_params * params = (struct powell_params *) p;
       double A = (params->A);
       double x0 = gsl_vector_get (x, 0);
       double x1 = gsl_vector_get (x, 1);

       double u0 = exp(-x0);
       double u1 = exp(-x1);

       gsl_vector_set (f, 0, A * x0 * x1 - 1);
       gsl_vector_set (f, 1, u0 + u1 - (1.0 + 1.0/A));

       gsl_matrix_set (J, 0, 0, A * x1);
       gsl_matrix_set (J, 0, 1, A * x0);
       gsl_matrix_set (J, 1, 0, -u0);
       gsl_matrix_set (J, 1, 1, -u1);
       return GSL_SUCCESS;
     }

     gsl_multiroot_function_fdf FDF;
     struct powell_params params = { 10000.0 };

     FDF.f = &powell;
     FDF.df = &powell_df;
     FDF.fdf = &powell_fdf;
     FDF.n = 2;
     FDF.params = &params;

   Note that the function `powell_fdf' is able to reuse existing terms from the function when calculating the Jacobian, thus saving time.


File: gsl-ref.info,  Node: Iteration of the multidimensional solver,  Next: Search Stopping Parameters for the multidimensional solver,  Prev: Providing the multidimensional system of equations to solve,  Up: Multidimensional Root-Finding

Iteration
=========

   The following functions drive the iteration of each algorithm.  Each function performs one iteration to update the state of any solver of the corresponding type.  The same functions work for all solvers so that different methods can be substituted at runtime without modifications to the code.

 - Function: int gsl_multiroot_fsolver_iterate (gsl_multiroot_fsolver * S)
 - Function: int gsl_multiroot_fdfsolver_iterate (gsl_multiroot_fdfsolver * S)
     These functions perform a single iteration of the solver S.  If the iteration encounters an unexpected problem then an error code will be returned,

    `GSL_EBADFUNC'
          the iteration encountered a singular point where the function or its derivative evaluated to `Inf' or `NaN'.

    `GSL_ENOPROG'
          the iteration is not making any progress, preventing the algorithm from continuing.

   The solver maintains a current best estimate of the root at all times.  This information can be accessed with the following auxiliary functions,

 - Function: gsl_vector * gsl_multiroot_fsolver_root (const gsl_multiroot_fsolver * S)
 - Function: gsl_vector * gsl_multiroot_fdfsolver_root (const gsl_multiroot_fdfsolver * S)
     These functions return the current estimate of the root for the solver S.
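   In outline, the iteration functions are combined with a convergence test from the next section in a loop of the following form; the example programs later in this chapter use exactly this pattern,

     do
       {
         iter++;
         status = gsl_multiroot_fsolver_iterate (s);

         if (status)   /* check if solver is stuck */
           break;

         status = gsl_multiroot_test_residual (s->f, 1e-7);
       }
     while (status == GSL_CONTINUE && iter < 1000);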
File: gsl-ref.info,  Node: Search Stopping Parameters for the multidimensional solver,  Next: Algorithms using Derivatives,  Prev: Iteration of the multidimensional solver,  Up: Multidimensional Root-Finding

Search Stopping Parameters
==========================

   A root finding procedure should stop when one of the following conditions is true:

   * A multidimensional root has been found to within the user-specified precision.

   * A user-specified maximum number of iterations has been reached.

   * An error has occurred.

   The handling of these conditions is under user control.  The functions below allow the user to test the precision of the current result in several standard ways.

 - Function: int gsl_multiroot_test_delta (const gsl_vector * DX, const gsl_vector * X, double EPSABS, double EPSREL)
     This function tests for the convergence of the sequence by comparing the last step DX against the absolute error EPSABS and the relative error EPSREL applied to the current position X.  The test returns `GSL_SUCCESS' if the following condition is achieved,

          |dx_i| < epsabs + epsrel |x_i|

     for each component of X, and returns `GSL_CONTINUE' otherwise.

 - Function: int gsl_multiroot_test_residual (const gsl_vector * F, double EPSABS)
     This function tests the residual value F against the absolute error bound EPSABS.  The test returns `GSL_SUCCESS' if the following condition is achieved,

          \sum_i |f_i| < epsabs

     and returns `GSL_CONTINUE' otherwise.  This criterion is suitable for situations where the precise location of the root, x, is unimportant provided a value can be found where the residual is small enough.


File: gsl-ref.info,  Node: Algorithms using Derivatives,  Next: Algorithms without Derivatives,  Prev: Search Stopping Parameters for the multidimensional solver,  Up: Multidimensional Root-Finding

Algorithms using Derivatives
============================

   The root finding algorithms described in this section make use of both the function and its derivative.  They require an initial guess for the location of the root, but there is no absolute guarantee of convergence - the function must be suitable for this technique and the initial guess must be sufficiently close to the root for it to work.  When the conditions are satisfied then convergence is quadratic.

 - Derivative Solver: gsl_multiroot_fdfsolver_hybridsj
     This is a modified version of Powell's Hybrid method as implemented in the HYBRJ algorithm in MINPACK.  MINPACK was written by Jorge J. More', Burton S. Garbow and Kenneth E. Hillstrom.  The Hybrid algorithm retains the fast convergence of Newton's method but will also reduce the residual when Newton's method is unreliable.

     The algorithm uses a generalized trust region to keep each step under control.  In order to be accepted a proposed new position x' must satisfy the condition |D (x' - x)| < \delta, where D is a diagonal scaling matrix and \delta is the size of the trust region.  The components of D are computed internally, using the column norms of the Jacobian to estimate the sensitivity of the residual to each component of x.  This improves the behavior of the algorithm for badly scaled functions.

     On each iteration the algorithm first determines the standard Newton step by solving the system J dx = - f.  If this step falls inside the trust region it is used as a trial step in the next stage.  If not, the algorithm uses the linear combination of the Newton and gradient directions which is predicted to minimize the norm of the function while staying inside the trust region,

          dx = - \alpha J^{-1} f(x) - \beta \nabla |f(x)|^2

     This combination of Newton and gradient directions is referred to as a "dogleg step".

     The proposed step is now tested by evaluating the function at the resulting point, x'.  If the step reduces the norm of the function sufficiently then it is accepted and the size of the trust region is increased.  If the proposed step fails to improve the solution then the size of the trust region is decreased and another trial step is computed.

     The speed of the algorithm is increased by computing the changes to the Jacobian approximately, using a rank-1 update.  If two successive attempts fail to reduce the residual then the full Jacobian is recomputed.
     The algorithm also monitors the progress of the solution and returns an error if several steps fail to make any improvement,

    `GSL_ENOPROG'
          the iteration is not making any progress, preventing the algorithm from continuing.

    `GSL_ENOPROGJ'
          re-evaluations of the Jacobian indicate that the iteration is not making any progress, preventing the algorithm from continuing.

 - Derivative Solver: gsl_multiroot_fdfsolver_hybridj
     This algorithm is an unscaled version of `hybridsj'.  The steps are controlled by a spherical trust region |x' - x| < \delta, instead of a generalized region.  This can be useful if the generalized region estimated by `hybridsj' is inappropriate.

 - Derivative Solver: gsl_multiroot_fdfsolver_newton
     Newton's Method is the standard root-polishing algorithm.  The algorithm begins with an initial guess for the location of the solution.  On each iteration a linear approximation to the function F is used to estimate the step which will zero all the components of the residual.  The iteration is defined by the following sequence,

          x -> x' = x - J^{-1} f(x)

     where the Jacobian matrix J is computed from the derivative functions provided by F.  The step dx is obtained by solving the linear system,

          J dx = - f(x)

     using LU decomposition.

 - Derivative Solver: gsl_multiroot_fdfsolver_gnewton
     This is a modified version of Newton's method which attempts to improve global convergence by requiring every step to reduce the Euclidean norm of the residual, |f(x)|.  If the Newton step leads to an increase in the norm then a reduced step of relative size,

          t = (\sqrt(1 + 6 r) - 1) / (3 r)

     is proposed, with r being the ratio of norms |f(x')|^2/|f(x)|^2.  This procedure is repeated until a suitable step size is found.


File: gsl-ref.info,  Node: Algorithms without Derivatives,  Next: Example programs for Multidimensional Root finding,  Prev: Algorithms using Derivatives,  Up: Multidimensional Root-Finding

Algorithms without Derivatives
==============================

   The algorithms described in this section do not require any derivative information to be supplied by the user.  Any derivatives needed are approximated by using finite differences.

 - Solver: gsl_multiroot_fsolver_hybrids
     This is a version of the Hybrid algorithm which replaces calls to the Jacobian function by its finite difference approximation.  The finite difference approximation is computed using `gsl_multiroots_fdjac' with a relative step size of `GSL_SQRT_DBL_EPSILON'.

 - Solver: gsl_multiroot_fsolver_hybrid
     This is a finite difference version of the Hybrid algorithm without internal scaling.

 - Solver: gsl_multiroot_fsolver_dnewton
     The "discrete Newton algorithm" is the simplest method of solving a multidimensional system.  It uses the Newton iteration

          x -> x - J^{-1} f(x)

     where the Jacobian matrix J is approximated by taking finite differences of the function F.  The approximation scheme used by this implementation is,

          J_{ij} = (f_i(x + \delta_j) - f_i(x)) / \delta_j

     where \delta_j is a step of size \sqrt\epsilon |x_j| with \epsilon being the machine precision (\epsilon \approx 2.22 \times 10^-16).  The order of convergence of Newton's algorithm is quadratic, but the finite differences require n^2 function evaluations on each iteration.  The algorithm may become unstable if the finite differences are not a good approximation to the true derivatives.
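   As an illustration of this forward-difference scheme, a sketch of such a Jacobian approximation is shown below.  This is not the library's internal routine (which also handles scaling and error checking); the helper name `fdjac' and its argument layout are hypothetical,

     #include <math.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_multiroots.h>

     /* Fill J with a forward-difference approximation to the Jacobian
        of F at x.  fx must hold F(x); xtmp and ftmp are scratch. */
     static void
     fdjac (gsl_multiroot_function * F, const gsl_vector * x,
            const gsl_vector * fx, gsl_vector * xtmp,
            gsl_vector * ftmp, gsl_matrix * J)
     {
       size_t i, j, n = F->n;

       gsl_vector_memcpy (xtmp, x);

       for (j = 0; j < n; j++)
         {
           double xj = gsl_vector_get (x, j);
           double dj = sqrt (GSL_DBL_EPSILON) * fabs (xj);

           if (dj == 0.0)                   /* guard against x_j = 0 */
             dj = sqrt (GSL_DBL_EPSILON);

           gsl_vector_set (xtmp, j, xj + dj);
           F->f (xtmp, F->params, ftmp);    /* f(x + delta_j e_j) */

           for (i = 0; i < n; i++)
             gsl_matrix_set (J, i, j,
                             (gsl_vector_get (ftmp, i)
                              - gsl_vector_get (fx, i)) / dj);

           gsl_vector_set (xtmp, j, xj);    /* restore component */
         }
     }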
 - Solver: gsl_multiroot_fsolver_broyden
     The "Broyden algorithm" is a version of the discrete Newton algorithm which attempts to avoid the expensive update of the Jacobian matrix on each iteration.  The changes to the Jacobian are also approximated, using a rank-1 update,

          J^{-1} \to J^{-1} - (J^{-1} df - dx) dx^T J^{-1} / dx^T J^{-1} df

     where the vectors dx and df are the changes in x and f.  On the first iteration the inverse Jacobian is estimated using finite differences, as in the discrete Newton algorithm.

     This approximation gives a fast update but is unreliable if the changes are not small, and the estimate of the inverse Jacobian becomes worse as time passes.  The algorithm has a tendency to become unstable unless it starts close to the root.  The Jacobian is refreshed if this instability is detected (consult the source for details).

     This algorithm is not recommended and is included only for demonstration purposes.


File: gsl-ref.info,  Node: Example programs for Multidimensional Root finding,  Next: References and Further Reading for Multidimensional Root Finding,  Prev: Algorithms without Derivatives,  Up: Multidimensional Root-Finding

Examples
========

   The multidimensional solvers are used in a similar way to the one-dimensional root finding algorithms.  This first example demonstrates the `hybrids' scaled-hybrid algorithm, which does not require derivatives.  The program solves the Rosenbrock system of equations,

          f_1 (x, y) = a (1 - x)
          f_2 (x, y) = b (y - x^2)

   with a = 1, b = 10.  The solution of this system lies at (x,y) = (1,1) in a narrow valley.

   The first stage of the program is to define the system of equations,

     #include <stdlib.h>
     #include <stdio.h>
     #include <gsl/gsl_vector.h>
     #include <gsl/gsl_multiroots.h>

     int print_state (size_t iter, gsl_multiroot_fsolver * s);

     struct rparams
       {
         double a;
         double b;
       };

     int
     rosenbrock_f (const gsl_vector * x, void *params,
                   gsl_vector * f)
     {
       double a = ((struct rparams *) params)->a;
       double b = ((struct rparams *) params)->b;

       double x0 = gsl_vector_get (x, 0);
       double x1 = gsl_vector_get (x, 1);

       double y0 = a * (1 - x0);
       double y1 = b * (x1 - x0 * x0);

       gsl_vector_set (f, 0, y0);
       gsl_vector_set (f, 1, y1);

       return GSL_SUCCESS;
     }

   The main program begins by creating the function object `f', with the arguments `(x,y)' and parameters `(a,b)'.  The solver `s' is initialized to use this function, with the `hybrids' method.

     int
     main (void)
     {
       const gsl_multiroot_fsolver_type *T;
       gsl_multiroot_fsolver *s;

       int status;
       size_t iter = 0;

       const size_t n = 2;
       struct rparams p = {1.0, 10.0};
       gsl_multiroot_function f = {&rosenbrock_f, n, &p};

       double x_init[2] = {-10.0, -5.0};
       gsl_vector *x = gsl_vector_alloc (n);

       gsl_vector_set (x, 0, x_init[0]);
       gsl_vector_set (x, 1, x_init[1]);

       T = gsl_multiroot_fsolver_hybrids;
       s = gsl_multiroot_fsolver_alloc (T, 2);
       gsl_multiroot_fsolver_set (s, &f, x);

       print_state (iter, s);

       do
         {
           iter++;
           status = gsl_multiroot_fsolver_iterate (s);

           print_state (iter, s);

           if (status)   /* check if solver is stuck */
             break;

           status = gsl_multiroot_test_residual (s->f, 1e-7);
         }
       while (status == GSL_CONTINUE && iter < 1000);

       printf ("status = %s\n", gsl_strerror (status));

       gsl_multiroot_fsolver_free (s);
       gsl_vector_free (x);
       return 0;
     }

   Note that it is important to check the return status of each solver step, in case the algorithm becomes stuck.  If an error condition is detected, indicating that the algorithm cannot proceed, then the error can be reported to the user, a new starting point chosen or a different algorithm used.

   The intermediate state of the solution is displayed by the following function.  The solver state contains the vector `s->x' which is the current position, and the vector `s->f' with corresponding function values.
     int
     print_state (size_t iter, gsl_multiroot_fsolver * s)
     {
       printf ("iter = %3u x = % .3f % .3f "
               "f(x) = % .3e % .3e\n",
               (unsigned int) iter,
               gsl_vector_get (s->x, 0),
               gsl_vector_get (s->x, 1),
               gsl_vector_get (s->f, 0),
               gsl_vector_get (s->f, 1));

       return 0;
     }

   Here are the results of running the program.  The algorithm is started at (-10,-5), far from the solution.  Since the solution is hidden in a narrow valley the earliest steps follow the gradient of the function downhill, in an attempt to reduce the large value of the residual.  Once the root has been approximately located, on iteration 8, the Newton behavior takes over and convergence is very rapid.

     iter =  0 x = -10.000 -5.000 f(x) = 1.100e+01 -1.050e+03
     iter =  1 x = -10.000 -5.000 f(x) = 1.100e+01 -1.050e+03
     iter =  2 x = -3.976 24.827 f(x) = 4.976e+00 9.020e+01
     iter =  3 x = -3.976 24.827 f(x) = 4.976e+00 9.020e+01
     iter =  4 x = -3.976 24.827 f(x) = 4.976e+00 9.020e+01
     iter =  5 x = -1.274 -5.680 f(x) = 2.274e+00 -7.302e+01
     iter =  6 x = -1.274 -5.680 f(x) = 2.274e+00 -7.302e+01
     iter =  7 x = 0.249 0.298 f(x) = 7.511e-01 2.359e+00
     iter =  8 x = 0.249 0.298 f(x) = 7.511e-01 2.359e+00
     iter =  9 x = 1.000 0.878 f(x) = 1.268e-10 -1.218e+00
     iter = 10 x = 1.000 0.989 f(x) = 1.124e-11 -1.080e-01
     iter = 11 x = 1.000 1.000 f(x) = 0.000e+00 0.000e+00
     status = success

   Note that the algorithm does not update the location on every iteration.  Some iterations are used to adjust the trust-region parameter, after trying a step which was found to be divergent, or to recompute the Jacobian, when poor convergence behavior is detected.

   The next example program adds derivative information, in order to accelerate the solution.  There are two derivative functions `rosenbrock_df' and `rosenbrock_fdf'.  The latter computes both the function and its derivative simultaneously.  This allows the optimization of any common terms.  For simplicity we substitute calls to the separate `f' and `df' functions at this point in the code below.
     int
     rosenbrock_df (const gsl_vector * x, void *params,
                    gsl_matrix * J)
     {
       double a = ((struct rparams *) params)->a;
       double b = ((struct rparams *) params)->b;

       double x0 = gsl_vector_get (x, 0);

       double df00 = -a;
       double df01 = 0;
       double df10 = -2 * b * x0;
       double df11 = b;

       gsl_matrix_set (J, 0, 0, df00);
       gsl_matrix_set (J, 0, 1, df01);
       gsl_matrix_set (J, 1, 0, df10);
       gsl_matrix_set (J, 1, 1, df11);

       return GSL_SUCCESS;
     }

     int
     rosenbrock_fdf (const gsl_vector * x, void *params,
                     gsl_vector * f, gsl_matrix * J)
     {
       rosenbrock_f (x, params, f);
       rosenbrock_df (x, params, J);

       return GSL_SUCCESS;
     }

   The main program now makes calls to the corresponding `fdfsolver' versions of the functions (note that `print_state' must be declared to take a `gsl_multiroot_fdfsolver *' argument in this version, but is otherwise unchanged),

     int
     main (void)
     {
       const gsl_multiroot_fdfsolver_type *T;
       gsl_multiroot_fdfsolver *s;

       int status;
       size_t iter = 0;

       const size_t n = 2;
       struct rparams p = {1.0, 10.0};
       gsl_multiroot_function_fdf f = {&rosenbrock_f,
                                       &rosenbrock_df,
                                       &rosenbrock_fdf,
                                       n, &p};

       double x_init[2] = {-10.0, -5.0};
       gsl_vector *x = gsl_vector_alloc (n);

       gsl_vector_set (x, 0, x_init[0]);
       gsl_vector_set (x, 1, x_init[1]);

       T = gsl_multiroot_fdfsolver_gnewton;
       s = gsl_multiroot_fdfsolver_alloc (T, n);
       gsl_multiroot_fdfsolver_set (s, &f, x);

       print_state (iter, s);

       do
         {
           iter++;
           status = gsl_multiroot_fdfsolver_iterate (s);

           print_state (iter, s);

           if (status)
             break;

           status = gsl_multiroot_test_residual (s->f, 1e-7);
         }
       while (status == GSL_CONTINUE && iter < 1000);

       printf ("status = %s\n", gsl_strerror (status));

       gsl_multiroot_fdfsolver_free (s);
       gsl_vector_free (x);
       return 0;
     }

   The addition of derivative information to the `hybrids' solver does not make any significant difference to its behavior, since it is able to approximate the Jacobian numerically with sufficient accuracy.  To illustrate the behavior of a different derivative solver we switch to `gnewton'.  This is a traditional Newton solver with the constraint that it scales back its step if the full step would lead "uphill".  Here is the output for the `gnewton' algorithm,

     iter =  0 x = -10.000 -5.000 f(x) = 1.100e+01 -1.050e+03
     iter =  1 x = -4.231 -65.317 f(x) = 5.231e+00 -8.321e+02
     iter =  2 x = 1.000 -26.358 f(x) = -8.882e-16 -2.736e+02
     iter =  3 x = 1.000 1.000 f(x) = -2.220e-16 -4.441e-15
     status = success

   The convergence is much more rapid, but the algorithm takes a wide excursion out to the point (-4.23,-65.3).  This could cause the algorithm to go astray in a realistic application.  The hybrid algorithm follows the downhill path to the solution more reliably.


File: gsl-ref.info,  Node: References and Further Reading for Multidimensional Root Finding,  Prev: Example programs for Multidimensional Root finding,  Up: Multidimensional Root-Finding

References and Further Reading
==============================

   The original version of the Hybrid method is described in the following articles by Powell,

     M.J.D. Powell, "A Hybrid Method for Nonlinear Equations" (Chap 6, p 87-114) and "A Fortran Subroutine for Solving systems of Nonlinear Algebraic Equations" (Chap 7, p 115-161), in `Numerical Methods for Nonlinear Algebraic Equations', P. Rabinowitz, editor.  Gordon and Breach, 1970.

   The following papers are also relevant to the algorithms described in this section,

     J.J. More', M.Y. Cosnard, "Numerical Solution of Nonlinear Equations", `ACM Transactions on Mathematical Software', Vol 5, No 1, (1979), p 64-85

     C.G. Broyden, "A Class of Methods for Solving Nonlinear Simultaneous Equations", `Mathematics of Computation', Vol 19 (1965), p 577-593

     J.J. More', B.S. Garbow, K.E.
Hillstrom, "Testing Unconstrained Optimization Software", ACM Transactions on Mathematical Software, Vol 7, No 1 (1981), p 17-41  File: gsl-ref.info, Node: Multidimensional Minimization, Next: Least-Squares Fitting, Prev: Multidimensional Root-Finding, Up: Top Multidimensional Minimization ***************************** This chapter describes routines for finding minima of arbitrary multidimensional functions. The library provides low level components for a variety of iterative minimizers and convergence tests. These can be combined by the user to achieve the desired solution, while providing full access to the intermediate steps of the algorithms. Each class of methods uses the same framework, so that you can switch between minimizers at runtime without needing to recompile your program. Each instance of a minimizer keeps track of its own state, allowing the minimizers to be used in multi-threaded programs. The minimization algorithms can be used to maximize a function by inverting its sign. The header file `gsl_multimin.h' contains prototypes for the minimization functions and related declarations. * Menu: * Multimin Overview:: * Multimin Caveats:: * Initializing the Multidimensional Minimizer:: * Providing a function to minimize:: * Multimin Iteration:: * Multimin Stopping Criteria:: * Multimin Algorithms:: * Multimin Examples:: * Multimin References and Further Reading::  File: gsl-ref.info, Node: Multimin Overview, Next: Multimin Caveats, Up: Multidimensional Minimization Overview ======== The problem of multidimensional minimization requires finding a point x such that the scalar function, f(x_1, ..., x_n) takes a value which is lower than at any neighboring point. For smooth functions the gradient g = \nabla f vanishes at the minimum. In general there are no bracketing methods available for the minimization of n-dimensional functions. All algorithms proceed from an initial guess using a search algorithm which attempts to move in a downhill direction. A one-dimensional line minimisation is performed along this direction until the lowest point is found to a suitable tolerance. The search direction is then updated with local information from the function and its derivatives, and the whole process repeated until the true n-dimensional minimum is found. Several minimization algorithms are available within a single framework. The user provides a high-level driver for the algorithms, and the library provides the individual functions necessary for each of the steps. There are three main phases of the iteration. The steps are, * initialize minimizer state, S, for algorithm T * update S using the iteration T * test S for convergence, and repeat iteration if necessary Each iteration step consists either of an improvement to the line-mimisation in the current direction or an update to the search direction itself. The state for the minimizers is held in a `gsl_multimin_fdfminimizer' struct.  File: gsl-ref.info, Node: Multimin Caveats, Next: Initializing the Multidimensional Minimizer, Prev: Multimin Overview, Up: Multidimensional Minimization Caveats ======= Note that the minimization algorithms can only search for one local minimum at a time. When there are several local minima in the search area, the first minimum to be found will be returned; however it is difficult to predict which of the minima this will be. In most cases, no error will be reported if you try to find a local minimum in an area where there is more than one. 
   It is also important to note that the minimization algorithms find local minima; there is no way to determine whether a minimum is a global minimum of the function in question.


File: gsl-ref.info,  Node: Initializing the Multidimensional Minimizer,  Next: Providing a function to minimize,  Prev: Multimin Caveats,  Up: Multidimensional Minimization

Initializing the Multidimensional Minimizer
===========================================

   The following function initializes a multidimensional minimizer.  The minimizer itself depends only on the dimension of the problem and the algorithm and can be reused for different problems.

 - Function: gsl_multimin_fdfminimizer * gsl_multimin_fdfminimizer_alloc (const gsl_multimin_fdfminimizer_type *T, size_t N)
     This function returns a pointer to a newly allocated instance of a minimizer of type T for an N-dimensional function.  If there is insufficient memory to create the minimizer then the function returns a null pointer and the error handler is invoked with an error code of `GSL_ENOMEM'.

 - Function: int gsl_multimin_fdfminimizer_set (gsl_multimin_fdfminimizer * S, gsl_multimin_function_fdf *FDF, const gsl_vector * X, double STEP_SIZE, double TOL)
     This function initializes the minimizer S to minimize the function FDF starting from the initial point X.  The size of the first trial step is given by STEP_SIZE.  The accuracy of the line minimization is specified by TOL.  The precise meaning of this parameter depends on the method used.  Typically the line minimization is considered successful if the gradient of the function g is orthogonal to the current search direction p to a relative accuracy of TOL, where dot(p,g) < tol |p| |g|.

 - Function: void gsl_multimin_fdfminimizer_free (gsl_multimin_fdfminimizer *S)
     This function frees all the memory associated with the minimizer S.

 - Function: const char * gsl_multimin_fdfminimizer_name (const gsl_multimin_fdfminimizer * S)
     This function returns a pointer to the name of the minimizer.  For example,

          printf ("s is a '%s' minimizer\n",
                  gsl_multimin_fdfminimizer_name (s));

     would print something like `s is a 'conjugate_pr' minimizer'.
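   For example, the following fragment creates a Fletcher-Reeves conjugate gradient minimizer for a 2-dimensional function, with a first trial step of 0.01 and a line minimization tolerance of 1e-4.  Here `my_func' stands for a `gsl_multimin_function_fdf' of the kind described in the next section, and `x' for a vector holding the starting point,

     const gsl_multimin_fdfminimizer_type *T
         = gsl_multimin_fdfminimizer_conjugate_fr;
     gsl_multimin_fdfminimizer *s
         = gsl_multimin_fdfminimizer_alloc (T, 2);

     gsl_multimin_fdfminimizer_set (s, &my_func, x, 0.01, 1e-4);

     /* ... iterate ... */

     gsl_multimin_fdfminimizer_free (s);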
File: gsl-ref.info,  Node: Providing a function to minimize,  Next: Multimin Iteration,  Prev: Initializing the Multidimensional Minimizer,  Up: Multidimensional Minimization

Providing a function to minimize
================================

   You must provide a parametric function of n variables for the minimizers to operate on.  You also need to provide a routine which calculates the gradient of the function and a third routine which calculates both the function value and the gradient together.  In order to allow for general parameters the functions are defined by the following data type:

 - Data Type: gsl_multimin_function_fdf
     This data type defines a general function of n variables with parameters and the corresponding gradient vector of derivatives,

    `double (* f) (const gsl_vector * X, void * PARAMS)'
          this function should return the result f(x,params) for argument X and parameters PARAMS.

    `void (* df) (const gsl_vector * X, void * PARAMS, gsl_vector * G)'
          this function should store the N-dimensional gradient g_i = d f(x,params) / d x_i in the vector G for argument X and parameters PARAMS.

    `void (* fdf) (const gsl_vector * X, void * PARAMS, double * F, gsl_vector * G)'
          This function should set the values of F and G as above, for arguments X and parameters PARAMS.  This function provides an optimization of the separate functions for f(x) and g(x) - it is always faster to compute the function and its derivative at the same time.

    `size_t N'
          the dimension of the system, i.e. the number of components of the vector X.

    `void * PARAMS'
          a pointer to the parameters of the function.

   The following example function defines a simple paraboloid with two parameters,

     /* Paraboloid centered on (dp[0],dp[1]) */

     double
     my_f (const gsl_vector *v, void *params)
     {
       double x, y;
       double *dp = (double *)params;

       x = gsl_vector_get(v, 0);
       y = gsl_vector_get(v, 1);

       return 10.0 * (x - dp[0]) * (x - dp[0]) +
              20.0 * (y - dp[1]) * (y - dp[1]) + 30.0;
     }

     /* The gradient of f, df = (df/dx, df/dy). */

     void
     my_df (const gsl_vector *v, void *params, gsl_vector *df)
     {
       double x, y;
       double *dp = (double *)params;

       x = gsl_vector_get(v, 0);
       y = gsl_vector_get(v, 1);

       gsl_vector_set(df, 0, 20.0 * (x - dp[0]));
       gsl_vector_set(df, 1, 40.0 * (y - dp[1]));
     }

     /* Compute both f and df together. */

     void
     my_fdf (const gsl_vector *x, void *params, double *f, gsl_vector *df)
     {
       *f = my_f(x, params);
       my_df(x, params, df);
     }

   The function can be initialized using the following code,

     gsl_multimin_function_fdf my_func;

     double p[2] = { 1.0, 2.0 }; /* center at (1,2) */

     my_func.f = &my_f;
     my_func.df = &my_df;
     my_func.fdf = &my_fdf;
     my_func.n = 2;
     my_func.params = (void *)p;
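   The iteration and stopping-test functions are described in the nodes which follow.  As a sketch of how they fit together (assuming a minimizer `s' has been created and set with `my_func', as in the previous section), a typical driver loop looks like,

     size_t iter = 0;
     int status;

     do
       {
         iter++;
         status = gsl_multimin_fdfminimizer_iterate (s);

         if (status)    /* the minimizer is unable to improve */
           break;

         /* stop when the gradient norm drops below 1e-3 */
         status = gsl_multimin_test_gradient (s->gradient, 1e-3);
       }
     while (status == GSL_CONTINUE && iter < 100);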