Computational Science Stack Exchange is a question and answer site for scientists using computers to solve scientific problems. It's 100% free, no registration required.

Most numerical methods for quadrature treat the integrand as a black-box function. What if we have more information? In particular, what benefit, if any, can we derive from knowing the first few derivatives of the integrand? What other information might be valuable?

For derivatives in particular: error estimates for basic quadrature (rectangle/trapezoid/Simpson's rules) are closely related to them. Perhaps there is a way to pre-select a sampling resolution instead of relying on dynamic adaptivity?

I'm interested in both the univariate and multi-dimensional case.

Just a minor correction: The rectangle, trapezoid, and Simpson's rule are Newton-Cotes type rules, not Gaussian quadratures. – Pedro Mar 20 at 19:19

3 Answers

I think this is not quite what you had in mind, but for the sake of completeness, let's start with some basics. Most quadrature formulas such as Newton-Cotes and Gauss are based on the idea that in order to evaluate the integral of a function approximately, you can approximate the function by, e.g., a polynomial that you can then integrate exactly: $$ \int_a^b f(x) \,dx \approx \int_a^b \sum_j c_j p_j(x)\,dx = \sum_j c_j\int_a^b p_j(x) \,dx. $$

Newton-Cotes and Gauss are based on Lagrange interpolation, meaning you interpolate the given function using its values on a set of nodes $x_j$ (which are spaced uniformly for Newton-Cotes and chosen optimally in a certain sense for Gauss). In this case, $c_j = f(x_j)$, and the integrals over the polynomial nodal basis functions $p_j$ are exactly the quadrature weights.
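As a small illustration (a sketch in Python, with hypothetical function names), the weights can be obtained exactly as described: build each Lagrange nodal basis polynomial and integrate it over $[a,b]$. For three equally spaced nodes this recovers Simpson's rule.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def quadrature_weights(nodes, a, b):
    """Weight w_j = integral over [a, b] of the j-th Lagrange basis polynomial."""
    weights = []
    for j, xj in enumerate(nodes):
        # Build L_j(x) = prod_{k != j} (x - x_k) / (x_j - x_k),
        # with coefficients stored lowest degree first.
        c = np.array([1.0])
        for k, xk in enumerate(nodes):
            if k == j:
                continue
            c = np.convolve(c, np.array([-xk, 1.0])) / (xj - xk)
        ci = P.polyint(c)  # coefficients of the antiderivative
        weights.append(P.polyval(b, ci) - P.polyval(a, ci))
    return np.array(weights)

# Equally spaced nodes on [0, 1] give the Simpson weights 1/6, 4/6, 1/6:
w = quadrature_weights([0.0, 0.5, 1.0], 0.0, 1.0)
```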

The same approach works with Hermite interpolation, i.e., interpolation using the values of a function and its derivatives up to a certain order on a set of nodes. In the case of function and first derivative values only, you have $$ \int_a^b f(x) \,dx \approx \int_a^b \sum_j f(x_j) p_j(x) + f'(x_j) q_j(x)\,dx = \sum_j f(x_j) w_j+f'(x_j) \bar w_j. $$ (There is a Matlab implementation of this, if you'd like to see how it works.)
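For the two-node case this can be written down in closed form (a minimal Python sketch, not the Matlab code linked above): integrating the cubic Hermite interpolant on $[a,b]$ gives $w_j = h/2$ and $\bar w_j = \pm h^2/12$, and the resulting rule is exact for polynomials up to degree 3.

```python
def hermite_two_node(f, df, a, b):
    """Quadrature from the cubic Hermite interpolant on the two nodes a, b:
    exact for all polynomials of degree <= 3."""
    h = b - a
    return h / 2 * (f(a) + f(b)) + h**2 / 12 * (df(a) - df(b))

# Exact on cubics: integral of x^3 over [0, 1] is 1/4.
approx = hermite_two_node(lambda x: x**3, lambda x: 3 * x**2, 0.0, 1.0)
```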

This is related to a variant of Gauss quadrature called Gauss-Legendre quadrature, where the nodes are chosen precisely to make the derivative weights $\bar w_j$ vanish (which is another explanation for the fact that Gauss quadrature with $N$ nodes is exact for polynomials of degree $2N-1$). I think this at least partially answers your question in the second paragraph. For this reason, Gauss quadrature is usually preferred over Hermite-based rules: you get the same order with the same number of points, but do not need derivative information.

For multidimensional quadrature, you face the problem that the number of derivatives (including mixed derivatives) you need to evaluate grows very quickly as the order increases.
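To make the growth concrete: the number of distinct partial derivatives of order exactly $k$ of a smooth function of $d$ variables is $\binom{d+k-1}{k}$ (the number of multisets of size $k$ drawn from $d$ coordinate directions). A quick check (helper name hypothetical):

```python
from math import comb

def num_mixed_partials(d, k):
    """Distinct partial derivatives of order exactly k in d variables:
    multisets of size k from d coordinate directions."""
    return comb(d + k - 1, k)

# In 10 dimensions the counts for orders 1..4 grow quickly:
counts = [num_mixed_partials(10, k) for k in range(1, 5)]
# → [10, 55, 220, 715]
```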

Coming back to your question: A straightforward way of exploiting derivative information would be to use a subdivision of your integration domain and use a separate quadrature for every division. If you know that the derivatives of your function are large in some part of the domain, you'd use either smaller domains (in effect, a summed quadrature formula) or higher quadrature order. This is related to h- and p-adaptivity, respectively, in finite element methods.
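An h-type version of this idea can be sketched as follows (all names hypothetical, and only a rough heuristic, not a production scheme): since the trapezoid error on a single panel of width $h$ scales like $h^3 |f''|/12$, one can pre-select panel edges by equidistributing that error, i.e., placing nodes with density proportional to $|f''|^{1/3}$.

```python
import numpy as np

def graded_trapezoid(f, d2f, a, b, total_panels=64):
    """Composite trapezoid rule with panel widths pre-selected from f''.

    Panel edges are placed by inverting the cumulative distribution of a
    node density proportional to |f''|^(1/3), which roughly equalizes the
    local error h^3 |f''| / 12 across panels."""
    xs = np.linspace(a, b, 1025)
    density = np.abs(d2f(xs)) ** (1 / 3) + 1e-12  # floor keeps cdf increasing
    cdf = np.concatenate(
        [[0.0], np.cumsum((density[:-1] + density[1:]) / 2 * np.diff(xs))]
    )
    cdf /= cdf[-1]
    # Invert the cdf to get graded panel edges.
    edges = np.interp(np.linspace(0.0, 1.0, total_panels + 1), cdf, xs)
    total = 0.0
    for left, right in zip(edges[:-1], edges[1:]):
        total += (right - left) * (f(left) + f(right)) / 2
    return total
```

For a function with a sharp feature, such as $f(x) = 1/(1+100x^2)$ on $[0,1]$, this clusters the panels near the peak at $x=0$.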


There are a number of "corrected" integration rules which invoke derivatives of the endpoints. One simple example is the corrected trapezoidal rule. Suppose we wish to approximate the integral

$$ \int_a^bf(x)\,dx. $$

Let $n$ be an integer, and $h=(b-a)/n$. Then the trapezoidal rule

$$ T = \frac{h}{2}\bigl(f(a)+2f(a+h)+2f(a+2h)+\cdots+2f(a+(n-1)h)+f(b)\bigr) $$

provides a simple approximation to the integral, with error of order $h^2$. However, the "corrected" trapezoidal rule:

$$ T'=T-\frac{h^2}{12}\bigl(f'(b)-f'(a)\bigr) $$

vastly increases the accuracy. For example, consider

$$ I=\int_0^1 e^{-x^2}\,dx $$

and choose $n=8$. The exact value of $I$, to 14 decimal places, is

$$ 0.74682413281243 $$

And the values of $T$ and $T'$ are

$$ 0.7458656148457,\quad 0.74682363422375 $$

respectively. The errors are

$$ |I-T|=9.5851796673207534 \times 10^{-4} $$

and

$$ |I-T'|=4.9858868145236102 \times 10^{-7} $$

showing a remarkable increase in accuracy. There are further corrections involving higher derivatives, or starting from other Newton-Cotes rules or Gaussian type rules.
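The worked example above can be reproduced in a few lines of Python (a sketch, not the answer author's code):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule T with n panels."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def corrected_trapezoid(f, df, a, b, n):
    """Corrected rule T' = T - h^2/12 * (f'(b) - f'(a))."""
    h = (b - a) / n
    return trapezoid(f, a, b, n) - h**2 / 12 * (df(b) - df(a))

f = lambda x: np.exp(-x**2)
df = lambda x: -2 * x * np.exp(-x**2)

T = trapezoid(f, 0.0, 1.0, 8)            # error ~ 9.6e-4
Tc = corrected_trapezoid(f, df, 0.0, 1.0, 8)  # error ~ 5.0e-7
```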


Apart from the methods based on Newton-Cotes rules mentioned in the other answers, there is what is now called Gauss-Turán quadrature (see e.g. this and this, as well as the venerable reference by Walter Gautschi). This is a generalization of the usual Gaussian quadrature in which knowledge of a function's derivatives is exploited in finding an optimal set of abscissas and weights, yielding a rule that integrates functions of the form $\text{polynomial}\times\text{weight function}$ exactly. The price is that one must now be able to evaluate the function and a number of its derivatives at arbitrary real points. A search in the usual places should turn up a few more references.

