Taylor Polynomial Approximation

$$ % Define colors used throughout LaTeX explanation \require{color} \definecolor{error}{RGB}{ 255, 0, 0 } \definecolor{taylor}{RGB}{ 0, 0, 255 } \definecolor{estimate}{RGB}{ 160, 80, 0 } \definecolor{normal}{RGB}{ 0, 0, 0 } % i.e. black \definecolor{builtin}{RGB}{ 0, 180, 0 } $$

Note (Spring 2025)

I have updated my old code for drawing on the HTML canvas. (The old code, used in the TIME III Conference Presentation, is still available at graph.js.) If you have saved the old version of this web page, you can discard it.

Now the code is refactored and split among several files. The LaTeX rendering is based on the most recent (and externally loaded) MathJax, and the code we worked on in class is located in the files with self-descriptive names: cos.js, exp.js, tan.js, ln.js.

The general case

Suppose $n$ is a non-negative integer, $\ U$ is an open interval of the real number line, and $x_0 \in U$. Whenever a function $f$ is defined and continuously differentiable $n+1$ times on $U$, we can write the following identity for any other $x \in U$:

$$ \color{builtin} f( x ) \color{normal} = \color{taylor} \displaystyle \sum_{ i = 0 }^{ n } \frac{ 1 }{ i \ ! } \cdot \left( { \left( \frac{ d }{ d \ t } \right)^{ i } \ {\rule[-25px]{1px}{60px}} }_{ \ t = x_0 } f( t ) \right) \cdot ( x - x_0 )^i \color{normal} + \color{error} \frac{ 1 }{ n \ ! } \displaystyle \int_{ t = x_0 }^{x} \left( \left( \frac{ d }{ d \ t } \right)^{ n + 1 } f( t ) \right) \cdot ( x - t )^n \ d \ t \color{normal} .$$

The sum is called the $ \color{taylor} \text{Taylor polynomial} $ of $f$ at $x_0$. We view it as the approximation of $ \color{builtin} f( x ) \color{normal} $, so that the integral is the $ \color{error} \text{error term} $ of that approximation.

It turns out that in many cases the $ \color{error} \text{error term} $ converges to zero as $n \rightarrow \infty$. In those cases, the $ \color{taylor} \text{Taylor polynomial} $ can be used for approximating $ \color{builtin} f( x ) \color{normal} $ — perhaps only for the values of $x$ sufficiently close to $x_0$ — with an arbitrary, and guaranteed, precision $\varepsilon > 0$.
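
Given the values of the derivatives of $f$ at $x_0$, the $ \color{taylor} \text{Taylor polynomial} $ itself is straightforward to evaluate. The following JavaScript sketch (a hypothetical helper, not one of this page's source files) computes $ \displaystyle \sum_{ i = 0 }^{ n } \frac{ f^{ ( i ) } ( x_0 ) }{ i \ ! } \cdot ( x - x_0 )^i $:

```js
// Hypothetical helper (not one of this page's source files): evaluate the
// Taylor polynomial, given the derivative values f^(0)(x0), ..., f^(n)(x0).
function taylorPolynomial(derivsAtX0, x0, x) {
  let sum = 0;
  let power = 1;      // (x - x0)^i
  let factorial = 1;  // i!
  for (let i = 0; i < derivsAtX0.length; i++) {
    sum += derivsAtX0[i] / factorial * power;
    power *= x - x0;    // advance (x - x0)^i to (x - x0)^(i+1)
    factorial *= i + 1; // advance i! to (i+1)!
  }
  return sum;
}

// Example: the degree-4 Taylor polynomial of cos at x0 = 0 uses the
// derivative values cos(0), -sin(0), -cos(0), sin(0), cos(0) = 1, 0, -1, 0, 1:
taylorPolynomial([1, 0, -1, 0, 1], 0, 0.5); // ≈ 0.8776, close to cos(0.5)
```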

$\cos(x)$

When used for $f( x ) = \cos( x )$ and $x_0 = 0$, the general $ \color{taylor} \text{Taylor polynomial} $ approximation turns into

$$ \color{builtin} \cos( x ) \color{normal} = \color{taylor} \displaystyle \sum_{ j = 0 }^{ k } \frac{ ( -1 )^j }{ ( 2j ) \ ! } \cdot x^{2j} \color{normal} + \color{error} \frac{ 1 }{ ( 2k ) \ ! } \displaystyle \int_{ t = 0 }^{x} \left( \left( \frac{ d }{ d \ t } \right)^{ 2k + 1 } \cos( t ) \right) \cdot ( x - t )^{ 2k } \ d \ t \color{normal} $$
for an even $n = 2k$, writing $i = 2j$; only the even-indexed terms appear in the sum because the odd-order derivatives of $\cos$ are zero at $x_0 = 0$.

The size of the $ \color{error} \text{integral error term} $ can be estimated from above, since every derivative of $\cos$ is bounded by $1$ in absolute value:

$$ \left| \ \color{error} \frac{ 1 }{ n \ ! } \displaystyle \int_{ t = 0 }^{x} \left( \left( \frac{ d }{ d \ t } \right)^{ n + 1 } \cos( t ) \right) \cdot ( x - t )^{ n } \ d \ t \color{normal} \ \right| \le \color{estimate} \frac{ \left| x \right|^{ n + 1 } }{ ( n + 1 ) \ ! } \color{normal} .$$

The $ \color{estimate} \text{estimate of the error term} $ (and thus the $ \color{error} \text{error term} $ itself) converges to zero for any real $x$:

$$ \lim_{ n \rightarrow \infty } \color{estimate} \frac{ \left| x \right|^{n + 1} }{ ( n + 1 ) \ ! } \color{normal} = 0 .$$

These facts provide the basis for computing $ \color{builtin} \cos( x ) \color{normal} $ with an arbitrary guaranteed precision* $\varepsilon > 0$, as done in the source code of this HTML page. To see a demonstration of this computation, press the "Show" button below. To start from scratch, press the "Reset" button or refresh the page.

* Our computation uses floating point computer representation of real numbers, and thus suffers from all the usual limitations of floating point arithmetic. This precision can be guaranteed only modulo floating point errors.
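
In code, the stopping rule follows directly from the $ \color{estimate} \text{estimate of the error term} $: keep adding terms until $ \color{estimate} \frac{ \left| x \right|^{ n + 1 } }{ ( n + 1 ) \ ! } \color{normal} < \varepsilon $. Here is a minimal sketch of that idea (a simplified stand-in, not the actual cos.js of this page):

```js
// Simplified sketch (not the page's actual cos.js): sum the Taylor
// polynomial of cos at x0 = 0 term by term, stopping once the error
// bound |x|^(2j+1) / (2j+1)! for the current degree n = 2j drops below eps.
function taylorCos(x, eps) {
  let sum = 1;              // the j = 0 term
  let term = 1;             // (-1)^j * x^(2j) / (2j)!
  let bound = Math.abs(x);  // |x|^(2j+1) / (2j+1)! for j = 0
  for (let j = 0; bound >= eps; j++) {
    term *= -(x * x) / ((2 * j + 1) * (2 * j + 2)); // term for j + 1
    sum += term;
    bound *= (x * x) / ((2 * j + 2) * (2 * j + 3)); // bound for j + 1
  }
  return sum; // |sum - cos(x)| < eps, modulo floating point errors
}
```

For example, taylorCos(1, 1e-12) agrees with Math.cos(1) to within the requested precision (modulo floating point errors, as noted above).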

$e^x$

When used for $f( x ) = e^x$ and $x_0 = 0$, the general $ \color{taylor} \text{Taylor polynomial} $ approximation turns into

$$ \color{builtin} e^x \color{normal} = \color{taylor} \displaystyle \sum_{ i = 0 }^{ n } \frac{ 1 }{ i \ ! } \cdot x^{i} \color{normal} + \color{error} \frac{ 1 }{ n \ ! } \displaystyle \int_{ t = 0 }^{x} e^t \cdot ( x - t )^{ n } \ d \ t \color{normal} .$$

The size of the $ \color{error} \text{integral error term} $ can be estimated from above, using $ e^t \le e^{ \left| x \right| } \le 3^{ \left\lceil \left| x \right| \right\rceil } $ for every $t$ between $0$ and $x$:

$$ \left| \ \color{error} \frac{ 1 }{ n \ ! } \displaystyle \int_{ t = 0 }^{x} e^t \cdot ( x - t )^{ n } \ d \ t \color{normal} \ \right| \le \color{estimate} \frac{ \max \left({\rule[-5px]{0px}{25px}} 1, 3^{ \left\lceil \left| x \right| \right\rceil } \right) \cdot \left| x \right|^{ n + 1 } }{ ( n + 1 ) \ ! } \color{normal} .$$

The $ \color{estimate} \text{estimate of the error term} $ (and thus the $ \color{error} \text{error term} $ itself) converges to zero for any real $x$:

$$ \lim_{ n \rightarrow \infty } \color{estimate} \frac{ \max \left({\rule[-5px]{0px}{25px}} 1, 3^{ \left\lceil \left| x \right| \right\rceil } \right) \cdot \left| x \right|^{n + 1} }{ ( n + 1 ) \ ! } \color{normal} = 0 .$$

These facts provide the basis for computing $ \color{builtin} e^x \color{normal} $ with an arbitrary guaranteed precision* $\varepsilon > 0$, as done in the source code of this HTML page. To see a demonstration of this computation, press the "Show" button below. To start from scratch, press the "Reset" button or refresh the page.

* Our computation uses floating point computer representation of real numbers, and thus suffers from all the usual limitations of floating point arithmetic. This precision can be guaranteed only modulo floating point errors.
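
The same stopping rule works for $ \color{builtin} e^x \color{normal} $, with the extra factor $ \max \left( 1, 3^{ \left\lceil \left| x \right| \right\rceil } \right) $ in the bound. A minimal sketch (a simplified stand-in, not the actual exp.js of this page):

```js
// Simplified sketch (not the page's actual exp.js): sum the Taylor
// polynomial of e^x at x0 = 0, stopping once the error bound
// max(1, 3^ceil(|x|)) * |x|^(n+1) / (n+1)! drops below eps.
function taylorExp(x, eps) {
  const a = Math.abs(x);
  const factor = Math.max(1, Math.pow(3, Math.ceil(a)));
  let sum = 1;            // the i = 0 term
  let term = 1;           // x^i / i!
  let bound = factor * a; // the bound for n = 0
  for (let n = 0; bound >= eps; n++) {
    term *= x / (n + 1);  // term for i = n + 1
    sum += term;
    bound *= a / (n + 2); // bound for n + 1
  }
  return sum; // |sum - e^x| < eps, modulo floating point errors
}
```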

$\tan( x )$

When used for $f( x ) = \tan( x )$ and $x_0 = 0$, the general $ \color{taylor} \text{Taylor polynomial} $ approximation turns into something rather complicated:

$$ \color{builtin} \tan( x ) \color{normal} = \color{taylor} \displaystyle \sum_{ j = 0 }^{ k } \frac{ ( -1 )^{ j + 1 } \ 4^{ j + 1 } \left( 1 - 4^{ j + 1 } \right) \ B_{ 2j + 2 } }{ ( 2 j + 2 ) \ ! } \cdot x^{ 2j + 1 } \color{normal} + \color{error} \frac{ 1 }{ ( 2k + 1 ) \ ! } \displaystyle \int_{ t = 0 }^{x} \left( \left( \frac{ d }{ d \ t } \right)^{ 2 ( k + 1 ) } \tan( t ) \right) \cdot ( x - t )^{ 2k + 1 } \ d \ t \color{normal} .$$

In the above, the terms $B_i$ are the so-called Bernoulli numbers. Even though it is done in the source code of this page, the computation of the Bernoulli numbers is a bit too far from the main subject at hand — namely the Taylor polynomials — to discuss here in full detail.
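
For the curious, one standard way to compute them (not necessarily the method used in this page's source) is the recurrence $ B_0 = 1 $, $ \displaystyle B_n = - \frac{ 1 }{ n + 1 } \sum_{ k = 0 }^{ n - 1 } \binom{ n + 1 }{ k } B_k $:

```js
// Hypothetical sketch (not necessarily this page's method): compute
// B_0 .. B_m by the recurrence B_n = -1/(n+1) * sum_{k<n} C(n+1, k) * B_k.
// Floating point cancellation makes this unreliable for large m.
function bernoulliNumbers(m) {
  const B = [1]; // B_0 = 1
  for (let n = 1; n <= m; n++) {
    let acc = 0;
    let binom = 1; // C(n+1, k), starting at k = 0
    for (let k = 0; k < n; k++) {
      acc += binom * B[k];
      binom *= (n + 1 - k) / (k + 1); // advance to C(n+1, k+1)
    }
    B.push(-acc / (n + 1));
  }
  return B; // bernoulliNumbers(4) -> [1, -0.5, 0.1666..., 0, -0.0333...]
}
```

This convention gives $B_1 = -\frac{1}{2}$, but the tangent series above uses only the even-indexed Bernoulli numbers, for which all conventions agree.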

The size of the $ \color{error} \text{integral error term} $ can be estimated from above, but — as you can probably guess from the behavior of the Taylor polynomials below — that estimate can be guaranteed to converge to zero only for $x \in \left( - \frac{\pi}{2}, \frac{\pi}{2} \right)$. Having omitted a discussion of Bernoulli numbers, we are likewise leaving off the explicit estimation of the error term.

$\ln( x )$

When $f( x ) = \ln( x )$, the Taylor polynomial approximation calls for $ \displaystyle \left( \left( \frac{ d }{ d \ t } \right)^{ n + 1 } \ln( t ) \right) = ( -1 )^n \frac{ n \ ! }{ t^{ n + 1 } } .$ Using these identities (that hold for any $n = 0,1,\ldots$) to compute the $ \color{taylor} \text{Taylor polynomial} $ at $x_0 = 1$, we get $ \color{taylor} \displaystyle \sum_{ i = 1 }^{ n } \frac{ ( -1 )^{ i + 1 } }{ i } \cdot ( x - 1 )^{ i } \color{normal} $. The computation of the $ \color{error} \text{integral error term} $ for the same $x_0 = 1$ yields:

$ \displaystyle \hspace{8em} \color{error} \frac{ 1 }{ n \ ! } \int_{ t = 1 }^{x} \left( \left( \frac{ d }{ d \ t } \right)^{ n + 1 } \ln( t ) \right) \cdot ( x - t )^{ n } \ d \ t \color{normal} $

$ \displaystyle \hspace{16em} = \color{error} \frac{ 1 }{ n \ ! } \displaystyle \int_{ t = 1 }^{x} \left( ( -1 )^n \frac{ n \ ! }{ t^{ n + 1 } } \right) \cdot ( x - t )^{ n } \ d \ t \color{normal} = \color{error} \displaystyle \int_{ t = 1 }^{x} ( -1 )^n \frac{ ( x - t )^{ n } }{ t^{ n + 1 } } \ d \ t \color{normal} $

$ \displaystyle \hspace{24em} = \color{error} \displaystyle \int_{ t = 1 }^{x} \left( 1 - \frac{ x } { t } \right)^n \ \frac{ 1 }{ t } \ d \ t \color{normal} .$

Thus the full Taylor polynomial approximation for $f( x ) = \ln( x )$ at $x_0 = 1$ can be expressed as:

$$ \color{builtin} \ln( x ) \color{normal} = \color{taylor} \displaystyle \sum_{ i = 1 }^{ n } \frac{ ( -1 )^{ i + 1 } }{ i } \cdot ( x - 1 )^{ i } \color{normal} + \color{error} \displaystyle \int_{ t = 1 }^{x} \left( 1 - \frac{ x } { t } \right)^n \ \frac{ 1 }{ t } \ d \ t \color{normal} .$$

For $x > 0$, the expression $ \displaystyle 1 - \frac{ x } { t } ,$ considered as a function of $t$, has the graph in the shape of a hyperbola:

[Figure: the graph of $\displaystyle y = 1 - \frac{x}{t}$ with $x = \frac{1}{2}$ in the $( t, y )$-coordinate plane.]
[Figure: the graph of $\displaystyle y = 1 - \frac{x}{t}$ with $x = \frac{3}{2}$ in the $( t, y )$-coordinate plane.]

This hyperbola has the vertical asymptote $t = 0$, the horizontal asymptote $y = 1$, and the $t$-intercept $t = x$. As a function of $t$, $\displaystyle y = 1 - \frac{x}{t}$ is monotonic on the interval $t \in [ 1, x ]$*, reaching its maximum distance from zero, namely $ \left| x - 1 \right| ,$ at $t = 1$.

* We use the interval notation $t \in \left[ 1, x \right]$ to denote the set of all real numbers between $1$ and $x$ without the assumption that $1 \le x $ .

Therefore for any $x > 0$ we can estimate the size of the error term as follows:

$ \hspace{4em} \displaystyle \left| \ \color{error} \int_{ t = 1 }^{x} \left( 1 - \frac{ x } { t } \right)^n \ \frac{ 1 }{ t } \ d \ t \color{normal} \ \right| $

$ \hspace{8em} \displaystyle \le \color{estimate} \left| \ \int_{ t = 1 }^{x} \displaystyle \max_{ t \in [ 1, x ] } \left( \rule[-5px]{0px}{30px} \left| 1 - \frac{ x } { t } \right| \right)^n \ \max_{ t \in [ 1, x ] } \left( \rule[-5px]{0px}{30px} \frac{ 1 }{ t } \right) \ d \ t \ \right| \color{normal} = \color{estimate} \max_{ t \in [ 1, x ] } \left( \rule[-5px]{0px}{30px} \left| 1 - \frac{ x } { t } \right| \right)^n \cdot \max_{ t \in [ 1, x ] } \left( \rule[-5px]{0px}{30px} \frac{ 1 }{ t } \right) \cdot \left| \ \displaystyle \int_{ t = 1 }^{x} \displaystyle \ d \ t \ \right| \color{normal} $

$ \hspace{12em} \displaystyle = \color{estimate} \left| x - 1 \right|^n \cdot \max\left( 1, \frac{1}{x} \right) \cdot \left| x - 1 \right| \color{normal} $

$ \hspace{16em} \displaystyle = \color{estimate} \left| x - 1 \right|^{ n + 1 } \cdot \max\left( 1, \frac{1}{x} \right) \color{normal} .$

The last expression $ \displaystyle \color{estimate} E_{n} ( x ) \color{normal} = \color{estimate} \left| x - 1 \right|^{ n + 1 } \cdot \max\left( 1, \frac{1}{x} \right) \color{normal} $ is directly computable. Furthermore, for any $x \in \left( 0, 2 \right)$, we have that $\left| x - 1 \right| < 1$ and therefore $$ \lim_{ n \rightarrow \infty} \color{estimate} E_{n} ( x ) \color{normal} = 0 .$$

These facts provide the basis for computing $ \color{builtin} \ln( x ) \color{normal} $ with an arbitrary guaranteed precision* $\varepsilon > 0$ for any $x \in \left( 0, 2 \right) $, as done in the source code of this HTML page.

* Our computation uses floating point computer representation of real numbers, and thus suffers from all the usual limitations of floating point arithmetic. This precision can be guaranteed only modulo floating point errors.
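
Since $ \color{estimate} E_{n} ( x ) \color{normal} $ is directly computable, the stopping rule is the same as before. A minimal sketch (a simplified stand-in, not the actual ln.js of this page):

```js
// Simplified sketch (not the page's actual ln.js): sum the Taylor
// polynomial of ln(x) at x0 = 1 for x in (0, 2), stopping once the
// computable bound E_n(x) = |x - 1|^(n+1) * max(1, 1/x) drops below eps.
function taylorLn(x, eps) {
  const u = x - 1;
  const factor = Math.max(1, 1 / x);
  let sum = 0;                      // the empty sum for n = 0
  let power = 1;                    // (x - 1)^i
  let bound = factor * Math.abs(u); // E_0(x)
  for (let i = 1; bound >= eps; i++) {
    power *= u;
    sum += (i % 2 === 1 ? power : -power) / i; // (-1)^(i+1) (x-1)^i / i
    bound *= Math.abs(u);           // E_i(x) = E_{i-1}(x) * |x - 1|
  }
  return sum; // |sum - ln(x)| < eps, modulo floating point errors
}
```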

Using a slightly different estimation, we can demonstrate that the error term converges to zero even for $x = 2$. Indeed, for any $x \in [ 1, 2 ]$ and $n = 1, 2, \ldots \ $, we have that

$ \hspace{4em} \displaystyle \left| \ \color{error} \int_{ t = 1 }^{x} \left( 1 - \frac{ x } { t } \right)^n \ \frac{ 1 }{ t } \ d \ t \color{normal} \ \right| = \left| \ \color{error} \displaystyle \int_{ t = 1 }^{x} \frac{ ( x - t )^{ n } }{ t^{ n + 1 } } \ d \ t \color{normal} \ \right| $

$ \hspace{8em} \displaystyle \le \color{estimate} \left| \ \displaystyle \int_{ t = 1 }^{x} \frac{ \displaystyle \max_{ t \in [ 1, x ] } \left( \rule[-5px]{0px}{30px} | x - t |^{ n } \right) }{ t^{ n + 1 } } \ d \ t \ \right| \color{normal} = \color{estimate} \displaystyle \max_{ t \in [ 1, x ] } \left( \rule[-5px]{0px}{30px} | x - t |^{ n } \right) \cdot \left| \ \displaystyle \int_{ t = 1 }^{x} \frac{ 1 }{ t^{ n + 1 } } \ d \ t \ \right| \color{normal} $

$ \hspace{12em} \displaystyle = \color{estimate} | x - 1 |^{ n } \cdot \left| \ \displaystyle \int_{ t = 1 }^{x} \frac{ 1 }{ t^{ n + 1 } } \ d \ t \ \right| \color{normal} = \color{estimate} | x - 1 |^{ n } \cdot \left| \ \displaystyle \int_{ t = 1 }^{x} t^{ -n - 1 } \ d \ t \ \right| \color{normal} $

$ \hspace{16em} \displaystyle = \color{estimate} | x - 1 |^{ n } \cdot \left| \ \displaystyle \int_{ t = 1 }^{x} d \ \frac{ t^{ -n } }{ -n } \ \right| \color{normal} = \color{estimate} | x - 1 |^{ n } \cdot \left| \rule[-25px]{0px}{70px} \ { \frac{ t^{ -n } }{ -n } \ {\rule[-25px]{1px}{60px}} }_{ \ t = 1 }^{x} \ \right| \color{normal} $

$ \hspace{24em} \displaystyle = \color{estimate} | x - 1 |^{ n } \cdot \frac{ \left| \frac{ 1 }{ x^n } - 1 \ \right| }{ n } \color{normal} \displaystyle \xrightarrow[ n \to +\infty ]{} 0 .$
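
For instance, at the endpoint $x = 2$ the factor $ \color{estimate} | x - 1 |^{ n } \color{normal} = 1 $, and the bound becomes $ \displaystyle \color{estimate} \frac{ 1 - 2^{ -n } }{ n } \color{normal} < \frac{ 1 }{ n } ,$ which indeed tends to zero as $n \to \infty$, albeit slowly.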

As the graphs of the above Taylor polynomials suggest, the error term does not converge to zero outside of the interval $ x \in ( 0, 2] $. The proof of this fact is left as an exercise to the reader.

The non-convergence of the Taylor approximation can be remedied for $x > 2$ with the help of the algebraic properties of logarithms. Indeed, for any $x > 0$ and $n = 0, 1, 2, \ldots$ we have the identity: $$ \ln( x ) = \ln\left(\rule[-5px]{0px}{30px} \frac{ x }{ 2^n } \right) - n \cdot \ln\left( \frac{1}{2} \right) .$$ For any $x > 0$, we can find an $n = 0, 1, 2, \ldots$ that will place the $ \displaystyle \frac{ x }{ 2^n } $ into the interval of convergence $\left( 0, 2 \right)$. Having done that, we can use the Taylor polynomial of the logarithm to approximate both the $ \displaystyle \ln\left(\rule[-5px]{0px}{30px} \frac{ x }{ 2^n } \right) $ and the $ \displaystyle \ln\left( \frac{1}{2} \right) $ on the right-hand side of the above identity. Combined, these approximations give an estimate for the $\ln( x )$ on the left-hand side. The following demonstration relies on this idea.
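
Schematically, the computation behind the demonstration might be organized as follows (a hypothetical sketch, not this page's actual source; taylorLn is the guaranteed-precision routine sketched above):

```js
// Hypothetical sketch of the range reduction: halve x until x / 2^n
// lands in the interval of convergence (0, 2), then combine the Taylor
// approximations of ln(x / 2^n) and ln(1/2) via
//   ln(x) = ln(x / 2^n) - n * ln(1/2).
function lnAnyPositive(x, eps) {
  let n = 0;
  let reduced = x;
  while (reduced >= 2) { // place x / 2^n inside (0, 2)
    reduced /= 2;
    n += 1;
  }
  // Split the precision budget between the two approximations
  // (a crude but safe choice): the total error stays below eps.
  const lnHalf = n > 0 ? taylorLn(0.5, eps / (2 * n)) : 0;
  return taylorLn(reduced, eps / 2) - n * lnHalf;
}
```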