# Q: How did mathematicians calculate trig functions and numbers like pi before calculators?

Physicist: Don’t know.  But if you’re ever stuck on a desert island, here are some tricks you can use.  The name of the game is “Taylor polynomials”.

$\sin{(x)} = \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!} x^{2n+1} = \frac{x}{1} - \frac{x^3}{1 \cdot 2 \cdot 3} + \frac{x^5}{1 \cdot 2 \cdot 3 \cdot 4 \cdot 5} - \frac{x^7}{1 \cdot 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdot 7} + \cdots$ $\cos{(x)} = \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!} x^{2n} = 1 - \frac{x^2}{1 \cdot 2} + \frac{x^4}{1 \cdot 2 \cdot 3 \cdot 4} - \frac{x^6}{1 \cdot 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6} + \cdots$

All the other trig functions are just combinations of sine and cosine, so this is really all you need.  Of course, you can’t add up an infinite number of terms, so if you only go up to the $x^L$ term then the error between the sum you have and the actual value of sine or cosine is no more than $\frac{x^L}{L!}$.  Now x can be pretty big, but you can use the fact that sine and cosine repeat every $2 \pi$, as well as the fact that $\sin{(x \pm \pi)} = -\sin{(x)}$ and $\cos{(x \pm \pi)} = -\cos{(x)}$, to get the “x” down to $-\frac{\pi}{2} \le x \le \frac{\pi}{2}$.  So if you sum up to the $x^L$ term, then your error will be no larger than $\frac{1}{L!} \left( \frac{\pi}{2} \right)^L$.  The “1/L!” makes this error pretty small.  Summing up to the $x^{10}$ term will be accurate to within 3 parts in 100,000 at worst.
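If you happen to have a computer instead of a pencil on your desert island, the same recipe (fold the angle into $-\frac{\pi}{2} \le x \le \frac{\pi}{2}$, then sum the series) is only a few lines.  A minimal sketch in Python — the function name is made up, not any standard routine:

```python
import math

def taylor_sin(x, terms=6):
    """Approximate sin(x): fold x into [-pi/2, pi/2] using the 2*pi
    periodicity and sin(pi - x) = sin(x), then sum the Taylor series."""
    # Reduce modulo 2*pi into (-pi, pi].
    x = math.fmod(x, 2 * math.pi)
    if x > math.pi:
        x -= 2 * math.pi
    elif x <= -math.pi:
        x += 2 * math.pi
    # Fold into [-pi/2, pi/2].
    if x > math.pi / 2:
        x = math.pi - x      # sin(pi - x) = sin(x)
    elif x < -math.pi / 2:
        x = -math.pi - x     # sin(-pi - x) = sin(x)
    # Sum x - x^3/3! + x^5/5! - ...; each term comes from the previous.
    total = 0.0
    term = x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total
```

The range reduction comes first because the $\frac{1}{L!} \left( \frac{\pi}{2} \right)^L$ error bound only applies once the angle is small.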

For example:

$\sin{(16)} = \sin{(16 - 2\pi)} = \sin{(16 - 4\pi)} = -\sin{(16 - 5\pi)} \approx -\sin{(0.2920)}$

Summing up to the $x^5$ term yields:

$\sin{(16)} \approx -\sin{(0.2920)} \approx - \left( 0.2920 - \frac{0.2920^3}{6} + \frac{0.2920^5}{120} \right) = - 0.2879$

This is accurate to at least the first 4 decimal places (the true value is $-0.28790\ldots$).

There aren’t a hell of a lot of important mathematical constants out there.  The most important are “e” and “$\pi$”.

$e^x = \lim_{m \to \infty} \left( 1 +\frac{x}{m} \right)^m \approx \sum_{n=0}^L \frac{x^n}{n!} = 1+ \frac{x}{1}+\frac{x^2}{1 \cdot 2}+\frac{x^3}{1 \cdot 2 \cdot 3}+\cdots$ with an error of no more than $\frac{x^{L+1}}{(L+1)!}$.  This is another example of a Taylor polynomial.  To calculate e itself, just set x = 1.
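This one is even easier to code up, since each term is the previous term times $\frac{x}{n+1}$.  A minimal sketch (the function name is mine):

```python
import math

def exp_taylor(x, terms=12):
    """Sum 1 + x + x^2/2! + ... up to the x^(terms-1) term."""
    total = 0.0
    term = 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # turn x^n/n! into x^(n+1)/(n+1)!
    return total
```

With x = 1 and 12 terms the error bound is $\frac{1}{12!} \approx 2 \times 10^{-9}$, so this nails e to about 8 decimal places.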

$\pi \approx 4 \sum_{n=0}^L \frac{(-1)^n}{2n+1} = 4 \left( 1 - \frac{1}{3} +\frac{1}{5} - \frac{1}{7} + \cdots \right)$ with an error of no more than $\frac{4}{2L+3}$.  One way to derive this equation is to take the Taylor series for arctangent and plug in 1 ($\arctan{(1)} = \frac{\pi}{4}$).  This is easy to remember but slow to converge (2,000 terms to get 3 decimal places), so here’s a better one:

$\pi \approx \sqrt{12}\sum^L_{k=0} \frac{(-1)^k}{(2k+1) 3^k} = \sqrt{12}\left(1-{1\over 3\cdot3}+{1\over5\cdot 3^2}-{1\over7\cdot 3^3}+\cdots\right)$ with an error of no more than $\frac{\sqrt{12}}{(2L+3) 3^{L+1}} \sim \frac{1}{3^L}$.
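Both series are one-liners on a computer, and coding them up makes the difference in convergence vivid.  A rough comparison sketch (function names are mine):

```python
import math

def pi_leibniz(terms):
    """4 * (1 - 1/3 + 1/5 - ...): the arctan(1) series.  Very slow."""
    return 4.0 * sum((-1) ** n / (2 * n + 1) for n in range(terms))

def pi_sqrt12(terms):
    """sqrt(12) * sum of (-1)^k / ((2k+1) * 3^k): the error falls off
    like 1/3^k, so each term buys roughly half a decimal digit."""
    total = 0.0
    power_of_3 = 1.0
    for k in range(terms):
        total += (-1) ** k / ((2 * k + 1) * power_of_3)
        power_of_3 *= 3.0
    return math.sqrt(12.0) * total
```

Two thousand terms of the first gets about 3 decimal places; about thirty terms of the second exhausts double precision.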

Most people are under the impression that “there is no pattern in pi”, so the fact that we can write down an equation to find pi may seem a little odd.  What is generally meant by “no pattern in pi” is that there doesn’t seem to be any pattern in the decimal representation of pi (3.14159…).

The Taylor series and the approximations of pi and e above may seem cumbersome, but in most sciences you’ll find that it’s rare for anybody to go beyond the second term in a Taylor polynomial ($\sin{(x)} \approx x$, $\cos{(x)} \approx 1 - \frac{x^2}{2}$).  Moreover, due mostly to our crippling sloth and handsomeness, most physicists are happy to say that $\pi = e = 3$.  So if you’re striving to get things exactly right, you may actually be an engineer.


### 11 Responses to Q: How did mathematicians calculate trig functions and numbers like pi before calculators?

1. The more digits the better. Just a few days ago I was a little upset when I couldn’t find my friend’s birthday string in the first 200 million decimal places of pi. (Though converting it to hex and searching in the first 4 billion binary digits did the job. How I need a drink, alcoholic of course!)

Friendly reminder: Pi day is coming up (see, you need at least 2 decimal places), but more importantly, it marks the beginning of spring break.

2. Scott says:

What I really want to know is, how did the Greeks (or anyone for that matter) do math and geometry without Arabic numerals?

3. Physicist says:

With an abacus.

4. Scott says:

Clearly that was the Chinese. Get your ancient civilizations straight.

5. Arabic numerals are overrated. Doesn’t matter how you write, arrange, pronounce the numbers, or even use a different base, it’s all good as long as they’re logically equivalent!

6. Monstah says:

“Moreover, due mostly to our crippling sloth and handsomeness, most physicists are happy to say that $\pi$ = e = 3. So if you’re striving to get things exactly right, you may actually be an engineer.”

sin(x) = x is almost okay, but this line gave me SERIOUS MAJOR creeps.

7. Jan Louw says:

Question: How do calculators and computers calculate special function values? I know of many possibilities, but what method is used in practice and to what degree of accuracy (normal use, not high precision or special methods)?

8. The Physicist says:

I suspect the exact technique varies from system to system, but I couldn’t say for certain how any of them work. In general you can get your accuracy ludicrously high with just a little extra processor time.
We got any computer engineers who can field this?

9. The Wonderer says:

Loved pi = e = 3. So if you’re striving to get things exactly right, you may actually be an engineer. I am wanting to become an engineer and that made me giggle.

10. Ian G. says:

One trick that I’ve used (and am surprised that other people don’t) is to use Euler’s equation; specifically it means that increasing the angle translates to a complex multiplication.

That is, e^(i*(x+y)) = e^(i*x) * e^(i*y) = (cos(x) + i*sin(x)) * (cos(y) + i*sin(y)).

Start off with a known (sufficiently precise) value for cos(x) and sin(x), say for 5 degrees, and you can easily compute the values for 10, 15, 20, and so on. What’s even better about this method is that you can easily compute the error as well.
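This trick is easy to try with complex numbers.  A sketch in Python — the names and the 5-degree step are illustrative, and the starting value here comes from the library rather than a hand-computed one:

```python
import cmath
import math

def trig_table(step_deg, count):
    """(cos, sin) at 0, step, 2*step, ... degrees, each entry obtained
    from the previous one by a single complex multiplication."""
    step = cmath.exp(1j * math.radians(step_deg))  # cos(step) + i*sin(step)
    z = 1 + 0j                                     # angle zero
    table = []
    for _ in range(count):
        table.append((z.real, z.imag))
        z *= step
    return table
```

With a 5-degree step, entry 9 of the table holds the cosine and sine of 45 degrees.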

11. Xerenarcy says:

“Question: How do calculators and computers calculate special function values? I know of many possibilities, but what method is used in practice and to what degree of accuracy (normal use, not high precision or special methods)?”

it depends on which special function you’re talking about, some have special implementations while others may follow the same general iterative pattern. often the mathematical infinite series approximation is far easier to describe as a computer program.

take sine for example: sin(x) = x - x^3/3! + x^5/5! - …
note that by keeping the numerator and denominator of the last term, you need very few operations to derive the next term, and the addition itself is trivial.
every term needs two divides for the factorial (the next odd and even numbers) applied to the previous term and a single multiply (by x^2, which can be precomputed initially), followed by alternating add and subtract operations.
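as a sketch in python (the function name is made up, and the alternating sign is folded into the running multiplier):

```python
def sine_series(x, terms=10):
    """Sum sin(x) = x - x^3/3! + x^5/5! - ... by updating a running
    term: multiply by the precomputed x^2, divide by the next even
    and odd integers, and flip the sign."""
    x2 = x * x          # precomputed once, as described above
    term = x
    total = x
    for n in range(1, terms):
        term *= -x2 / ((2 * n) * (2 * n + 1))
        total += term
    return total
```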

i imagine a better (faster-converging) algorithm is used in practice but the idea remains the same – so long as there is a simple relationship between successive terms in a convergent infinite series, a computer easily adopts the algorithm in a space efficient manner.

for quirkiness some old programs that needed absolute top speed in this area would cheat somewhat with trig and log tables; rather than computing the entire function, known values would be used entirely or with additional shortcuts (such as double-angle formulae with known values for base angles).
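a toy version of such a table-based approach (the 1-degree step and the linear interpolation are my choices, not any particular program’s):

```python
import math

# precomputed sine table at 1-degree steps, like an old program might ship
SIN_TABLE = [math.sin(math.radians(d)) for d in range(91)]

def table_sin(deg):
    """sin of an angle in degrees, 0 <= deg <= 90, by linear
    interpolation between the two nearest table entries."""
    lo = int(deg)
    if lo >= 90:
        return SIN_TABLE[90]
    frac = deg - lo
    return SIN_TABLE[lo] + frac * (SIN_TABLE[lo + 1] - SIN_TABLE[lo])
```

no factorials at runtime, at the cost of storing the table and accepting the interpolation error between entries.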

nowadays though the fundamental math functions such as trig, logs, exponents and so on follow this principle, but they now reside on an FPU - a unit that deals specifically with the floating-point format. to that end it helps to understand floating-point numbers and how exactly a machine can express decimals, as there are additional optimizations to be found. long and short of it is floating point is a digitized exponent-form of a number (some bits reserved for the exponent, some bits reserved for the mantissa).

as far as accuracy, i wouldn’t expect more than 15 decimal digits on a regular calculator. otherwise it depends strongly on what the digital number format is. computers prefer to deal with floating point numbers (digital version of 15 = 1.5 x 10^1; 15 = [ +.5, +1 ]), but can deal with currency formats too (fixed-point precision).

floating point usually allots more bits to the mantissa, so you can usually expect 7 decimals accuracy for 4 byte floats and almost 16 decimals accuracy for 8 byte (64 bit) floats. there is an extension for 10 byte (80 bit) but i personally have never seen this used seriously (long double). for every-day calculations this is ample; for anything more complex, a programmer may resort to exotic schemes to encode the numbers (arbitrary precision integer libraries are common, arbitrary decimal precision is harder to implement).