Division Using only Multiply and Add

March 25, 2019

Division of numbers is often taken as a fundamental operation. It is built into many algebraic structures, but some practical number formats do not support division natively. Floating point arithmetic is one example, and more generally so are dyadic rational numbers. By “does not support natively” I mean that exact division can produce a result which is no longer in the same format as the operands - for example, there are many floating point numbers \( c \) such that \( \frac{1}{c} \) is not a floating point number. The same holds for dyadic rational numbers, even though they allow arbitrarily precise representations. In these cases we must fall back to approximate division.

I show here a Newton iteration approach to approximate division in these cases. This is similar to how I computed square roots of rational numbers, except that here the starting point of the Newton iteration actually matters. I show what condition the starting point \( x _ 0 \) must satisfy for the algorithm to converge, then I show how to cheaply calculate such a starting point from a floating point representation.

This algorithm computes the inverse \( \frac{1}{c} \) of a number \( c \) by taking a starting guess value \( x _ 0 \) and repeatedly iterating the sequence:

\[ x _ {n+1} = x _ n \left (2 - c x _ n \right ). \]

This iteration requires only multiplication and addition, but it is very sensitive to the starting value \( x _ 0 \).
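As a concrete sketch, the iteration can be written as a short Python function (the function name and arguments are my own, not from the post):

```python
def approx_reciprocal(c, x0, niter):
    """Approximate 1/c using only multiplies and adds,
    starting from the guess x0."""
    x = x0
    for _ in range(niter):
        x = x * (2 - c * x)  # Newton update: x_{n+1} = x_n (2 - c x_n)
    return x
```

With a suitable starting guess this converges very quickly; with a poor one it diverges.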

What is a good starting value

Suppose that \( x _ n \) is such that \( 1 - c x _ n = \epsilon \). Then one iteration of Newton’s method leads to

$$ \begin{aligned} 1 - c x _ {n+1} &= 1 - c x _ n (2 - cx _ n) \\ &= 1 - 2 c x _ n + (c x _ n) ^ 2 \\ &= (1 - c x _ n ) ^ 2 \\ &= \epsilon ^ 2. \end{aligned} $$

Thus if \(x _ 0 \) is such that \( 1 - c x _ 0 = \epsilon \), then we may proceed inductively to find

\[ 1 - c x _ {n} = \epsilon ^ {2^n}. \]

Therefore we need \( |\epsilon| < 1 \) for a convergent iteration.
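This sensitivity is easy to demonstrate numerically. The demo below (my own, not from the post's code) tracks the residual \( 1 - c x _ n \) for a good and a bad starting value:

```python
def residual(c, x0, niter):
    """Return the residual 1 - c*x_n after niter Newton updates."""
    x = x0
    for _ in range(niter):
        x = x * (2 - c * x)
    return 1 - c * x

# Good start for c = 4: eps = 1 - 4*0.2 = 0.2, |eps| < 1, residual shrinks fast
good = residual(4.0, 0.2, 5)
# Bad start: eps = 1 - 4*0.6 = -1.4, |eps| > 1, the residual blows up
bad = residual(4.0, 0.6, 5)
```

After five iterations the good start has essentially converged, while the bad start's residual has grown to roughly \( 1.4^{32} \).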

Cheap calculation of starting value

The previously mentioned number systems, floating point and dyadic rationals, often have orders of magnitude explicitly built in. For example, a floating point number \( z \) may be represented as

\[ z = (m 2 ^{-t})(2^e) \]

where \(m\) is called the significand and \(e\) is the exponent. The number \( t \) is the number of possible binary significant figures in a fixed floating point representation, and so depends on how wide that representation is. The key property is \( 0 < m 2 ^{-t} \leq 1 \); for normalized numbers in fact \( \tfrac{1}{2} \leq m 2 ^{-t} < 1 \).

Thus if we take \( x _ 0 = 2^{-e} \) then we have

\[ z x _ 0 = m 2 ^{-t} \leq 1, \]

so \( \epsilon = 1 - z x _ 0 < 1 \) and the iteration converges.

For floating point numbers this can often be done very cheaply with bit manipulation. Most languages offer a frexp function which extracts the significand and exponent for you, making it unnecessary to write the bit hacking yourself.
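For instance, Python's math.frexp decomposes a nonzero float as \( z = m \, 2^e \) with \( \tfrac{1}{2} \leq |m| < 1 \):

```python
import math

m, e = math.frexp(3.5)  # 3.5 = 0.875 * 2**2, so m = 0.875, e = 2
x0 = 2.0 ** (-e)        # starting guess x0 = 2**(-e) = 0.25
eps = 1 - 3.5 * x0      # a-priori epsilon = 1 - m = 0.125
```

All the values here are exact in binary, so eps comes out to exactly 0.125.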

Python Example

import math
import pandas as pd
from tabulate import tabulate

c, niter = 3.7, 6  # example input; the value of c used in the post is not shown
#Get exponent of c: c = m * 2**e with 0.5 <= m < 1
m, e = math.frexp(c)
#Form initial guess
x = 2.0 ** (-e)
#Form epsilon for a-priori error estimate
eps = 1 - c * x
rows = []
for i in range(niter):
    rows.append({"xn": x, "1-c*xn": 1 - c * x, "epsilon^(2^n)": eps ** (2 ** i)})
    x = x * (2 - c * x)  # Newton update
df = pd.DataFrame(rows)
print(tabulate(df, headers="keys", showindex=False))

Here is a nicely formatted version of the calculated iterates:

xn | 1-c*xn | epsilon^(2^n)


This short post was about Newton’s algorithm for computing the inverse of a number, formulated so as to require only multiplies and adds.

I proved a simple a-priori convergence estimate and showed an example of it with Python.

The a-priori estimate (last column) is fairly conservative. In my experience this is pretty typical: it is always “one iteration behind” when estimating error. Thus the a-posteriori estimate (middle column) is better to use for actual algorithm termination. The a-priori estimate is still useful in the context of formal proof systems like Coq, because they require functions written in them to provably halt.
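Putting the pieces together, a-posteriori termination might look like the following sketch for positive \( c \) (the function name, tolerance, and iteration cap are my own choices):

```python
import math

def reciprocal(c, tol=1e-12, max_iter=64):
    """Approximate 1/c for positive c, stopping when the
    a-posteriori residual |1 - c*x| drops below tol."""
    m, e = math.frexp(c)       # c = m * 2**e with 0.5 <= m < 1
    x = 2.0 ** (-e)            # guaranteed-convergent start: eps = 1 - m < 1
    for _ in range(max_iter):  # cap keeps termination easy to argue
        if abs(1 - c * x) < tol:
            break
        x = x * (2 - c * x)    # Newton update
    return x
```

The fixed iteration cap plays the role of the a-priori bound: it guarantees the loop halts even if the tolerance is never reached.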