Notes on Multiplication

This is another subject that required quite a lot of thought. I believe I've found a way to make it all work, but there are a number of subtleties involved that I'll try to explain here.

The basic algorithm I'll be using is simple: for each bit of the first operand, multiply the second operand by that bit, shift the result by an appropriate number of places, and add up all the partial products. This is particularly easy to do in binary, since multiplying by a single bit can be accomplished with an AND gate.
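
Just as a sanity check of the idea (and nothing like the eventual circuitry), here's a minimal sketch in Python, using plain unsigned integers and ignoring the scaling and sign issues discussed below. The AND gate shows up as the choice between the second operand and zero:

    def shift_and_add(a, b, n):
        """Multiply two unsigned n-bit integers by shift-and-add."""
        product = 0
        for i in range(n):
            bit = (a >> i) & 1          # i-th bit of the first operand
            partial = b if bit else 0   # in hardware: AND every bit of b with 'bit'
            product += partial << i     # add it in, shifted i places
        return product

    assert shift_and_add(12, 11, 5) == 12 * 11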

In what follows, I'm going to call the first operand the multiplicand and the second operand the multiplier, because the second operand (the one that gets added to the product) will be the one held in the Multiplier Register.

Scaling

EDSAC numbers are meant to be interpreted as fractions in the range -1 to 1, and the multiplication algorithm needs to produce results consistent with this interpretation. Given two signed n-bit operands with an assumed point between bits n-1 and n-2, we want a (2n-1)-bit result with an assumed point between bits 2n-2 and 2n-3.

Here's an example with n = 5:
0.75 * 0.6875 = 0.515625

0.1 1 0 0
0.1 0 1 1
-----------------
        0 0 0 0 0
      0 0 0 0 0
    0 1 0 1 1
  0 1 0 1 1
0 0 0 0 0
-----------------
0.1 0 0 0 0 1 0 0

From this we can see that to get a correctly aligned result, the most significant bit of the multiplier (i.e. its sign bit) needs to be aligned with the bit from the multiplicand that we are multiplying it by.
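
Here's a quick Python check of the example above, treating the bit patterns as integers (to_fraction is just an illustrative helper, not anything in the design):

    N = 5

    def to_fraction(bits, n):
        """Read an n-bit two's-complement pattern as a fraction in [-1, 1)."""
        if bits >= 1 << (n - 1):
            bits -= 1 << n                  # give the sign bit its negative weight
        return bits / (1 << (n - 1))

    a = 0b01100                             # 0.1100 = 0.75
    b = 0b01011                             # 0.1011 = 0.6875

    # The integer product of the two patterns, read as a (2n-1)-bit pattern
    # with the point just after its sign bit, is the correctly scaled result.
    p = a * b                               # 0b010000100
    assert to_fraction(p, 2 * N - 1) == to_fraction(a, N) * to_fraction(b, N) == 0.515625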

Negative numbers

We need to make sure we get the right results with negative operands.

As long as the multiplicand is positive, we don't need to do anything special -- the multiplier can be of either sign, and as long as we sign-extend it when shifting, two's-complement arithmetic will produce the correct result. For example,

0.75 * -0.3125 = -0.234375

0.1 1 0 0
1.1 0 1 1
-----------------
        0 0 0 0 0
      0 0 0 0 0
1 1 1 1 0 1 1
1 1 1 0 1 1
0 0 0 0 0
-----------------
1.1 1 0 0 0 1 0 0

The extra 1s at the left-hand end of the third and fourth partial products result from sign extension.
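
Here's a Python sketch of this case (the names are mine, and it's a model rather than a description of the circuitry): the multiplier is sign-extended when shifted into place, and the running sum is kept to 2n-1 bits:

    N = 5
    MASK = (1 << (2 * N - 1)) - 1           # the product is 2n-1 bits wide

    def sign_extend(x, n):
        """Read an n-bit two's-complement pattern as a signed Python integer."""
        return x - (1 << n) if x >= 1 << (n - 1) else x

    def multiply_positive_mcand(mcand, mplier, n=N):
        """Shift-and-add multiply, valid when the multiplicand is non-negative."""
        m = sign_extend(mplier, n)          # the multiplier may be of either sign
        product = 0
        for i in range(n):                  # all n multiplicand bits (its sign bit is 0)
            if (mcand >> i) & 1:
                # Python's signed ints sign-extend for free; the mask trims to 2n-1 bits.
                product = (product + (m << i)) & MASK
        return product

    # 0.75 * -0.3125 = -0.234375  ->  1.11000100
    assert multiply_positive_mcand(0b01100, 0b11011) == 0b111000100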

But, we need to be careful when the multiplicand is negative -- naively treating the sign bit of the multiplicand the same way as its other bits will not give the right result.

To see what needs to be done, consider that a number in the range -1 <= x < 0 can be written as -1 + y, where y is what results from taking the bits after the point and interpreting them as a positive number. In other words, we can take the bits after the point as having positive binary weights for both positive and negative numbers, as long as we give the sign bit a weight of -1.
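
A quick Python check of that reading (value is just an illustrative helper):

    def value(bits, n):
        """n-bit two's-complement fraction: the sign bit weighs -1, the rest are positive."""
        sign = -(bits >> (n - 1))                           # -1 if the sign bit is set, else 0
        y = (bits & ((1 << (n - 1)) - 1)) / (1 << (n - 1))  # bits after the point, read as positive
        return sign + y

    assert value(0b11100, 5) == -1 + 0.75   # 1.1100 = -0.25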

What this means is that we can carry out the multiplication algorithm as usual for the bits of the multiplicand after the point, ignoring its sign; then, if the multiplicand is negative, subtract the multiplier from the product. Example:
-0.25 * 0.6875 = -0.171875

1.1 1 0 0
0.1 0 1 1
-----------------
        0 0 0 0 0
      0 0 0 0 0
    0 1 0 1 1
  0 1 0 1 1
1 0 1 0 1
-----------------
1.1 1 0 1 0 1 0 0

Here, the last partial product is the two's complement of the multiplier, which has the effect of subtracting it from the product.
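
Putting the whole rule together as a Python model (again, a sketch with made-up names, not the hardware):

    N = 5
    MASK = (1 << (2 * N - 1)) - 1

    def signed_multiply(mcand, mplier, n=N):
        """Two's-complement fraction multiply: n-bit operands, (2n-1)-bit result."""
        m = mplier - (1 << n) if mplier >= 1 << (n - 1) else mplier
        product = 0
        for i in range(n - 1):              # the multiplicand bits after the point
            if (mcand >> i) & 1:
                product += m << i
        if mcand >> (n - 1):                # the multiplicand's sign bit has weight -1,
            product -= m << (n - 1)         # so subtract the multiplier once
        return product & MASK

    # -0.25 * 0.6875 = -0.171875  ->  1.11010100
    assert signed_multiply(0b11100, 0b01011) == 0b111010100
    # The earlier examples still come out right:
    assert signed_multiply(0b01100, 0b01011) == 0b010000100   #  0.75 *  0.6875
    assert signed_multiply(0b01100, 0b11011) == 0b111000100   #  0.75 * -0.3125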

Shifting strategy

The above examples are presented as though the multiplier is being shifted to the right by varying amounts, but I'm actually going to do it slightly differently: at each step, shift the product one place right and then add the multiplier to the most significant bits of the product.

This avoids having to shift the multiplier, leaving the contents of the Multiplier Register undisturbed. It also allows me to process the multiplicand bits in order from least significant to most significant, so that I can keep the multiplicand in the S-register and make use of its right-shift capability.

What's more, it turns out that, with a slight modification to the circuitry I already have, I can perform the shifting and adding together in one pass.
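
Here's a Python model of that arrangement (a sketch only, with the negative-multiplicand rule from above folded in as a subtraction at the sign-bit step): the multiplier never moves, the product shifts one place right at each step, and the multiplier is added to the most significant bits of the product:

    N = 5

    def to_signed(bits, n):
        return bits - (1 << n) if bits >= 1 << (n - 1) else bits

    def multiply_shifting_product(mcand, mplier, n=N):
        """Same results as before, but only the product register ever shifts."""
        m = to_signed(mplier, n)
        product = 0
        for i in range(n):                  # multiplicand bits, least significant first
            product >>= 1                   # arithmetic right shift of the running product
            if (mcand >> i) & 1:
                step = -m if i == n - 1 else m      # subtract at the sign-bit step
                product += step << (n - 1)          # aligned with the top n bits of the product
        return product & ((1 << (2 * n - 1)) - 1)

    assert multiply_shifting_product(0b01100, 0b01011) == 0b010000100   #  0.75 *  0.6875
    assert multiply_shifting_product(0b11100, 0b01011) == 0b111010100   # -0.25 *  0.6875
    assert multiply_shifting_product(0b01100, 0b11011) == 0b111000100   #  0.75 * -0.3125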

Where to put the unused bit?

For simplicity, BREDSAC is based on 18-bit cycles. But a short word has only 17 bits, and a long word 35 bits, meaning there is one unused bit in each short or long word. Early on, I chose to put this bit at the most significant end, and extend the sign bit into it. However, while thinking about multiplication, I realised it would be more convenient to put the unused bit at the least significant end.

To see why, consider the 5-bit example from before, with each operand extended on the left by an unused 6th bit:
0 0.1 1 0 0
0 0.1 0 1 1
---------------------
          0 0 0 0 0 0
        0 0 0 0 0 0
      0 0 1 0 1 1
    0 0 1 0 1 1
  0 0 0 0 0 0
0 0 0 0 0 0
---------------------
0 0.0 1 0 0 0 0 1 0 0

Notice that the product is out by a factor of 0.5. Each of the unused bits adds one extra bit to the product, resulting in it having two extra bits on the left, whereas we only want one. Getting a correctly scaled result would require an extra shifting step at the end.

On the other hand, if we put the unused bit on the right, things look like this:
0.1 1 0 0 0
0.1 0 1 1 0
---------------------
          0 0 0 0 0 0
        0 0 0 0 0 0
      0 0 0 0 0 0
    0 1 0 1 1 0
  0 1 0 1 1 0
0 0 0 0 0 0
---------------------
0.1 0 0 0 0 1 0 0 0 0

There are still two extra bits in the product, but they're on the right, where they don't make any difference to the value interpreted as a fraction.
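
A quick Python check of both placements (mul_pattern is just an illustrative helper that returns the raw bits of the product):

    def mul_pattern(a, b, n=6):
        """The (2n-1)-bit pattern of the two's-complement product of two n-bit words."""
        sa = a - (1 << n) if a >= 1 << (n - 1) else a
        sb = b - (1 << n) if b >= 1 << (n - 1) else b
        return format((sa * sb) & ((1 << (2 * n - 1)) - 1), f'0{2 * n - 1}b')

    # Unused bit at the most significant end (sign extended into it):
    print(mul_pattern(0b001100, 0b001011))  # 00010000100 -- two spare bits on the left,
                                            # so the value read as a word is halved
    # Unused bit at the least significant end:
    print(mul_pattern(0b011000, 0b010110))  # 01000010000 -- two spare bits on the right,
                                            # which don't affect the value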

For this reason, I've decided to revise the design so that the unused bit is at the least significant end. In the next instalment, I'll detail the changes required to make this happen.