The Bistromathique is an EPITA project: an arbitrary-precision calculator whose main focus is optimization. The input can be in any base (up to 250), and the following operations must be supported: addition, subtraction, multiplication, division, and modulo.
Converting back and forth between the textual representation and a computable representation is the first part of the project, and several optimizations are possible at this step.
Encoding the numbers in binary makes the best use of the processor's arithmetic unit: no space is wasted, and since a number is represented as an uninterrupted series of 0s and 1s, it can be added and subtracted directly in hardware.
However, converting a number from a base-B representation to base 2 requires roughly n divisions by B, and converting it back requires another n multiplications. This can be affordable for applications that use the same numbers again and again, but in our case this cost is prohibitive.
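To make that cost concrete, here is a minimal sketch for values that fit in a machine word (the helper names `from_base` and `to_base` are mine, not the project's): going to binary takes one multiply-add per digit (Horner's rule), and going back takes one division per digit. For real bignums, each of those steps is itself an O(n) operation.

```c
#include <assert.h>
#include <stddef.h>

/* Base B -> binary: Horner's rule, one multiply-add per input digit. */
static unsigned long from_base(const unsigned char *digits, size_t n,
                               unsigned base)
{
    unsigned long v = 0;
    for (size_t i = 0; i < n; i++)
        v = v * base + digits[i];
    return v;
}

/* Binary -> base B: one division per output digit,
 * least significant digit first. */
static size_t to_base(unsigned long v, unsigned base, unsigned char *out)
{
    size_t n = 0;
    do {
        out[n++] = (unsigned char)(v % base);
        v /= base;
    } while (v != 0);
    return n;
}
```

For a bignum of n digits, every `v * base` and `v % base` above becomes a full pass over the number, which is exactly where the prohibitive cost comes from.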
The other major drawback is that multiplication and division (and therefore addition and subtraction) must already be implemented just to get the numbers into the right form to start working on the operations... In short, you would have to write all of these operations before being able to test any of them!
The binary representation being ruled out, the solution that comes to mind is to put each digit into a `char` (8 bits). Since we are limited to bases up to 250, any digit fits. This comes at a price: a lot of space is wasted, and the waste grows as the base shrinks.
There is a one-to-one relation between the input characters and the digits of the represented number, so we can store the converted number directly inside the input buffer. This avoids allocating any memory at this step. The conversion can be done efficiently: each character is looked up in a 256-slot vector that stores its digit value, with a magic value marking invalid input. One final trick is to append a non-digit at the end of the input, which removes the end-of-input check from the loop.
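A minimal sketch of this in-place conversion (the names `digit_value`, `INVALID` and the digit alphabet are assumptions for illustration, not the project's actual code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sentinel stored in the table for characters that are not digits. */
#define INVALID 0xFF

static unsigned char digit_value[256];

/* Build the 256-slot lookup table once: '0'-'9', then 'A'-'Z'
 * for digits above 9 (larger bases would extend this alphabet). */
static void init_table(void)
{
    memset(digit_value, INVALID, sizeof digit_value);
    for (int c = '0'; c <= '9'; c++)
        digit_value[c] = (unsigned char)(c - '0');
    for (int c = 'A'; c <= 'Z'; c++)
        digit_value[c] = (unsigned char)(10 + c - 'A');
}

/* Convert characters to digit values in place. The loop has no length
 * check: it stops on the first invalid character, which is exactly the
 * non-digit appended at the end of the input. Returns the digit count. */
static size_t to_digits(unsigned char *s)
{
    size_t i = 0;
    while (digit_value[s[i]] != INVALID) {
        s[i] = digit_value[s[i]];
        i++;
    }
    return i;
}
```

Because each output digit overwrites the character it came from, no allocation is needed at this step.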
A `char` can store values up to 255. If we are working in base 16, we can therefore pack two digits into one `char` and halve the number of digits. Additions become twice as fast, and n log n operations such as multiplication gain even more.
Calculating the number of digits that fit into a `char` is quite straightforward, but some caution is needed during the transition. We have to know the length of the input in advance to decide how to start filling the digits: with 2 digits per `char`, an even-length number packs as expected, but an odd-length number needs a leading 0 to fill the gap.
    "ABCD"  ->  [AB][CD]
    "ABC"   ->  "0ABC"  ->  [0A][BC]
This method works well but requires two passes over each number and a special case at the beginning of each one. Not enough to stop us from using it.
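The two steps above can be sketched as follows (the helpers `digits_per_char` and `pack_base16` are hypothetical names, and the sketch assumes digits are already converted to their numeric values):

```c
#include <assert.h>
#include <stddef.h>

/* Largest k such that base^k - 1 still fits in a char (max value 255). */
static unsigned digits_per_char(unsigned base)
{
    unsigned k = 0;
    unsigned long cap = 1;
    while (cap * base <= 256) {
        cap *= base;
        k++;
    }
    return k;
}

/* Pack base-16 digit values (0..15) two per char, most significant
 * first. An odd-length number gets an implicit leading 0, which is the
 * special case requiring the length to be known in advance. */
static size_t pack_base16(const unsigned char *digits, size_t n,
                          unsigned char *out)
{
    size_t i = 0, o = 0;
    if (n % 2 == 1)                /* odd length: a lone leading digit */
        out[o++] = digits[i++];
    for (; i < n; i += 2)
        out[o++] = (unsigned char)(digits[i] * 16 + digits[i + 1]);
    return o;                      /* number of packed chars */
}
```

The first pass measures the length (to decide the odd/even case), the second does the packing, matching the two passes mentioned above.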
Using a larger representation
Now that the `char` is filled to its maximum, why stop there? This should ring a bell: common architectures are 32-bit, and so far we have been talking about an 8-bit representation. We are tempted to change the `char` to an `int` (32 bits) or even a `long` (64 bits), which could benefit from register optimizations.
Moving away from `char`, we lose the guarantee that the output is smaller than (or equal to) the input. Without it, working in place is no longer possible. We now have to weigh the savings from the reduced number of operations against the cost of a memory allocation.
The overhead occurs for values represented in fewer than 4 characters with an `int` representation. That is at most 3 characters of overhead per number, which is negligible. We could eat the operator character, but we are still left with 2 characters. I haven't found a solution to stay in place yet, and this minor problem blocks access to a great speedup.
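For illustration, addition on packed 32-bit units might look like the following sketch (the name `add_packed` and the super-digit base `M` are assumptions, not the project's actual code). Each machine addition now processes a whole group of base-B digits at once, which is where the expected speedup comes from:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Add two numbers stored as base-M "super-digits" in uint32_t, least
 * significant unit first, where M = B^k is the packing described above
 * (e.g. M = 16^7 for base 16). Returns the result's unit count. */
static size_t add_packed(const uint32_t *a, size_t na,
                         const uint32_t *b, size_t nb,
                         uint32_t M, uint32_t *out)
{
    uint32_t carry = 0;
    size_t n = na > nb ? na : nb;
    for (size_t i = 0; i < n; i++) {
        uint64_t sum = carry;          /* 64-bit sum avoids overflow */
        if (i < na) sum += a[i];
        if (i < nb) sum += b[i];
        carry = sum >= M;              /* at most one carry per unit */
        out[i] = (uint32_t)(carry ? sum - M : sum);
    }
    if (carry)
        out[n++] = 1;                  /* result is one unit longer */
    return n;
}
```

Note that `out` may need one more unit than the longer operand, which is precisely the allocation problem discussed above.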
To be continued ...