How to recover the Borrow Bit

Just as with the carry bit for addition, it is possible to recover the borrow bit for subtraction, based only on the highest bits of the three numbers involved in the operation.

With the following program, the subtraction operation of an 8-bit CPU can be simulated exhaustively:


#!/usr/bin/perl

my ($x, $y, $bi);

my %tab = ();

for ($bi = 0; $bi <= 1; $bi++) {
    for ($x = 0; $x < 256; $x++) {
        for ($y = 0; $y < 256; $y++) {
            my $zz = $x - $y - $bi;
            my $b  = $zz < 0 ? 1 : 0;
            my $c  = 1 - $b;
            my $z  = ($zz + 256) & 0xff;
            my $xs = $x >> 7;
            my $ys = $y >> 7;
            my $zs = $z >> 7;
            my $key = "$xs:$ys:$zs";
            $tab{$key} //= $b;
            my $bb = $tab{$key};
            if ($bb != $b) {
                print "b=$b bb=$bb c=$c xs=$xs ys=$ys zs=$zs x=$x y=$y z=$z zz=$zz bi=$bi\n";
            }
        }
    }
}

for my $key (sort keys %tab) {
    $key =~ m/(\d+):(\d+):(\d+)/;
    my $xs = $1;
    my $ys = $2;
    my $zs = $3;
    my $b  = $tab{$key};
    my $c  = 1 - $b;
    my $bb = $xs & $ys & $zs | (1 - $xs) & ($ys | $zs);
    print "b=$b bb=$bb c=$c xs=$xs ys=$ys zs=$zs\n";
}

This gives an idea of what is happening. In real life a 64-bit CPU is more likely to be used, but the concepts work the same way with longer or shorter CPU words.

So we subtract two unsigned 64-bit integers x and y together with an incoming borrow bit i\in\{0, 1\} to get a result

    \[z\equiv x-y-i \mod 2^{64}\]

with

    \[0 \le z < 2^{64}\]

using the typical „unsigned long long“ of C. We assume that

    \[x=2^{63}x_h+x_l\]

where

    \[x_h \in \{0,1\}\]

and

    \[0 \le x_l < 2^{63}.\]

In the same way we assume y=2^{63}y_h + y_l and z=2^{63}z_h + z_l with the same kind of conditions for x_h, y_h, z_h or x_l, y_l, z_l, respectively.

Now we have

    \[-2^{63} \le  x_l-y_l-i \le 2^{63}-1\]

and we can see that

    \[x_l - y_l - i = z_l-2^{63}u\]

for some

    \[u\in \{0,1\}.\]
And we have

    \[x-y-i = z-2^{64}b\]

where

    \[b\in\{0,1\}\]

is the borrow bit.
When combining we get

    \[2^{63}x_h - 2^{63}y_h + z_l - 2^{63}u = 2^{63}x_h + x_l -2^{63}y_h-y_l -i = x-y-i = z - 2^{64}b = 2^{63}z_h + z_l -2^{64}b\]

When looking just at the highest visible bit and the borrow bit, this boils down to

    \[z_h-2b = x_h - y_h - u\]

This leaves us with eight cases to observe for the combination of x_h, y_h and u:

x_h  y_h  u  |  z_h  b
 0    0   0  |   0   0
 0    0   1  |   1   1
 0    1   0  |   1   1
 0    1   1  |   0   1
 1    0   0  |   1   0
 1    0   1  |   0   0
 1    1   0  |   0   0
 1    1   1  |   1   1

Checking all eight cases, we find that we always have

    \[b = x_h \wedge y_h \wedge z_h \vee \neg x_h \wedge (y_h \vee z_h)\]

So the result does not depend on u anymore, allowing us to calculate the borrow bit by temporarily casting x, y and z to (signed long long) and using their sign bits.
We can express this as „use y_h \wedge z_h if x_h=1 and use y_h \vee z_h if x_h = 0“.

The incoming borrow bit i does not change this; it just allows for x_l - y_l - i \ge -2^{63}, which is sufficient for making the previous calculations work.
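
As a minimal sketch of how this recovery could look in C (the function name sbb and the in/out borrow parameter are illustrative assumptions, not taken from the library mentioned below):

#include <stdint.h>

/* Subtract with borrow for 64-bit words. The outgoing borrow is
   recovered from the sign bits of x, y and z according to
   b = x_h & y_h & z_h | ~x_h & (y_h | z_h). */
uint64_t sbb(uint64_t x, uint64_t y, unsigned *borrow) {
    uint64_t z = x - y - *borrow;       /* wraps modulo 2^64 */
    unsigned xh = (unsigned)(x >> 63);  /* sign bit of x */
    unsigned yh = (unsigned)(y >> 63);
    unsigned zh = (unsigned)(z >> 63);
    *borrow = (xh & yh & zh) | ((1u - xh) & (yh | zh));
    return z;
}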

The basic operations add, adc, sub, sbb, mul, xdiv (div is not available) have been implemented in this library for C. Feel free to use it according to the license (GPL). Addition and subtraction could be implemented in a similar way in Java, with the weirdness of declaring signed longs and using them as unsigned. For multiplication and division, native code would be needed, because Java lacks 128-bit integers. So the C implementation is cleaner.


Borrow and Carry Bit for Subtraction

Similar to the usage of the carry bit when adding, there are mechanisms for subtracting that allow the result of the subtraction of the lower bits to be carried into the subtraction of the next higher block of bits, where necessary.

There are two ways to do this, which are trivially equivalent via a simple not operation:

  • borrow bit (also borrow flag)
  • carry bit (also carry flag)

Often CPUs use the same flag that serves as carry bit for addition and interpret it as the borrow bit for subtraction.

Here are the two ways:

Borrow Bit

It is assumed, that the CPU-word is n bits long, so calculations are performed modulo N=2^n. Further it is assumed, that the range 0\ldots N-1 is preferred, so all sign issues are excluded for the moment.

When subtracting two numbers x, y as x-y with y > x, the provided result is

    \[x-y+N \equiv x-y \mod N\]

and the borrow bit is set (b=1), to indicate that the subtraction caused an „underflow“, which had to be corrected by adding N in order to get into the desired range.

In the „normal“ case where y \le x, the provided result is simply

    \[x-y\]

and the borrow bit is not set (b=0).

The next round of subtraction takes the borrow bit into account and calculates x-y-b, where the condition becomes y+b > x and the result is

    \[x-y-b+N \equiv x-y \mod N\]

or

    \[x-y-b\]

respectively. This is how some of the older readers used to do it in school on paper, but of course with N=10.

Now the typical integer arithmetic of current CPUs uses two’s complement, which means that -y=(\mathrm{NOT}\; y)+1. Combining this with the previous results in calculating

    \[x-y-b = x + (\mathrm{NOT}\; y) + 1 - b \mod N\]

At this point some CPU designers found it more natural to use the carry bit c=1-b instead of the borrow bit b.

Carry Bit

When subtracting two numbers x, y as x-y with y > x, the provided result is

    \[x-y+N \equiv x-y \mod N\]

and the carry bit is not set (c=0), to indicate that the subtraction caused an „underflow“, which had to be corrected by adding N in order to get into the desired range.

In the „normal“ case where y \le x, the provided result is simply

    \[x-y\]

and the carry bit is set (c=1).

The next round of subtraction takes the carry bit into account and calculates x-y-1+c, where the condition becomes y+1-c > x and the result is

    \[x-y-1+c+N \equiv x-y \mod N\]

or

    \[x-y-1+c\]

respectively.

With two’s complement and -y=(\mathrm{NOT}\; y)+1 this can be written as

    \[x-y-b = x + (\mathrm{NOT}\; y) + 1 - b \mod N\]

or with c=1-b

    \[x-y-b = x + (\mathrm{NOT}\; y) + c \mod N\]
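
As a small check of the equivalence, here is a sketch in C for 8-bit words (the function name is an illustrative assumption, not any CPU's actual instruction):

#include <stdint.h>

/* Emulate x - y - b on 8-bit words as an addition with the
   complement of y and the carry c = 1 - b. */
uint8_t sub_via_add(uint8_t x, uint8_t y, unsigned b) {
    unsigned c = 1u - b;                   /* carry flag variant */
    return (uint8_t)(x + (uint8_t)~y + c); /* x - y - b mod 256 */
}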

These two ways are really equivalent and easily transformed into each other. Neither of them provides big advantages, but we get unnecessary confusion, because it depends on the CPU design which of the two variants is used.

Recovery of Borrow or Carry bit

The borrow bit is calculated and used during subtractions that are done at assembly language level, but higher level languages like C or Java do not provide access to this information. It is relatively easy to recover the carry bit in the case of addition, based on x, y and x+y \mod N.

This is possible as well for subtraction. Quite easily, the comparison between x and y or y+b could be done before the subtraction. This would work, but it is kind of inefficient, because under the hood the comparison is just a subtraction that discards the result and keeps the flags. So the subtraction is performed twice. This might not be such a bad idea, because a compiler could recognize it, or the work of subtracting twice could be negligible compared to the logic for an „optimized“ recovery based on some logic expression of certain bits from x, y and x-y \mod N.



Rounding to Rational Numbers

Usually we think of rounding as a way of approximately expressing numbers with many decimal places by numbers with fewer decimal places.

Regular readers of this blog have already encountered that this concept can be extended and generalized, as mentioned in the articles about Geometric and Harmonic Rounding and Residue Class Rounding or Rounding with Sum.
Now computers and humans can deal with rational numbers and sometimes that is the best way.

Idea 1: Read the Double as Rational

Most typical non-integer computer numbers are actually already rational numbers of the form \frac{n}{2^m} with a relatively large power of two in the denominator. But as soon as we perform divisions, we leave the accurate world and round, usually in the way that the machine throws in front of our feet out of the box. All other floating point operations can result in rounding as well. So rational numbers can be a way to go: we can just naturally convert a double or float number into a rational number and continue working with that.
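
A minimal sketch of this conversion in C, assuming a finite IEEE 754 double with its 53-bit significand (the function name and the reduction loop are illustrative, not from any particular library):

#include <math.h>
#include <stdint.h>

/* Read a finite double exactly as num / 2^shift. */
void double_to_rational(double d, int64_t *num, int *shift) {
    int exp;
    double m = frexp(d, &exp);      /* d = m * 2^exp with 0.5 <= |m| < 1 */
    *num   = (int64_t)ldexp(m, 53); /* scale the significand to an integer */
    *shift = 53 - exp;              /* d = *num / 2^*shift; may be negative for large d */
    while (*num != 0 && (*num & 1) == 0 && *shift > 0) {
        *num >>= 1;                 /* reduce the fraction by powers of two */
        (*shift)--;
    }
}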

Idea 2: Go for human readable fractions

Let us think of human readers. We are kind of comfortable with fractions like \frac{1}{12}, \frac{1}{10}, \frac{1}{8} and multiples of them, so basically every fraction that can be expressed with a denominator of 1, 2, 3, 4, 5, 6, 8, 10 or 12. The 12 is because of the omnipresence of the analog clock, the 10 because it is just one more decimal digit and the 8 because it is about the number of successive halving that we can still easily cope with. The idea would be to round to the nearest such fraction and to prefer the lower denominator when two adjacent values are equally far away. This works out well, because our set of allowed fractions between 0 and 1 is just

    \[\{0, \frac{1}{12}, \frac{1}{10}, \frac{1}{8}, \frac{1}{6}, \frac{1}{5}, \frac{1}{4}, \frac{3}{10}, \frac{1}{3}, \frac{3}{8}, \frac{2}{5}, \frac{5}{12}, \frac{1}{2}, \frac{7}{12}, \frac{3}{5}, \frac{5}{8}, \frac{2}{3}, \frac{7}{10}, \frac{3}{4}, \frac{4}{5}, \frac{5}{6}, \frac{7}{8}, \frac{9}{10}, \frac{11}{12}, 1\}\]

and it is more or less trivial to program such a rounding algorithm, by just hard coding the limits, normalizing to the interval [0,1) and finding the right slot by binary search, for example. This is usually what we want to make numbers human readable and understandable. If we want more accuracy we can often just use the trick of going to % or ‰, or just shifting units by multiples of 1000, depending on what we are measuring or counting. This is often a bit better than just decimal numbers, and we can more often solve the issue of rounding with sum when some of the values we want to round are the same and just won’t come out of our rounding procedure the same.
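
A possible sketch of such a rounding function in C (the table includes \frac{3}{10} and \frac{7}{10}; the tie-break preferring the lower denominator is omitted for brevity):

#include <math.h>

/* Fractions with denominators 1, 2, 3, 4, 5, 6, 8, 10, 12 in [0, 1]. */
static const double FRACTIONS[] = {
    0.0, 1/12.0, 1/10.0, 1/8.0, 1/6.0, 1/5.0, 1/4.0, 3/10.0, 1/3.0,
    3/8.0, 2/5.0, 5/12.0, 1/2.0, 7/12.0, 3/5.0, 5/8.0, 2/3.0, 7/10.0,
    3/4.0, 4/5.0, 5/6.0, 7/8.0, 9/10.0, 11/12.0, 1.0
};
enum { N_FRACTIONS = sizeof(FRACTIONS) / sizeof(FRACTIONS[0]) };

/* Round x to the nearest human friendly fraction. */
double round_to_fraction(double x) {
    double whole = floor(x);
    double t = x - whole;          /* normalize to [0, 1) */
    int lo = 0, hi = N_FRACTIONS - 1;
    while (hi - lo > 1) {          /* binary search for the enclosing slot */
        int mid = (lo + hi) / 2;
        if (FRACTIONS[mid] <= t) {
            lo = mid;
        } else {
            hi = mid;
        }
    }
    double best = (t - FRACTIONS[lo] <= FRACTIONS[hi] - t)
                  ? FRACTIONS[lo] : FRACTIONS[hi];
    return whole + best;
}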

Idea 3: Use continued fractions

With continued fractions it is possible to express any real number in the form

    \[a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{ \ddots + \cfrac{1}{a_n} }}}\]

or

    \[a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cfrac{1}{\ddots}}}}\]

For integers this is trivial: just use a_0. For negative numbers we just take the negative of the continued fraction of the absolute value, so we can assume a non-integral positive number r_0.
This allows determining a_0 by just taking the integer part of it.
Then we continue with r_1 = \frac{1}{r_0 - a_0} and so on. Either we end up with an integer a_n at some point, which is the case for rational numbers, or the continued fraction continues infinitely, which is the case for irrational numbers. Now taking the continued fraction just up to the first n elements, as in the first form, and ignoring what comes after that, can in turn be converted into a rational number by just doing some trivial rational arithmetic. It turns out that this is a very good approximation for that size of denominator. The funny thing is that this works even for irrational numbers under certain conditions. For example, we could assume that we are dealing with numbers of the form x+y\sqrt{D} for some fixed rational D and rational numbers x and y. It is relatively easy to approximate x+y\sqrt{D}=x_0+y_0\sqrt{D} with the largest integer a_0 that is less or equal. Now we can calculate

    \[\frac{1}{x_k+y_k\sqrt{D}-a_k}=\frac{x_k-a_k-y_k\sqrt{D}}{(x_k-a_k)^2-y_k^2\cdot D}=x_{k+1}+y_{k+1}\sqrt{D}\]

and thus use this algorithm as another way to calculate rational approximations of square roots. In this case the continued fraction becomes periodic and there is a surprising lot of interesting mathematics behind it, if you like to dig deeper.
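
For rational inputs, the expansion is just the Euclidean algorithm in disguise; a minimal sketch in C (illustrative, for p \ge 0 and q > 0, where C's truncating division matches the floor):

#include <stdio.h>

/* Print the continued fraction coefficients a_0, a_1, ... of p/q. */
void continued_fraction(long p, long q) {
    while (q != 0) {
        long a = p / q;       /* integer part a_k */
        printf("%ld ", a);
        long r = p - a * q;   /* remainder: next step works on q/r */
        p = q;
        q = r;
    }
    printf("\n");
}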


How to calculate Square Roots and Cubic Roots

The functions sqrt and sometimes even cbrt are commonly available, but it is nice to see how they can be calculated.

There are several approaches, but the most popular ones are Newton’s method and an algorithmic formulation of how roots are taken manually, for those old enough to have learned it in school. Earlier measurements that I did many years ago showed that the Newton approximation is slower, but it would be worthwhile to do new measurements.

So we have an equation y = x^2 or y=x^3 and want to find x or a well defined approximation of x when we know y. Mathematically speaking we want to assume that y is constant and we want to find an x for which f(x)=x^2-y=0 or g(x)=x^3-y=0. If we guess such an x and then draw the tangent at the curve of the function at the point (x, f(x)) or (x, g(x)), then the intersection point of the tangent can be used as the next approximation. This method converges in the case of these two functions (and some others) and is reasonably fast. Now the tangent has the linear equation

    \[\frac{y-y_0}{x-x_0}=f'(x_0)\]

where y_0=f(x_0) and f'(x)=\frac{df(x)}{dx} is the derivative of f(x). We want to solve this equation for y=0 and thus we get

    \[x-x_0 = -\frac{f(x_0)}{f'(x_0)}\]

and thus

    \[x=x_0-\frac{f(x_0)}{f'(x_0)}\]

As an iteration rule

    \[\bigwedge_{i=0}^\infty x_{i+1}=x_i-\frac{f(x_i)}{f'(x_i)}\]

In case of the square root we can just start with an estimation by shifting y half its bit length to the right, but avoiding zero, which is important because of the division. Then we get, for an appropriate n,

    \[x_0 = 2^{-n}y\]

    \[x_{i+1}=x_i-\frac{x_i^2-y}{2x_i}=\frac{x_i+y/x_i}{2}\]

The last form is quite intuitive, even without calculus. As I said this converges usefully fast and there is tons of math around to describe the behavior, speed, precision and convergence of the calculations performed in this algorithm. Written in Ruby just for integers, this is quite simple. Convergence is simply discovered by the fact that the result does not change any more, which may fail in some cases, where intermediate results oscillate between two values, but just for the purpose of benchmarking it seems to be sufficient:

def sqrt_newton(x)
  if (x == 0) then
    return 0
  end
  y0 = x
  u0 = x
  while (u0 > 0) do
    y0 >>= 1
    u0 >>= 2
  end
  y0 = [1, y0].max
  yi = y0
  yi_minus_1 = -1
  loop do
    yi_plus_1 = (yi + x/yi) >> 1;
    if (yi_minus_1 == yi_plus_1) then
      return [yi, yi_minus_1].min
    elsif (yi == yi_plus_1) then
      return yi
    end
    yi_minus_1 = yi
    yi = yi_plus_1
  end
end

The Newton algorithm tends to oscillate between two approximations, so this termination criterion takes into account y_{i-1}, y_i and y_{i+1} and uses the lower of the two oscillating values. This results in calculating the largest integer y such that y^2\le x and (y+1)^2 > x.

For the third root we get for an appropriate n

    \[x_0 = 2^{-n}y\]

    \[x_{i+1}=x_i-\frac{x_i^3-y}{3x_i^2}=\frac{2x_i+y/x_i^2}{3}\]

Again this is a useful way and there is math around on when to stop for a desired precision.
Read Wikipedia for the convergence issues.

There is another approach, that people used to know when doing calculations on paper was more important than today. For the decimal system it works like this:

  1. 7 3 2 0 5
 ----------------------
√ 3.00 00 00 00 00
  1                    = 20*0*1+1^2
  -
  2 00
  1 89                 = 20*1*7+7^2
  ----
    11 00
    10 29              = 20*17*3+3^2
    -----
       71 00
       69 24           = 20*173*2+2^2
       -----
       1 76 00
             0         = 20*1732*0+0^2
       -------
       1 76 00 00
       1 73 20 25      = 20*17320*5+5^2
       ----------
          2 79 75
(source: Wikipedia)
We group the digits to the left and to the right of the decimal point in groups of two. The highest possible square of an integral number that is below or equal to the leftmost group (03 in the example above) is used for the first digit of the result (1 in the example above). This square is subtracted and the next group is appended (200 in the example). Assuming that y_n is the result already calculated and x_n is what we have achieved after the subtraction and the appending of the next group, we search for a digit z_n such that u_n = 20\cdot y_n\cdot z_n + z_n^2 \le x_n. z_n is chosen in such a way that it yields the maximum possible u_n which is still \le x_n. Subtracting u_n from x_n and appending the next group allows for the next iteration.

Now this can be turned into an algorithm. The first approach is to just switch from decimal system to binary system. Then for each iteration step we have to deal just with the possible values of 0 and 1, which greatly simplifies the algorithm. Here is a simple ruby program that would do this:

def split_to_words(x, word_len)
  bit_pattern = (1 << word_len) - 1
  words = []
  while (x != 0 || words.length == 0) do
    w = x & bit_pattern
    x = x >> word_len
    words.unshift(w)
  end
  words
end

def sqrt_bin(x)
  if (x == 0) then
    return 0
  end
  xwords = split_to_words(x, 2)
  xi = xwords[0] - 1
  yi = 1
  1.upto(xwords.length-1) do |i|
    xi = (xi << 2) + xwords[i]
    d0 = (yi << 2) + 1
    r  = xi - d0
    b  = 0
    if (r >= 0) then
      b  = 1
      xi = r
    end
    yi = (yi << 1) + b
  end
  return yi
end

It seems that the two solutions yield the same results, but the sqrt_newton outperforms sqrt_bin by a factor of two.

Now we should reconsider whether base 2 is really the best choice. Actually we can use any power of 2 as a base and efficiently work with that. Apart from the initial first step, which is done by using an extended version of sqrt_bin, the next steps are estimated by division and trying neighboring values to get the exact result. This makes use of the fact that the equation we need to solve is
u_n = 2\cdot b\cdot y_n\cdot z_n + z_n^2 \le x_n with the maximum z_n fulfilling this equation, where b is the base to which we are working, which was 10 or 2 above and could now be a power of 2. As soon as y_n\cdot b has a certain size, the influence of z_n^2 becomes less relevant. We can consider the maximum possible value for z_n, which is b-1, and thus solve 2\cdot b\cdot y_n\cdot z_n\le x_n and 2\cdot b\cdot y_n\cdot z_n\le x_n-(b-1)^2, each for the maximum z_n fulfilling the equation. This can be calculated by simple division. If the range between the two solutions is small enough, then each value in the range can be tried to find the actual accurate solution for 2\cdot b\cdot y_n\cdot z_n + z_n^2 \le x_n, and this is more efficient than working just bitwise. This method sqrt_word seems to outperform sqrt_newton for longer numbers, for example around 60 decimal digits with word_length=16. So the most promising approach seems to be to optimize the implementation and parameters of sqrt_word. The issue of termination, which has been properly addressed in the Newton implementation, is already dealt with in this implementation. For more serious analysis it would be interesting to implement the algorithms in C or even in assembly language. So this is the final result for square roots, with some checks added:

def check_is_nonneg_int(x, name)
  raise TypeError, "#{name}=#{x.inspect} must be a non-negative Integer" unless (x.kind_of? Integer) && x >= 0
end

def check_word_len(word_len, name="word_len")
  unless ((word_len.kind_of? Integer) && word_len > 0 && word_len <= 1024)
    raise TypeError, "#{name} must be a positive number <= 1024"
  end
end

def split_to_words(x, word_len)
  check_is_nonneg_int(x, "x")
  check_word_len(word_len)
  bit_pattern = (1 << word_len) - 1
  words = []
  while (x != 0 || words.length == 0) do
    w = x & bit_pattern
    x = x >> word_len
    words.unshift(w)
  end
  words
end

def sqrt_bin(x)
  yy = sqrt_bin_with_remainder(x)
  yy[0]
end

def sqrt_bin_with_remainder(x)
  check_is_nonneg_int(x, "x")
  if (x == 0) then
    return [0, 0]
  end

  xwords = split_to_words(x, 2)
  xi = xwords[0] - 1
  yi = 1

  1.upto(xwords.length-1) do |i|
    xi = (xi << 2) + xwords[i]
    d0 = (yi << 2) + 1
    r  = xi - d0
    b  = 0
    if (r >= 0) then
      b  = 1
      xi = r
    end
    yi = (yi << 1) + b
  end
  return [yi, xi]
end

def sqrt_word(x, n = 16)
  check_is_nonneg_int(x, "x")
  check_is_nonneg_int(n, "n")

  n2 = n << 1
  n1 = n+1
  check_word_len(n2, "2*n")
  if (x == 0) then
    return 0
  end

  xwords = split_to_words(x, n2)
  if (xwords.length == 1) then
    return sqrt_bin(xwords[0])
  end

  xi = (xwords[0] << n2) + xwords[1]
  a  = sqrt_bin_with_remainder(xi)
  yi = a[0]
  if (xwords.length <= 2) then
    return yi
  end

  xi = a[1]
  2.upto(xwords.length-1) do |i|
    xi = (xi << n2) + xwords[i]
    d0 = (yi << n1)
    q  = (xi / d0).to_i
    j  = 10
    was_negative = false
    while (true) do
      d = d0 + q
      r = xi - (q * d)
      break if (0 <= r && (r < d || was_negative))
      if (r < 0) then
        was_negative = true
        q = q-1
      else
        q = q+1
      end
      j -= 1
      if (j <= 0) then
        break
      end
    end
    xi = r
    yi = (yi << n) + q
  end
  return yi
end

def sqrt_newton(x)
  check_is_nonneg_int(x, "x")
  if (x == 0) then
    return 0
  end
  y0 = x
  u0 = x
  while (u0 > 0) do
    y0 >>= 1
    u0 >>= 2
  end
  y0 = [1, y0].max
  yi = y0
  yi_minus_1 = -1
  loop do
    yi_plus_1 = (yi + x/yi) >> 1;
    if (yi_minus_1 == yi_plus_1) then
      return [yi, yi_minus_1].min
    elsif (yi == yi_plus_1) then
      return yi
    end
    yi_minus_1 = yi
    yi = yi_plus_1
  end
end

This is the approach that has been built into the LongDecimal library, ignoring Newton. The examples have been added to github.

The algorithms can be extended to cubic roots or any higher roots. In this case, the nth root of \sum_{j=0}^{m}a_j b^{n(m-j)} is calculated by starting with the maximal integral number z_0 with z_0^n \le a_0 and then subsequently finding numbers z_j fulfilling an equation of the form \sum_{k=1}^n {n \choose k} (by_j)^{n-k}z_j^k \le x_j. This is always easy to handle for base two, by just testing the two possible solutions. For higher bases and n=3 it involves solving a quadratic equation, once the numbers are high enough to neglect the term z_j^3. For n=4 it is just possible to take the square root of the square root. For higher values of n and bases other than 2 it becomes really difficult to tame this algorithm. So I intend to constrain myself to square roots and cube roots. I have not explored if it is useful to calculate the cube root with a higher base than 2 and which approach provides the best performance for cube roots. Even the square root calculation can possibly be tuned a bit. Maybe this will be addressed in another article.


How to draw lines, circles and other curves

These ideas were developed more than 30 years ago, without knowing that they were already known at that time…

Today graphics cards can easily do things like this in very little time, and today’s CPUs are of course really good at multiplying. So this has lost a lot of its immediate relevance, but it is a fun topic, so why not have some fun…

Let us assume we have a two dimensional coordinate system and a visible area that goes from x_{\min} to x_{\max} and y_{\min} to y_{\max}. Coordinates are discrete.

In this world we can easily measure an angle against a (directed) line parallel to the x-axis, for example up to an accuracy of 45^\circ=\frac{\pi}{4}:

  • y=0 \wedge x > 0 \implies \alpha = 0 (= 0^\circ)
  • 0 < y < x \implies 0 < \alpha < \frac{\pi}{4}(=45^\circ)
  • 0 < y = x \implies \alpha = \frac{\pi}{4}
  • 0 < x < y \implies \frac{\pi}{4} < \alpha < \frac{\pi}{2}(=90^\circ)
  • x = 0 \land y > 0\implies \alpha = \frac{\pi}{2}
  • x < 0 \land y > 0 \land |x| < |y|\implies \frac{\pi}{2} < \alpha < \frac{3\pi}{4}(=135^\circ)
  • x < 0 \land y > 0 \land -x = y\implies \alpha = \frac{3\pi}{4}(=135^\circ)

So let us assume we have a curve that is described by a polynomial function in two variables x and y, like this:

    \[f(x, y) = \sum_{j=0}^m\sum_{k=0}^n a_{j,k}x^jy^k = 0\]

We have to apply some math to understand that the curve behaves nicely, in the sense that it does not behave too chaotically at scales below our accuracy, that it is connected, etc. We might possibly scale and move it a bit by substituting something like c_1u+c_2 for x and c_3v+c_4 for y.

For example we may think of

  • line: f(x,y)=ax+by+c
  • circle: f(x, y)=x^2+y^2-r^2
  • ellipse: f(x, y)=\frac{x^2}{a^2}+\frac{y^2}{b^2}-1

We can assume our drawing is done with something like a chess king, moving one step at a time horizontally, vertically or diagonally. We need to find a starting point that is accurately on the curve, or at least as accurately as possible. You could use knights or other chess figures or even fictive chess figures…

Now we have a starting point (x_0, y_0) which lies ideally exactly on the curve. We have a deviation from the curve, which is f(x_0, y_0)=d_0. So we have f(x_n, y_n)=d_n. Then we move to x_{n+1}=x_n + s and y_{n+1}=y_n+t with s, t \in \{-1, 0, 1\}. Often only two or three combinations of (s, t) need to be considered. When calculating d_{n+1} from d_n for the different variants, it shows that for calculating d_{n+1}-d_n the difference becomes a polynomial with lower degree, because the highest terms cancel out. So a line between two points or a circle with a given radius around a given point or an ellipse or a parabola or a hyperbola can be drawn without any multiplications… And n-th powers of x can always be calculated with additions and subtractions only from the previous x-values, by using successive differences:

    \[d_{m,1}=(x-m)^n-(x-m-1)^n\]

    \[d_{m,l+1}=d_{m+1,l}-d_{m,l}\]

These become constant for l=n, just as the l-th derivatives, so by using this triangle, successive powers can be calculated with some preparational work using just additions.
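
To give an idea of the flavor of such algorithms, here is a sketch of the well known midpoint variant for circles in C, using the scaled deviation as decision variable (this is the textbook Bresenham/midpoint method with an assumed plot() callback, not necessarily exactly what was programmed back then):

/* Draw a circle of radius r around (xc, yc) using only additions,
   subtractions and shifts; plot() is assumed to set one pixel. */
void draw_circle(int xc, int yc, int r, void (*plot)(int, int)) {
    int x = r;
    int y = 0;
    int d = 1 - r;                   /* scaled deviation derived from x^2+y^2-r^2 */
    while (y <= x) {
        /* plot the eight symmetric points of one octant step */
        plot(xc + x, yc + y); plot(xc - x, yc + y);
        plot(xc + x, yc - y); plot(xc - x, yc - y);
        plot(xc + y, yc + x); plot(xc - y, yc + x);
        plot(xc + y, yc - x); plot(xc - y, yc - x);
        y++;
        if (d < 0) {
            d += (y << 1) + 1;       /* difference d_{n+1}-d_n for step (0, 1) */
        } else {
            x--;
            d += ((y - x) << 1) + 1; /* difference for step (-1, 1) */
        }
    }
}
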
It was quite natural to program these in assembly language, even in 8-bit assembly languages that are primitive by today’s standards. And it was possible to draw such figures reasonably fast with only one MHz (yes, not GHz).

We don’t need this stuff any more. Usually the graphics card is much better than anything we can program with reasonable effort. Usually the performance is sufficient when we just program in high level languages and use standard libraries.

But occasionally situations occur where we need to think about how to get the performance we need:
Make it work,
make it right,
make it fast,
but don’t stop after the first of those.

It is important that we choose our steps wisely and use adequate methods to solve our problem. Please understand this article as a fun issue about how we could write software some decades ago, but also as an inspiration to actually look into bits and bytes when it is really helping to get the necessary performance without defeating the maintainability of the software.


Intervals

Intervals are subsets of a universe, that are defined by upper and lower boundaries. Typically we think about real numbers, but any totally ordered universe allows the definition of intervals.

Intervals are defined by lower and upper boundaries, which can be a limiting number or unlimited, typically written as \infty for the upper bound and -\infty for the lower bound. The boundaries can be included or excluded. So the following combinations exist for a universe X:

  • (-\infty, \infty)=X (unlimited)
  • (-\infty, a] = \{ x \in X : x \le a\} (half open, lower unlimited)
  • (-\infty, a) = \{ x \in X : x < a\} (open, lower unlimited)
  • [b, \infty) = \{ x \in X : x \ge b\} (half open, upper unlimited)
  • (b, \infty) = \{ x \in X : x > b\} (open, upper unlimited)
  • (c, d) = \{ x \in X : c < x < d\} (open)
  • [c, d) = \{ x \in X : c \le x < d\} (half open)
  • (c, d] = \{ x \in X : c < x \le d\} (half open)
  • [c, d] = \{ x \in X : c \le x \le d\} (closed)
  • \emptyset (it is sometimes useful to consider the empty set as an interval as well)

The words „open“ and „closed“ refer to our usual topology of real numbers, but they do not necessarily retain their topological meaning when we extend the concept to our typical data types. a, b, c and d in the notation above do not have to be members of X, as long as the comparison is defined between them and all members of X. So we could for example meaningfully define for X=\mathbb{Q} the interval (\sqrt{2}, \pi) = \{ x\in\mathbb{Q} : \sqrt{2} < x < \pi\}.

As soon as we do not imply X=\mathbb{R} we always have to make this clear… And \mathbb{R} is kind of hard to really work with in software on computers with physically limited memory and CPU power.

Intervals have some relevance in software systems.

We sometimes have business logic that actually relies on them, and instead of programming around it somehow, it is clearer and cleaner to actually work with intervals. For example, we can have a public transport scheduling system where we deal with certain time intervals in which different schedules apply than during the rest of the day. Or we have a system that records downtimes of servers and services, and these are quite naturally expressed as intervals of some date-time datatype. It is usually healthy to consider all the cases mentioned above, rather than dismissing boundary cases as having probability zero of actually happening or resorting to ugly interval limits like 22:59:59.999.

The other case is interval arithmetic. This means, we do floating point calculations by taking into account that we have an inaccuracy. So instead of numbers we have intervals I. When we add two intervals, we get I+J=\{x+y : x\in I \wedge y\in J\}. In the same way we can multiply and subtract and even divide, as long as we can stay clear of zero in the denominator. Or more generally we can define f(I_1, I_2, \ldots, I_n)=\{f(x_1,x_2,\ldots,x_n) : x_1\in I_1 \wedge x_2 \in I_2 \wedge\ldots\wedge x_n\in I_n\}.
It does of course require some mathematical thinking to understand if the result is an interval again or at least something we can deal with reasonably. Actually we are usually happy with replacing the result by an interval that is possibly a superset of the real result, ideally the minimal superset that can be expressed with our boundary type.
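
A minimal sketch of the arithmetic part in C, ignoring the outward rounding of the boundaries that a serious interval library would also have to do (the type and function names are illustrative assumptions):

#include <math.h>

typedef struct { double lo, hi; } Interval;

/* I + J = { x + y : x in I, y in J } */
Interval iv_add(Interval a, Interval b) {
    Interval r = { a.lo + b.lo, a.hi + b.hi };
    return r;
}

/* I * J: the extremes are among the four products of the boundaries. */
Interval iv_mul(Interval a, Interval b) {
    double p1 = a.lo * b.lo, p2 = a.lo * b.hi;
    double p3 = a.hi * b.lo, p4 = a.hi * b.hi;
    Interval r;
    r.lo = fmin(fmin(p1, p2), fmin(p3, p4));
    r.hi = fmax(fmax(p1, p2), fmax(p3, p4));
    return r;
}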

At this point we will probably discover a desire to expand the concept of intervals in a meaningful way to complex numbers. We can do this by working with open disks like \{ z \in \mathbb{C} :  |z-z_0| < d\} or closed disks like \{ z \in \mathbb{C} :  |z-z_0| \le d\}. Or with rectangles based on two intervals I and J like \{ z = x+iy \in \mathbb{C} : x \in I \wedge y \in J \}.

These two areas are quite interesting and sometimes useful. Libraries have been written for both of them.

Often we discover that intervals alone are not quite enough. We would like to do set operations with intervals, that is

  • union: A\cup B
  • intersection: A \cap B
  • set difference: A \setminus B

While the intersection works just fine, as long as we include the empty set \emptyset as an interval, unions and differences lead us to non-intervals. It turns out that interval-unions, sets that can be expressed as a union of a finite number of intervals, are a useful generalization, and they are actually what we want to work with rather than with intervals. In this case we can drop the empty set as an interval and just express it as the union of zero intervals.

There are some questions coming up, that are interesting to deal with:

  • normalization: Can we normalize interval-unions to some canonical form that allows safe and reliable comparison for equality?
  • adjacency: Is our universe X actually discrete, so that we can express all open boundaries with closed boundaries?
  • interval lengths: Do we have a meaningful and useful way to measure the length of an interval or the total length of an interval-union, as long as they are limited? Or even for unlimited intervals?
  • collection interfaces: Do we want to implement a Set-interface in languages that have sets and an understanding of sets that would fit for intervals?
  • implementation: How can we implement this ourselves?
  • implementation: Can we find useful existing implementations?

Having written a Java library to support interval-unions on arbitrary Comparable types once in a project, and having heard a talk about an interval library in Scala that ended up using interval-unions in a pretty equivalent way, it might be interesting to write in the future about how to do this or what can be found in different languages to support us. For interval arithmetic, some work was done while I was a student to create extensions or libraries for C and Fortran that support this. So this is pretty old stuff and interesting mostly for the concepts, even if we are not going to move to Fortran because of this.

If there is interest I will write more about actual implementations and issues to address when using or writing them.



Carry Bit, Overflow Bit and Signed Integers

It has already been explained how the Carry Bit works for addition. Now there was interest in a comment about how it would work for negative numbers.

The point is that the calculation of the carry bit does not have any dependency on the sign. The nature of the carry bit is that it is meant to be used for the less significant parts of the addition. So assume we add two numbers x and y that have k and l words, respectively. We assume that n=\max(k,l) and make sure that x and y are both n words long by just providing the necessary number of 0-words in the most significant positions. Now the addition is performed as described by starting with a carry bit of 0 and adding with carry x[0]+y[0], then x[1]+y[1] and so on up to x[n-1]+y[n-1], assuming that x[0] is the least significant word and x[n-1] the most significant word, respectively. Each addition includes the carry bit from the previous addition. Up to this point, it does not make any difference if the numbers are signed or not.

Now for the last addition, we need to consider the question, if our result still fits in n words or if we need one more word. In the case of unsigned numbers we just look at the last carry bit. If it is 1, we just add one more word in the most significant position with the value of 1, otherwise we are already done with n words.

In case of signed integers, we should investigate what can possibly happen. The input for the last step is two signed words and possibly a carry bit from the previous addition. Assuming we have m-Bit-words, we are adding numbers between -2^{m-1} and 2^{m-1}-1 plus an optional carry bit c. If the numbers have different signs, actually an overflow cannot occur and we can be sure that the final result fits in at most n words.

If both are non-negative, the most significant bits of x[n-1] and y[n-1] are both 0. An overflow is happening if and only if the sum x[n-1]+y[n-1]+c \ge 2^{m-1}, which means that the result „looks negative“, although both summands were non-negative. In this case another word with value 0 has to be provided for the most significant position n to express that the result is \ge 0 while maintaining the already correctly calculated result. It cannot happen that real non-zero bits are going into this new most significant word. Consequently the carry bit can never become 1 in this last addition step.

If both are negative, the most significant bits of x[n-1] and y[n-1] are both 1. An overflow is happening if and only if the sum (x[n-1]+y[n-1]+c) \mod 2^m \lt 2^{m-1}, which means that the result „looks positive or 0“, although both summands were negative. In this case another word with value 2^m-1 or -1, depending on the viewpoint, has to be prepended as new most significant word. In this case of two negative summands the carry bit is always 1.

Now typical microprocessors provide an overflow flag (called „O“ or more often „V“) to deal with this. So the final addition can be left as it is in n words, if the overflow bit is 0. If it is 1, we have to signal an overflow or we can just provide one more word. Depending on the carry flag it is 0 for C=0 or all bits 1 (2^m-1 or -1, depending on the viewpoint) for C=1.

The overflow flag can be calculated as o := (\mathrm{signbit}(x) = \mathrm{signbit}(y)) \land (\mathrm{signbit}((x+y) \mod 2^m) \ne \mathrm{signbit}(x)).
There are other ways, but they lead to the same results with approximately the same or more effort.
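
As a small sketch in C for 8-bit words (the function name and out parameters are illustrative assumptions):

#include <stdint.h>

/* Add two 8-bit words and recover the carry and overflow flags. */
void add_flags(uint8_t x, uint8_t y, uint8_t *z, int *carry, int *overflow) {
    *z = (uint8_t)(x + y);                /* result modulo 2^8 */
    *carry = (*z < x);                    /* unsigned wrap-around occurred */
    int sx = x >> 7, sy = y >> 7, sz = *z >> 7;
    *overflow = (sx == sy) && (sz != sx); /* same signs in, different sign out */
}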

The following table shows the possible combinations and examples for 8-Bit arithmetic and n=1:

x      y      (x+y)%2^8  Overflow Bit  Carry Bit  additional word needed  value of additional word  Examples (8bit)
x≥0    y≥0    ≥0         0             0          no                      -                         0+0, 63+64
x≥0    y≥0    <0         1             0          yes                     0                         64+64, 127+127
x≥0    y<0    ≥0         0             0 or 1     no                      -                         65+(-1), 127+(-127)
x≥0    y<0    <0         0             0 or 1     no                      -                         7+(-8), 127+(-128), 0+(-128)
x<0    y≥0    ≥0         0             0 or 1     no                      -                         -9+12, -1+127, -127+127
x<0    y≥0    <0         0             0 or 1     no                      -                         -128+127, -128+0, -1+0
x<0    y<0    ≥0         1             1          yes                     -1                        -64+(-65), -128+(-128)
x<0    y<0    <0         0             1          no                      -                         -1+(-1), -1+(-127), -64+(-64)

If you like, you can try out examples that include the carry bit and see that the concepts still work out as described.



How to calculate transcendental functions

There is sometimes a need to calculate transcendental functions like \sin, \exp, \log or \tan^{-1}. We get them from the library, and the library relies on implementations in the CPU for most of them. This is true if we like to do them in „double“ format, which is the standard way of doing floating point arithmetic. But it can be interesting how these can be calculated to a given precision, or how to calculate functions that are not in the library and not easily composed from the library functions. There are many ways to do this, and actually the naïve way of using the Taylor series

    \[f(x) = \sum_{j=0}^\infty a_j (x-x_0)^j\]

is often not such a bad idea, if done correctly.
We know from math what to use for the coefficients a_j and for which ranges of x this converges. For limited fixed precision it is possible to tune the coefficients a bit and get better results with a fixed number of summands. For arbitrary precision we need to be more flexible and cannot be prepared for this exact precision.

Now mathematically we can often have a converging series, for example if we have

    \[f(x) = \sum_{j=1}^\infty \frac{x^j}{j^2}.\]

This series can easily be proved to converge for |x|\le 1, but the convergence is not necessarily computer friendly: for |x|=1 it converges slowly. To give an idea, if we are calculating with 100 digits after the decimal point, then we would still have single terms in the area of our desired precision for j=10^{50}, and since they get smaller only slowly, we would have to go much further. This is impossible to use.

As a rule of thumb, the coefficients are not our friends. They may or may not converge towards zero, but we really have to rely on the (x-x_0)^j-part to get diminishing summands. A good idea is to require |x-x_0| \le \frac{1}{2} if the coefficients are bounded, which they usually are in real life examples. That means that there is a bound C>0 such that for each j we have |a_j|<C. So we absolutely need to use some mathematical knowledge about the function in order to get reasonable convergence.

In case of periodic functions like the trigonometric functions, we can normalize x to values within one „period“, but that will reduce x or x-x_0 only to a range of [-\pi, \pi). Using some common trigonometric identities, we can actually reduce this to the range [0, \pi/2], which is still not good enough. In this case we have to use formulas like \sin(3x)=3\sin(x)-4\sin^3(x) and similar formulas for other trigonometric functions. These allow us to move to smaller values of x. For the exponential function, we have even easier ways. Let n be a natural number such that |\frac{x}{n}| < \frac{1}{2}. Then we let y=\frac{x}{n} and we can calculate z=e^y=\exp(y). Now we have \exp(x)=e^x=e^{ny}=(e^y)^n=z^n and we just need to take the n-th power of the intermediate result. This can be calculated using algorithms like square and multiply or even some improvements over that.
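
A minimal sketch of this reduction in C for double precision, choosing n as a power of two so that the powering is just repeated squaring (the truncated series length of 20 terms is an illustrative assumption, not a precision guarantee):

#include <math.h>

/* exp(x) via argument reduction: e^x = (e^(x/n))^n with n a power of two. */
double exp_reduced(double x) {
    long n = 1;
    while (fabs(x / n) >= 0.5) {    /* choose n with |x/n| < 1/2 */
        n <<= 1;
    }
    double y = x / n;
    double term = 1.0, sum = 1.0;
    for (int j = 1; j <= 20; j++) { /* truncated Taylor series for e^y */
        term *= y / j;
        sum += term;
    }
    while (n > 1) {                 /* (e^y)^n by repeated squaring */
        sum *= sum;
        n >>= 1;
    }
    return sum;
}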

In the end we will end up writing a lot of code for different cases which are optimized in different ways for some function. For example the power p(x,y)=x^y is a function in two parameters, that has quite a wild behavior and for writing an implementation that provides reasonable performance and precision we need to handle a lot of cases. Just look at the power function of the standard Java library, which is written in native C-code. Its beauty is not the conciseness, but having some understanding about what it takes to do this well you might eventually appreciate the given implementation, even if you not only use it, but also read it.

Now dealing with the precision is a delicate question, which again requires mathematics. As a general rule we usually need to use more precision for intermediate results. A good tool is to take the derivative or the partial derivatives in case of functions with multiple parameters to see how much changes in that parameter influence changes of the value. The Taylor theorem gives some definite, but possibly hard to apply answers. And it can also be useful to look at lower and upper bounds for the operations performed.

When writing such functions, unit tests are a big deal. Often they are not so hard to write, if we have inverse functions to rely on or if we can increase the precision and see that the lower precision is at least as precise as it claims to be. In some cases existing implementations for double can be used to check if the calculation is correct for smaller precisions.

Most of all it is important to think and use some mathematics or get help for this from somebody with appropriate knowledge.

Just to give you a hint: There are tons of transcendental functions that do not exist in standard libraries and that may be interesting to use. For some of them there are libraries. For some we still need to find libraries or write them.


Modular Arithmetic

We have some articles in this blog about integers of typical programming languages and how they work. Time to introduce the underlying mathematical concepts that have been covered implicitly until now, since they are also interesting in many other aspects. And besides, this is a very beautiful area of mathematics.

Mathematics that we learn in school is mostly inspired by what is needed for physics. This was quite a good choice 100 years ago, because it gave some motivation to why we do certain things, and it was the area where math was applied; of course also chemistry and engineering, but these use somewhat similar aspects of mathematics as physics. Now physics and chemistry make use of quite interesting areas of mathematics like group theory or non-Euclidean geometry, but these are advanced areas beyond what we typically learn in school, at least in the countries where I went to school. So it is about real numbers, some trigonometry, real analysis (calculus) and maybe complex numbers.

For more than 50 years mathematics has been heavily used in informatics as well; if we abstract informatics away from computers, even longer, because for example algorithms and cryptography have been used for several thousand years already, but that was a small niche and became mainstream through the existence of computers. And for informatics and computer science we need different areas of mathematics. Analysis is not so important, though not irrelevant. One area is information theory, which is based on probability theory and statistics. Numerical calculations have to a great extent remained a domain of mathematics itself, so this connection may be strong, but it is applied mathematicians using computers and using knowledge from IT to program them better, not the other way round. So numerical analysis is somewhat important, but not really what most of us need very often.

The areas of mathematics that are really interesting for informatics are discrete mathematics, algebra and number theory. There is enough material about this on the web, but for now we will deal with modular arithmetic, which is kind of in the intersection of discrete mathematics, algebra and number theory.

We start with the integral numbers:

    \[{\Bbb Z} = \{\ldots,-3, -2, -1, 0, 1, 2, 3, 4,\ldots\}\]

Now we take any positive integral number m \in {\Bbb N} with m \ge 2.
We say that two integral numbers x and y are congruent modulo m:

    \[x \equiv y \pmod m\]

if and only if x-y can be divided by m. We might also say that there is a k \in {\Bbb Z} such that y = x + k m.
Now we can make interesting observations:
We assume, that we have pairs of numbers such that

    \[u \equiv v \pmod m\]

and

    \[x \equiv y \pmod m\]

Then we can observe that also

    \[u+x \equiv v+y \pmod m\]

    \[u-x \equiv v-y \pmod m\]

    \[u\cdot x \equiv v\cdot y \pmod m\]

This can be proven easily.
We assume as above

    \[y = x + k \cdot m\]

and similarly

    \[v = u + l \cdot m\]

Then we have

    \[y+v = x+u+(k+l)m \equiv x+u \pmod m\]

    \[y-v = x-u+(k-l)m \equiv x-u \pmod m\]

    \[yv = xu+xlm + kum + klm^2 \equiv xu \pmod m\]

We call a set of all numbers of \Bbb Z that are congruent to each other a remainder class and write this as

    \[\bar x = x + m{\Bbb Z}\]

There are exactly m remainder classes modulo m and usually we use a representation system of

    \[0,1,\ldots m-1\]

or for even m we often use

    \[-\frac{m}{2}, -\frac{m}{2}+1,\ldots,-1,0,1,\dots,\frac{m}{2}-1\]

or for odd m we often use

    \[-\frac{m-1}{2}, -\frac{m-1}{2}+1,\ldots,-1,0,1,\dots,\frac{m-1}{2}\]

We encounter these representation systems when we do division with remainder, written as % in many programming languages, but it is necessary to do some quick research on which representation system % uses and which one we want, and possibly adjust the result. The corresponding division may not be /, but we can obtain it by subtracting our remainder from the dividend and dividing that, which is an exact division.
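
For example, in C the % operator takes its sign from the dividend; a small sketch of forcing the representative into the system 0,1,\ldots,m-1:

/* % in C truncates toward zero, so the remainder of a negative x
   is negative or zero; shift it into the representation system 0..m-1. */
int mod_floor(int x, int m) {
    int r = x % m;
    if (r < 0) {
        r += m;      /* adjust into 0..m-1 */
    }
    return r;
}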

Now we need to define a ring. A ring R is a set with operations + and \cdot such that the following rules apply:

  1. For any members x, y \in R we also have x+y \in R, x-y \in R and x\cdot y \in R. This is usually not mentioned, because it is part of how we define these operations in the first place in most mathematical texts.
  2. Addition is commutative: For any members x, y \in R we have x+y=y+x.
  3. Addition has a neutral element 0: For any member x \in R we have x+0=0+x=x.
  4. Addition has inverse elements: For any member x \in R we have a member x'\in R such that x+x'=x'+x=0. Usually we write -x for this inverse element of x and we write x-y instead of x+(-y).
  5. Addition is associative: For any members x, y, z \in R we have (x+y)+z=x+(y+z). We can omit the parentheses here and write x+y+z instead.
  6. Multiplication has a neutral element 1: For any member x \in R we have x\cdot 1=1\cdot x=x.
  7. Multiplication is associative: For any members x, y, z \in R we have (x\cdot y)\cdot z=x\cdot (y\cdot z). We can omit the parentheses here and write x\cdot y\cdot z or even xyz instead.
  8. Multiplication in conjunction with addition is distributive: For any members x, y, z \in R we have (x + y)\cdot z = x\cdot z + y\cdot z and z\cdot (x+y)=z\cdot x + z\cdot y.

If the multiplication is also commutative, we call it a commutative ring. If there is a multiplicative inverse for any element other than 0, we call it a skew field. And if both conditions hold, we call it a field.

Now we can see that \Bbb Z is actually a commutative ring.

And these remainder classes modulo m also form a ring. We call it {\Bbb Z}/m{\Bbb Z} or sometimes also {\Bbb Z}_m, but I do not use the second form, because it is ambiguous with something else (p-adic numbers). If m=p is a prime number, then {\Bbb Z}/p{\Bbb Z} is actually a field and in this case we may write {\Bbb F}_p instead of {\Bbb Z}/p{\Bbb Z}. Or GF(p) in some literature, if you prefer that. Why is it a field?

Now we have an extension of the Euclidean algorithm to calculate the gcd of two numbers. This also yields numbers u and v such that g=\gcd(x,y)=ux+vy. So these numbers exist. For a prime number p and a remainder class \bar x \ne 0 we know that x is not a multiple of p, and since p is prime we know that

    \[1=\gcd(x,p)=ux+vp.\]

This yields a multiplicative inverse for \bar x because

    \[u\cdot x \equiv 1 \pmod p.\]
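
A small sketch of the extended Euclidean algorithm and the resulting modular inverse in C (the helper names are illustrative, not from a particular library):

/* Extended Euclidean algorithm: returns g = gcd(a, b) and sets
   u, v such that g = u*a + v*b. */
long ext_gcd(long a, long b, long *u, long *v) {
    if (b == 0) {
        *u = 1;
        *v = 0;
        return a;
    }
    long u1, v1;
    long g = ext_gcd(b, a % b, &u1, &v1);
    *u = v1;
    *v = u1 - (a / b) * v1;
    return g;
}

/* Multiplicative inverse of x modulo a prime p: u with u*x = 1 (mod p). */
long mod_inverse(long x, long p) {
    long u, v;
    ext_gcd(x, p, &u, &v);
    return ((u % p) + p) % p;   /* normalize into 0..p-1 */
}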

Now we often see m as a power of 2 and the modular arithmetic, at least +, -, *, is what is sold to us as integer arithmetic of Java, C or C#.

On the other hand it can be interesting and useful to use modular arithmetic for other values of m. Most interesting are prime numbers, which can be relatively small like 2, 3 or 5, but also really big. For non-primes we have zero divisors, that is numbers x, y \not\equiv 0 \pmod m such that x\cdot y \equiv 0 \pmod m. This breaks some fundamental mathematical assumptions that hold for integers and fields, but it is perfectly correct in this modular ring.

In our daily life modular arithmetic is actually quite common. We have the week days with m=7, the hours of the clock with m=12 or m=24, the minutes and seconds of the clock with m=60 and quite a bit of m=2, which we do not really see as modular arithmetic, but maybe as boolean arithmetic with + being the „exclusive or“, \cdot being the „and“ etc.


Integers in Perl 6

The language Perl 6 has been announced to be production ready by the beginning of this year. Its implementation is Rakudo, while the Perl 6 programming language itself is an abstract language definition that allows any implementation that passes the test suite to call itself a Perl 6 implementation. The idea is not totally new; we see the Ruby language being implemented more than once (Ruby, JRuby, Rubinius, IronRuby), but we can also learn from the Ruby guys that it is a challenge to keep this up to date and eventually it is likely that one implementation will fall behind or go its own way at some point of time.

Perl 6 is also called „Perl“ as part of its name, but quite different from its sister language Perl, which is sometimes called „Perl 5“ to emphasize the distinction, so it is absolutely necessary to call it „Perl 6“ or maybe „Rakudo“, but not just „Perl“.

Even though many things can be written in a similar way, a major change from Perl 5 is the way of dealing with numeric types. You can find an article describing Numeric Types in Perl 5. So now we will see how to do the same things in Perl 6.

Dealing with numeric types in Perl 6 is neither like in Perl 5 nor like what we are used to in many other languages.

So when we just use numbers in a naïve way, we get long integers automatically:

my $f = 2_000_000_000;
my $p = 1;
loop (my Int $i = 0; $i < 10; $i++) {
    say($i, " ", $p);
    $p *= $f;
}

creates this output:

0 1
1 2000000000
2 4000000000000000000
3 8000000000000000000000000000
4 16000000000000000000000000000000000000
5 32000000000000000000000000000000000000000000000
6 64000000000000000000000000000000000000000000000000000000
7 128000000000000000000000000000000000000000000000000000000000000000
8 256000000000000000000000000000000000000000000000000000000000000000000000000
9 512000000000000000000000000000000000000000000000000000000000000000000000000000000000

This is a nice default, similar to what Ruby, Clojure and many Lisps use, but most languages have made a choice that is weird for application development.

Now we can also statically type this:

my Int $f = 2_000_000_000;
my Int $p = 1;
loop (my Int $i = 0; $i < 10; $i++) {
    say($i, " ", $p);
    $p *= $f;
}

and we get the exact same result:

0 1
1 2000000000
2 4000000000000000000
3 8000000000000000000000000000
4 16000000000000000000000000000000000000
5 32000000000000000000000000000000000000000000000
6 64000000000000000000000000000000000000000000000000000000
7 128000000000000000000000000000000000000000000000000000000000000000
8 256000000000000000000000000000000000000000000000000000000000000000000000000
9 512000000000000000000000000000000000000000000000000000000000000000000000000000000000

Now we can actually use low-level machine integers which do an arithmetic modulo powers of 2, usually 2^{32} or 2^{64}:

my int $f = 2_000_000_000;
my int $p = 1;
loop (my Int $i = 0; $i < 10; $i++) {
    say($i, " ", $p);
    $p *= $f;
}

and we get the same kind of results that we would get in Java or C with (signed) long, if we are on a typical 64-bit environment:

0 1
1 2000000000
2 4000000000000000000
3 -106958398427234304
4 3799332742966018048
5 7229403301836488704
6 -8070450532247928832
7 0
8 0
9 0

We can try it in Java. I was lazy and changed as little as possible and the "$" is allowed as part of the variable name by the language, but of course not by the coding standards:

public class JavaInt {
    public static void main(String[] args) {
        long $f = 2_000_000_000;
        long $p = 1;
        for (int $i = 0; $i < 10; $i++) {
            System.out.println($i + " " +  $p);
            $p *= $f;
        }
    }
}

We get this output:

0 1
1 2000000000
2 4000000000000000000
3 -106958398427234304
4 3799332742966018048
5 7229403301836488704
6 -8070450532247928832
7 0
8 0
9 0

And we see, with C# we get the same result:

using System;

public class CsInt {

    public static void Main(string[] args) {
        long f = 2000000000;
        long p = 1;
        for (int i = 0; i < 10; i++) {
            Console.WriteLine(i + " " +  p);
            p *= f;
        }
    }
}

gives us:

0 1
1 2000000000
2 4000000000000000000
3 -106958398427234304
4 3799332742966018048
5 7229403301836488704
6 -8070450532247928832
7 0
8 0
9 0

If you like, you can try the same in C using signed long long (or whatever is 64 bits), and you will get the exact same result.

Now we can simulate this in Perl 6 also using Int, to understand what int is really doing to us. The idea has already been shown with Ruby before:

my Int $MODULUS = 0x10000000000000000;
my Int $LIMIT   =  0x8000000000000000;
sub mul($x, $y) {
    my Int $result = ($x * $y) % $MODULUS;
    if ($result >= $LIMIT) {
        $result -= $MODULUS;
    } elsif ($result < - $LIMIT) {
        $result += $MODULUS;
    }
    $result;
}

my Int $f = 2_000_000_000;
my Int $p = 1;
loop (my Int $i = 0; $i < 10; $i++) {
    say($i, " ", $p);
    $p = mul($p, $f);
}

and we get the same again:

0 1
1 2000000000
2 4000000000000000000
3 -106958398427234304
4 3799332742966018048
5 7229403301836488704
6 -8070450532247928832
7 0
8 0
9 0

The good thing is that the default has been chosen correctly as Int and that Int allows easily to do integer arithmetic with arbitrary precision.

Now the question is, how we actually get floating point numbers. This will be covered in another blog posting, because it is a longer story of its own interest.
