Rounding to Rational Numbers

Usually we think of rounding as a way of approximately expressing numbers with many decimal places by numbers with fewer decimal places.

Regular readers of this blog have already seen that this concept can be extended and generalized, as mentioned in the articles about Geometric and Harmonic Rounding and about Residue Class Rounding or Rounding with Sum.
Now computers and humans can deal with rational numbers, and sometimes that is the best way to go.

Idea 1: Read the Double as Rational

Most typical non-integer computer numbers are actually already rational numbers of the form \frac{n}{2^m} with a relatively large power of two in the denominator. But as soon as we perform divisions, we leave this exact world and round, usually in the way that the machine throws in front of our feet out of the box. All other floating point operations can introduce rounding as well. So rational numbers can be a way to go: we can just convert a double or float number exactly into a rational number and continue working with that.
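
As a small illustration, Python's standard fractions module can recover the exact rational value \frac{n}{2^m} behind a double; a minimal sketch:

    from fractions import Fraction

    # A double is (almost always) exactly n / 2**m, and Fraction recovers
    # that exact value -- which is why 0.1 is not really one tenth.
    exact = Fraction(0.1)
    print(exact)                         # 3602879701896397/36028797018963968
    print(exact.denominator == 2**55)    # True: a pure power of two

    # Staying with Fraction keeps later arithmetic exact:
    third = Fraction(1, 3)
    print(third + third + third)         # 1, with no rounding error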

Idea 2: Go for Human-Readable Fractions

Let us think of human readers. We are kind of comfortable with fractions like \frac{1}{12}, \frac{1}{10}, \frac{1}{8} and multiples of them, so basically every fraction that can be expressed with a denominator of 1, 2, 3, 4, 5, 6, 8, 10 or 12. The 12 is because of the omnipresence of the analog clock, the 10 because it is just one more decimal digit, and the 8 because it is about the number of successive halvings that we can still easily cope with. The idea would be to round to the nearest such fraction and to prefer the lower denominator when two adjacent values are equally far away. This works out well, because our set of allowed fractions between 0 and 1 is just

    \[\{0, \frac{1}{12}, \frac{1}{10}, \frac{1}{8}, \frac{1}{6}, \frac{1}{5}, \frac{1}{4}, \frac{3}{10}, \frac{1}{3}, \frac{3}{8}, \frac{2}{5}, \frac{5}{12}, \frac{1}{2}, \frac{7}{12}, \frac{3}{5}, \frac{5}{8}, \frac{2}{3}, \frac{7}{10}, \frac{3}{4}, \frac{4}{5}, \frac{5}{6}, \frac{7}{8}, \frac{9}{10}, \frac{11}{12}, 1\}\]

and it is more or less trivial to program such a rounding algorithm, by just hard-coding the limits, normalizing to the interval [0,1) and finding the right slot, for example by binary search. This is usually what we want to make numbers human-readable and understandable. If we want more accuracy, we can often just use the trick of going to % or ‰, or shifting units by multiples of 1000, depending on what we are measuring or counting. This is often a bit better than plain decimal numbers, and we can more often solve the issue of rounding with sum when some of the values we want to round are equal but just would not come out equal from our rounding procedure.
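
A minimal Python sketch of this rounding, assuming the denominator set above; it uses a linear scan over the precomputed fractions instead of binary search, which amounts to the same thing for such a short list:

    from fractions import Fraction

    # Allowed fractions between 0 and 1, built from the denominators
    # named above (1..6, 8, 10, 12), reduced and sorted.
    ALLOWED = sorted({Fraction(p, q)
                      for q in (1, 2, 3, 4, 5, 6, 8, 10, 12)
                      for p in range(q + 1)})

    def round_to_friendly_fraction(x):
        """Round x to the nearest allowed fraction; on ties prefer
        the smaller denominator. Normalizes to [0, 1) first."""
        r = Fraction(x)                        # exact value of the input
        whole = r.numerator // r.denominator   # floor, so the rest is in [0, 1)
        rest = r - whole
        best = min(ALLOWED, key=lambda f: (abs(f - rest), f.denominator))
        return whole + best

    print(round_to_friendly_fraction(0.29))    # 3/10
    print(round_to_friendly_fraction(2.71))    # 27/10, i.e. 2 + 7/10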

Idea 3: Use Continued Fractions

With continued fractions it is possible to express any real number in the form

    \[a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{ \ddots + \cfrac{1}{a_n} }}}\]

or

    \[a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cfrac{1}{\ddots}}}}\]

For integers this is trivial: just use a_0. For negative numbers we can take the negative of the continued fraction of the absolute value, so we can assume a non-integral positive number r_0.
This allows determining a_0 by just taking the integer part of it.
Then we continue with r_1 = \frac{1}{r_0 - a_0} and so on. Either we end up with an integer a_n at some point, which is the case for rational numbers, or the continued fraction goes on infinitely, which is the case for irrational numbers. Now the continued fraction truncated after the first n elements, as in the first form, ignoring what comes after that, can in turn be converted into a rational number by some trivial rational arithmetic. It turns out that this is a very good approximation for the size of its denominator. The funny thing is that this works even for irrational numbers under certain conditions. For example, we could assume that we are dealing with numbers of the form x+y\sqrt{D} for some fixed non-square rational D and rational numbers x and y. It is relatively easy to approximate x+y\sqrt{D}=x_0+y_0\sqrt{D} by the largest integer a_0 that is less than or equal to it. Now we can calculate

    \[\frac{1}{x_k+y_k\sqrt{D}-a_k}=\frac{x_k-a_k-y_k\sqrt{D}}{(x_k-a_k)^2-y_k^2\cdot D}=x_{k+1}+y_{k+1}\sqrt{D}\]

and thus use this algorithm as another way to calculate rational approximations of square roots. In this case the continued fraction becomes periodic, and there is a surprising amount of interesting mathematics behind it, if you like to dig deeper.
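
A minimal Python sketch of both steps, relying on exact rational arithmetic via fractions.Fraction; the floor computation for x+y\sqrt{D} uses a float estimate, which is good enough for a sketch, while a robust implementation would do that comparison exactly:

    import math
    from fractions import Fraction

    def continued_fraction(r, max_terms=20):
        """Expand a Fraction r into continued-fraction coefficients
        [a0, a1, ...]; for rational input the expansion terminates."""
        terms = []
        for _ in range(max_terms):
            a = r.numerator // r.denominator    # integer part (floor)
            terms.append(a)
            r -= a
            if r == 0:
                break
            r = 1 / r                           # exact inversion of a Fraction
        return terms

    def convergent(terms):
        """Collapse [a0, a1, ..., an] back into a single Fraction."""
        value = Fraction(terms[-1])
        for a in reversed(terms[:-1]):
            value = a + 1 / value
        return value

    def sqrt_cf(D, n=8):
        """First n continued-fraction terms of sqrt(D), carrying
        x + y*sqrt(D) as exact pairs (x, y) via the formula above."""
        x, y = Fraction(0), Fraction(1)
        terms = []
        for _ in range(n):
            a = math.floor(float(x) + float(y) * math.sqrt(D))  # float estimate
            terms.append(a)
            d = (x - a) ** 2 - y ** 2 * D       # denominator from the formula
            x, y = (x - a) / d, -y / d
        return terms

    # 3.14159265 yields the classic approximations of pi, 22/7 and 355/113:
    terms = continued_fraction(Fraction(314159265, 100000000))
    print(terms[:4])                                      # [3, 7, 15, 1]
    print(convergent(terms[:2]), convergent(terms[:4]))   # 22/7 355/113

    # sqrt(2) gives the periodic expansion [1; 2, 2, 2, ...]:
    print(sqrt_cf(2))                                     # [1, 2, 2, 2, 2, 2, 2, 2]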


Accident Languages

Some commonly used languages have been quite well designed or at least would have been considered so at the time when they appeared. Even if they have their weaknesses, they should be good for some purposes.

Now the beauty of programming languages is highly subjective, so I do not claim any universal truth here. But I consider the popularity of certain languages an accident, at least for the time when they became highly popular. This does not preclude that good stuff has been done with them, or that they have matured to some extent, or even that their potential replacements from the time of their appearance have since lost relevance. I do not mention the classics COBOL and Fortran, because they were early pioneers of high level languages. And looking at these two languages, one of them does not deserve to be mentioned together with the other one.

Anyway, here we go:

Pascal

Pascal was popular in certain areas before C took off. To a great extent this was because it was used as a teaching language and because Turbo Pascal ran well on an early-1980s PC with a 4.77 MHz CPU, 256k of memory and only floppy disks. This was the hardware that many people could afford with some pain (prices around 5000 DEM in Germany). And it brought some interesting concepts to a wider public. People who knew Algol 68 used to consider it a step backwards. Anyway, in the later 1980s C became available to a broader public, because affordable hardware that could run C compilers reasonably fast and conveniently came into existence. C does have its flaws, but it is a well designed language, and I personally consider the step from Pascal to C one of the most positive ones I have encountered.

Python

Python is a good language with a lot of power. But I see it in a similar space as Ruby, which to me appears as one of the most beautifully designed languages, so why use Python if Ruby could serve the same purpose? While Ruby took off with Rails for some time, Python kind of took over where people were looking for a replacement for Fortran and in data science. It now has really superior libraries in many areas and has achieved a very strong position in DevOps, where it has to a large extent taken over from Perl and Ruby. So Python has really gained traction, and I do recommend learning it.

PHP

PHP came out as an easier-to-use Perl clone for web development. In the 1990s and early 2000s CGI was a common way to do web application development, and Perl was strong in this area. I still consider Perl the better designed language, even though it does have its flaws and weaknesses. A more critical view would be that PHP was an inferior clone that copied only what its creators understood. In terms of building a great open source community it would have been more desirable if the PHP guys had contributed to developing frameworks and libraries that would have made Perl better. But competition is a good thing, even in open source. And PHP has clearly won the competition for the web space against Perl and now has good library and framework support. Some of the negative reputation of PHP comes from the fact that unskilled developers write a lot of PHP code, because they came from the web design space and just added a bit of PHP to their HTML code. That is not the fault of the language. On the positive side, a lot of really great software has been written in PHP, some of which I use on a daily basis, like Wikipedia and its underlying software MediaWiki, WordPress and a lot of others.

VBA

VBA is the development and scripting language of MS Office, which is a useful thing. Like PHP it attracts non-developers who write ugly code, because they use MS Office and enrich it with a bit of VBA, which again is not the fault of the language. I have not really tried it enough, so I let others speak:

We do have to admit that sometimes non-developers "get the job done" with VBA and Excel in a fraction of the time that professional developers would need. The drawback is that such a solution is really limited, because it depends on some Excel sheets on drive C: of a certain team member. Or of course on shared ones, but who is using it when? How many copies are there around? Data and program are together in one file…
Anyway, a lot of useful stuff is done with VBA, and as long as we do not miss the point when it is time to switch to something else, it might not even be a bad idea.

JavaScript

JavaScript was developed in 1995 to give some functionality to the browser. The original idea was to enable applets, but applets have lost their relevance long ago, and JavaScript itself can now do everything that applets could do and much more. JavaScript did bring interesting concepts to a wider public, like lambda expressions or anonymous functions, which had been around for ages in Lisp but had only been known to a small fraction of software developers. The legitimate question is why a new language had to be invented for this, instead of adding an existing and mature language like Perl or Tcl to the browser. They were good candidates for this in 1995; I would not recommend adding them now, because we have more adequate choices today.

Anyway, JavaScript lacked and to my knowledge still lacks decent integer types. All numbers are double precision floating point, which expresses integers of up to 53 bits accurately, but for counting pixels some kind of sufficiently long integer would have been a useful thing.

There was some hope that Dart might establish itself as a successor to JavaScript. But not even Google with Chrome was influential enough to get this established. Dart did survive as one of many languages that can be "compiled" or "transpiled" to JavaScript, alongside TypeScript, CoffeeScript, Clojure, Scala and many others. I was told that even JavaScript itself is transpiled in order to support different versions of the language from one source code base. New hope is coming with WebAssembly, which will establish a common binary language (like the Java VM) on all browsers and allow compiling languages to this target instead of JavaScript.

Again, as with PHP and VBA, many non-developers use JavaScript, coming from the web design background. And again, we have today very good web applications that rely heavily on browser side logic, which is implemented in JavaScript or compiled to JavaScript. This allows for a much better user experience and saves power on the server, which no longer needs to do the rendering. Moving JavaScript to the server side with Node.js looks like a weird idea; we have better languages to work with on the server side than JavaScript. But again, competition between concepts, languages and technologies is a good thing.
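
The 53-bit limit is easy to demonstrate; Python's float is the same IEEE 754 double that JavaScript uses for all its numbers, so a short sketch:

    # IEEE 754 doubles hold integers exactly only up to 2**53.
    print(float(2**53) == float(2**53 + 1))    # True: 2**53 + 1 rounds back down
    print(2.0**53 + 1 == 2.0**53)              # True: adding 1 is simply lost
    print(float(2**53 - 1) == 2**53 - 1)       # True: still exact below the limit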

Conclusion

All these languages have somehow reached a relatively high popularity, even though they were not really the most advanced choices. But a lot of good stuff has been done with them and is still being done today, for example with JavaScript, Python and PHP. That is what counts most.


UUIDs revisited

UUIDs have proven useful in many circumstances.
We have basically two main variants:

  • The UUID is calculated as a combination of the Ethernet MAC address, a timestamp and a counter.
  • The UUID is calculated using a good random number generator.

While variant 1 provides good uniqueness, there are some issues with it. Today we mostly use virtualized servers, which means that the MAC address comes from a configuration file and is no longer guaranteed to be worldwide unique. And we give away some information with the UUID that we do not necessarily want to give away.

Variant 2 can be shown to have an acceptably low risk of collisions, but this is only true when using really good random number generators, which cannot always be guaranteed. Also it introduces an uncertainty in an area where we do not need it: we have to worry about this uniqueness, at least a little bit, which is unnecessary.
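
In Python's standard library the two variants correspond directly to uuid1 (MAC address, timestamp and clock sequence) and uuid4 (random):

    import uuid

    # Variant 1: MAC address + timestamp + clock sequence (RFC 4122 version 1)
    print(uuid.uuid1())   # e.g. 2c5ea4c0-4067-11e9-8bad-9b1deb4d3b7d

    # Variant 2: purely random (RFC 4122 version 4)
    print(uuid.uuid4())   # e.g. 16fd2706-8baf-433b-82eb-8c7fada847da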

So the question is whether we can rethink variant 1.

Assume that our software runs on our own server farm. There may be a few hundred, a few thousand or even millions of virtual or physical servers. Now the organization does have a way to uniquely identify its servers. Of course we only need to consider the servers that are relevant for the application. Maybe an ID for the service instance instead of the server is even better. We may assume a numerical ID, or have a table that maps IP addresses and the like to such an ID; some thinking is still required on how to do this. We can fill the digits that we do not need with random numbers.

Putting this ID in place of the MAC address solves the issue of the configurable MAC address.

The next problem, that timestamps can be abused to find out something that should not be found out, could be resolved by running the timestamp part, or even the ID part (including a random number), through a symmetric encryption or simply some bijective function that is kept secret.
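
A minimal Python sketch of such a scheme; the field widths, the INSTANCE_ID and the XOR stand-in for the secret bijection are all made-up assumptions, not a standard:

    import os
    import time
    import uuid

    INSTANCE_ID = 0x1234_5678_9ABC   # hypothetical 48-bit ID assigned by the organization
    SECRET = int.from_bytes(os.urandom(8), "big") & (2**60 - 1)
    _counter = 0

    def obscure(ts):
        # Stand-in for the "symmetric encryption or secret bijective function"
        # mentioned above; XOR with a secret is bijective but NOT
        # cryptographically strong -- a real version would use a block cipher.
        return ts ^ SECRET

    def custom_uuid():
        """Version-1-like UUID with the instance ID in place of the MAC
        address; for brevity this omits the RFC 4122 version/variant bits."""
        global _counter
        _counter = (_counter + 1) & 0x3FFF                 # 14-bit counter
        timestamp = (time.time_ns() // 100) & (2**60 - 1)  # 60 bits of 100 ns ticks
        value = (obscure(timestamp) << 68) | (_counter << 48) | INSTANCE_ID
        return uuid.UUID(int=value)

    print(custom_uuid())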

In many circumstances there is nothing wrong with customizing the UUID generation to some "local" standard, if this is well understood and carefully implemented.
