Observer Effect

Scientists have to deal with the observer effect: observing something actually changes it. Typically we think of quantum physics, where this effect is very strong and surprising and closely related to the Heisenberg uncertainty principle, but in a more abstract sense it is present in a multitude of situations. Just think of human interaction. If we want to find out about people, we can ask them. But this conversation actually changes the people, sometimes in a way that we can neglect or tolerate, sometimes not.

But we also have this in IT. If we think of a piece of software and we want to observe whether it behaves well, we need ways to observe it. Very often we use logging, sometimes monitoring tools, and sometimes debugging or even profiling. We assume that they do not hurt us, apart from using some resources, but we have to be quite careful. Logging is a good example, because it is common and usually something that we do a lot, without giving it too much thought.

It is already well known that logging slows our application down. So we tend to use a slightly less noisy log level, because terabytes of logs are still a pain, even today. But usually the messages are calculated and then discarded by the logging framework. With functional language features there are quite elegant ways to deal with this: just pass a function that calculates the message on demand, instead of passing the message itself. This has always been possible, but too clumsy to actually do, unless the logging framework could rely on macro facilities, even such simple ones as the C preprocessor. The deferred evaluation has its dangers as well. If an object that is passed as an ingredient for a potential message changes before the message is actually created, we might get funny effects. Maybe only in the log, but maybe it could crash the application or stop the main program flow from doing its work. We need to be careful, unless the object is immutable.
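As an illustration, here is a minimal sketch in Java using Log4j 2, which offers Supplier-based overloads for exactly this purpose; expensiveDescription stands for any costly message calculation:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LazyLogging {

    private static final Logger LOG = LogManager.getLogger(LazyLogging.class);

    public static void main(String[] args) {
        // Eager: expensiveDescription() runs even if DEBUG is disabled.
        LOG.debug("state: {}", expensiveDescription());

        // Lazy: the lambda is only evaluated if DEBUG is actually enabled.
        LOG.debug("state: {}", () -> expensiveDescription());
    }

    private static String expensiveDescription() {
        // stands for an expensive message calculation
        return "...";
    }
}

Note that the lazy variant is exposed to exactly the deferred-evaluation danger described above, if the objects involved are mutable.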

In the case of Hibernate, JPA or similar frameworks this gets especially interesting, even with eager message calculation. Accessing attributes of an entity can actually trigger database operations. These can fail. They can create load, maybe even deadlocks. They can have lost their transaction. A lot of things can happen in places far away from where we assumed to do the DB work, and this actually changes the objects. Do we want such operations to occur during logging? Maybe differently depending on the log level? Immutability is our friend, especially in conjunction with JPA, but that is a long story. We may at least be lucky enough to have some tables that we only read. We can make those objects „pseudo-immutable“, but still the JPA magic must mutate them at least during the read operation. It is tempting to let tools generate the toString methods of entities, but that is very dangerous here. We should avoid including any potentially lazily loaded attributes in the toString output, because otherwise they will be loaded during logging, or even worse, differently depending on how we log.
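To illustrate the point, a minimal sketch of a JPA entity whose toString deliberately leaves out the lazily loaded relation; the Customer and Account entities are of course made up for this example:

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import java.util.List;

@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // accessing this may trigger a database query (or fail),
    // for example outside of a transaction
    @OneToMany(mappedBy = "customer", fetch = FetchType.LAZY)
    private List<Account> accounts;

    @Override
    public String toString() {
        // deliberately leaves out "accounts": logging must not trigger lazy loading
        return "Customer{id=" + id + ", name=" + name + "}";
    }
}

@Entity
class Account {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne
    private Customer customer;
}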

The next thing is the NullPointerException during logging, quite common in Java, for example. We do not want to burden our program logic with NullPointerExceptions from the logging, especially not with those that occur only sporadically. So it is a good idea to be careful and to test well; only the combination of both is possibly good enough.

Modern times create more demand for real multi-threading, not in the JEE sense with a couple of EJBs that can run in parallel, but with massively parallel operations. Even though we have a multitude of logging frameworks, plus frameworks that unify them, and even more of those, they tend to share a common weakness. Writing into one single target is achieved by some kind of synchronization, which can slow our application down and change its timing behavior in ways that we did not desire. Asynchronous logging could help, but in a way it only shifts the problem a bit.


Clojure-Art

It is an interesting idea to generate colorful images or music using programs. In both areas Clojure seems to be quite attractive. Not having explored the music side, I did find the idea of creating images fun and inspiring. It also tells us something about the functions we are working with, if we learn to read the images right, but that will come or not, depending on the circumstances. It is useful not to be too scared of some mathematics when reading this.

Now the challenge is to create an image on a two-dimensional array of points, for example 1000×1000 pixels, with x- and y-coordinates ranging from 0 to 999. Each pixel needs to be colored. While it is very interesting to explore different color models, we can for simplicity assume that we need three numbers, each ranging from 0 to 255, for the red, green and blue channels. This is how most displays work, more or less. Now the goal is to create something that looks good, and of course is reasonable to program; otherwise we could just color one million points individually using for example GIMP, but a million is a lot.

Now we can apply any function to x and y and play around with functions like exp, log, sqrt, sin, cos, tan, sec, csc, sinh, … and of course the basic operations +, -, * and /. It turns out that in most cases we do not get interesting images, but experience will show what is promising to explore. I tried to create pictures by keeping the three channels fairly independent, but this did not work so well. It seems that it is better to keep some connection. One approach that actually works quite well is to consider the pair (x,y) as a complex number z = x+iy and to apply just one complex function to it; again exp, log, sqrt, sin, … are good building blocks. Now these complex functions have a tendency to grow to infinity somewhere. While real functions can avoid this issue by constraining themselves to just one straight line in the plane, complex functions almost have to go to infinity somewhere. By making the square small enough or by changing the scale we can avoid this, but it imposes quite severe constraints. The Riemann sphere allows us to map any complex number to a point on the surface of a sphere. With some scaling we can get to RGB space and obtain coordinates that use, but do not exceed, the desired range. There are more ways to visualize complex numbers, but this is a possibility worth exploring.

Another way is to just use functions that calculate a real number and to apply a \sin to it. With some shifting and scaling the values will be between 0 and 255 only, and there are no abrupt changes in color, unless the function we calculated is very steep or very chaotic. Using phase shifts of \frac{2\pi}{3} and \frac{4\pi}{3}, the three color channels can be served and we get nice rainbow waves like the following:

Clojure Art: angle + log(r)
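The original experiments were done in Clojure, but the idea of the three phase-shifted channels can be sketched in Java as well; the function f, here the angle plus the logarithm of the radius as in the image above, is just one possible choice:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Sketch: color each pixel from one real-valued function f(x, y)
// using three phase-shifted sine waves for the R, G and B channels.
public class RainbowWaves {

    static double f(double x, double y) {
        // example function: angle + log(r) of the point relative to the center
        double r = Math.hypot(x, y);
        double angle = Math.atan2(y, x);
        return angle + Math.log(r == 0 ? 1e-9 : r);
    }

    static int channel(double value, double phaseShift) {
        // shift and scale sin(...) from [-1, 1] into [0, 255]
        return (int) Math.round(127.5 * (1 + Math.sin(value + phaseShift)));
    }

    public static void main(String[] args) throws Exception {
        int size = 1000;
        BufferedImage image = new BufferedImage(size, size, BufferedImage.TYPE_INT_RGB);
        for (int x = 0; x < size; x++) {
            for (int y = 0; y < size; y++) {
                double v = f(x - size / 2.0, y - size / 2.0);
                int rgb = (channel(v, 0) << 16)
                        | (channel(v, 2 * Math.PI / 3) << 8)
                        | channel(v, 4 * Math.PI / 3);
                image.setRGB(x, y, rgb);
            }
        }
        ImageIO.write(image, "png", new File("rainbow.png"));
    }
}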

Another experiment was to assume the HSV model and to calculate the colors by using the function value as the H component. But this ended up looking like plastic and I did not like it too much.

An important issue to observe is that functions may end up in exceptions. I wrapped the functions, so that they do not stop the calculation of the image halfway through, but instead provide default values in cases where an exception occurred.
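Such a wrapper could look roughly like this in Java (the original is in Clojure); the name SafeFunctions and the choice of treating NaN and infinity like exceptions are my own for this sketch:

import java.util.function.DoubleBinaryOperator;

public class SafeFunctions {

    // If the wrapped function throws, or produces NaN or infinity,
    // a default value is used instead, so the image calculation continues.
    public static DoubleBinaryOperator withDefault(DoubleBinaryOperator f, double defaultValue) {
        return (x, y) -> {
            try {
                double result = f.applyAsDouble(x, y);
                return Double.isFinite(result) ? result : defaultValue;
            } catch (RuntimeException e) {
                return defaultValue;
            }
        };
    }
}

For example, SafeFunctions.withDefault((x, y) -> Math.log(x / y), 0.0) yields a function that silently falls back to 0.0 where the logarithm is undefined.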

It can also be fun to explore bitwise-functions like bitxor or even functions like the p-adic exponential function, which yields totally different kind of images.

I have put some of the code from my experiments on GitHub and licensed it under the GPL, so you can use it as a starting point. Others have worked with this as well, for example Clojure Art on Tumblr, Clojure Art Collective on GitHub, another „clojure art“ on GitHub or creative computing with clojure on O’Reilly’s blog.

Enjoy it and learn some Clojure. I sometimes use this when teaching Clojure.


DB Persistence without UPDATE and DELETE

When exploring the usage of databases for persistence, the easiest case is a database that is only used for SELECT. We can cache as much as we like and it is more or less the functional, immutable world brought to the database. For working on fixed data and for analyzing data this can sometimes be useful.

Usually our data actually changes in some way. It has already been discussed in this blog that it would be possible to extend the idea of immutability to the database, which would be achieved by allowing only INSERT and SELECT. Since data can correlate, an INSERT into a table that the application understands as a sub-entity via a one-to-many relationship actually mutates the containing entity. So it is necessary to look at this in terms of the actual OR mapping of all applications that are running on that DB schema.

Life can be simple if we actually have self-contained data, as with MongoDB or with a JSON column in PostgreSQL, for example. Then inter-table relations are eliminated, but of course the data does not even follow the first normal form. This can be OK or not, but at least there are good reasons why best practices have been introduced in the relational DB world, and we should be careful about abandoning them. Another approach is to avoid the concept of sub-entities and only work with IDs that are foreign keys. We can query them explicitly when needed.

An interesting approach is to have two ID columns. One is an id that is unique in the DB table and increasing for newly created data. The other is the entity ID, which is shared between several records referring to different generations of the same object. A new record is generated each time we change something and persist the changes, and in a simple approach we just consider the newest record with a given entity ID valid. This can of course be enhanced with validFrom and validTo. Then each access to the database also includes a timestamp, usually close to the current time, but kept constant across a transaction. Only records for which validFrom <= timestamp < validTo are considered, and among these the newest, i.e. the one with the highest id for the given entity ID. The validFrom and validTo can form disjoint intervals, but it is up to the application logic whether that is needed or not. Deleting records can be simulated as well, by allowing a way to express a "deleted" record, meaning that in case we find such a record by our rules, we pretend not to have found anything at all. But referential integrity is still possible, because the pre-deletion data is still there. This concept of having two IDs was inspired by a talk that I saw at Clojure Exchange 2017: Immutable back to front.
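A minimal sketch of the lookup rule in Java with JDBC; the table and column names (thing, id, entity_id, valid_from, valid_to, deleted) are made up, and a SQL dialect supporting FETCH FIRST is assumed:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.OptionalLong;

public class EntityLookup {

    // Finds the currently valid generation of an entity: among all records with
    // the given entity_id whose [valid_from, valid_to) interval contains the
    // transaction timestamp, the one with the highest id wins.
    public static OptionalLong findCurrentId(Connection connection, long entityId, Timestamp now)
            throws SQLException {
        String sql = "SELECT id, deleted FROM thing"
                + " WHERE entity_id = ?"
                + " AND valid_from <= ? AND ? < valid_to"
                + " ORDER BY id DESC"
                + " FETCH FIRST 1 ROWS ONLY";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setLong(1, entityId);
            statement.setTimestamp(2, now);
            statement.setTimestamp(3, now);
            try (ResultSet resultSet = statement.executeQuery()) {
                if (resultSet.next() && !resultSet.getBoolean("deleted")) {
                    return OptionalLong.of(resultSet.getLong("id"));
                }
                // nothing valid at this timestamp, or a "deleted" marker record
                return OptionalLong.empty();
            }
        }
    }
}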


Lazy Collections, Strings or Numbers

The idea is that we have data that is obtained or calculated on demand, giving us as much of it as we request, but not necessarily present initially. This concept is quite common in the functional world, where we in a way hide the deprecated concept of state in such structures, in a way that lets us retain the benefits that led to the desire for statelessness.

Actually the concept is quite old. We have had it for I/O in Unix, and hence in Linux, since the 1970s. „Everything is a file“, at least as long as we constrain ourselves to a universal subset of possible file operations. It can be keyboard input, a named or anonymous pipe, an actual file or a TCP connection, to name the most important cases. These are „lazy“ files that behave more or less like files as far as sequential reading is concerned, but not for random access reading. The I/O concept takes into account the case that we want to read n bytes, but get only m < n bytes. This can happen with files when we reach their end, but then we can obtain an indication that we reached the end of the file, while it is perfectly possible to read less than we want in one access, but eventually get \ge n bytes including subsequent reads. Since the API has been done right, though by no means ideally, it generalizes well to the different cases that exist in current OS environments.

We could consider a file as an array of bytes. There is actually a way to access it in this way by memory-mapping it, but this assumes a physically present file. Now we could think of the array as a list that is optimized for sequential access and iterating, but not for random access. Both list types actually exist in languages like Java. The random access structure can be made lazy as well, within certain constraints. If the source is actually sequential, we can just obtain the data up to the point where we actually read. The information about the total length of the stream may or may not be available; it is always available somehow in the case of structures that are completely available in memory. Random access on lazy collections works fine if the reason for laziness is to save us from doing expensive operations to obtain data that we do not actually need, or to obtain the data in parallel to the computation that processes it. But we lose another potential benefit in this case: if the data is truly sequential and we only access it sequentially, we can actually process data that is way beyond our memory capacity.

So the concept transfers easily from I/O streams to lists and even arrays, most naturally to iterables that can be iterated only once. But we can easily imagine that this also applies to strings, which can be seen as sequences of characters, if we do not constrain ourselves to what a String is in C or Java or Ruby, but consider String to be a more abstract concept, again possibly dropping the idea of knowing the length or even of having a finite length. Just think of the output of the Unix commands „yes“ or „cat /dev/zero“, which is infinite, in a theoretical way; the computer won’t last forever in real life, of course, and we always interrupt the output at some point, usually by having the consumer shut down the connection.
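For illustration, a tiny Java sketch of such a conceptually infinite string, in the spirit of the command „yes“:

import java.util.stream.Stream;

// A conceptually endless "string" of lines, like the output of "yes":
// only as much is produced as the consumer actually requests.
public class Yes {
    public static void main(String[] args) {
        Stream.generate(() -> "y")
              .limit(5)              // the consumer decides when to stop
              .forEach(System.out::println);
    }
}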

Even numbers can be infinite. For real numbers this can happen only after the decimal point, for p-adic numbers it happens only before the decimal point, if you like to look into that. Since we rarely program with p-adic numbers this is more or less an edge case that is not part of our daily work, unless we actually do math research. But we could have integers with so many digits that we actually obtain and process them sequentially.

Reactive programming, which is promoted by Lightbend in the Reactive Manifesto, relies heavily on lazy structures, in this case data streams. An important concept is the so-called „backpressure“, which allows the consumer to slow down the producer if it cannot read the data fast enough.

Back to the collections, we can observe different approaches. Java 8 has introduced streams as lazy collections, and we need to transform collections into streams and, after the operation, a stream back into a collection, at least in many real-life situations. Putting everything into one structure would have some drawbacks as well. But looking at it from an abstract point of view this does not matter: the Java 8 streams do not implement a collection interface, but they are lazy collections from a more abstract point of view.
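A small example makes the laziness visible; the map step below runs only twice, because the terminal operation never demands more than two elements:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamLaziness {
    public static void main(String[] args) {
        List<Integer> result = Arrays.asList(1, 2, 3, 4, 5).stream()
                .map(n -> {
                    System.out.println("mapping " + n);   // printed only for 1 and 2
                    return n * n;
                })
                .limit(2)
                .collect(Collectors.toList());
        System.out.println(result);                       // prints [1, 4]
    }
}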

It is interesting that this allows us to relatively easily write nested loops where the depth of the nesting is a parameter that is not known at compile time. We just need a lazy collection of n-tuples, where n is the actual depth of the nesting and the contents are whatever the loops should iterate through. In this case we might or might not know the size of the collection, which possibly does not fit into a 32-bit integer. We might be able to produce a random member of the collection. And for sure we can iterate through it and stop the iteration wherever we are, once the desired calculation has been completed.


Is Java becoming non-free?

We are kind of used to the fact that Java is „free“.
It has been free in the sense of „free beer“ pretty much forever.
And more recently also „free“ in the sense of „free speech“.

In spite of the fact that „Oracle is going to monetize on Java“, as can be read in articles like this one, it remains like that, at least for now. This is also written in the article.
But it seems that they are looking for loopholes. For example, we download and install Java SE including X, Y and Z, because it comes like that, and agree to a hundred pages of license text, confirming that we have read and understood everything, as always… Now we really need X, which is the JDK and actually free. But we just accidentally also installed Y and Z, which we do not need, but which carry a price tag on which they are trying to get us.

Even if nothing really happens, issues like that help undermine the trust in the platform in general, not only for Java, but also for other JVM languages. Eventually there could be forks like we have seen with LibreOffice vs. OpenOffice or with MariaDB vs. MySQL, which kind of took over by avoiding the ties to Oracle. Solaris has a similar fork, but in this case people are just moving to Linux anyway, so the issue is less relevant.

These prospects are not desirable, but I think we do not have to panic, because there are ways to solve this that will be pursued if necessary. Maybe it is a good idea to be more careful when installing software. And to think twice when starting a new project about whether Oracle or PostgreSQL is the right DB product in the long term, taking into consideration Oracle’s attitude towards loyal long-term customers.

It is regrettable. Oracle has great technology from their own history and from SUN in databases, Java including the surrounding universe, Solaris and hardware. Let us hope that they will stay reasonable at least with Java.


JMS

Java has never been just a language; it brought us libraries and frameworks as well. Some of them proved to be bad ideas, some became hyped without having any obvious advantages, but some were really good.

In the JEE stack, messaging (JMS) was included pretty much from the beginning. In those days, when Java belonged to Sun Microsystems and Sun did not belong to Oracle, an aim was to support databases, which meant in those days mostly Oracle, via JDBC, and so-called message-oriented middleware, which was available mostly in the IBM world, via JMS. JMS is a common interface for messaging, that is, for sending micro-email-like messages not between humans, but between software components. It can be used within one JVM, but also between geographically distant servers, provided a safe network connection exists. Since we all know email, this is in principle not too hard to understand, but the question is what it really means and whether it brings us something that we do not already have otherwise.

We do have web services as an established way to communicate between different servers across the network, and of course they can also be used locally, if desired. Web services are neither the first nor the only way to communicate between servers, nor are they the most efficient way. But I would say that they are the way we do it in typical distributed applications that are not tied to any legacy. In principle web services are network-capable and synchronous. This is well understood and works fine for many applications. But it also forces us to block processes or threads while waiting for responses, thus occupying valuable resources, and we tend to lose responsiveness because of the waiting. It needs to be observed that DB access is typically only available synchronously. In a way understandable because of the transactions, but it also blocks resources to a huge extent, because we know that the performance of many applications is DB-driven.

Message-based software architectures, on the other hand, think mostly asynchronously. Sending a message is „fire and forget“. There is such a thing as making messages transactional, but this has to be understood correctly. There is one transaction for sending the message; it is guaranteed that the message is sent. Delivery guarantees can only be given to a limited extent, because we do not know anything about the other side, not even whether it is working at all, and this is not checked as part of the transaction. We can imagine, though, that the messaging system has its own transactional database and stores the message there within the transaction. It then retries delivering it forever, until it succeeds. Then the message is deleted from this store as part of the receiving transaction. Both of these transactions can be part of a distributed transaction and thus be combined with other transactions, usually against databases. This is what we usually have in mind when talking about this. I have to mention that the distributed transaction, usually based on the so-called two-phase commit, is not quite as waterproof as we might hope; it can be broken by constructing a worst-case scenario regarding the timing of failures of network and systems. But for practical purposes it is reasonably good to use.
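As a small illustration, a sketch of the sending side with the plain JMS 1.1 API; how the ConnectionFactory and the Queue are obtained depends on the provider and is left open here:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class TransactionalSender {

    public void send(ConnectionFactory factory, Queue queue, String text) throws Exception {
        Connection connection = factory.createConnection();
        try {
            // true = transacted session: messages become visible on commit
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(text));
            session.commit();   // the "fire" of fire-and-forget happens here
        } finally {
            connection.close();
        }
    }
}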

While it is extremely interesting to investigate purely message-based architectures, especially in conjunction with the functional paradigm, this may not be the only choice. Often it is a good option to use a combination of messaging and synchronous services.

We should observe that messaging is a more abstract concept. It can be implemented by some middleware and even be accessible by a standardized kind of interface like JMS. But it can also be more abstract, as a queuing system, or as something like what Akka uses for its internal communication. And messaging is not limited to Java or JVM languages. Interoperability does impose some constraints on how to use it, because it bans the usage of Object messages, which store serialized Java objects, but there are ways to address this by using JSON or BSON or XML or Protocol Buffers as message contents.

What is interesting about JMS and messaging in general are two major communication modes. We can have queues, which are point-to-point connections. Or we can have „topics“, which are channels into which messages are sent; they are then received by all current subscribers of the topic. This is interesting for notifying different components about an event happening in the system, while details about the event may have to be queried via synchronous services or requested by further messaging via queues.
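A corresponding sketch of a topic subscriber with a MessageListener; again the setup of ConnectionFactory and Topic is provider-specific and left open:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

public class EventSubscriber {

    // Every current subscriber of the topic receives its own copy of each
    // published message, asynchronously via the listener.
    public void subscribe(ConnectionFactory factory, Topic topic) throws Exception {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(topic);
        consumer.setMessageListener(message -> {
            try {
                if (message instanceof TextMessage) {
                    System.out.println("event: " + ((TextMessage) message).getText());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        connection.start();   // without start() no messages are delivered
    }
}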

Generally JMS in Java has different implementations. Usually there are those coming with the application servers, and there are also some standalone implementations. They can be operated via the same interface, at least as long as we constrain ourselves to the common set of functionality. So we can exchange the JMS implementation for the whole platform (which is a nightmare in real life), but we cannot mix them, because the wire protocols are usually incompatible. There is now something like a standard network protocol for messaging, which is followed by some, but not all, implementations.

As skeptical as I am about Java Enterprise Edition, I do find the JMS part of enterprise Java very interesting and worth exploring for projects whose size and characteristics justify it.


Collection Libraries

The standard libraries of newer programming languages usually contain so-called collection libraries.

Collections can usually be Lists, Sets, Maps or specializations of these.

They cover quite a lot, and we start seeing variants that are built on immutability and variants that allow mutability, and, as always, the hybrid in Ruby, which combines these and does an irreversible transition using the freeze method.

There are some interesting collection types other than these; most often we find the Bag as a fourth member of the club, and then more complex and more specific collections.

What they all have in common is storing a finite number of elements in a certain structure.

Some languages like Clojure, Haskell or Perl 6 use so-called lazy collections. That can mean that the members are not actually stored, but that there are methods to calculate them on demand. This allows for very interesting, expressive and beautiful programming, if used properly. Typically a range of integers is provided as a lazy collection. But there can also be quite interesting lazy collections that are a little bit more sophisticated. Some allow random access to the nth element, like arrays or vectors or ArrayLists, some only allow iteration.

Interesting lazy collections could be multi-dimensional ranges. Assume we have an array of integers [n_0, n_1, n_2, ...., n_{m-1}], where even m is only known at runtime. Then a challenge that sometimes occurs is to do a loop like this:

for (i_0 = 0\ldots n_0-1) {
for (i_1 = 0\ldots n_1-1) {
for (i_2 = 0\ldots n_2-1) {
\cdots
}
}
}

Which is kind of hard to write, because we cannot nest the loops if we do not know how deeply they need to be nested.

But if we have a multi-range collection and do something like this

Collection<List<Integer>> mr = new MultiRange([n_0, n_1, n_2, ...., n_{m-1}]);
for (List<Integer> li : mr) {
\cdots
}

and this beast becomes quite approachable.
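Such a MultiRange is not part of the standard library, but a minimal sketch of it in Java is not hard to write; the odometer-style iterator below is one possible implementation:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Iterates lazily over all tuples (i_0, ..., i_{m-1}) with 0 <= i_k < n_k,
// like m nested loops where m is only known at runtime.
public class MultiRange implements Iterable<List<Integer>> {

    private final int[] limits;

    public MultiRange(int... limits) {
        this.limits = limits.clone();
    }

    @Override
    public Iterator<List<Integer>> iterator() {
        return new Iterator<List<Integer>>() {
            private final int[] current = new int[limits.length];
            private boolean hasNext = Arrays.stream(limits).allMatch(n -> n > 0);

            @Override
            public boolean hasNext() {
                return hasNext;
            }

            @Override
            public List<Integer> next() {
                if (!hasNext) {
                    throw new NoSuchElementException();
                }
                List<Integer> result = new ArrayList<>(current.length);
                for (int c : current) {
                    result.add(c);
                }
                // increment like an odometer, rightmost digit first
                int pos = current.length - 1;
                while (pos >= 0) {
                    current[pos]++;
                    if (current[pos] < limits[pos]) {
                        break;
                    }
                    current[pos] = 0;
                    pos--;
                }
                hasNext = pos >= 0;
                return Collections.unmodifiableList(result);
            }
        };
    }
}

With this, for (List<Integer> li : new MultiRange(2, 3, 2)) iterates through all twelve tuples from [0, 0, 0] to [1, 2, 1], without ever materializing the whole collection.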

A similar one, that is sometimes needed, is a lazy collection containing all the permutations of the n numbers \left\{0\ldots n-1\right\}. Again we only want to iterate over it and possibly not complete the iteration.

Another interesting idea is to perform set operations like union, intersection and difference lazily. That means that we have a collection class Union that implements the union of its members. Testing for membership is trivial; iteration involves some additional structure to avoid duplicates. Intersection and difference are even easier, because they cannot produce duplicates.
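A toy sketch of such a lazy union view in Java (Guava’s Sets.union follows the same idea); nothing is copied, membership and iteration are computed on demand:

import java.util.Set;
import java.util.stream.Stream;

public final class LazyUnion<T> {

    private final Set<T> a;
    private final Set<T> b;

    public LazyUnion(Set<T> a, Set<T> b) {
        this.a = a;
        this.b = b;
    }

    // membership is trivial
    public boolean contains(T element) {
        return a.contains(element) || b.contains(element);
    }

    // iteration needs a little extra care to avoid duplicates:
    // elements of b that are already in a are skipped
    public Iterable<T> elements() {
        return () -> Stream.concat(a.stream(), b.stream().filter(x -> !a.contains(x))).iterator();
    }
}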

What is also interesting is Sets built from intervals. Intervals can be defined in any base set {\mathrm T} (type) that supports comparisons like <, <=, ... We have

  • an open interval (a,b)=\left\{x \in {\mathrm T} : a < x < b\right\}
  • a left half-open interval (a,b]=\left\{x \in {\mathrm T} : a < x \le b\right\}
  • a right half-open interval [a,b)=\left\{x \in {\mathrm T} : a \le x < b\right\}
  • a closed interval [a,b]=\left\{x \in {\mathrm T} : a \le x \le b\right\}

Of these we can create unions and intersections, and in the end we can always reduce this to unions of intervals. Adjacent intervals can sometimes be merged, overlapping intervals always. If {\mathrm T} supports the concept of successors, then even closed intervals with different limits can be discovered to be adjacent, for example [1,2] and [3,4] for {\mathrm T}={\Bbb Z}. Often this cannot be assumed, for example if we are working with rational numbers with arbitrarily long integers as numerator and denominator.

So these are three concepts to get memory-saving, easy-to-use lazy collections.


Alpine Perl Workshop

On 2016-09-02 and 2016-09-03 I was able to visit the Alpine Perl Workshop. This was a Perl conference with around 50 participants, among them core members of the Perl community. We had mostly one track, so the documented information about the talks that were given correlates quite closely to the list of talks that I actually attended.

We had quite a diverse set of talks about technical issues, but also about the role of Perl in projects and in general. The talks were in English and German…

Perl 6 is now a reality. It can be used together with Perl 5; there are ways to embed them within each other and they seem to work reasonably well. This fills some of the gaps of Perl 6, whose set of modules is by far not as complete as that of Perl 5.

Perl 5 has for quite a few years had an established time-boxed release schedule. Each year they ship a new major release, and the previous two releases are supported with bugfixes. The danger that major Linux distributions remain on older releases has been averted. Python 3 was released in 2008, and still in 2016 Python 2.7 is what is usually used and shipped with major Linux distributions. It looks like Perl 5 is there to stay, not to be replaced by Perl 6, which is a quite different language that just shares the name and the community. But the recent versions are actually adopted, and the incompatible changes are so small that they usually do not hurt too much. An advantage of Perl is the CPAN repository for libraries. It is possible to test new versions against a ton of such libraries and to find out where things might break, or even to provide fixes for the libraries.

An interesting issue is the testing of software. For continuous integration we can now find servers that will run our tests against a configurable set of Perl versions. But using different Linux distributions or even non-Linux systems becomes a more elaborate issue. People willing to test new versions of Perl or of libraries on exotic hardware and OSes are still welcome, and often they discover a weakness that might be of interest even for the mainstream platforms in the long run.

I will leave it at this. You can find more information on the web site of the conference.

And some of the talks are on YouTube already.

It was fun to go there, I learned a lot and met nice people. It would be great to be able to visit a similar event again…


Integers in Perl 6

The language Perl 6 was announced to be production-ready at the beginning of this year. Its implementation is Rakudo, while Perl 6 itself is an abstract language definition that allows any language implementation that passes the test suite to call itself a Perl 6 implementation. The idea is not totally new; we see the Ruby language being implemented more than once (Ruby, JRuby, Rubinius, IronRuby), but we can also learn from the Ruby guys that it is a challenge to keep this up to date, and eventually it is likely that one implementation will fall behind or go its own way at some point.

Perl 6 is also called „Perl“ as part of its name, but it is quite different from its sister language Perl, which is sometimes called „Perl 5“ to emphasize the distinction. So it is absolutely necessary to call it „Perl 6“ or maybe „Rakudo“, but not just „Perl“.

Even though many things can be written in a similar way, a major change compared to Perl 5 is the way of dealing with numeric types. You can find an article describing numeric types in Perl 5. So now we will see how to do the same things in Perl 6.

Dealing with numeric types in Perl 6 is neither like in Perl 5 nor like what we are used to in many other languages.

So when we just use numbers in a naïve way, we get long integers automatically:

my $f = 2_000_000_000;
my $p = 1;
loop (my Int $i = 0; $i < 10; $i++) {
    say($i, " ", $p);
    $p *= $f;
}

creates this output:

0 1
1 2000000000
2 4000000000000000000
3 8000000000000000000000000000
4 16000000000000000000000000000000000000
5 32000000000000000000000000000000000000000000000
6 64000000000000000000000000000000000000000000000000000000
7 128000000000000000000000000000000000000000000000000000000000000000
8 256000000000000000000000000000000000000000000000000000000000000000000000000
9 512000000000000000000000000000000000000000000000000000000000000000000000000000000000

This is a nice default, similar to what Ruby, Clojure and many other Lisps use, but most languages have made a choice that is weird for application development.

Now we can also statically type this:

my Int $f = 2_000_000_000;
my Int $p = 1;
loop (my Int $i = 0; $i < 10; $i++) {
    say($i, " ", $p);
    $p *= $f;
}

and we get the exact same result:

0 1
1 2000000000
2 4000000000000000000
3 8000000000000000000000000000
4 16000000000000000000000000000000000000
5 32000000000000000000000000000000000000000000000
6 64000000000000000000000000000000000000000000000000000000
7 128000000000000000000000000000000000000000000000000000000000000000
8 256000000000000000000000000000000000000000000000000000000000000000000000000
9 512000000000000000000000000000000000000000000000000000000000000000000000000000000000

Now we can actually use low-level machine integers which do arithmetic modulo powers of 2, usually 2^{32} or 2^{64}:

my int $f = 2_000_000_000;
my int $p = 1;
loop (my Int $i = 0; $i < 10; $i++) {
    say($i, " ", $p);
    $p *= $f;
}

and we get the same kind of results that we would get in Java or C with (signed) long, if we are in a typical 64-bit environment:

0 1
1 2000000000
2 4000000000000000000
3 -106958398427234304
4 3799332742966018048
5 7229403301836488704
6 -8070450532247928832
7 0
8 0
9 0

We can try it in Java. I was lazy and changed as little as possible; the "$" is allowed as part of variable names by the language, but of course not by the coding standards:

public class JavaInt {
    public static void main(String[] args) {
        long $f = 2_000_000_000;
        long $p = 1;
        for (int $i = 0; $i < 10; $i++) {
            System.out.println($i + " " +  $p);
            $p *= $f;
        }
    }
}

We get this output:

0 1
1 2000000000
2 4000000000000000000
3 -106958398427234304
4 3799332742966018048
5 7229403301836488704
6 -8070450532247928832
7 0
8 0
9 0

And we see, with C# we get the same result:

using System;

public class CsInt {

    public static void Main(string[] args) {
        long f = 2000000000;
        long p = 1;
        for (int i = 0; i < 10; i++) {
            Console.WriteLine(i + " " +  p);
            p *= f;
        }
    }
}

gives us:

0 1
1 2000000000
2 4000000000000000000
3 -106958398427234304
4 3799332742966018048
5 7229403301836488704
6 -8070450532247928832
7 0
8 0
9 0

If you like, you can try the same in C using signed long long (or whatever is 64 bits), and you will get the exact same result.

Now we can simulate this in Perl 6 also using Int, to understand what int is really doing to us. The idea has already been shown with Ruby before:

my Int $MODULUS = 0x10000000000000000;
my Int $LIMIT   =  0x8000000000000000;
sub mul($x, $y) {
    my Int $result = ($x * $y) % $MODULUS;
    if ($result >= $LIMIT) {
        $result -= $MODULUS;
    } elsif ($result < - $LIMIT) {
        $result += $MODULUS;
    }
    $result;
}

my Int $f = 2_000_000_000;
my Int $p = 1;
loop (my Int $i = 0; $i < 10; $i++) {
    say($i, " ", $p);
    $p = mul($p, $f);
}

and we get the same again:

0 1
1 2000000000
2 4000000000000000000
3 -106958398427234304
4 3799332742966018048
5 7229403301836488704
6 -8070450532247928832
7 0
8 0
9 0

The good thing is that the default has been chosen correctly as Int, and that Int makes it easy to do integer arithmetic with arbitrary precision.
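For comparison, the same loop in Java needs an explicit opt-in to arbitrary precision via BigInteger, and then reproduces the output of the Int version above:

import java.math.BigInteger;

public class JavaBigInt {
    public static void main(String[] args) {
        // arbitrary precision is opt-in in Java, default in Perl 6
        BigInteger f = BigInteger.valueOf(2_000_000_000L);
        BigInteger p = BigInteger.ONE;
        for (int i = 0; i < 10; i++) {
            System.out.println(i + " " + p);
            p = p.multiply(f);
        }
    }
}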

Now the question is, how we actually get floating point numbers. This will be covered in another blog posting, because it is a longer story of its own interest.


Some Thoughts about Incompleteness of Libraries

Self-written Util Libraries

Today we have really good libraries with our programming languages and they cover a lot of things. The funny thing is that we usually end up writing some util classes like StringUtil, CollectionUtil, NumberUtil etc. that cover common tasks not found in the libraries that we use. Usually it is no big deal and the methods are trivial to write. But then again, not having them in the library results in several slightly different ad hoc solutions for the same problem, sometimes flawless, sometimes somewhat weak, that are spread throughout the code, and maybe eventually some „tools“, „utils“ or „helper“ classes that unify them and cover them in a somewhat reasonable way.

Imposing Util Libraries on all Developers

In the worst case these self-written library classes really suck, but are imposed on the developers. Many years ago it was „company standard“ to use a common library for localizing strings. The concept was kind of nice, but it had its flaws. There was a company-wide database for localizing strings in order to save on translation costs, but the overhead was considerable, and there was a real chance that the same short string means something different in the context of different applications. This could be addressed by creating a label that somehow included the application ID, bypassing the overhead whenever a collision was detected. What was worse, the new string made it into a header file, which caused the whole application to be recompiled, unless a hand-written make file skipped this dependency, which was of course against company policy as well and meant a lot of work. In those days compilation of the whole application took about 8 (eight!) hours. Maybe seven. So after adding one string it took 8 hours of compile time to continue working with it. Anyway, there was another implementation of the same concept for another operating system that used hash tables and did not require recompilation. It had the risk of runtime errors because of undefined strings, but it was at least reasonable to work with. I ported this library to the operating system that I was using and used it, and during each meeting I had to commit to the long-term goal of changing to the broken library, which of course never happened, because there were always higher priorities.

I think the lesson we can already learn is that such libraries that are written internally and imposed on all developers should be done very well. Senior developers should be involved, and if the company does not have them, they should be hired externally, not to do the whole development, but to help doing it right.

Need for Util Libraries

So why not just go with the given libraries? Or download some more? Depending on the language there are really good libraries around. Sometimes that is the way to go. Sometimes it is good to write a good util library internally. But then it is important to do it well, to include only stuff that is actually needed or reasonably likely to be needed, and to avoid major effort for reinventing the wheel. Some obscure libraries actually become obsolete when the main default library gets improved.

Example: Trigonometric and other Mathematical Functions

Most of us do not do a lot of floating point arithmetic, and consequently we do not need the trigonometric functions like \sin and \cos, other transcendental functions like \exp and \log, or functions like the cube root (\sqrt[3]{x}) a lot. Where the default set of these functions ends is somewhat arbitrary, but of course we need to go to special libraries at some point for more special functions. We can look at what early calculators used to have and what advanced math text books in schools cover. We have to consider the fact that the commonly used set of trigonometric functions differs from country to country. Americans tend to use six of them, \sin, \cos, \tan, \cot, \sec and \csc, which is kind of beautiful, because it really completes the set. Germans tend to use only \sin, \cos, \tan and \cot, which is not as beautiful, but at least avoids the division by zero and the issue of transforming \tan to \cot. Calculators usually had only \sin, \cos and \tan, but they offered them in three flavors, with modes of „DEG“, „RAD“ and „GRAD“. The third one was kind of an attempt to metricize degrees by having 100 {\rm gon} instead of 90^\circ for a right angle, which seems to be a dead idea. Of course in advanced mathematics and physics „RAD“, which uses \frac{\pi}{2} instead of 90^\circ, is common, and that is what all programming languages that I know use, apart from the calculators. Just to explain the functions for those who are not familiar with the whole set, we can express the last four in terms of \sin and \cos:

  • \tan(x) = \frac{\sin(x)}{\cos(x)} (tangent)
  • \cot(x) = \frac{\cos(x)}{\sin(x)} (cotangent)
  • \sec(x) = \frac{1}{\cos(x)} (secans)
  • \csc(x) = \frac{1}{\sin(x)} (cosecans)

Then we have the inverse trigonometric functions, which can be denoted with something like \arcsin or \sin^{-1}, for all six trigonometric functions. There is an irregularity to keep in mind. We write \sin^n(x) instead of (\sin(x))^n for n=2,3,4,\ldots, which is the multiplication of that number of \sin(x) terms. But we use \sin^{-1}(x) to apply the function \sin „-1 times“, which is actually the inverse function. Mathematicians have invented this irregularity and usually it is convenient, but it confuses those who do not know it. Of these functions many programming languages offer only \tan^{-1}, assuming the other five can be created from that. This is true, but cumbersome, because it requires differentiating a lot of cases using something like if, so there are likely to be many bugs in software doing this. Also these ad hoc implementations lose some precision.

It was also common to have a conversion from polar coordinates to rectangular coordinates (p2r) and vice versa (r2p), which is kind of cool and again easy, but not too trivial to do ad hoc. Something like atan2 in FORTRAN, which does the essence of the harder r2p operation, would work as well, depending on how convenient it is to deal with multiple return values. We can then do r2p using r=\sqrt{x^2+y^2}, \phi ={\rm atan2}(x, y) and p2r using x=r \sin(\phi) and y = r \cos(\phi).
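In Java these two helpers could look like this; the sketch follows the convention used above with \phi={\rm atan2}(x, y), and uses Math.hypot for a robust \sqrt{x^2+y^2}:

public final class PolarCoordinates {

    private PolarCoordinates() {
    }

    // polar (r, phi) to rectangular (x, y), with x = r*sin(phi), y = r*cos(phi)
    public static double[] p2r(double r, double phi) {
        return new double[] { r * Math.sin(phi), r * Math.cos(phi) };
    }

    // rectangular (x, y) to polar (r, phi), with phi = atan2(x, y)
    public static double[] r2p(double x, double y) {
        return new double[] { Math.hypot(x, y), Math.atan2(x, y) };
    }
}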

The hyperbolic functions like \sinh and their inverses like \arsinh or \sinh^{-1} are rarely used, but we find them on the calculator and in the math book, so we should have them in the standard floating point library. There is only one flavor of them.

Logarithms and exponential functions are found in two flavors on calculators: \log(x)=\log_{10}(x)=\lg(x) and \ln(x)=\log_{e}(x), as well as 10^x and e^x=\exp(x). The log is kind of confusing, because in mathematics, in physics and in most current programming languages we mean \log(x)=\log_{e}(x) (the natural logarithm). This is just a wrong naming on calculators, even if all vendors made the same mistake and probably still do in the scientific calculator apps on the phone or on the desktop. As IT people we tend to like the base two logarithm {\rm ld}(x)=\log_2(x), so I would tend to add that to the list. Just to make the confusion complete, in some informatics text books and lectures the term „\log“ refers to the base two logarithm. It is a bad habit, and at least laziness should favor writing the correct „{\rm ld}“.

Then we usually have power functions x^y, which surprisingly many programming languages do not have. If they do, they are usually written as x ** y or pow(x, y); also square root, square and maybe cube root and cube. Even though the square root and the cube root can be expressed as powers using \sqrt{x}=x^\frac{1}{2} and \sqrt[3]{x}=x^\frac{1}{3}, it is better to have them as dedicated functions, because they are used much more frequently than any other powers with non-integral exponents, and it is possible to write optimized implementations that run faster and more reliably than the generic power function, which usually needs to go via log and exp. Internal optimization of power functions is usually a good idea for integral exponents and can easily be achieved, at least if the exponent is actually of an integer type.

Factorial and binomial coefficient are usually used for integers, which is not part of this discussion. Extensions for floating point numbers can be defined, but they are beyond the scope of advanced school mathematics and of common scientific calculators, so I do not think that they are needed in a standard floating point library. It is a question of its own interest what should be in an „advanced math library“, but \sec, \tanh^{-1} and {\rm ld} for sure belong into the base math library.

That’s it. It would be easy to add all of these to the standard library of any programming language that does floating point arithmetic at all. It would be helpful for those who work with this, and it would not hurt those who do not use it, because this stuff is really small compared to most of our libraries. So this would be the list:

  • sin, cos, tan, cot, sec, csc in two flavors
  • asin, acos, atan, acot, asec, acsc (standing for \sin^{-1}…) in two flavors
  • p2r, r2p (polar coordinates to rectangular and reverse) or atan2
  • sinh, cosh, tanh, coth, sech, csch
  • asinh, acosh, atanh, acoth, asech, acsch (for \sinh^{-1}…)
  • exp, log (for e^x and logarithm base e)
  • exp10, exp2, log10, log2 (base 10 and base 2, I would not rely on knowledge that ld and lg stand for log2 and log10, respectively, but name them like this)
  • sqrt, cbrt (for \sqrt{x} and \sqrt[3]{x})
  • ** or pow with double exponent
  • ** or pow with integer exponent (maybe the function with double exponent is sufficient)
  • \frac{1}{x}, x^2, x^3, x^\frac{1}{y} are maybe actually not needed, because we can just write them using ** and /

Actually pretty much every standard library contains sin, cos, tan, atan, exp, log and sqrt.

Java

Java is actually not so bad in this area. It contains the atan2, sinh, cosh, tanh, asin, acos, atan, log10 and cbrt functions, beyond the common baseline mentioned above. And it contains conversions from degrees to radians and vice versa. As you can see in the source code of pow, the calculations are actually quite sophisticated and done in C. It seems to be inspired by GNU Classpath, which did a similar implementation in Java. It is typical that a function that has a uniform mathematical definition gets very complicated internally, with many cases, because depending on the parameters different ways of calculation provide the best precision. It is quite possible that this function is so good that calling it with an integer as second parameter, which is then converted to a double, is good enough and leaves no need for a specific function with an integer exponent. I would tend to assume that this is the case.

In this GitHub project we can see what a library could look like that completes the list above, includes unit tests and also works for the edge cases, which ad hoc solutions often do not. What could be improved is providing the best possible precision for any legitimate parameters, which I would see as an area of further investigation and improvement. The general idea is applicable to almost any programming language.

Two areas that have been known for a great need of such additional libraries are collections and date & time. I would say that a lot of what I would wish from a decent collection library has been addressed by Guava. Getting date and time right is surprisingly hard; just think of the year 2000 problem to see the significance of this issue. I would say Java had this one messed up, but Joda Time was a good solution and has made it into the standard distribution of Java 8.

Summary

This may serve as an example. There are usually some functions missing for collections, strings, dates, integers etc. I might write about them as well, but they are less obvious, so I would like to collect some input before writing about that.

The libc on Linux seems to contain sin, cos, tan, asin, acos, atan, atan2, sinh, cosh, tanh, asinh, acosh, atanh, sqrt, cbrt, log10, log2, exp, log, exp10, exp2. Surprisingly Java does not make use of these functions, but comes up with its own implementations.

Actually a lot of functionality is already in the CPU hardware. The IEEE recommendations suggest quite an impressive set of functions, but they are all optional and sometimes the accuracy is poor.

But standard libraries should be slightly more complete, and ideally there would be no need to write a „generic“ util library. Such libraries should only be needed for application-specific code that is somewhat generic across some projects of the organization, or when doing a really demanding application that needs more powerful functionality than can easily be provided in the standard library. Ideally these can be donated to the developers of the standard library and included in future releases, if they are generic enough. We should not forget that even programming languages that are mainstream and used by thousands of developers all over the world are usually maintained by quite small teams, sometimes only working part time on this, and it is usually hard for an outsider to get even a good improvement into their code base.

So what functions do you usually miss in the standard libraries?
