Logging


Software often contains logging functionality. Usually entries of one or sometimes multiple lines are appended to a file, written to syslog or written to stdout, from where they are redirected into a file. They tell us something about what the software is doing. Usually we can ignore most of it, but as soon as something with „ERROR“ or worse shows up, and even more visibly stack traces, we should investigate. Unfortunately software is often not so good in this respect, which can be due to libraries, frameworks or our own code. Then stack traces and errors are so common that it is hard to find the ones that are really worth looking into. Or there is simply no complete process in place to watch the log files. Sometimes the error shows up much later than it actually occurred and stack traces do not really lead us to the right spot. More often than we think, logging actually introduces runtime errors that were otherwise not present. This is related to a more general concept called the observer effect, where the logging actually changes the business logic.

It is nice when log files adhere to some format. Usually they start with a timestamp in ISO format, often to the millisecond. Please add trailing zeros to always have three digits after the decimal point in this case. It is preferable to use UTC, but people tend to stick to local date and time zones, including the issues that come with switching to and from daylight saving time. Usually we have several processes or threads that run simultaneously. This can result in a wild mix of log entries. As long as even multiline entries stay together and as long as the beginning and end of one multiline entry can easily be recognized, this can be dealt with. Tools like Splunk or simple Perl, Ruby or Python scripts can help us to follow threads separately. We could actually have separate logs for each thread in the first place, but this is not a common practice and it might hit OS limitations on the number of open files if we have many threads or even thousands of actors as in Erlang or Akka. Keeping log entries together can be achieved by using an atomic write, like the write system call in Linux and other POSIX systems. Another way is to queue the log entries and to have a logger thread that processes the queue.
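The queue-based variant could look roughly like the following minimal sketch; the class name is made up and the write to stdout is just a placeholder for the real file or syslog writer:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of a queueing logger: callers enqueue complete entries and a single
// background thread writes them, so even multiline entries stay together.
public class QueueLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public QueueLogger() {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    String entry = queue.take();    // blocks until an entry is available
                    System.out.println(entry);      // placeholder for the actual file or syslog write
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // allow the writer thread to terminate
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    public void log(String entry) {
        queue.offer(entry);                         // cheap and non-blocking for the business threads
    }
}

A real implementation would also have to flush and close the output on shutdown, but the principle is just this.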

Overall this area has become very complex and hard to tame. In the Java world there used to be log4j with a configuration file that was a simple properties file, at least in the earlier versions. This was so good that other languages copied it and created some log4X. Later the config file was replaced by XML and more logging frameworks were added, quite a lot of them just for the purpose of abstracting from the large zoo of logging frameworks and providing a unique interface for all of them. So the result was that there was one more framework to deal with.

It is a good question how much logic for handling log files we really want to see in our software. Does the software have to know into which file it should log or how to do log rotation? If a configuration determines this, but the configuration is compiled into the jar file, it does have to know… We can keep our code a bit cleaner by relying on program functionality without code, but this still keeps it part of the software.

Log files have to please the system administrator, or whoever has replaced him in a pure devops shop. And in the end developers will have to be able to work with the information provided by the logs to find issues in the code or to explain what is happening, if the system administrator cannot resolve an issue by himself. Should this system administrator have to deal with a different, specially complex logging setup for each piece of software he is running? Or should it be necessary to call for developer support to get a new version of the software with just another log setting, because the configurations are hard coded in the deployment artifacts? It is also interesting what happens when we use PaaS, where we have application server, database etc., but the software can easily move to another server, which might result in losing the logs. Moving logs to another server or logging across the network is expensive, maybe more expensive than the rest of this infrastructure.

Is it maybe a good idea to just log to stdout, maintaining a decent format, and to run the software in such a way that stdout is piped into a log manager? This can be the same for all software and there is one way to configure it. The same means not only the same for all Java programs, but actually the same for all programs in all languages that comply with a minimal standard. This could be achieved using named pipes in conjunction with whatever hard coded log file the software wants to use. But this is a dangerous path unless we really know what the software is doing with its log files. Just think of what weird errors might happen if the software tries to apply log rotation to the named pipe by renaming, deleting and creating new files and so on. A common trick to stop software from logging into a place where we do not want this is to create a directory with the name of the file that the software usually uses and to write protect this directory and its parent directory for the software. Please find out how to do this in detail, depending on your environment.

What about software that is a filter by itself, so its main functionality is to actually write useful data to stdout? Usually smaller programs and scripts work like this. Often they do not need to log and often they are well tested, reliable parts of our software installation. Where are the log files of cp, ls, rm, mv, grep, sort, cat, less, …? Yes, they do tend to write to stderr if real errors occur. Where needed, programs can turn on logging with a log file provided on the command line, which is also a quite operations friendly approach. Named pipes can help here.

And we had a good logging framework in place for many years. It was called syslog and it is still around, at least on Linux.

A last thought: we spend really a lot of effort to get well performing software, using multiple processes, threads or even clusters. And then we forget about the fact that logging might become the bottleneck.


Some thoughts about String equality

Of course Strings today are in some way Unicode. In this article we assume code points as the building blocks of Strings. In the Java world that means, for example, that we are talking about one code point being comprised of one Java character for typical European languages using the Latin, Greek or Cyrillic alphabets, including the extensions needed to support all languages typically written with these alphabets. But when moving to Asian languages, a code point can also consist of two Java characters, and there are Strings that are illegal from the Unicode perspective, because they contain characters that should be combined in a way that cannot be combined properly. So here we assume that Strings consist of sequences of bytes or two-byte characters or whatever encoding properly expresses a sequence of code points. There are many interesting issues when dealing with some Asian languages that we will not cover here today.

Now there are a lot of possibilities to create Strings that look the same but are actually different. We are not talking about „0“ and „O“ or „1“ and „l“ and „I“, which might look similar in some fonts but should not look similar, because we actually depend on their distinctness, even on their visual distinctness. Unfortunately we have the bad habit of using traditional typewriter fonts that make it hard to distinguish these for source code, where it would be so crucial. But for today we just assume that we always look hard enough to solve this issue.

The classical example of what looks the same is whitespace. We have ordinary space “ “ and no-break space “ „, which are meant to look exactly the same but to expose slightly different behavior. There are tons of possibilities to create exactly the same look with different combinations of whitespace. But this is kind of a special case, because in terms of semantics whitespace often carries little information and we want to disregard it to some extent when comparing strings. Typical examples are stripping leading and trailing whitespace of the string or of the lines contained within it, and replacing tabulators with the number of spaces that would be equivalent. Or even replacing any amount of adjacent whitespace within a line by a single space. Again, handling of different whitespace code points might require different rules, so it is good to be careful not to put too much logic into this and better to rely on a library to at least apply exactly the same rules in equivalent situations.
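A small sketch of such a whitespace-insensitive comparison in Java, with the caveat that \s does not cover the no-break space and other Unicode whitespace:

public class WhitespaceNormalizer {
    // compare two strings while disregarding the amount of whitespace, along the lines described above
    public static boolean equalsIgnoringWhitespace(String a, String b) {
        return normalize(a).equals(normalize(b));
    }

    // strip leading and trailing whitespace and collapse any run of whitespace into one space;
    // to also catch the no-break space the pattern would have to be extended, for example with \p{Z}
    private static String normalize(String s) {
        return s.trim().replaceAll("\\s+", " ");
    }
}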

Another example that we might actually know is that certain characters look the same or almost the same in the Cyrillic, Greek and Latin alphabets. I try to give an idea of the meaning of the Greek and Cyrillic characters, but they depend on the language, the dialect and even the word, the word form or the actual occurrence of the letter in the word…

Latin | Cyrillic | Greek | meaning of Cyrillic letter | meaning of Greek letter
A     | А        | Α     | like Latin                 | like Latin
B     | В        | Β     | like Latin V               | Beta (like V in new Greek)
C     | С        |       | like Latin S               |
E     | Е        | Ε     | like Latin                 | Epsilon (like Latin E)
      | Г        | Γ     | like Latin G               | Gamma (like Latin G)
H     | Н        | Η     | like Latin N               | Eta (like Latin I in new Greek)
J     | Ј        |       | Serbian Ј, like German J   |
K     | К        | Κ     | like Latin                 | Kappa (like Latin K)
M     | М        | Μ     | like Latin                 | Mu (like Latin M)
N     |          | Ν     |                            | Nu (like Latin N)
O     | О        | Ο     | like Latin                 | Omikron (like Latin O)
P     | Р        | Ρ     | like Latin R               | Rho (like Latin R)
      | П        | Π     | like Latin P               | Pi (like Latin P)
T     | Т        | Τ     | like Latin                 | Tau (like Latin T)
      | Ф        | Φ     | like Latin F               | Phi (like Latin F)
X     | Х        | Χ     | like German CH             | Chi (like German CH)
Y     | У        | Υ     | like Latin U               | Upsilon (like Latin U)
Z     |          | Ζ     |                            | Zeta (like German Z)
I     | І        | Ι     | Ukrainian I                | Iota (like Latin I)

In this case we usually want the characters to look the same or at least very similar, because that is how they are correctly displayed, but we do want them to be different when comparing strings.
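A small illustration in Java: the Latin and the Cyrillic capital A render identically in most fonts, but they remain distinct code points and therefore distinct strings:

public class HomoglyphExample {
    public static void main(String[] args) {
        String latin = "A";         // LATIN CAPITAL LETTER A, U+0041
        String cyrillic = "\u0410"; // CYRILLIC CAPITAL LETTER A, U+0410
        System.out.println(latin.equals(cyrillic));   // false: same glyph, different code points
        System.out.println((int) latin.charAt(0));    // 65
        System.out.println((int) cyrillic.charAt(0)); // 1040
    }
}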

While these examples are kind of obvious, there is another one that we tend to ignore, but that will eventually catch us. There are so-called combining characters, which should actually be named „combining code points“, but here we go. That means that we can put them after a letter and they will combine with it to form a letter with diacritical marks. A typical example is the letter „U“ that can be combined with two dots “ ̈ ̈“ to form an „Ü“, which looks the same as the „Ü“ that is composed of one code point. It is meant to look the same, and it also has the same meaning, at least for most purposes. What we see is the glyph. We see the difference when we prefix each code point with a minus or a space: the combined form is transformed like this: „Ü“ -> „-U-̈“ or “ U ̈“, while the single-code-point form is transformed like this: „Ü“ -> „-Ü“ or “ Ü“, as we would expect.

While this way of expressing the glyph with two code points is not very well known and thus not very common, we actually see it already today when we look at Wikipedia articles. In some languages, where the pronunciation is ambiguous, it can be made clear by putting an accent mark on one vowel, as for example Кириллица, which puts an accent mark on the term in the beginning of the article like this: „Кири́ллица“. Since in the Cyrillic alphabet accent marks are unfortunately not used in normal writing, it comes in handy that the combining accent also works with Cyrillic letters. When putting minus signs between the code points it looks like this: „К-и-р-и-́-л-л-и-ц-а“ or with spaces like this: „К и р и ́ л л и ц а“. So Strings that we encounter in our programs will contain these combining characters in the future. While we could prohibit them, it is better to embrace this, and it is actually not too hard if we use decent libraries. Java has the Normalizer class in its standard library, which can convert to one or the other convention of expressing such glyphs and thus allows comparison in the way that we actually mean.
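A minimal example: the two spellings of „Ü“ compare as different strings, but become equal after normalizing both to the composed form (NFC):

import java.text.Normalizer;

public class NormalizerExample {
    public static void main(String[] args) {
        String composed = "\u00DC";  // „Ü“ as a single code point
        String combined = "U\u0308"; // „U“ followed by a combining diaeresis
        System.out.println(composed.equals(combined)); // false: different code point sequences
        String n1 = Normalizer.normalize(composed, Normalizer.Form.NFC);
        String n2 = Normalizer.normalize(combined, Normalizer.Form.NFC);
        System.out.println(n1.equals(n2));             // true: both are now the composed form
    }
}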

Unfortunately issues like semantic lengths of strings or semantic positions become even harder than they already are after moving from characters to code points. And we can be sure that Unicode has still more to offer to complicate things, if we dig deeper. The typical answer that we get on most web sites that talk about these issues is something like: „The length of strings and positions within strings are surprisingly irrelevant to most programs.“

At the end of the day, tasks that have been trivial in the past are now becoming a big deal, and we need to learn to think of comparison, length, position, regular expressions, sorting and all kinds of string functionality with bytes, characters, code points and glyphs in mind.

What can our current libraries already do for us, what are we missing in them, considering different programming languages, databases, text files and network transmission?



Program Functionality without Code

Depending on the programming language and the frameworks we use, it is possible to have program functionality that does not happen in actual code that we write in that language. It seems weird, but actually it is something that we have been doing for decades, and it has been sold to us as extremely powerful and useful, which it sometimes actually is. Aspect-oriented programming is mostly based on this idea…

Typical examples are things we want to be taken care of, but that we do not want to actually write ourselves…

Some of these look really great, and who wants to deal with memory management today? Unless we do real-time programming or special security code where information may not exist in memory for longer than the actual processing, this is just what we are successfully and without too much pain doing all the time.

While some of these look really great and have become more or less omnipresent, there is also some danger in having very powerful implicit functionality, like transaction management. While it looks tempting to delegate transaction management to a framework, because it is annoying and not really understood very well by most application developers, there comes some danger with it. This is even worse if it is used in conjunction with something like JPA or Hibernate… Assume we have a framework that wraps methods marked with an annotation like „@Transactional“, meaning that this method call should be wrapped into a transaction (Java-like pseudo-code):

@Transactional
public X myMethod(Y y) {
    X result = do_something(y);
    return result;
}

being roughly equivalent to

public X myMethod(Y y) {
    TransactionContext ctx = getTransactionContext();
    try {
        ctx.beginTransaction();
        X result = do_something(y);
        ctx.commit();
        return result;
    } catch (Exception ex) {
        ctx.rollback();
        throw ex;
    }
}

Yes, it is more elegant to just annotate it.
But now we program something like this:

@Transactional
public Function myMethod(Y y) {
      ....
}

where we actually enclose something in the function that we give back. Now when calling the function later, we might get an error, because it encloses stuff from the time when the transaction was still open, while it has already been committed by the time the function is actually called. So in frameworks that force the usage of such annotated transaction handling, such beautiful functional style programming patterns may actually not work and need to be avoided or at least constrained to the cases that do still work. This can be a reasonable price to pay, but it is important to understand the constraints that come with this implicit functionality.
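To make the problem more concrete, here is a hypothetical sketch in the same Java-like pseudo-code; Customer, repository and the lazy loading behavior are made up for illustration, and the exception is just an example of what a lazy loading framework such as Hibernate may throw:

// Hypothetical example: the returned Supplier escapes the transaction.
@Transactional
public Supplier<String> customerNameSupplier(long customerId) {
    Customer customer = repository.find(customerId);     // managed entity, orders loaded lazily
    return () -> customer.getOrders().get(0).getName();  // the lazy access is deferred to the caller
}

// Somewhere else, long after the transaction has been committed:
Supplier<String> supplier = service.customerNameSupplier(42L);
String name = supplier.get();  // may fail, e.g. with a LazyInitializationException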

Another interesting area that comes with a lot of potential implicit functionality is authorization. Assume we have a company that sells some services or products and we have key account managers who use the software we have written. Now for whatever reasons they should only be able to see the data about their own customers, possibly also the data of customers for whose key account manager they are the deputy. Or if they are the boss of some key account managers, maybe they can see all of their data…

Now a function

List<Customer> listCustomers() {
...
}

gives different results depending on who is using it. This introduces an implicit, invisible parameter. And however smart the user of this software is, he only sees what he is supposed to see, unless the software has some vulnerabilities, which it probably has.
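Under the hood this might look roughly like the following sketch; SecurityContext, User and customerRepository are hypothetical names, the point is only that the current user enters the method without being a parameter:

// Hypothetical sketch: the current user acts as an implicit, invisible parameter.
public List<Customer> listCustomers() {
    User currentUser = SecurityContext.getCurrentUser(); // taken from a thread-local, not from the arguments
    return customerRepository.findByKeyAccountManagerOrDeputy(currentUser);
}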

So whenever we read such code that we have not written ourselves or have not written yesterday, there may be surprises about what it does. It is an interesting question how to test this with a good coverage of all constellations of implicit parameters. Anyway, we have to get used to it and embrace it; it is an integral part of our software ecosystem. But it is also important to use these powerful mechanisms only where they are really so helpful that they are worth the loss in clarity and explicitness.

While annotations are at least found in the place where they apply, there are also other ways. Typically XML files can be used to configure such things. Or it can be done programmatically in a totally different place of the software, by setting up some hooks, for example. Without good documentation or good information flow within the team, this may be hard to find.


Intervals

Intervals are subsets of a universe that are defined by lower and upper boundaries. Typically we think about real numbers, but any totally ordered universe allows the definition of intervals.

Intervals are defined by lower and upper boundaries, which can be a limiting number or unlimited, typically written as \infty for the upper bound and -\infty for the lower bound. The boundaries can be included or excluded. So the following combinations exist for a universe X:

(-\infty, \infty)=X
unlimited
(-\infty, a] = \{ x \in X : x \le a\}
half open, lower unlimited
(-\infty, a) = \{ x \in X : x < a\}
open, lower unlimited
[b, \infty) = \{ x \in X : x \ge b\}
half open, upper unlimited
(b, \infty) = \{ x \in X : x > b\}
open, upper unlimited
(c, d) = \{ x \in X : c < x < d\}
open
[c, d) = \{ x \in X : c \le x < d\}
half open
(c, d] = \{ x \in X : c < x \le d\}
half open
[c, d] = \{ x \in X : c \le x \le d\}
closed
\emptyset
it is sometimes useful to consider the empty set as an interval as well

The words „open“ and „closed“ refer to our usual topology of real numbers, but they do not necessarily retain their topological meaning when we extend the concept to our typical data types. a, b, c and d in the notation above do not have to be members of X, as long as the comparison is defined between them and all members of X. So we could for example meaningfully define for X=\mathbb{Q} the interval (\sqrt{2}, \pi) = \{ x\in\mathbb{Q} : \sqrt{2} < x < \pi\}.

As soon as we do not imply X=\mathbb{R} we always have to make this clear… And \mathbb{R} is kind of hard to really work with in software on computers with physically limited memory and CPU power.

Intervals have some relevance in software systems.

We sometimes have business logic that actually relies on them, and instead of somehow programming around it, it is clearer and cleaner to actually work with intervals. For example, we can have a public transport scheduling system that deals with certain time intervals in which different schedules apply than during the rest of the day. Or we have a system that records downtimes of servers and services, and these are quite naturally expressed as intervals of some date-time datatype. It is usually healthy to consider all the boundary cases mentioned above rather than ignoring them because they occur with probability zero, or working with ugly interval limits like 22:59:59.999.

The other case is interval arithmetic. This means we do floating point calculations taking into account that we have an inaccuracy. So instead of numbers we have intervals. When we add two intervals I and J, we get I+J=\{x+y : x\in I \wedge y\in J\}. In the same way we can multiply and subtract and even divide, as long as we stay clear of zero in the denominator. Or more generally we can define f(I_1, I_2, \ldots, I_n)=\{f(x_1,x_2,\ldots,x_n) : x_1\in I_1 \wedge x_2 \in I_2 \wedge\ldots\wedge x_n\in I_n\}.
It does of course require some mathematical thinking to understand if the result is an interval again or at least something we can deal with reasonably. Actually we are usually happy with replacing the result by an interval that is possibly a superset of the exact result, ideally the minimal superset that can be expressed with our boundary type.
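As a small illustration, a minimal sketch of such interval arithmetic on double boundaries might look like this; it ignores the outward rounding of the boundaries that a serious implementation would have to apply:

// Minimal sketch of arithmetic on closed intervals [lower, upper] of doubles.
// A real library would additionally round lower bounds down and upper bounds up.
public record DoubleInterval(double lower, double upper) {

    public DoubleInterval plus(DoubleInterval other) {
        return new DoubleInterval(lower + other.lower, upper + other.upper);
    }

    public DoubleInterval times(DoubleInterval other) {
        // the extreme values of x*y are attained at the corners of the rectangle I x J
        double a = lower * other.lower;
        double b = lower * other.upper;
        double c = upper * other.lower;
        double d = upper * other.upper;
        return new DoubleInterval(Math.min(Math.min(a, b), Math.min(c, d)),
                                  Math.max(Math.max(a, b), Math.max(c, d)));
    }
}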

At this point we will probably discover a desire to expand the concept of intervals in a meaningful way to complex numbers. We can do this by working with open disks like \{ z \in \mathbb{C} :  |z-z_0| < d\} or closed disks like \{ z \in \mathbb{C} :  |z-z_0| \le d\}. Or with rectangles based on two intervals I and J like \{ z = x+iy \in \mathbb{C} : x \in I \wedge y \in J \}.

These two areas are quite interesting and sometimes useful. Libraries have been written for both of them.

Often we discover that intervals alone are not quite enough. We would like to do set operations with intervals, that is

union
A\cup B
intersection
A \cap B
set difference
A \setminus B

While the intersection works just fine, as long as we include the empty set \emptyset as an interval, unions and differences lead us to non-intervals. It turns out that interval-unions, sets that can be expressed as a union of a finite number of intervals, are a useful generalization and actually what we want to work with rather than with plain intervals. In this case we can drop the empty set as an interval and just express it as the union of zero intervals.

There are some questions coming up that are interesting to deal with:

normalization
Can we normalize interval-unions to some canonical form that allows safe and reliable comparison for equality?
adjacency
Is our universe X actually discrete, so that we can express all open boundaries with closed boundaries?
interval lengths
Do we have a meaningful and useful way to measure the length of an interval or the total length of an interval-union, as long as they are limited? Or even for unlimited intervals?
collection interfaces
Do we want to implement a Set interface in languages that have sets and an understanding of sets that would fit intervals?
implementation
How can we implement this ourselves?
implementation
Can we find useful implementations?

Having written a Java library to support interval-unions on arbitrary Comparable types once in a project, and having heard a talk about an interval library in Scala that ended up using interval-unions in a pretty similar way, it might be interesting to write in the future about how to do this or what can be found in different languages to support us. For interval arithmetic some work was done, while I was a student, to create extensions or libraries for C and Fortran that support it. So this is pretty old stuff and interesting mostly for the concepts, even if we are not going to move to Fortran because of this.
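A very reduced sketch of such an interval type over arbitrary Comparable types could start like this; a real library would of course also need open and unlimited boundaries and the interval-union type on top of it:

import java.util.Optional;

// Reduced sketch: closed, non-empty intervals over any Comparable type.
public class Interval<T extends Comparable<T>> {
    private final T lower;
    private final T upper;

    public Interval(T lower, T upper) {
        if (lower.compareTo(upper) > 0) {
            throw new IllegalArgumentException("lower must not be greater than upper");
        }
        this.lower = lower;
        this.upper = upper;
    }

    public boolean contains(T x) {
        return lower.compareTo(x) <= 0 && x.compareTo(upper) <= 0;
    }

    // the intersection of two closed intervals is again a closed interval or empty
    public Optional<Interval<T>> intersection(Interval<T> other) {
        T lo = lower.compareTo(other.lower) >= 0 ? lower : other.lower;
        T hi = upper.compareTo(other.upper) <= 0 ? upper : other.upper;
        return lo.compareTo(hi) <= 0 ? Optional.of(new Interval<>(lo, hi)) : Optional.empty();
    }
}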

If there is interest I will write more about actual implementations and issues to address when using or writing them.



Serialization


Serialization allows us to store objects in a lossless way or to transfer them over the network. This could be done before, but it was necessary to program the serialization mechanism ourselves, which was a lot of work. Of course, they were not yet called objects in those days…

Java suddenly had such a serialization for (almost) all objects without any additional programming effort. This does not mean that automatic serialization did not exist before, but it was made popular with Java, because frameworks started to rely heavily on it. To use it ourselves, we just had to use ObjectOutputStream and ObjectInputStream, and objects could be stored, written across the network or just cloned. It was even able to handle circular references, which most serialization mechanisms cannot do. The idea was not really new, as other languages had something like this before, but few people were aware of it.

But there are some drawbacks that were discovered when it was already too late and that should at least be mentioned.

  • Marking with a Serializable interface is conceptually quite a bad solution, because it implies that serializability never gets lost in derived classes, which is just not true. An Unserializable interface would have been a much better, if not almost ideal, solution for this, because trivial objects are always serializable and they lose this property when something non-serializable is added. Then again, how about collections… Today possibly an annotation could also be helpful.
  • The serialVersionUID creates a lot of pain. Should we change it whenever the interface changes? We are talking about the implicit serialization interface, not about an explicit interface that we can easily see. Should we trust automatic mechanisms? In any case issues with incompatible versions remain that are not really solved well and cannot even easily be solved well.
  • Serialization introduces an additional invisible constructor.
  • Serialization undermines the idea of private and protected, because suddenly private and protected member attributes become part of the interface.
  • Funny effects happen with serializable non-static inner classes, because their serialized version bakes in the containing outer object. Yes, it has to…
  • The object identity gets lost when an object is serialized. It is easy to create several copies of the same object.
  • Sometimes it is still necessary to manually write serialization code, for example for singletons. It is easy to forget this, because everything seems to work just fine automatically.
  • Java’s Serialization is quite slow.
  • The format is binary and cannot easily be read. A pluggable serialization format that could allow more human readable data files like JSON, XML,… would have been better…
  • Serialization creates a temptation to use this format for communication, which again forces a tight coupling that might not be necessary otherwise.
  • Serialization creates a temptation to use it as a storage format instead of mature database technologies. Very bad: the second level cache of Hibernate…

There were some advantages in having this serialization in the past and for some purposes it kind of works. But it is important to question it and to consider other, more solid approaches, even if they require slightly more work. Today it is generally considered one of the largest fallacies of Java to have introduced this serialization mechanism in this way. There are now better ways to do serialization that require a bit more work, but avoid some of the terrible shortcomings of the native Java serialization.

For the serialVersionUID there are several approaches that can work. A static method that extracts the revision from an „$Id$“ string managed by svn can be one way. It will avoid compatibility between even slightly different versions, which is probably the best we can get. With git it is a bit harder, but it can be done as well.
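Such a static helper could look roughly like the following sketch, assuming svn keyword expansion is configured for the file; the parsing is deliberately simplistic:

// Sketch: derive the serialVersionUID from the svn revision in the expanded $Id$ keyword.
// An expanded keyword looks like "$Id: MyClass.java 12345 2018-06-01 12:00:00Z someuser $".
public class MyClass implements java.io.Serializable {

    private static final String SVN_ID = "$Id$"; // expanded by svn on checkout if keyword expansion is active

    private static final long serialVersionUID = extractRevision(SVN_ID);

    private static long extractRevision(String id) {
        String[] parts = id.split("\\s+");
        // parts[2] is the revision number in an expanded keyword; fall back to 0 for an unexpanded "$Id$"
        return parts.length > 2 ? Long.parseLong(parts[2]) : 0L;
    }
}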

Usually it is by far the best choice to leave the serialVersionUID out and rely on Java’s automatic mechanism. It is not perfect, but better than 99% of the badly maintained manual serialVersionUIDs. If you want to manage your serialVersionUIDs yourself, there needs to be a checklist of what to do to release a new version of a file, a library or a whole software system. This usually creates a lot of work and even more errors, and it should really be done only with good reasons, good discipline and a very good concept. If you like to use a serialVersionUID anyway, or if you are forced to do so by the project, here is a script to create them randomly:

#!/usr/bin/perl
# generate a random 64-bit value that can be used as a serialVersionUID
use bigint;
use Math::Random::Secure qw(irand);
my $r = (irand() << 32) + irand();  # combine two cryptographically random 32-bit values into one 64-bit value
printf "%20d\n", $r;

This is still better than using the IDE-generated value and keeping it forever, or starting with 1 or 0 and keeping it forever, because updating the serialVersionUID is not really on our agenda. And it shouldn't be.


Why am I learning Python

To be honest, what can be done with Python can also be done with Perl or Ruby. I am not working in areas where there is much better library support for Python than for Perl or Ruby. And I like Perl and Ruby very much, while I am somewhat skeptical about Python. But there are some points that make it worth knowing Python in addition to Perl and Ruby, not instead of them.

I strongly recommend using real programming languages like Perl, Ruby or Python (and you can add some more) instead of Bash scripts when a certain complexity is exceeded. Try reading the pure bash scripts that are used to start Maven, Tomcat or other useful software. Often there is a CMD script as well, which is the real pure horror. Python serves this purpose well enough, as do the other two, of course.

It is always good to learn new languages once in a while, because they extend our horizon and help us even to be better with our more preferred languages. And why not challenge the preferences…

There is a good point in allowing for a toolbox of languages, not „only Java“ or „only C#“ or „only C“ or even „only Perl“, whatever you like… Combining a useful toolbox of several languages is the right way to go. This would be the case with a toolbox containing A and B, where A ∈ { C, C++, Java, C#, F#, Scala, Clojure, … } and B ∈ { Perl, Perl6, Ruby, Lua, Python, … }. Usually it is a good idea to make it slightly larger, but it is also good to find a consensus on which set of languages to concentrate on. I would for example discourage using sed and awk, because they can quite easily be replaced by Perl, and I would limit bash to very trivial scripts. There are some cases in which an awk or sed script is a bit shorter than doing the same in Perl or Ruby, but this does not justify maintaining the extra knowledge, while Perl on top of Java does justify it. So the toolbox should be big enough to cover everything, but it does not have to be too redundant, and there can be preferences for which tool to use for a certain class of purposes, if this recommendation is reasonable. This makes it easier to maintain each other’s code. Now there are many projects where the spot of B is taken by Python. So in order to be a good team player it might be useful to be able to work with the Python scripts, write in this language and contribute, instead of spending too much time talking about why Perl or Ruby or Lua or whatever is better. Which it might be. Or which might be more a matter of taste. Here is what big sites are using as A, B, C, …

Now out of these scripting languages, Python is for sure a successful contender. This results in good libraries, but also in a higher likelihood of Python occupying the spot of B.

Now we have tools like Jenkins, Kubernetes, Docker, cloud computing, Spark and simply certain Linux distributions, which might come along with their preferred set of scripting languages that are well supported for performing certain tasks. This can be delegated to one or two guys in the team or kept to a minimum, but it might become a factor of increasing importance. It might force us to have multiple „Bs“ or even multiple „As“.

And there are certain areas where Python is simply strong and has become the language of choice. It seems to have become the successor of Fortran for many if not most numerical calculation areas, even though there will probably always be a niche for powerful compiled languages like Fortran and C for ultimate performance. But then the library is written in C with Python bindings, and we get most of the performance as well. Also Data Science seems to mostly opt for Python as the general purpose language besides R, SQL and SAS. Even bioinformatics, which was a stronghold of Perl for many years, is now preferring Python… Yes, it does hurt someone who likes Perl, but it is true… So to be able to work in many interesting areas it is useful to know some Python. So I started learning it. I am using a Russian translation of the book Programming Python.

I might write a bit more about the language, once I have some more experience with it.


Meaningless Whitespace in Textfiles

We use different file formats that are more or less tolerant to certain changes. The best known issue is whitespace in text files.

In some programming languages whitespace (space, newline, carriage return, form feed, tabulator, vertical tab) has no meaning, as long as any whitespace is present at all. Examples for this are Java, Perl, Lisp or C. Whitespace that is part of string content is always significant, but whitespace that is used to structure the program can be any combination of one or more of the whitespace characters in the lower 128 positions (ISO 646, often referred to as ASCII or 7-bit ASCII). It is of course recommended to have a certain coding standard, which gives some guidelines on when to use newlines, whether tabs or spaces are preferred (please spaces) and how to indent. But this is just about human readability and the compiler does not really care. Line numbers are somewhat meaningful in compiler and runtime error messages and stack traces, so putting everything into one line would do harm beyond readability, but there is a wide range of formattings that are all correct and equivalent. Btw. many teams limit lines to 80 characters, which was a valid choice 30 years ago, when some terminals were only 80 characters wide and 132-character-wide terminals were just coming up. But as a hard limit it is a joke today, because not many of us would be able to work with a vt100 terminal efficiently anyway. Very long lines might be harder to read, so anything around 120 or 160 might still be a reasonable idea for line lengths…

Languages like Ruby and Scala put slightly more meaning into whitespace, because in most cases a semicolon can be skipped if it is followed by a newline rather than just horizontal whitespace. And Perl (Perl 5) is for sure so hard to parse that only its own implementation can properly format it or even recognize which whitespace is part of a literal string. Special cases like having the language in a string and parsing and then executing that should be ignored here.

Now we put these program files into a source code management system, usually Git. Some teams still use legacy systems like Subversion, SourceSafe, ClearCase or CVS, while there are some newer systems that are probably about as powerful as git, but I have never seen them in use. Git creates a SHA-1 hash of each file, which implies that any minor change will result in a new version, even if it is just whitespace. Now this does not hurt too much if we agree on the same formatting and on the same line ending (hopefully LF only, not CR LF, even on MS-Windows). But our tooling does not make any difference between significant changes and insignificant formatting-only changes. This gets worse if team members use different IDEs, which they should, because everyone should use the IDE or editor with which he or she is most efficient, and the formal description of the preferred formatting is not shared between editors or differs slightly.

I think that each programming language should come with a command line diff tool and a command line formatting tool that obey a standard interface for calling and can be plugged into editors and into source code management systems like git. Then the same mechanisms would work for C, Java, C#, Ruby, Python, Fortran, Clojure, Perl, F#, Scala, Lua or your favorite programming language.

I can imagine two ways of working: either we have a standard format and possibly individual formats for each developer. During „git commit“ the file is brought into the standard format before it is shown to git, so meaningless whitespace changes disappear. During checkout the file can optionally be brought into the preferred format of the developer. And yes, there are ways to deal with deliberate formatting that for some reason should be kept verbatim, and to deal differently with comments and of course all kinds of string literals. Remember, the formatting tool comes from the same source as the compiler and fully understands the language.

The other approach leaves the formatting up to the developer and only creates a new version when the diff tool of the language signifies that there is a relevant change.

I think that we should strive for this approach. It is no rocket science; these kinds of tools have been around for many decades as diff and formatting tools. It would just be necessary to go the extra mile, create sister diff and formatting tools for the compiler (or interpreter) and actually integrate them into build environments, IDEs, editors and git. It would save a lot of time and leave more time for solving real problems.

Is there any programming language that actually does this already?

How to handle XML? Is XML just the new binary with a bit more bloat? Can we do a generic handling of all XML or should it depend on the Schema?


Loops with unknown nesting depth

We often encounter nested loops, like

for (i = 0; i < n; i++) {
    for (j = 0; j < m; j++) {
        doSomething(i, j);
    }
}

This can be nested a few more levels without too much pain, as long as we observe that the numbers of iterations of all levels multiply to give the number of iterations of the whole thing, and that total numbers of iterations beyond a few billion (10^9, German: Milliarden, Russian: миллиарды) become unreasonable no matter how fast doSomething(...) is. Just look at this example program

public class Modular {
    public static void main(String[] args) {
        long n = Long.parseLong(args[0]);
        long t = System.currentTimeMillis();
        long m = Long.parseLong(args[1]);
        System.out.println("n=" + n + " t=" + t + " m=" + m);
        long prod = 1;
        long sum  = 0;
        for (long i = 0; i < n; i++) {
            long j = i % m;
            sum += j;
            sum %= m;
            prod *= (j*j+1) % m;
            prod %= m;
        }
        System.out.println("sum=" + sum + " prod=" + prod + " dt=" + (System.currentTimeMillis() - t));
    }
}

which measures its net run time and needs 0 msec for 1000 iterations and about five and a half minutes for 10 billion (10^{10}) iterations:

> java Modular 1000 1001 # 1'000
--> sum=1 prod=442 dt=0
> java Modular 10000 1001 # 10'000
--> sum=55 prod=520 dt=1
> java Modular 100000 1001 # 100'000
--> sum=45 prod=299 dt=7
> java Modular 1000000 1001 # 1'000'000
--> sum=0 prod=806 dt=36
> java Modular 10000000 1001 # 10'000'000
--> sum=45 prod=299 dt=344
> java Modular 100000000 1001 # 100'000'000
--> sum=946 prod=949 dt=3314
> java Modular 1000000000 1001 # 1'000'000'000
--> sum=1 prod=442 dt=34439
> java Modular 10000000000 1001 # 10'000'000'000
--> sum=55 prod=520 dt=332346

As soon as we do I/O, network access, database access or simply a bit more serious calculation, this becomes of course easily unbearably slow. But today it is cool to deal with big data, or to at least call what we are doing big data, even though conventional processing on a laptop can do it in a few seconds or minutes... And there are of course ways to process way more iterations than this, but then it becomes worth thinking about the system architecture, the hardware, parallel processing and of course algorithms and software stacks. But here we stay in the "normal world", which can be a "normal subuniverse" of something really big, so running on one CPU and using a normal language like Perl, Java, Ruby, Scala, Clojure, F# or C.

Now sometimes we encounter situations where we want to nest loops, but the depth is unknown, something like

for (i_0 = 0; i_0 < n_0; i_0++) {
  for (i_1 = 0; i_1 < n_1; i_1++) {
    \cdots
      for (i_m = 0; i_m < n_m; i_m++) {
        dosomething(i_0, i_1,\ldots, i_m);
      }
    \cdots
  }
}

Now our friends from the functional world help us to understand what a loop is, because in some of these more functional languages the classical C-style loop is either missing or at least not recommended as the everyday tool. Instead we view the set of values we iterate over as a collection and iterate through every element of that collection. This can be a bad thing, because instantiating such big collections can be a show stopper, but we don't instantiate them. Out of the many features of collections we just pick the iterability, which can very well be accomplished by lazy collections. In Java we have the Iterable, Iterator, Spliterator and Stream interfaces to express such potentially lazy collections that are just used for iterating.

So we could think of a library that provides us with support for ordinary loops, allowing us to write something like this:

Iterable<Integer> range = new LoopRangeExcludeUpper<>(0, n);
for (Integer i : range) {
    doSomething(i);
}

or even better, if we assume that a lower limit of 0 is the default anyway:

Iterable<Integer> range = new LoopRangeExcludeUpper<>(n);
for (Integer i : range) {
    doSomething(i);
}

with the ugliness of boxing and unboxing in terms of runtime overhead, memory overhead and additional complexity for development. In Scala, Ruby or Clojure the equivalent solution would be elegant and useful and the way to go...
I would assume that a library that does something like LoopRangeExcludeUpper in the code example should easily be available for Java, maybe even in the standard library, or in some common public Maven repository...
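For the one-dimensional case with int values the standard library actually covers this since Java 8: IntStream.range iterates over 0, 1, ..., n-1 without boxing, and LongStream.range does the same for long values:

import java.util.stream.IntStream;

// standard library equivalent of LoopRangeExcludeUpper for int values, no boxing involved
IntStream.range(0, n).forEach(i -> doSomething(i));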

Now the issue of loops with unknown nesting depth can easily be addressed by writing or downloading a class like NestedLoopRange, which might have a constructor of the form NestedLoopRange(int... ni) or NestedLoopRange(List<Integer> li) or something with collections that are more efficient with primitives, for example from Apache Commons. Consider using long instead of int, which will break some compatibility with Java collections. This should not hurt too much here, and it is a good thing to reconsider the 31-bit size field of Java collections as an obstacle for future development and to address how collections can grow larger than 2^{31}-1 elements, but that is just a side issue here. We broke this limit with the example iterating over 10'000'000'000 values of i already and it took only a few minutes. Of course it was just an abstract way of dealing with a lazy collection without the Java interfaces involved.

So, the code could just look like this:

Iterable<Tuple> range = new NestedLoopRange(n_0, n_1, \ldots, n_m);
for (Tuple t : range) {
    doSomething(t);
}

Btw, it is not too hard to write it in the classical way either:

        long[] n = new long[] { n_0, n_1, \ldots, n_m };
        int m1 = n.length;
        int m  = m1 - 1; // just to have the math-m matched...
        long[] t = new long[m1]; // the current index tuple, starts at (0, 0, ..., 0)
        for (int j = 0; j < m1; j++) {
            t[j] = 0L;
        }
        boolean done = false;
        // if any loop range is empty, there is nothing to iterate at all
        for (int j = 0; j < m1; j++) {
            if (n[j] <= 0) {
                done = true;
                break;
            }
        }
        while (! done) {
            doSomething(t);
            // increment the tuple like an odometer: first index first, carrying over when a limit is reached
            done = true;
            for (int j = 0; j < m1; j++) {
                t[j]++;
                if (t[j] < n[j]) {
                    done = false;
                    break;
                }
                t[j] = 0;
            }
        }

I have written this kind of loop several times in my life in different languages. The first time was in C64 Basic when I was still at school, and the last one was written in Java and shaped into a library, where appropriate collection interfaces were implemented. That library remained in the project or the organization where it had been written, but it could easily be written again, maybe in Scala, Clojure or Ruby, if it is not already there. It might even be interesting to explore how to write it in C in a way that can be used as easily as such a library in Java or Scala. If there is interest, please let me know in the comments section, I might come back to this issue in the future...
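To give an idea, here is a sketch of how the same odometer logic can be wrapped into a lazy Iterable, so that it can be used in a for-each loop; the class name and the long[] tuple type are of course just one possible choice:

import java.util.Arrays;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Sketch: lazy Iterable over all index tuples, incremented odometer-style as in the loop above.
public class NestedLoopRange implements Iterable<long[]> {

    private final long[] limits;

    public NestedLoopRange(long... limits) {
        this.limits = limits.clone();
    }

    @Override
    public Iterator<long[]> iterator() {
        return new Iterator<long[]>() {
            private final long[] current = new long[limits.length]; // starts at (0, 0, ..., 0)
            private boolean done = Arrays.stream(limits).anyMatch(n -> n <= 0);

            @Override
            public boolean hasNext() {
                return !done;
            }

            @Override
            public long[] next() {
                if (done) {
                    throw new NoSuchElementException();
                }
                long[] result = current.clone(); // defensive copy, since current is mutated below
                done = true;
                for (int j = 0; j < limits.length; j++) {
                    current[j]++;
                    if (current[j] < limits[j]) {
                        done = false;
                        break;
                    }
                    current[j] = 0;
                }
                return result;
            }
        };
    }
}

It would then be used like for (long[] t : new NestedLoopRange(n_0, n_1, n_2)) { doSomething(t); }.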

In C it is actually quite possible to write a generic solution. I think an API like this might work:

struct nested_iteration {
  /* implementation detail */
};

void init_nested_iteration(struct nested_iteration *ni, size_t m1, const long *n);
void dispose_nested_iteration(struct nested_iteration *ni);
int nested_iteration_done(const struct nested_iteration *ni); // returns 0=false or 1=true
void nested_iteration_next(struct nested_iteration *ni);
const long *nested_iteration_values(const struct nested_iteration *ni); // current index tuple of length m1

and it would be called like this:

struct nested_iteration ni;
long n[] = { n_0, n_1, \ldots, n_m };
for (init_nested_iteration(&ni, m+1, n);
     ! nested_iteration_done(&ni);
     nested_iteration_next(&ni)) {
...
}

So I guess it is doable and reasonably easy to program and to use, but of course not quite as elegant as in Java 8, Clojure or Scala.
I would like to leave this as a rough idea and maybe come back with concrete examples and implementations in the future.



Microsoft is buying Github

It seems that Microsoft is buying Github for about 7.5 billion USD worth of their own stock.

Is this a good thing or a bad thing? Probably no reason to celebrate, but Microsoft under Nadella seems to run a totally different strategy than Microsoft under Steve Ballmer. Selling licenses of MS-Windows and MS-Office is probably still a good business, if done efficiently, but it cannot be the future of a company the size of Microsoft. The dominating operating system is now Linux, mostly due to cell phones and tablets running Android, network devices and of course servers. The religious rejection of open source, as it was propagated by Steve Ballmer, is no longer visible. Some MS projects like C# and F# have in large parts or completely become open source, and on the MS-Azure platform virtual servers with Linux and PostgreSQL can be chosen, with the limitation that there seems to be a need for an MS-Windows box with dedicated software to manage the Azure cloud services. If we ask Richard Stallman, this is probably all tactics to achieve bad goals by harming the free software movement. This can be true or not.

But we no longer have to automatically assume that this acquisition is bad. Most likely Github will continue to operate as it did before. Nadella will not shut down Github, while Ballmer might have done so, maybe. He would not have bought Github for so much money anyway. The real asset of Github, as of LinkedIn and Skype before, might for Microsoft be the large collection of high quality identity data. Today Google and Facebook have identity data for about two thirds of the adult world population, by my estimation. We see it in action when we find a „login with Google“ or „login with Facebook“ option for a web site. Yes, „login with Skype“, „login with Github“ and „login with LinkedIn“ would also be possible or even exist. This is the spot where these acquisitions might or might not be seriously negative. But increasing the pool of identity data makes a lot of sense for the new strategy of the company, and I would assume that it was one of the major reasons for this acquisition of Github, as it was for Skype and LinkedIn before. Is it worth so much money? The answer is not quite obvious, but we will probably know in the future.



Scala Days 2018

In 2018 I visited Scala Days in Berlin with this schedule. Other than in previous years, I missed the opening keynote, which is traditionally held on the evening before the main conference starts, because that is not so well compatible with my choice of traveling by night train, especially since I did not want to miss more than two days of my current project.

I might provide links to videos of the talks if they become available, including the interesting keynote that I missed.

So I did visit the following talks on the first full day:

And on the second full day:

