LongDecimal

Disclaimer: This article is an occasion where you might actually need some of the presumably useless mathematics that you may have learned in school and university. If this bothers you, maybe you should wait for the next article in about two weeks' time.

LongDecimal is a library that I have provided for Ruby. It is available as a Ruby gem. It was originally intended to provide for Ruby something like Java’s BigDecimal. Ruby does have a BigDecimal of its own, but it is not really the same thing. For writing finance applications such a class is useful, so I wrote one that covers what Java’s BigDecimal has. It ended up having a lot more, but we will get to that later.

So the general idea is that we do math with a subset of the rational numbers (\mathbb{Q}) \mathbb{D} = \{ \frac{x}{10^n} : x \in \mathbb{Z} \wedge n \in \mathbb{N}_0\}. This is not quite the truth, because the n actually carries information that we care about, so we would actually define

    \[\mathbb{D} = \{ (\frac{x}{10^n}, n) : x \in \mathbb{Z} \wedge n \in \mathbb{N}_0\}.\]

So we actually want to allow the numerator x to be a multiple of 10, and we use n to express the precision, i.e. how many digits after the decimal point are explicitly part of our number. Having more decimal places after the decimal point expresses more precision.

Now we try to use mathematical operations +, - and \cdot on \mathbb{D}. It turns out that we have three different cases. The ring operations can be defined without problems, even though \mathbb{D} is not quite a ring, as we will see. But it is good enough for most purposes.

  • (\frac{x}{10^n}, n) + (\frac{y}{10^m}, m) = (\frac{x}{10^n} + \frac{y}{10^m}, \max(n, m))
  • (\frac{x}{10^n}, n) - (\frac{y}{10^m}, m) = (\frac{x}{10^n} - \frac{y}{10^m}, \max(n, m))
  • (\frac{x}{10^n}, n) \cdot (\frac{y}{10^m}, m) = (\frac{xy}{10^{n+m}}, m+n)
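To make these rules concrete, here is a minimal sketch in plain Ruby of such a pair (x, n), stored as an unscaled integer and a scale. It follows exactly the three rules above; it is only an illustration of the idea, not the actual code of the gem, and the class and method names are made up.

# Minimal sketch of the pair (x, n): the value is x / 10**n and the scale n is
# the number of digits after the decimal point. Not the real LongDecimal code.
ScaledDecimal = Struct.new(:unscaled, :scale) do
  def +(other)
    combine(other) { |a, b| a + b }
  end

  def -(other)
    combine(other) { |a, b| a - b }
  end

  def *(other)
    # numerators multiply, scales add up
    ScaledDecimal.new(unscaled * other.unscaled, scale + other.scale)
  end

  def to_r
    Rational(unscaled, 10**scale)
  end

  def to_s
    format("%.#{scale}f", to_r)
  end

  private

  # bring both operands to the larger scale, then apply the block
  def combine(other)
    s = [scale, other.scale].max
    a = unscaled * 10**(s - scale)
    b = other.unscaled * 10**(s - other.scale)
    ScaledDecimal.new(yield(a, b), s)
  end
end

a = ScaledDecimal.new(123, 2)   # 1.23
b = ScaledDecimal.new(4, 1)     # 0.4
puts a + b                      # 1.63  (scale 2)
puts a * b                      # 0.492 (scale 3)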

Addition and subtraction actually overstate the precision if n\ne m, because one of the inputs has lower precision and in the end we pretend to have a result of the higher precision. But not losing numerical information is considered more important, and implicit rounding should be avoided at all costs, at least for the basic operations.

\mathbb{D} is not a ring, but it is a semiring. The zero is not unique: we seem to have many zeros (0, n). That is not the actual problem, though, because only (0, 0) would act as an additive neutral element. What we lack are additive inverse elements. If we have an element (x, n) with n>0, there is no element (y, m) such that (x,n)+(y,m)=(x+y, \max(n,m)) = (0, 0). The distributivity required for a semiring can be seen easily:

  • ((\frac{x}{10^n}, n) + (\frac{y}{10^m}, m))\cdot (\frac{z}{10^l}, l) = ((\frac{x}{10^n}+\frac{y}{10^m})\cdot\frac{z}{10^l}, l+\max(m,n))
  • (\frac{x}{10^n}, n)\cdot (\frac{z}{10^l}, l) + (\frac{y}{10^m}, m)\cdot (\frac{z}{10^l}, l) = (\frac{x}{10^n}\cdot\frac{z}{10^l}+\frac{y}{10^m}\cdot\frac{z}{10^l}, \max(l+m,l+n))

But since we are doing computer programming and not math, and only use math as a tool to help us, it is kind of OK that \mathbb{D} is only a semiring and not a ring, as long as we are aware of it.

Division is a special case, because it is not always possible to express the exact numerical value of the quotient in \mathbb{D}, for example 3.0/7.0 = \frac{3}{7}, where the denominator is not a power of ten. To do such operations, a rule on how to round needs to be provided. This is cumbersome, because it blows up our formulas, so we define a set \mathbb{E}=\{(r, n) : r \in\mathbb{Q} \wedge n \in\mathbb{N}_0\}. Now the quotient of two elements of \mathbb{D} is a member of \mathbb{E}. And we have the rules

  • (\frac{x}{10^n}, n) / (\frac{y}{10^m}, m) = (\frac{x}{10^n} / \frac{y}{10^m}, p(n, m))
  • (r, n) + (s, m) = (r+s, \max(n, m))
  • (r, n) - (s, m) = (r-s, \max(n, m))
  • (r, n) \cdot (s, m) = (rs, n+m)
  • (r, n) / (s, m) = (\frac{r}{s}, q(n,m))

where p and q somehow try to estimate how precise the result of the division might be. The basic idea is to perform the whole calculation that includes the division with exact values and to round the result only at the end, to the desired number of decimal places after the point and with the desired rounding mode.
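As a small illustration of that workflow in plain Ruby: keep the quotient as an exact Rational and round only once at the end. The helper name round_to_scale is made up for this sketch and is not necessarily the method name used by the gem.

# Sketch: carry the quotient exactly and round only once at the end.
# round_to_scale is an illustrative helper, not necessarily the gem's API.
def round_to_scale(rational, scale)
  Rational((rational * 10**scale).round, 10**scale)
end

q = Rational(30, 10) / Rational(70, 10)    # 3.0 / 7.0 kept exact as 3/7
puts q                                     # 3/7
puts format("%.4f", round_to_scale(q, 4))  # 0.4286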

Now the power is a hard one. Arbitrary powers can of course be defined and are supported, but most of the time, the exponent is actually an integer. These cases can be defined nicely. For exponents m\ge 0 we actually get a result in \mathbb{D} and for negative exponents m < 0 we get results in \mathbb{E}:

  • \bigwedge_{m\ge 0}: (\frac{x}{10^n}, n)^m = (\frac{x^m}{10^{mn}}, mn)
  • \bigwedge_{m < 0}: (\frac{x}{10^n}, n)^m = (\frac{x^m}{10^{mn}}, mn)

For non-integral exponents, the calculation of powers falls back to Ruby’s built-in power and transforms the elements of \mathbb{D} and \mathbb{E} involved into rational numbers. These powers are of limited use, but they are provided, work and can be used when needed. There is a more general power function that has additional parameters for the desired rounding and the number of digits after the decimal point. While this library goes a long way to achieve decent accuracy and speed, there are certainly possible input parameters that will result in extremely long calculation times or in results that are much less accurate than claimed. Such examples are „hard“ to find and should not harm the practical usefulness of the library too much. Similar libraries in the Java world like BigDecimal do not even try to calculate powers with arbitrary exponents, and the Ruby built-in library BigDecimal (which is something slightly different) does have its issues when calculating arbitrary powers.

Rounding functions are there to convert a numerical type that is at least viewable as a subset of \mathbb{R} to \mathbb{D}. The actual rounding has to be implemented, but it has been done for \mathbb{D}, \mathbb{E} and the built in types of Ruby except for Complex (\mathbb{C}). For complex numbers, the real and the imaginary part are rounded and stuffed into a new complex number.

Rounding needs two pieces of information, the desired precision (number of decimal places after the decimal point) and the rounding mode. There are different methods for rounding, but they all follow the same basic rules. A special case is the round_to_allowed_remainders, which does a residue class rounding.

There are many rounding modes. Rounding can be towards 0, away from 0, towards infinity or towards negative infinity. This boils down to cutting off all digits after the n-th place behind the decimal point (or adding zeros) and possibly adjusting the result by one unit in the last place if the cut-off part contained anything but zeros. Other rounding modes take the mean between the two adjacent result candidates and decide by that which one to take, requiring an extra rule for the case that the value that needs to be rounded happens to lie exactly on the border.
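A few of these rounding modes can be sketched on exact Rationals like this; the mode names here are illustrative and do not necessarily match the constants defined by the gem.

# Sketch of some rounding modes applied to an exact Rational value.
# The mode names are illustrative, not the gem's actual constants.
def round_with_mode(value, scale, mode)
  scaled = value * 10**scale
  n = case mode
      when :floor     then scaled.floor                 # towards negative infinity
      when :ceiling   then scaled.ceil                  # towards positive infinity
      when :down      then scaled.truncate              # towards zero
      when :up        then scaled.negative? ? scaled.floor : scaled.ceil # away from zero
      when :half_up   then scaled.round                 # ties away from zero
      when :half_even then scaled.round(half: :even)    # banker's rounding
      else raise ArgumentError, "unknown mode #{mode}"
      end
  Rational(n, 10**scale)
end

v = Rational(25, 1000)  # 0.025 lies exactly between 0.02 and 0.03
%i[floor ceiling down up half_up half_even].each do |mode|
  puts format("%-9s %.2f", mode, round_with_mode(v, 2, mode))
end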

Generalized powers and all functions that return something irrational, like square roots, cube roots, exponential functions, logarithms and in the future also trigonometric functions, need to be calculated with the number of digits required and a rounding mode. Currently square roots (sqrt) and cube roots (cbrt) are calculated accurately according to these rounding parameters. For the transcendental functions (logarithms, exponential functions, power, trigonometric functions) minor deviations from the mathematically accurate result are still possible. Since the major usage of the library is expected to deal with the basic operations only, this is considered acceptable. To really work with the transcendental functions, using interval arithmetic in conjunction with LongDecimal would anyway be a better way, so the guarantee to provide would be one result that is close but guaranteed to be less than or equal to the real mathematical result and one that is guaranteed to be greater than or equal to it. Progress in this area is not going to happen very soon, unless someone volunteers to help with this or to sponsor the development.

Also it might be interesting to port this library to other languages, even to Java, because it has become much more sophisticated than Java’s BigDecimal library. Again this is unlikely to happen too soon without any help.

The current priority is to keep this library working with recent Ruby versions and to add the missing trigonometric functions.

Install it with:
gem install long-decimal
Then use it in your code with:
require "long-decimal"

A remark for people who are mathematically inclined: The definition of the natural numbers \mathbb{N} is not totally universal. Sometimes we have \mathbb{N} = \{0, 1, 2, 3, 4,\ldots\} and sometimes we have \mathbb{N} = \{1, 2, 3, 4,\ldots\}. To avoid this, I am using \mathbb{N}_0 = \{0, 1, 2, 3, 4,\ldots\}, even though the index _0 is kind of ugly. I agree with Dijkstra that we should prefer to include the 0 in the natural numbers.
Another remark for mathematically interested readers: If we were defining \mathbb{D}=\{ \frac{x}{10^n} : x \in \mathbb{Z} \wedge n \in \mathbb{N}_0\}, we would actually have a ring. If we now replaced 10 with a prime number p, we would approach the realm of p-adic numbers (\mathbb{Q}_p). This is well worth supporting by a library as well, but it is quite a different story and of course only of interest to a small group who actually knows p-adic numbers and works with them.


TruffleRuby

The language Ruby is one of the most beautiful languages. A lot of things can be done with it, it has a good level of abstraction, it has chosen some very good defaults, it has provided some great ideas that I have not discovered in any other language that I know well, and it offers a lot of flexibility. But I could no longer recommend it for projects that might require good performance. I won’t go into the issue of static typing vs. dynamic here. Ruby follows the dynamic typing path, and if you think that is a bad idea at all, then it will never become your favorite. But this is an issue with pros and cons. The big disadvantage of Ruby is that it is not very good in terms of performance. The single-threaded performance is somewhat better in many reasonable languages like Java, Python, Scala, Clojure, C, C#, F# and some others. And it gets worse when we want to use multiple cores, because Ruby does not run them simultaneously, but uses a global lock which ensures that only one thread at a time is running. Or, in the case of JRuby, certain multithreaded programs that we could write just crash or yield wrong results.

One approach is to go for immutability as a default, which allows quite painless multithreading. Scala and Clojure follow this route, for example. It is hard to write good code with this constraint, or to make good use of very local mutability without leaking it outside, but under these conditions multiple threads just work fine without deadlocks, crashes or falsified results. Another approach is to just copy structures and give each thread its own copy. There are ways to do a lot on this path, but the copying costs a lot of memory and performance, and it is not always a gain.

Now Ruby heavily relies on mutable structures for strings and collections. It is not reasonable to go for a total paradigm change in this aspect. But there are some ways to get good, safe and fast operations on these collections and strings without breaking this. One idea is to work with chunks of collections or strings. For strings, the string that we are working with is described as a concatenation of such chunks. Many operations can be performed by just concatenating multiple strings together and possibly replacing one of them with a copy that is made as needed. Such a structure is called a Rope. A similar approach can be applied to collections. Then a smart locking mechanism is applied to the shorter strings or collections when needed, but many operations can avoid locking or block much less of the structure.
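The rope idea, a string represented as a tree of concatenations so that concatenation itself is cheap and large copies can be avoided, can be sketched in a few lines of Ruby. This is a toy illustration, not TruffleRuby's actual implementation.

# Toy rope: a string is either a plain leaf or the concatenation of two ropes.
# Concatenation is O(1); the character data is only assembled on demand.
class Rope
  def initialize(left, right = nil)
    @left, @right = left, right
  end

  def +(other)
    Rope.new(self, other)        # no copying of character data here
  end

  def length
    @length ||= @left.length + (@right ? @right.length : 0)
  end

  def to_s
    @right ? @left.to_s + @right.to_s : @left.to_s
  end
end

r = Rope.new("Hello, ") + Rope.new("TruffleRuby") + Rope.new("!")
puts r.length   # 19
puts r          # Hello, TruffleRuby!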

Also the compiler can analyze the program, simplify it to a great extent and compile it for the JVM, which in conjunction with HotSpot optimizations makes it run really fast. This TruffleRuby is now much faster than other Rubies, by a factor of about 10. It uses GraalVM and it actually supports a lot of C extensions for libraries, through the feature of GraalVM that these can eventually be compiled to the JVM as well. It does not work if extensions rely on implementation details of the Ruby structures in C, and it often does not work for C extensions that go to low-level OS functionality. The current version of TruffleRuby is not really ready to use in conjunction with Ruby on Rails, which is kind of a no-go, because Ruby is usually used in conjunction with Rails. My impression is that it will be possible to use it with Rails in a year or two.

Hearing of this in a talk by Benoit Daloze in the local Rails user group in Zürich was a great and positive surprise. Ruby gets interesting again.


Why am I learning Python

To be honest, what can be done with Python can also be done with Perl or Ruby. I am not working in areas, where there is much better library support for Python than for Perl or Ruby. And I like Perl and Ruby very much and I am somewhat skeptical about Python. But there are some points that make it worth knowing Python in addition to Perl and Ruby, not instead of them.

I strongly recommend using real programming languages like Perl, Ruby or Python (and you can add some more) instead of Bash scripts once a certain complexity is exceeded. Try reading the pure bash scripts that are used to start Maven, Tomcat or other useful software. Often there is a CMD script as well, which is the real pure horror. Python serves this purpose well enough, and of course the other two do as well.

It is always good to learn new languages once in a while, because they extend our horizon and help us even to be better with our more preferred languages. And why not challenge the preferences…

There is a good point in allowing for a tool box of languages, not „only Java“ or „only C#“ or „only C“ or even „only Perl“, whatever you like… Combining a useful toolbox of several languages is the right way to go. This would be the case with a toolbox containing A and B, where A ∈ { C, C++, Java, C#, F#, Scala, Clojure, …} and B ∈ { Perl, Perl6, Ruby, Lua, Python,…}. Usually it is a good idea to make it slightly larger, but it is also good to find a consensus on which set of languages to concentrate on. I would for example discourage using sed and awk, because they can quite easily be replaced by Perl, and limit bash to very trivial scripts. There are some cases in which the awk or sed scripting is a bit shorter than doing the same in Perl or Ruby, but this does not justify maintaining the extra knowledge, while Perl on top of Java does justify this a lot. So the toolbox should be big enough to cover everything, but it does not have to be too redundant, and there can be preferences as to which tool is recommended for a certain class of purposes, if this recommendation is reasonable. This makes it easier to maintain each other's code. Now there are many projects where the spot of B is taken by Python. So in order to be a good team player it might be useful to be able to work with the Python scripts, write in this language and contribute, instead of spending too much time talking about why Perl or Ruby or Lua or whatever is better. Which it might be. Or which might be more a matter of taste. Here is what big sites are using as A, B, C,….

Now out of these scripting languages, Python is for sure a successful contender. This results in good libraries, but also in a higher likelihood of Python occupying the spot of B.

Now we have tools like Jenkins, Kubernetes, Docker, Cloud computing, Spark and simply certain Linux distributions, which might come along with their preferred set of scripting languages that are well supported for performing certain tasks. This can be delegated to one or two guys in the team or kept to a minimum, but this might become a factor of increasing importance. It might force us to have multiple „Bs“ or multiple „As“.

And there are certain areas where Python is simply strong and has become the language of choice. It seems to have become the successor of Fortran for many if not most numerical calculation areas, even though there will probably always be a niche for powerful compiled languages like Fortran and C for the ultimate performance. But then the library is written in C with Python bindings and we get most of the performance anyway. Also Data Science seems to mostly opt for Python as the general purpose language besides R, SQL and SAS. Even Bioinformatics, which was a stronghold of Perl for many years, is now preferring Python… Yes, it does hurt someone who likes Perl, but it is true… So to be able to work in many interesting areas, it is useful to know some Python. So I started learning it. I am using a Russian translation of the book Programming Python.

I might write a bit more about the language, once I have some more experience with it.


Scanning, sorting and processing large numbers of photos

I guess for most of us this is more an issue of our private lives than something done professionally, and those who do this for money should already have answers for everything…. But the IT aspects of this are interesting anyway…

So some of us, including me, have hundreds or thousands of photographs that have been created using analog photography. I am still using it, because I have good equipment and the prices and availability for films, prints and scanning of the negatives to a CD are still good. I am neither willing to give that equipment up nor to make a major investment. That day will come some time in the future, and I expect that within five to ten years the reasonably priced and ubiquitous offers for handling negative films and prints will disappear.

Anyway it is a good idea to scan all the slides and negatives, at least the ones that are of any interest. It is then easier and cheaper to copy them, to get prints and to do some improvements with software like Gimp prior to creating prints. It is also possible to use and share them online.

Scanning with a flat bed scanner is not an option for negatives; it works for prints, but I think that it is too slow and I do not like the loss in quality due to the unnecessary intermediate step. This leaves two options: getting a negative scanner myself or using a service. So let us assume for now that they are already scanned. I organize the photos in a directory structure. The names should contain only 7-bit ASCII characters and no spaces, to be more easily accessible by scripts and on the shell. I have scripts to rename them to this pattern, for directories and for files. They can be found under my github project „photo processing scripts“ with the names:
* rename-canonical
* rename-dir-canonical
* rename-dirs-radically
Another interesting issue is finding and removing duplicates, but since the name of the file and its position in the file system do have some meaning, this needs some attention. When two identical files A and B are found, there are five possible resolutions:
* rm A (remove A, leave B)
* rm B (remove B, leave A)
* rm A ; ln -s B A (make A a softlink to B)
* rm B ; ln -s A B (make B a softlink to A)
* rm B ; ln -f A B (make B a hardlink of A. Apart from the inode number this is equivalent to the opposite direction)
Which of these is actually preferred? My script picks the last option, but does not actually perform it. Instead it just creates output of the shell commands, which can be piped to a file or directly to sh, in which case they are immediately executed. Otherwise it is possible to edit the commands, filter them or even change them with a one-liner in the Perl programming language. A rough sketch of the idea follows below. The script can be found here:
* find-dups
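The idea behind find-dups can be sketched roughly like this in Ruby; this is a simplified illustration, not the actual script from the github project. It groups files by a digest of their content and prints, without executing them, the shell commands for the last of the five options.

#!/usr/bin/ruby
# Simplified sketch of the find-dups idea, not the actual script:
# group files by a digest of their content and print (but do not run)
# the shell commands that would replace duplicates by hard links.
require 'digest'
require 'find'

by_digest = Hash.new { |h, k| h[k] = [] }
Find.find(ARGV[0] || '.') do |path|
  next unless File.file?(path)
  by_digest[Digest::SHA256.file(path).hexdigest] << path
end

by_digest.each_value do |files|
  next if files.size < 2
  original, *duplicates = files.sort
  duplicates.each do |dup|
    # emit the command only; pipe the output to sh to actually execute it
    puts "rm '#{dup}' ; ln -f '#{original}' '#{dup}'"
  end
end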
For viewing the photos in the browser, I have added another script, that is called
* create-foto-index
It searches the current directory and all subdirectories, except those starting with a dot („.“), recursively. For each image file a thumbnail image is needed, which is either found in the .thumbs directory or created using the script
* scale-image
Then an index.html file is created in each directory, having links to its child directories, the neighboring directories and the parent directory. For each image the thumbnail is included and serves as a link to the full-sized image. With this it is easily possible to view the whole album in a browser locally.
Some images know their orientation already from the camera or phone, but appear wrong anyway. These can be fixed automatically by running the script
* auto-rotate
in the directory.
I have a web server and a CGI-script running:
* cgi/mark-images.cgi
which allows me to mark images with a checkbox or with a string, using the letters „D“ for delete, „R“ for rotate right (90 degrees clockwise), „L“ for rotate left (90 degrees counterclockwise) and „F“ for flip (rotate 180 degrees), and then pressing the OK button.
Then I run the script
* rotate-checked
which will delete and rotate the images according to the choices in the form.

This is already quite a useful situation. Images that are needed for prints or for the web might need some processing with GIMP:
* possibly rotate them in such a way that the horizon is horizontal and vertical lines are vertical, at least in the middle of the image.
* possibly correct perspective
* possibly sharpen
* possibly correct contrast and brightness
* possibly correct color saturation and colors
* cut out what is really interesting
* save it under a different name
* call create-foto-index again.
The webform and the CGI-script can be used for picking which images to edit. After having pressed OK, the marked images can be opened like this:

gimp `egrep 'jpg$' </var/lib/wwwrun/mark-fotos/marked.dat` &

In a similar way images from a directory can be selected in index.html and then extracted to a ZIP:

zip my-archive.zip `egrep 'jpg$' </var/lib/wwwrun/mark-fotos/marked.dat`

which can be given to somebody or uploaded for creating prints or just unpacked in another directory to have only the good images.

There are some more issues, which I might address in another article.


Will Java, C, C++ and C# be the new Cobols?

A few decades ago most programming was performed in Cobol (I do not want to shout it), Fortran, Rexx and some typical mainframe languages that hardly made it to the Linux, Unix or MS-Windows world. They are still present, but mostly used for maintenance and extension of existing software, less often for writing new software from scratch.
These days languages like C, C++, Java and to a slightly lesser extent C# dominate the list of most commonly used languages. I would assume that JavaScript is also quite prominent in the list, because it has become more popular to write rich web clients using frameworks like Angular JS. And there are tons of them and some really good stuff. Some people like to see JavaScript also on the server side, and in spite of really interesting frameworks like Node-JS I do not really consider this a good idea. If you like you may add Objective C to this list, which I do not know very much at all, even though it has been part of my gcc since my first Linux installation in the early 1990s.

Anyway, C goes back to the 1970s and I think that it was a great language to create at that time and it still is for a certain set of purposes. When writing operating systems, database engines, compilers and interpreters for other languages, editors, or embedded software (everything that is very close to the hardware), explicit control and direct access to very powerful OS APIs are features that prove to be useful. It has been said that Java runs as fast as C, which is at least close to the truth, but only if we do not take into account the memory usage. C has some shortcomings that could be fixed without sacrificing its strengths in the areas where it is useful, but that does not seem to be happening.

C++ has been the OO extension of C, but I would say that it has evolved into a totally different language for different purposes, even though there is some overlap: it is relatively easy to call functionality written in C from C++ and a little bit harder the other way round… I have not used it very much recently, so I will refrain from commenting further on it.

Java has introduced an infrastructure that is very common now with its virtual machine. The JVM is running on a large number of servers and any JVM language can be used there. The platform independence is an advantage, but I think that its importance on servers has diminished a little bit. There used to be all kinds of servers with different operating systems and different CPU architectures. But now we are moving towards servers being mostly Linux with Intel-compatible CPUs, so it is becoming less of an issue. This may change in the future again, of course.

With Mono, C# can be used in ways similar to Java, at least that is what the theory says and what seems to be quite true at least up to a certain level. It seems to be a bit ahead of Java with some language features, just think of operator overloading, undeclared exceptions, properties, generics or lambdas, which have been introduced earlier or in a more elegant way, or for which we are still waiting in Java. I think the case of lambdas also shows the limitations, because they seem to behave differently than you would expect from real closures, which is the way lambdas should be done and are done in more functionally oriented languages or even in the Ruby programming language, in the Perl programming language or in typical Lisps.
Try this

List<Func<int>> actions = new List<Func<int>>();

int variable = 0;
while (variable < 5)
{
    actions.Add(() => variable * 2);
    ++ variable;
}

foreach (var act in actions)
{
    Console.WriteLine(act.Invoke());
}

We would expect the output 0, 2, 4, 6, 8, but we are getting 10, 10, 10, 10, 10 (one number in a line, respectively).
But it can be fixed:

List<Func<int>> actions = new List<Func<int>>();

int variable = 0;
while (variable < 5)
{
    int copy = variable;
    actions.Add(() => copy * 2);
    ++ variable;
}

foreach (var act in actions)
{
    Console.WriteLine(act.Invoke());
}

I would say that the concept of inner classes is better in Java, even though what is static there should be the default; but with lambdas available this is less important…
You find more issues with class loaders, which are kind of hard to tame in Java, but extremely powerful.

Anyway, I think that all of these languages tend to be similar in their syntax, at least within a method or function, and require a lot of boilerplate code. Another issue that I see is that the basic types, among which I count strings even if the language design does not, are not powerful enough or are full of pitfalls.

While the idea to use just null terminated character arrays as strings in C has its beauty, I think it is actually not really good enough and for more serious C applications a more advanced string library would be good, with the disadvantage that different libraries will end up using different string libraries… Anyway, for stuff that is legitimately done with C now, this is not so much of an issue and legacy software has anyway its legacy how to deal with strings, and possible painful limitations in conjunction with Unicode. Java and also C# have been introduced at a time when Unicode was already around and the standard already claimed to use more than 65536 code points (characters in Unicode-speak), but at that time 65536 seemed to be quite ok to cover the needs for all common languages and so utf-16 was picked as an encoding. This blows up the memory, because strings occupy most of the memory in typical application software, but it still leaves us with uncertainties about length and position, because code points can be one or two 16-bit-„characters“ long, which can only be seen by actually iterating through the string, which leaves us where we were with null terminated strings in C. And strings are really hard to replace or enhance in this aspect, because they are used everywhere.

Numbers are not good either. As an application developer we should not care about counting bits, unless we are in an area that needs to be specifically optimized. We are using mostly integer types in application development, at least we should. These overflow silently. Just to see it in C#:

int i = 0;
int s = 1;
for (i = 0; i < 20; i++)
{
    s *= 7;
    Console.WriteLine("i=" + i + " s=" + s);
}

which gives us:

i=0 s=7
i=1 s=49
i=2 s=343
i=3 s=2401
i=4 s=16807
i=5 s=117649
i=6 s=823543
i=7 s=5764801
i=8 s=40353607
i=9 s=282475249
i=10 s=1977326743
i=11 s=956385313
i=12 s=-1895237401
i=13 s=-381759919
i=14 s=1622647863
i=15 s=-1526366847
i=16 s=-2094633337
i=17 s=-1777531471
i=18 s=442181591
i=19 s=-1199696159

So it silently overflows, or rather just takes the remainder modulo 2^{32} with the representation system \{-2^{31} \ldots 2^{31}-1\}. Java, C and C++ behave exactly the same way, only that we need to know what „int“ means for our C compiler, but if we use 32-bit ints, it is the same. This should throw an exception or switch to some unlimited long integer. Clojure offers both options, depending on whether you use * or *' as the operator. So as application developers we should not have to care about these bits, and most developers do not think about it. Usually it goes well, but a lot of software bugs are around due to this pattern. It is just wrong in C#, Java, and C++. In C I find it more acceptable, because the typical area for using C for new software actually is quite related to bits and bytes, so the developers need to be aware of such issues all the time anyway.
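For contrast, this is what the same loop looks like in Ruby, where integers switch to arbitrary precision transparently and never silently wrap around:

# Ruby integers grow as needed instead of silently overflowing.
s = 1
20.times do |i|
  s *= 7
  puts "i=#{i} s=#{s}"
end
# ...
# i=18 s=11398895185373143
# i=19 s=79792266297612001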

I would consider it desirable to move to more expressive languages like Clojure, Scala, F#, Ruby or Perl for application development. Ruby and Perl have better strings. Clojure and Scala inherit them from the JVM, and F# has the same strings as C#. Ruby and Clojure have a good way to deal with integers; Scala, Perl and F# can do it right if we actually want to, but not by default. Perl and Ruby are very weak when it comes to multithreading. As compared to Java this can be dealt with by just using more processes instead of threads, because the overhead of a Ruby or Perl process is much less than the overhead of a Java process, but I would still see this as a major drawback. C, C#, Java and C++ offer good facilities to use multithreading, but the issue of avoiding typical multithreading bugs is a big deal and actually too hard for a large fraction of typical application developers. Or at least too far away from their point of focus. Moving to a more functional paradigm might be a way to go. Java enterprise edition is a failure if the goal is to get multithreading done well without having to worry about it, because the overhead is too much. On the other hand, if you are willing to go the extra mile, having more explicit access to the multithreading mechanism and using it correctly is extremely powerful, for example in C with pthreads or with a deliberate usage of processes, shared memory and threads together. For which kind of projects do we have the time and the team for this? I am not talking about multithreaded applications that work well on the developer’s laptop, but fail during some high load processing in production with some concurrent modification issues a few months after the deployment. Thinking cannot be replaced by testing.

So now we have a lot of software in C, C++, Java and C#, and a lot of new software is written in these languages, even from scratch. We could do better; sometimes we do, sometimes we don’t. It is possible to write excellent application software with Java, C++, C# and even C. It just takes a bit longer, but if we use them with care, it will be ok. Some companies are very conservative and want to use stuff that has been around for a long time. This is sometimes right and sometimes wrong. And since none of the more modern languages has really picked up so much speed that it can be considered a new mainstream, it is understandable that some organizations are scared about marching into a dead-end road.

On the other hand, many businesses can differentiate themselves by providing services that are only possible by having a very innovative IT. Banks like UBS and Credit Suisse in Switzerland are not likely to be there, while banks like ING are on that road. As long as they compete for totally different customer bases and as long as the business has enough strengths that are not depending so heavily on an innovative IT, but just on a working robust IT, this will be fine. But time moves on and innovation will eventually out-compete conservative businesses.


Frameworks for Unit Testing and Mocking

Unit testing has fortunately become an important issue in many software projects. The idea of automatic software based unit and integration tests is actually quite old. The typical Linux software that is downloaded as source code and then built with steps like

tar xfzvv «software-name-with-version».tar.gz
cd «software-name-with-version»
./configure
make
sudo make install

often allows a step

make test

or

make check

or even both before the

make install

It was like that already in the 1990s, when the word „unit test“ was unknown and the whole concept had not been popularized to the mainstream.

What we need is to write those automated tests to an extent that we have good confidence that the software will be reliable enough in terms of bugs if it passes the test suite. The tests can be written in any language and I do encourage you to think about using other languages, in order to be less biased and more efficient for writing the tests. We may choose to write software in C, C++ or Java for the sake of efficiency or easier integration into the target platform. These languages are efficient in their usage of CPU power, but not at all efficient in using developer time to write a lot of functionality. This is ok for most projects, because the effort it takes to develop with these languages is accepted in exchange for the anticipated benefits. For testing it is another issue.

On the other hand there are of course advantages in using actually the same language for writing the tests, because it is easier to access the APIs and even internal functionalities during the tests. So it may very well be that Unit tests are written in the same language as the software and this is actually what I am doing most of the time. But do think twice about your choice.

Now writing automated tests is actually no magic. It does not really need frameworks, but is quite easy to accomplish manually. All we need are kind of two areas in our source code tree: one area that goes into the production code and one area that is only used for the tests and remains on the development and continuous integration machines. Since writing automated tests without frameworks is not really a big deal, we should only look at frameworks that are really simple and easy to use or that give us really good features that we actually need. This is the case with many such frameworks, so the way to go is to actually use them and save some time and make the structure more accessible to other team members who know the same testing framework. Writing and running unit tests should be really easy, otherwise it is not done or the unit tests are disabled, lose contact with the actual software and become worthless.

Bugs are much more expensive, the later they are discovered. So we should try to find as many of them while developing. Writing unit tests and automated integrated tests is a good thing and writing them early is even better. The pure test driven approach does so before actually writing the code. I recommend this for bug fixing, whenever possible.

There is one exception to this rule. When writing GUIs, automated testing is possible, but quite hard. Now we should have UX guys involved and we should present them with some early drafts of the software. If we had already developed elaborate selenium tests by then, it would be painful to change the software according to the advice of the UX guy and rewrite the tests. So I would keep it flexible until we are on the same page as the UX guys and add the tests later in this area.

Frameworks that I like are actually CUnit for C, JUnit for Java, where TestNG would be a viable alternative, and Google-Test for C++. CUnit works extremely well on Linux and probably on other Unix-like systems like Solaris, Aix, MacOSX, BSD etc. There is no reason why it should not work on MS-Windows. With cygwin actually it is extremely easy to use it, but with native Win32/Win64 it seems to need an effort to get this working, probably because MS-Windows is no priority for the developers of CUnit.

Now we should use our existing structures, but there can be reasons to mock a component or functionality. It can be because during the development a component does not exist. Maybe we want to see if the component is accessed the right way, and this is easier to track with a mock that records the calls than with the real thing that does some processing and gives us only the result. Or maybe we have a component which is external and not always available, or available but too time consuming for most of our tests.

Again mocking is no magic and can be done without tools and frameworks. So the frameworks should again be very easy and friendly to use, otherwise they are just a pain in the neck. Early mocking frameworks were often too ambitious and too hard to use and I would have avoided them whenever possible. In Java mocking manually is quite easy. We just need an interface of the mocked component and create an implementing class. Then we need to add all missing methods, which tools like eclipse would do for us, and change some of them. That’s it. Now we have mockito for Java and Google-Mock, which is now part of Google-Test, for C++. In C++ we create a class that behaves similar to a Java interface by having all methods pure virtual with keyword „virtual“ and „=0“ instead of the implementation. The destructor is virtual with an empty implementation. They are so easy to use and they provide useful features, so they are actually good ways to go.

For C the approach is a little bit harder. We do not have the interfaces. So the way to go is to create a library of the code that we want to test and that should go to production. Then we write one or more C files for the tests that will end up in an executable that actually runs the tests. In these .c files we can provide a mock implementation for any function and it takes precedence over the implementation from the library. For complete tests we will need to have more than one executable, because in each case the set of mocked functions is fixed within one executable. There are tools on the web to help with this. I find the approach charming to generate the C code for the mocked functions from the header files using scripts in the Ruby programming language or in the Perl programming language.
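A very rough sketch of that header-driven generation in Ruby, just to illustrate the idea (it only handles simple prototypes, and mock_record_call is an assumed helper of the test code, not an existing library function):

#!/usr/bin/ruby
# Rough sketch: generate C stub ("mock") implementations from the prototypes
# found in a header file. Illustration only, not a robust C parser.
header = ARGF.read

# matches simple prototypes like: int foo(int x, const char *s);
header.scan(/^\s*(\w[\w\s]*?)\s+(\w+)\s*\(([^)]*)\)\s*;/) do |ret, name, args|
  puts "#{ret} #{name}(#{args}) {"
  puts "    mock_record_call(\"#{name}\");  /* record that the call happened */"
  if ret.strip == "void"
    puts "    /* nothing to return */"
  else
    puts "    return (#{ret}) 0;            /* canned return value */"
  end
  puts "}"
  puts
end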

Automated testing is so important that I strongly recommend to do changes to the software in order to make it accessible to tests, of course within reason. A common trick is to make certain Java methods package private and have the tests in the same package, but a different directory. Document why they are package private.

It is important to discuss and develop the automated testing within the team and find and improve a common approach. Laziness is a good thing. But laziness means running many automated tests and avoiding some manual testing, not being too lazy to write them and then spending more time on manual repetitive activities.

I can actually teach this in a two-day or three-day course.


How to create ISO Date String

It is a more and more common task that we need to have a date, or maybe a date with time, as a String.

There are two reasonable ways to do this:
* We may want the date formatted in the user's locale, whatever that is.
* We want to use a generic date format, that is for a broader audience or for usage in data exchange formats, log files etc.

The first issue is interesting, because it is not always trivial to teach the software to get the right locale and to use it properly… The mechanisms are there and they are often used correctly, but more often this is just working fine for the locale that the software developers were asked to support.

So now the question is, how do we get the ISO-date of today in different environments.

Linux/Unix-Shell (bash, tcsh, …)

date "+%F"

TeX/LaTeX


\def\dayiso{\ifcase\day \or
01\or 02\or 03\or 04\or 05\or 06\or 07\or 08\or 09\or 10\or% 1..10
11\or 12\or 13\or 14\or 15\or 16\or 17\or 18\or 19\or 20\or% 11..20
21\or 22\or 23\or 24\or 25\or 26\or 27\or 28\or 29\or 30\or% 21..30
31\fi}
\def\monthiso{\ifcase\month \or
01\or 02\or 03\or 04\or 05\or 06\or 07\or 08\or 09\or 10\or 11\or 12\fi}
\def\dateiso{\def\today{\number\year-\monthiso-\dayiso}}
\def\todayiso{\number\year-\monthiso-\dayiso}

This can go into a file isodate.sty which can then be included by \include or \input. Then using \todayiso in your TeX document will use the current date. To be more precise, it is the date when TeX or LaTeX is called to process the file. This is what I use for my paper letters.

LaTeX

(From Fritz Zaucker, see his comment below):

\usepackage{isodate} % load package
\isodate % switch to ISO format
\today % print date according to current format

Oracle


SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD') FROM DUAL;

On Oracle Docs this function is documented.
It can be chosen as a default using ALTER SESSION for the whole session. Or in SQL-developer it can be configured. Then it is ok to just call

SELECT SYSDATE FROM DUAL;

Btw. Oracle allows adding numbers to dates; these are interpreted as days. Use fractions of a day to add hours or minutes.

PostgreSQL

(From Fritz Zaucker, see his comment):

select current_date;
—> 2016-01-08


select now();
—> 2016-01-08 14:37:55.701079+01

Emacs

In Emacs I like to have the current Date immediately:

(defun insert-current-date ()
"inserts the current date"
(interactive)
(insert
(let ((x (current-time-string)))
(concat (substring x 20 24)
"-"
(cdr (assoc (substring x 4 7)
cmode-month-alist))
"-"
(let ((y (substring x 8 9)))
(if (string= y " ") "0" y))
(substring x 9 10)))))
(global-set-key [S-f5] 'insert-current-date)

Pressing Shift-F5 will put the current date into the cursor position, mostly as if it had been typed.

Emacs (better Variant)

(From Thomas, see his comment below):

(defun insert-current-date ()
"Insert current date."
(interactive)
(insert (format-time-string "%Y-%m-%d")))

Perl

In the Perl programming language we can use a command line call

perl -e 'use POSIX qw/strftime/;print strftime("%F", localtime()), "\n"'

or to use it in larger programs

use POSIX qw/strftime/;
my $isodate_of_today = strftime("%F", localtime());

I am not sure, if this works on MS-Windows as well, but Linux-, Unix- and MacOS-X-users should see this working.

If someone has tried it on Windows, I will be interested to hear about it…
Maybe I will try it out myself…

Perl 5 (second suggestion)

(From Fritz Zaucker, see his comment below):

perl -e 'use DateTime; use 5.10.0; say DateTime->now->strftime("%F");'

Perl 6

(From Fritz Zaucker, see his comment below):

say Date.today;

or

Date.today.say;

Ruby

This is even more elegant than Perl:

ruby -e 'puts Time.new.strftime("%F")'

will do it on the command line.
Or if you like to use it in your Ruby program, just use

d = Time.new
s = d.strftime("%F")

Btw. like in Oracle SQL it is possible to add numbers to this. In case of Ruby, you are adding seconds.

It is slightly confusing that Ruby has two different types, Date and Time. Not quite as confusing as Java, but still…
Time is ok for this purpose.

C on Linux / Posix / Unix


#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv) {

char s[12];
time_t seconds_since_1970 = time(NULL);
struct tm local;
struct tm gmt;
localtime_r(&seconds_since_1970, &local);
gmtime_r(&seconds_since_1970, &gmt);
size_t l1 = strftime(s, 11, "%Y-%m-%d", &local);
printf("local:\t%s\n", s);
size_t l2 = strftime(s, 11, "%Y-%m-%d", &gmt);
printf("gmt:\t%s\n", s);
exit(0);
}

This speaks for itself.
But if you like to know: time() gets the seconds since 1970 as some kind of integer.
localtime_r or gmtime_r convert it into a structure that has seconds, minutes etc. as separate fields.
strftime formats it. Depending on your C library it is also possible to use %F.

Scala


import java.util.Date
import java.text.SimpleDateFormat
...
val s : String = new SimpleDateFormat("yyyy-MM-dd").format(new Date())

This uses the ugly Java-7-libraries. We want to go to Java 8 or use Joda time and a wrapper for Scala.

Java 7


import java.util.Date
import java.text.SimpleDateFormat

...
String s = new SimpleDateFormat("yyyy-MM-dd").format(new Date());

Please observe that SimpleDateFormat is not thread safe. So do one of the following:
* initialize it each time with new
* make sure you run only single threaded, forever
* use EJB and have the format as instance variable in a stateless session bean
* protect it with synchronized
* protect it with locks
* make it a thread local variable

In Java 8 or in Java 7 with Joda time this is better. And the toString()-method should have ISO 8601 as default, but of course including the time part.

Summary

This is quite easy to achieve in many environments.
I could provide more, but maybe I leave this to you in the comments section.
What could be interesting:
* better ways for the ones that I have provided
* other databases
* other editors (vim, sublime, eclipse, idea,…)
* Office packages (Libreoffice and MS-Office)
* C#
* F#
* Clojure
* C on MS-Windows
* Perl and Ruby on MS-Windows
* Java 8
* Scala using better libraries than the Java-7-library for this
* Java using better libraries than the Java-7-library for this
* C++
* PHP
* Python
* Cobol
* JavaScript
* …
If you provide a reasonable solution I will make it part of the article with a reference…
See also Date Formats


Conversion of ASCII-graphics to PNG or JPG

Images are usually some obscure binary files. Their most common formats, PNG, SVG, JPEG and GIF, are well documented and supported by many software tools. Libraries and APIs exist for accessing these formats, but also fantastic free interactive software like Gimp. The compression rate that can reasonably be achieved when using these formats is awesome, especially when picking the right format and the right settings. Tons of good examples can be found of how to manipulate these image formats in C, Java, Scala, F#, Ruby, Perl or any other popular language, often by using language bindings for Image Magick.

There is another approach worth exploring. You can use a tool called convert to just convert an image from PNG, JPG or GIF to XPM. The other direction is also possible. Now XPM is a text format which basically represents the image as ASCII graphics. It is by the way also valid C code, so it can be included directly in C programs and used from there, when an image needs to be hard-coded into a program. It is not generally recommended to use this format, because it is terribly inefficient, using no compression at all, but as an intermediate format for exploring additional ways of manipulating images it is of interest.
An interesting option is to create the XPM file using ERB in Ruby and then convert it to PNG or JPG.
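A small sketch of that idea; it renders a tiny XPM via an ERB template and then calls convert, which requires ImageMagick to be installed:

#!/usr/bin/ruby
# Sketch: render a tiny XPM via ERB and convert it to PNG with ImageMagick.
require 'erb'

width, height = 8, 8
# a diagonal line as a trivial stand-in for the ASCII graphics
rows = Array.new(height) { |y| Array.new(width) { |x| x == y ? '#' : '.' }.join }
body = rows.map { |row| %Q{"#{row}"} }.join(",\n")

template = ERB.new(<<~XPM)
  /* XPM */
  static char * generated_xpm[] = {
  "<%= width %> <%= height %> 2 1",
  ". c #ffffff",
  "# c #000000",
  <%= body %>
  };
XPM

File.write('generated.xpm', template.result(binding))
system('convert', 'generated.xpm', 'generated.png')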


Find the next entry in a sequence

On Facebook, Xing, Google+, Vk.com, Linkedin and other such social media networks we often encounter a trivial question like this:

1->2
2->8
3->18
4->32
5->50
6->72
7->?

There are some easy patterns. Either it is some polynomial formula or some trick with the digits.
But the point is that any such sequence can easily be fulfilled by a polynomial formula. That means we can put in any value for 7 and make it work. Or: any answer is correct. So the real question is probably for the simplest function that fulfills the given constraints. Simplicity can be measured in some way… Whether the solution is unique is unclear, but let us just look at the polynomial solution.

A function is needed that takes as parameter a list of key-value-pairs (or a hash map) and that yields a function such that the function of any of the key is the associated value.

Assuming a polynomial function in one variable we can make use of the Chinese remainder theorem, which can be applied to univariate polynomials over a field F as well as to integral numbers. For a polynomial p(X) we have

    \[p(x) \equiv p(X) \mod X-x\]

where X is the polynomial variable and x\in F is a concrete value.

We are looking for a polynomial p(X) such that for given values x_0,\ldots x_{n-1}, y_0,\ldots,y_{n-1} \in F we have

    \[\bigwedge_{i=0}^{n-1} p(x_i) = y_i\]

or in another way

    \[\bigwedge_{i=0}^{n-1} p(X) \equiv y_i \mod X-x_i\]

which is exactly the Chinese remainder theorem.
Let

    \[I=\{0,\ldots,n-1\}\]

and

    \[\bigwedge_{j=0}^{n-1} I_j = I \setminus \{j\}\]

We can see that for all i \in I the polynomials

    \[e_i = \prod_{j \in I_i} \frac{X-x_j}{x_i-x_j}\]

have the properties

    \[e_i(x_i)=1\]

    \[\bigwedge_{j \in I_i} e_i(x_j)=0\]

or

    \[\bigwedge_{i \in I}\bigwedge_{j \in I} e_i(x_j)=\delta_{i,j}\]

where \delta_{i,j} is the Kronecker symbol, which is 0 if the two indices differ and 1 if they are equal.
Or as congruence:

    \[\bigwedge_{i \in I}\bigwedge_{j \in I} e_i(X)\equiv \delta_{i,j} \mod X-x_j\]

Then we can just combine this and use

    \[p(X) =\sum_{i \in I} y_i e_i(X)\]

This can easily be written as a Ruby function

# Build the interpolation polynomial (Lagrange interpolation) through the
# given points and return it as a lambda.
def fun_calc(pairs)
  n = pairs.size
  result = lambda do |x|
    y = 0
    n.times do |i|
      p_i = pairs[i]
      x_i = p_i[0].to_r
      y_i = p_i[1].to_r
      # z becomes y_i * e_i(x)
      z = y_i
      n.times do |j|
        if (j != i)
          p_j = pairs[j]
          x_j = p_j[0].to_r
          z *= (x - x_j) / (x_i - x_j)
        end
      end
      y += z
    end
    y
  end
  result
end

This takes a list of pairs as a parameter and returns the polynomial function as a lambda.
It can be used like this:

lop = [[0, 0], [1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36], [7, 64]]

f = fun_calc(lop)

20.times do |x|
  y = f.call(x)
  puts sprintf("%6d -> %6d", x, y)
end

Put this together into a Ruby program and add some parsing for the list of pairs (or change the program each time you use it), and all these „difficult“ questions „that 99.9% fail to solve“ are not just easy, but can actually be solved automatically.

This is interesting for more useful applications. I assume that there will always be situations where a function is needed that takes certain exact values at certain inputs and is an interpolation or extrapolation in between.

Please observe that there are other interesting and useful ways to approach this:

  • Use a „best“ approximation from a set of functions, for example polynomials with a given maximum degree
  • use cubic splines, which are cubic polynomials within each section between two neighboring input values such that at the input values the two adjacent functions have the same value (y_i, of course), the same first derivative and the same second derivative.

For highway and railroad construction other curves are used, because the splines are making an assumption on what is the x-axis and what is the y-axis, which does not make sense for transport facilities. They are using a curve called Clothoid.

Use Java, C, Perl, Scala, F# or the programming language of your choice to do this. You only need Closures, which are available in Java 8, F#, Scala, Perl, Ruby and any decent Lisp dialect. In Java 7 they can be done with an additional interface as anonymous inner classes. And for C it has been described in this blog how to do closures.


Binary Data

Having discussed to some extent about strings and text data, it is time to look at the other case, binary data.

Usually we think of arrays of bytes or sequences of bytes stored in some media.

Why bytes? The 8-bit-computers are no longer so common, but the byte as a typical unit of binary data is still present. Actually this is how files on typical operating systems work, they have a length in bytes and that number of bytes as content. When we use the file system, we have to deal with this.

But assuming a perfect lossless world, the typical smallest unit of information is not a byte, but a bit. So we could actually assume that we have an array of bits of a given length l and the length does not need to be a multiple of 8. This might be subtle, but it does describe the most basic case. Padding with 0-bits does not usually do the job, so there need to be other provisions to deal with the bit length and the storage length. In case of files this could be achieved by storing the last three bits b = l mod 8 of the bit length in the first byte of the file; using the file size s, the bit length can then be recovered as l = 8(s-1) - ((8-b) mod 8).
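A small sketch of that convention in Ruby, writing and reading back a bit string with the remainder of the bit length kept in the first byte (the layout is just the one described above, the method names are made up):

# Sketch: store a bit string in a file, keeping l mod 8 in the first byte
# so that the exact bit length can be recovered from the file size alone.
def write_bits(filename, bits)           # bits: a string like "10110"
  header = [bits.length % 8].pack('C')
  payload = [bits].pack('b*')            # pack the bits into bytes, zero-padded
  File.binwrite(filename, header + payload)
end

def read_bits(filename)
  data = File.binread(filename)
  b = data.unpack1('C')
  s = data.bytesize
  length = 8 * (s - 1) - ((8 - b) % 8)   # recover the exact bit length
  data[1..-1].unpack1('b*')[0, length]
end

write_bits('bits.dat', '1011001110001')
puts read_bits('bits.dat')               # 1011001110001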

How about memory? Often the array of bytes is just fine. The approach to have an array of booleans as bits usually hurts, because typical programming languages reserve at least a whole byte, often even a whole word, for each boolean and we actually want a packed array. On the other hand we have 64-bit computers and we would like to use this when moving around data, so actually thinking of an array of 64-bit integers plus information about how many of the bits of the last entry actually count might be better. This does bring up the issue of endianness at some point.

But we actually want to use the data. In the good old C days we could use quite a powerful trick: we read an array of bytes and cast the pointer to a pointer to a struct. Then we could access the struct. Poor man's serialization, but it was efficient and worked especially well for records with fixed size, for both reading and writing. Of course variable length records can be dealt with as well, by imposing rules that say something about the type of the next record based on the contents of the current record.

The Perl programming language and subsequently the Ruby programming language have much more powerful and flexible mechanisms in the functions (or methods) called pack and unpack.
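A short Ruby example of this; pack and unpack are part of the Ruby core, but the record layout here is just an example: a fixed-size record with a 32-bit big-endian id, a 16-bit big-endian count and an 8-byte name.

# Pack a fixed-size binary record: 32-bit big-endian id, 16-bit big-endian
# count, 8 bytes of space-padded name (the layout is just an example).
record = [42, 7, "photo"].pack('NnA8')
puts record.bytesize                         # 14

id, count, name = record.unpack('NnA8')
puts "id=#{id} count=#{count} name=#{name}"  # id=42 count=7 name=photo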

Share Button