Visit to Clojure Exchange

I have just visited Clojure Exchange. Since it had only one track, there is no point in listing which talks I attended: the program can easily be seen on the web page of the conference.

It was interesting, there were many great talks, and I met great people among the other participants.


Devoxx 2016 Visit

As already written in Devoxx 2016, I visited Devoxx in Antwerp in 2016.

Hot topics were Java 9 and the functional features of Java 8, but there was a wide range of talks. As in previous years, visitors can afterwards watch online all the talks that they missed or found interesting enough to re-watch. In earlier years this was done via „Parleys“ and was only available to visitors or to those who paid for it, while it is now available on YouTube for everybody. And since the conference was sold out long before it started, this does not seem to stop people from buying tickets.

So here is what I did.
Wednesday:

Thursday:

Friday:

Find Links here….

I guess that’s it for today… I hope to visit Antwerp for Devoxx next year again.


Devoxx 2016

I am going to the Devoxx in Antwerp 2016.
Updates about what I did will follow soon.

As a starter, here is my Devoxx talk. Let this be the main content of this posting, which is mostly video instead of text. Here is the GitHub repo with the code examples.

Other Links:


How to calculate transcendental functions

There is sometimes a need to calculate transcendental functions like \sin, \exp, \log or \tan^{-1}. We get them from the library, and the library relies on implementations in the CPU for most of them. This is true if we want them in the „double“ format, which is the standard way of doing floating point arithmetic. But it can be interesting how these can be calculated to a given precision, or how to calculate functions that are not in the library and not easily composed from library functions. There are many ways to do this, and actually the naïve way of using the Taylor series

    \[f(x) = \sum_{j=0}^\infty a_j (x-x_0)^j\]

is often not such a bad idea, if done correctly.
We know from math what to use for the coefficients a_j and for which ranges of x this converges. For limited fixed precision it is possible to tune the coefficients a bit and get better results with a fixed number of summands. For arbitrary precision we need to be more flexible and cannot prepare for one specific precision in advance.
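To illustrate, here is a minimal sketch in Java using BigDecimal; ExpTaylor is a hypothetical class name, not the library implementation. It sums the series \exp(x)=\sum_{j=0}^\infty \frac{x^j}{j!} until the terms fall below the desired precision:

import java.math.BigDecimal;
import java.math.MathContext;

public class ExpTaylor {
    // exp(x) as the Taylor series 1 + x + x^2/2! + x^3/3! + ...
    // only reasonable for small |x|, see the remarks on convergence below
    public static BigDecimal exp(BigDecimal x, MathContext mc) {
        // carry some guard digits in the intermediate results
        MathContext inner = new MathContext(mc.getPrecision() + 10);
        // stop when the terms are clearly below the requested precision
        BigDecimal threshold = BigDecimal.ONE.movePointLeft(mc.getPrecision() + 5);
        BigDecimal sum = BigDecimal.ONE;
        BigDecimal term = BigDecimal.ONE;
        for (int j = 1; ; j++) {
            term = term.multiply(x, inner).divide(BigDecimal.valueOf(j), inner);
            sum = sum.add(term, inner);
            if (term.abs().compareTo(threshold) < 0) {
                break;
            }
        }
        return sum.round(mc);
    }
}

An absolute termination threshold is good enough here because for small |x| the sum stays in the order of magnitude of 1; the guard digits are a crude way to keep the accumulated rounding errors below the requested precision.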

Now mathematically we can often have a converging series, for example if we have

    \[f(x) = \sum_{j=1}^\infty \frac{x^j}{j^2}.\]

It can be proved easily that this series converges for |x| \le 1, but the convergence is not necessarily computer friendly: for |x|=1 it converges very slowly. To give an idea, if we are calculating with 100 digits after the decimal point, then we would still have single terms in the area of our desired precision for j=10^{50}, and since they get smaller only slowly, we would have to go much further. This is impossible to use.

As a rule of thumb, the coefficients are not our friends. They may or may not converge towards zero, but we really have to rely on the (x-x_0)^j-part to get diminishing summands. A good idea is to aim for |x-x_0| \le \frac{1}{2} if the coefficients are bounded, which they usually are in real life examples. That means that there is a bound C>0 such that for each j we have |a_j|<C. So we absolutely need to use some mathematical knowledge about the function in order to get reasonable convergence.

In case of periodic functions like the trigonometric functions, we can normalize x to values within one „period“, but that reduces x or x-x_0 only to a range like [-\pi, \pi). Using some common trigonometric formulas, we can reduce this further to the range [0, \pi/2], which is still not good enough. In this case we have to use formulas like \sin(3x)=3\sin(x)-4\sin^3(x) and similar formulas for the other trigonometric functions, which allow us to move to smaller values of x. For the exponential function we have even easier ways: let n be a natural number such that |\frac{x}{n}| < \frac{1}{2}, let y=\frac{x}{n} and calculate z=e^y=\exp(y). Now we have \exp(x)=e^x=e^{ny}=(e^y)^n=z^n, so we just need to take the n-th power of the intermediate result. This can be calculated using algorithms like square and multiply or even some improvements over that.
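As a sketch of this argument reduction, again in Java and assuming the hypothetical ExpTaylor from above: we pick n as a power of 2 so that |\frac{x}{n}| < \frac{1}{2}, evaluate the series there, and then square the result repeatedly, which is the square-and-multiply idea for the special case of a power of 2.

import java.math.BigDecimal;
import java.math.MathContext;

public class ExpReduced {
    public static BigDecimal exp(BigDecimal x, MathContext mc) {
        MathContext inner = new MathContext(mc.getPrecision() + 10);
        BigDecimal half = new BigDecimal("0.5");
        // reduce the argument: y = x / 2^k with |y| < 1/2
        int k = 0;
        BigDecimal y = x;
        while (y.abs().compareTo(half) >= 0) {
            y = y.divide(BigDecimal.valueOf(2), inner);
            k++;
        }
        // exp(x) = exp(y)^(2^k): square the intermediate result k times
        BigDecimal z = ExpTaylor.exp(y, inner);
        for (int i = 0; i < k; i++) {
            z = z.multiply(z, inner);
        }
        return z.round(mc);
    }
}

Note that each squaring roughly doubles the relative error, so for large k more guard digits would be appropriate; this is exactly the kind of precision bookkeeping discussed below.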

In the end we will write a lot of code for different cases, each optimized in its own way for some function. For example the power p(x,y)=x^y is a function in two parameters that has quite a wild behavior, and an implementation that provides reasonable performance and precision needs to handle a lot of cases. Just look at the power function of the standard Java library, which is written in native C code. Its beauty is not conciseness, but with some understanding of what it takes to do this well, you might eventually appreciate the given implementation, not only when using it but also when reading it.

Now dealing with the precision is a delicate question, which again requires mathematics. As a general rule we usually need to use more precision for intermediate results. A good tool is to take the derivative or the partial derivatives in case of functions with multiple parameters to see how much changes in that parameter influence changes of the value. The Taylor theorem gives some definite, but possibly hard to apply answers. And it can also be useful to look at lower and upper bounds for the operations performed.

When writing such functions, unit tests are a big deal. Often they are not so hard to write, if we have inverse functions to rely on or if we can increase the precision and check that the lower-precision result is at least as precise as it claims to be. In some cases existing implementations for double can be used to check that the calculation is correct for smaller precisions.
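A minimal sketch of such a cross-check against the double implementation, using the hypothetical ExpReduced from above:

import java.math.BigDecimal;
import java.math.MathContext;

public class ExpCrossCheck {
    public static void main(String[] args) {
        MathContext mc = new MathContext(50);
        for (double x = -5.0; x <= 5.0; x += 0.125) {
            BigDecimal ours = ExpReduced.exp(new BigDecimal(x), mc);
            double reference = Math.exp(x);
            // Math.exp is only good to about 15-16 digits,
            // so we compare with a relative tolerance of that order
            double diff = Math.abs(ours.doubleValue() - reference);
            if (diff > Math.abs(reference) * 1e-14) {
                throw new AssertionError("mismatch at x=" + x + ": " + diff);
            }
        }
        System.out.println("all cross-checks passed");
    }
}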

Most of all it is important to think and use some mathematics or get help for this from somebody with appropriate knowledge.

Just to give you a hint: There are tons of transcendental functions that do not exist in standard libraries and that may be interesting to use. For some of them there are libraries. For some we still need to find libraries or write them.


Unit Testing in a non-perfect World

Test Driven Development

We all know how good test driven development is and that we should move in that direction.

How much coverage

There are some serious obstacles. Most of all, we have some obligation to actually finish software and the resources are usually kind of limited. If they were not limited by money and time constraints, they would hit the limit of efficient team sizes and organizational structures.

We can just look at a simple application that does „CRUD“ operations. Ideally we start with a known data set and reset the database to exactly this content before starting the tests, maybe even before each single test… If we have a huge and well managed server farm to run the tests, that may be possible.

For the „read“-methods we need to write some tests that succeed in reading, probably performing a few reads with a single read method to cover different outcomes of a successful read or different parameter combinations. Then there are unsuccessful reads that just do not find anything and return null or an empty result collection, or that even fail with an exception. It is of interest to check the maximum and minimum allowed values, if there are such limits. So we end up writing five to a few dozen test methods for a single read method. And this is the simple case.

For delete and update we should create our own data at the beginning of the test. Probably there are dependencies and constraints in conjunction with other data, so it is necessary to cover these also. Create and update actually need a variety of at least two values for each of the most simple attributes of the created data object, to deal with not null. Usually we have more constraints on attributes, concerning lengths, value ranges and some kind of compatibility with other data. So there will be up to around ten tests for each attribute of the created or updated entity, and we have successful and unsuccessful operations that we expect. So we will end up writing hundreds or even thousands of unit test methods just to obtain the most basic coverage for a relatively simple „CRUD“-application.

Writing many similar tests is not so difficult, and it would be interesting to explore ways to cut down on the repetitive work involved by using less verbose languages for writing the tests, creating them partially with scripts, or simply writing very powerful helper methods in the test class that just get called with slightly different parameters to do all the tests. It will anyway be a lot of work to write the tests. I think 60% of the time for the unit tests and 40% for the actual code is a reasonable number for a relatively fair coverage of most of the code.
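To sketch the „called with slightly different parameters“ idea, here is a hypothetical JUnit 5 parameterized test; the validator is a stand-in for a real attribute constraint of a create or update operation, and all names are made up.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

public class NameValidationTest {

    // stand-in for a real attribute constraint: not null, 1..30 characters
    static boolean isValidName(String name) {
        return name != null && !name.isEmpty() && name.length() <= 30;
    }

    @ParameterizedTest
    @CsvSource(value = {
        "Alice | true",    // normal value
        "'' | false",      // empty string rejected
        "null | false",    // null rejected
        // 31 characters: just over the maximum length
        "abcdefghijklmnopqrstuvwxyzabcde | false"
    }, delimiter = '|', nullValues = "null")
    void validatesNameBoundaries(String name, boolean expected) {
        assertEquals(expected, isValidName(name));
    }
}

One test method plus a table of inputs and expected outcomes covers the boundary cases of one attribute; the table grows, but the code does not.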

In practice we should really prioritize our unit testing efforts, because spending 2.5 times as much time for the whole thing as for the code itself is simply not always possible. On the other hand, the time we save in the long run with good testing is even more than what we spend, if we do the unit test development well.

But there are some aspects to think of:

  • Which parts of the application are fairly stable?
  • Which parts of the application are used and relied on heavily by other parts of the application?
  • Which parts of the application are used a lot by end users?
  • Which parts of the application are high risk because they have more inner complexity?
  • Which parts of the application actually showed errors? Fix the errors by writing a test to expose them first.
  • Which parts of the application are high risk in terms of reputation, money loss or data loss if they go wrong?
  • Which parts of the application are undergoing internal changes, while retaining the API?
  • Which parts of the application are migrated to another platform, OS, DB, architecture …, while retaining the API?

It is good to focus primarily on areas based on these questions and to do reduced testing for areas that are less critical.
The first question is quite delicate, because it exposes a contradiction we need to cope with. We should be agile and change the application easily when the requirements or the architecture are understood better. But with tests, even this effort multiplies by 2.5 or whatever our factor is, because the unit tests have to be updated as well. Or even worse, it leads to disabling the unit tests or to a loss of agility. In areas that change quickly it may be better to write the complete set of tests only once they have become relatively stable.

Database

The next issue is the database. Typical organizations like to provide one DB instance and schema for the whole development team, because database instances and schemata are seen as expensive resources. They are hard to maintain, and for various reasons it is often difficult to install a local database on each developer’s machine. If it is Oracle, DB2 or MS-SQL-Server, some know-how is needed to install it, and there may even be some constraints in terms of the OS. MariaDB and PostgreSQL seem to be somewhat easier to install and there are fewer license issues involved, but even that is an effort. This can be overcome by virtualization: an image with the DB setup can be developed once and then copied to each team member. There are interesting and good ways to do something like this, so it is becoming less of an issue, but it is still very unusual.

Now there are two ways out of this. One way is to use a different DB product for development than for production. This is somewhat dangerous, because databases are so different that today’s common abstractions do not hide the differences, and we also might pay a high price in terms of performance if we do not use DB-specific features. So it requires extra development effort to support both DB types, and it is very important to regularly run tests against the DB product that is used in production anyway. Still, it may be helpful to move part of the tests to such a similar-but-not-equal local environment.

The regular development DB is unfortunately often shared between many developers. Now if tests run simultaneously from different development machines against the same DB, they will usually interfere, and some tests will probably fail just because of that. Not all the time, but sporadically. This can be avoided by some team organization and some kind of reservation of the DB, but that is painful, so we just run the tests and, if they fail, assume it is someone else testing at the same time who caused the failure. It is possible to write the tests in such a way that they can withstand this, but that is a lot of extra effort, and compared to the effort of using a virtual image with a working DB instance it is not justifiable.

So what we should aim for, as the second way, is a dedicated DB schema for each developer. Ideally it should run on the DB software product that is used for production. It can be local, on some DB server, or in a virtual image.


Modular Arithmetic

We have some articles in this blog about the integer types of typical programming languages and how they work. Time to introduce the underlying mathematical concepts, which have been covered implicitly until now, since they are also interesting in many other respects. And besides, this is a very beautiful area of mathematics.

Mathematics as we learn it in school is mostly inspired by what is needed for physics. This was quite a good choice 100 years ago, because it gave some motivation as to why we do certain things, and physics was the area where math was applied. Of course also chemistry and engineering, but these use somewhat similar aspects of mathematics as physics. Now physics and chemistry make use of quite interesting areas of mathematics like group theory or non-Euclidean geometry, but these are advanced areas beyond what we typically learn in school, at least in the countries where I went to school. So it is about real numbers, some trigonometry, real analysis (calculus) and maybe complex numbers.

For more than 50 years mathematics has been heavily used in informatics as well; if we abstract informatics away from computers, even longer, because for example algorithms and cryptography have been used for several thousand years already, but that was a small niche and only became mainstream through the existence of computers. And for informatics and computer science we need different areas of mathematics. Analysis is not so important, though not irrelevant. One area is information theory, which is based on probability theory and statistics. Numerical calculations have to a great extent remained a domain of mathematics itself, so this connection may be strong, but it is applied mathematicians using computers and using knowledge from IT to program them better, not the other way round. Still numerical analysis is somewhat important, but not really what most of us need very often.

The areas of mathematics that are really interesting for informatics are discrete mathematics, algebra and number theory. There is enough material about this on the web, but for now we will deal with modular arithmetic, which is kind of in the intersection of discrete mathematics, algebra and number theory.

We start with the integers:

    \[{\Bbb Z} = \{\ldots,-3, -2, -1, 0, 1, 2, 3, 4,\ldots\}\]

Now we take any integer m \in {\Bbb N} with m \ge 2.
We say that two integers x and y are congruent modulo m:

    \[x \equiv y \pmod m\]

if and only if x-y can be divided by m. We might also say that there is a k \in {\Bbb Z} such that y = x + k m.
Now we can make interesting observations:
We assume that we have pairs of numbers such that

    \[u \equiv v \pmod m\]

and

    \[x \equiv y \pmod m\]

Then we can observe that also

    \[u+x \equiv v+y \pmod m\]

    \[u-x \equiv v-y \pmod m\]

    \[u\cdot x \equiv v\cdot y \pmod m\]

This can be proven easily.
We assume as above

    \[y = x + k \cdot m\]

and similarly

    \[v = u + l \cdot m\]

Then we have

    \[y+v = x+u+(k+l)m \equiv x+u \pmod m\]

    \[y-v = x-u+(k-l)m \equiv x-u \pmod m\]

    \[yv = xu + kum + lxm + klm^2 \equiv x\cdot u \pmod m\]

We call the set of all numbers of \Bbb Z that are congruent to each other a remainder class and write this as

    \[\bar x = x + m{\Bbb Z}\]

There are exactly m remainder classes modulo m and usually we use a representation system of

    \[0,1,\ldots,m-1\]

or for even m we often use

    \[-\frac{m}{2}, -\frac{m}{2}+1,\ldots,-1,0,1,\ldots,\frac{m}{2}-1\]

or for odd m we often use

    \[-\frac{m-1}{2}, -\frac{m-1}{2}+1,\ldots,-1,0,1,\ldots,\frac{m-1}{2}\]

We observe these representation systems when we do division with remainder, written as % in many programming languages. But it is necessary to do some quick research on which representation system % uses and which one we actually want, and possibly to adjust the result. The corresponding division may not be /, but we can obtain it by subtracting our remainder from the dividend and dividing that, which should be an exact division.
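For example in Java, % uses the representation system with the sign of the dividend, while Math.floorMod gives the representative in 0,1,\ldots,m-1; a small sketch:

public class RemainderDemo {
    public static void main(String[] args) {
        int m = 7;
        System.out.println(-3 % m);                // -3: takes the sign of the dividend
        System.out.println(Math.floorMod(-3, m));  //  4: representative in 0,1,...,m-1
        // the division matching floorMod is floorDiv, not /:
        System.out.println(Math.floorDiv(-3, m));  // -1, and -1 * 7 + 4 == -3
    }
}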

Now we need to define a ring. A ring R is a set with operations + and \cdot such that the following rules apply:

  1. For any members x, y \in R we also have x+y \in R, x-y \in R and x\cdot y \in R. This is usually not mentioned, because it is part of how we define these operations in the first place in most mathematical texts.
  2. Addition is commutative: For any members x, y \in R we have x+y=y+x.
  3. Addition has a neutral element 0: For any member x \in R we have x+0=0+x=x.
  4. Addition has inverse elements: For any member x \in R we have a member x'\in R such that x+x'=x'+x=0. Usually we write -x for this inverse element of x and we write x-y instead of x+(-y).
  5. Addition is associative: For any members x, y, z \in R we have (x+y)+z=x+(y+z). We can omit the parentheses here and write x+y+z instead.
  6. Multiplication has a neutral element 1: For any member x \in R we have x\cdot 1=1\cdot x=x.
  7. Multiplication is associative: For any members x, y, z \in R we have (x\cdot y)\cdot z=x\cdot (y\cdot z). We can omit the parentheses here and write x\cdot y\cdot z or even xyz instead.
  8. Multiplication in conjunction with addition is distributive: For any members x, y, z \in R we have (x + y)\cdot z = x\cdot z + y\cdot z and z\cdot (x+y)=z\cdot x + z\cdot y.

If the multiplication is also commutative, we call it a commutative ring. If there is a multiplicative inverse for every element other than 0, we call it a skew field. And if both conditions hold, we call it a field.

Now we can see that \Bbb Z is actually a commutative ring.

And these remainder classes modulo m also form a ring. We call it {\Bbb Z}/m{\Bbb Z} or sometimes also {\Bbb Z}_m, but I do not use the second form, because it is ambiguous with something else (p-adic numbers). If m=p is a prime number, then {\Bbb Z}/p{\Bbb Z} is actually a field and in this case we may write {\Bbb F}_p instead of {\Bbb Z}/p{\Bbb Z}. Or GF(p) in some literature, if you prefer that. Why is it a field?

Now there is an extension of the Euclidean algorithm that calculates the gcd of two numbers and also yields numbers u and v such that g=\gcd(x,y)=ux+vy. So these numbers exist. For a prime number p and a remainder class \bar x \ne 0 we know that x is not a multiple of p, and since p is prime we know that

    \[1=\gcd(x,p)=ux+vp.\]

This yields a multiplicative inverse for \bar x because

    \[u\cdot x \equiv 1 \pmod p.\]
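A sketch of this in Java, with a recursive extended Euclidean algorithm (the class and method names are made up; for BigInteger the library already provides modInverse):

import java.math.BigInteger;

public class ModInverse {
    // returns {g, u, v} with g = gcd(a, b) = u*a + v*b
    static long[] extendedGcd(long a, long b) {
        if (b == 0) {
            return new long[] { a, 1, 0 };
        }
        long[] r = extendedGcd(b, a % b);
        // gcd(a, b) = r[1]*b + r[2]*(a % b) and a % b = a - (a/b)*b
        return new long[] { r[0], r[2], r[1] - (a / b) * r[2] };
    }

    public static void main(String[] args) {
        long p = 101;  // prime modulus
        long x = 37;
        long u = extendedGcd(x, p)[1];
        long inverse = Math.floorMod(u, p);
        System.out.println(x * inverse % p);  // prints 1
        // the built-in equivalent:
        System.out.println(BigInteger.valueOf(x).modInverse(BigInteger.valueOf(p)));
    }
}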

Now we often see m as a power of 2, and the modular arithmetic, at least +, -, *, is what is sold to us as integer arithmetic in Java, C or C#.

On the other hand it can be interesting and useful to use modular arithmetic for other values of m. Interesting are mostly prime numbers, which can be relatively small like 2, 3 or 5, but also really big. For non-primes we have zero divisors (null factors), that is numbers x, y \not\equiv 0 \pmod m such that x\cdot y \equiv 0 \pmod m. This breaks a fundamental assumption that holds for integers and fields, but it is perfectly correct for this modular ring.

In our daily life modular arithmetic is actually quite common. We have the week days with m=7, the hours of the clock with m=12 or m=24, the minutes and seconds of the clock with m=60 and quite a bit of m=2, which we do not really see as modular arithmetic, but maybe as boolean arithmetic with + being the „exclusive or“, \cdot being the „and“ etc.


Collection Libraries

The standard libraries of newer programming languages usually contain so called collection libraries.

Collections can usually be Lists, Sets, Maps or specializations of these.

They cover quite a lot and we start seeing variants that are built on immutability and variants that allow mutability and as always the hybrid in Ruby, that combines these and does an irreversible transition using the freeze method.

There are some interesting collection types other than these, most often we find the Bag as fourth member in the club and then more complex and more specific collections.

What they all have in common is storing a finite number of elements in a certain structure.

Some languages like Clojure, Haskell or Perl 6 use so-called lazy collections. That can mean that the members are not actually stored, but that there are methods to calculate them on demand. This allows for very interesting, expressive and beautiful programming, if used properly. Typically a Range of integers is provided as a lazy collection. But there can also be quite interesting lazy collections that are a little bit more sophisticated. Some allow random access to the n-th element, like arrays or vectors or ArrayLists, some only iteration.

Interesting lazy collections could be multi-dimensional ranges. Assume we have an array of integers [n_0, n_1, n_2, \ldots, n_{m-1}] where even m is only known at runtime. Then it is a challenge that sometimes occurs to do a loop like this:

for (i_0 = 0\ldots n_0-1) {
for (i_1 = 0\ldots n_1-1) {
for (i_2 = 0\ldots n_2-1) {
\cdots
}
}
}

Which is kind of hard to write, because we cannot nest the loops if we do not know how deeply they need to be nested.

But if we have a multi-range collection and do something like this

Collection<List<Integer>> mr = new MultiRange([n_0, n_1, n_2, \ldots, n_{m-1}]);
for (List<Integer> li : mr) {
\cdots
}

then this beast becomes quite approachable.
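A minimal sketch of such a multi-range in Java; MultiRange is a made-up name, and for simplicity it iterates over int[] instead of List<Integer>:

import java.util.Arrays;
import java.util.Iterator;
import java.util.NoSuchElementException;

public class MultiRange implements Iterable<int[]> {
    private final int[] limits;

    public MultiRange(int... limits) {
        this.limits = limits.clone();
    }

    @Override
    public Iterator<int[]> iterator() {
        return new Iterator<int[]>() {
            private int[] current = new int[limits.length];
            private boolean done = Arrays.stream(limits).anyMatch(n -> n <= 0);

            @Override
            public boolean hasNext() {
                return !done;
            }

            @Override
            public int[] next() {
                if (done) {
                    throw new NoSuchElementException();
                }
                int[] result = current.clone();
                // increment like an odometer, rightmost digit fastest
                int j = limits.length - 1;
                while (j >= 0 && ++current[j] == limits[j]) {
                    current[j] = 0;
                    j--;
                }
                done = (j < 0);
                return result;
            }
        };
    }

    public static void main(String[] args) {
        for (int[] tuple : new MultiRange(2, 3, 2)) {
            System.out.println(Arrays.toString(tuple));
        }
    }
}

The \prod_j n_j tuples are never materialized; the iterator just counts like an odometer.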

A similar one, that is sometimes needed, is a lazy collection containing all the permutations of the n numbers \left\{0\ldots n-1\right\}. Again we only want to iterate over it and possibly not complete the iteration.
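This can be done with the classic next-permutation algorithm, so that only the current permutation is ever held in memory; a sketch (Permutations is again a made-up name):

import java.util.Arrays;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.stream.IntStream;

public class Permutations implements Iterable<int[]> {
    private final int n;

    public Permutations(int n) {
        this.n = n;
    }

    @Override
    public Iterator<int[]> iterator() {
        return new Iterator<int[]>() {
            private int[] current = IntStream.range(0, n).toArray();
            private boolean done = false;

            public boolean hasNext() { return !done; }

            public int[] next() {
                if (done) throw new NoSuchElementException();
                int[] result = current.clone();
                done = !advance(current);
                return result;
            }
        };
    }

    // transforms p into the lexicographically next permutation;
    // returns false if p was already the last one
    private static boolean advance(int[] p) {
        int i = p.length - 2;
        while (i >= 0 && p[i] >= p[i + 1]) i--;   // find the pivot
        if (i < 0) return false;
        int j = p.length - 1;
        while (p[j] <= p[i]) j--;                  // rightmost element > p[i]
        int t = p[i]; p[i] = p[j]; p[j] = t;       // swap pivot
        // reverse the suffix to make it ascending again
        for (int a = i + 1, b = p.length - 1; a < b; a++, b--) {
            t = p[a]; p[a] = p[b]; p[b] = t;
        }
        return true;
    }

    public static void main(String[] args) {
        for (int[] p : new Permutations(3)) {
            System.out.println(Arrays.toString(p));
        }
    }
}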

Another interesting idea is to perform set operations like union, intersection and difference lazily. That means we have a collection class Union that implements the union of its members: testing for membership is trivial, while iteration involves some additional structure to avoid duplicates. Intersection and difference are even easier, because they cannot produce duplicates.
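A sketch of such a lazy union view in Java (LazyUnion is a made-up name); contains delegates to the two sets, and the iterator filters the second set against the first to avoid duplicates:

import java.util.AbstractSet;
import java.util.Iterator;
import java.util.Set;
import java.util.stream.Stream;

public class LazyUnion<T> extends AbstractSet<T> {
    private final Set<T> a, b;

    public LazyUnion(Set<T> a, Set<T> b) {
        this.a = a;
        this.b = b;
    }

    @Override
    public boolean contains(Object o) {
        return a.contains(o) || b.contains(o);
    }

    @Override
    public Iterator<T> iterator() {
        // elements of a, then elements of b that are not in a
        return Stream.concat(a.stream(), b.stream().filter(x -> !a.contains(x)))
                     .iterator();
    }

    @Override
    public int size() {
        int n = a.size();
        for (T x : b) {
            if (!a.contains(x)) n++;
        }
        return n;
    }
}

For example, new LazyUnion<>(Set.of(1, 2), Set.of(2, 3)) contains 1, 2 and 3 and iterates over each of them exactly once, without ever building the combined set.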

What is also interesting are sets built from intervals. Intervals can be defined over any base set {\mathrm T} (type) that supports comparisons like <, <=, … We have

  • an open interval (a,b)=\left\{x \in {\mathrm T} : a < x < b\right\}
  • a left half-open interval (a,b]=\left\{x \in {\mathrm T} : a < x \le b\right\}
  • a right half-open interval [a,b)=\left\{x \in {\mathrm T} : a \le x < b\right\}
  • a closed interval [a,b]=\left\{x \in {\mathrm T} : a \le x \le b\right\}

Of these we can create unions and intersections, and in the end we can always reduce this to unions of intervals. Adjacent intervals can sometimes be merged, overlapping intervals always. If {\mathrm T} supports the concept of successors, then even closed intervals with different limits can be discovered to be adjacent, for example [1,2] and [3,4] for {\mathrm T}={\Bbb Z}. Often this cannot be assumed, for example if we are working with rational numbers with arbitrarily long integers as numerator and denominator.
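A sketch of an interval as such a lazily evaluated set, storing only the boundaries (Interval is a made-up name, not a library API):

public class Interval<T extends Comparable<T>> {
    private final T lower, upper;
    private final boolean lowerOpen, upperOpen;

    public Interval(T lower, boolean lowerOpen, T upper, boolean upperOpen) {
        this.lower = lower;
        this.lowerOpen = lowerOpen;
        this.upper = upper;
        this.upperOpen = upperOpen;
    }

    public boolean contains(T x) {
        int cl = lower.compareTo(x);
        int cu = upper.compareTo(x);
        return (lowerOpen ? cl < 0 : cl <= 0) && (upperOpen ? cu > 0 : cu >= 0);
    }

    // true if the union with the other interval is again a single interval:
    // they overlap or touch in a point that at least one of them contains.
    // (Adjacent closed intervals over the integers are not detected, as
    // discussed above, because that would need the concept of successors.)
    public boolean canMergeWith(Interval<T> other) {
        boolean thisBelow = upper.compareTo(other.lower) < 0
                || (upper.compareTo(other.lower) == 0 && upperOpen && other.lowerOpen);
        boolean thisAbove = other.upper.compareTo(lower) < 0
                || (other.upper.compareTo(lower) == 0 && other.upperOpen && lowerOpen);
        return !thisBelow && !thisAbove;
    }
}

For example, new Interval<>(0, false, 1, true) represents [0,1); it can be merged with [1,2], but not with (1,2].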

So these are three concepts to get memory-saving, easy-to-use lazy collections.


Alpine Perl Workshop

On 2016-09-02 and 2016-09-03 I was able to visit the Alpine Perl Workshop. This was a Perl conference with around 50 participants, among them core members of the Perl community. We had mostly one track, so the documented information about the talks that were given correlates quite closely with the list of talks that I actually attended.

We had quite a diverse set of talks about technical issues, but also about the role of Perl in projects and in general. The talks were in English and German…

Perl 6 is now a reality. It can be used together with Perl 5; there are ways to embed them within each other, and they seem to work reasonably well. This fills some of the gaps of Perl 6, whose set of modules is by far not as complete as that of Perl 5.

Perl 5 has for quite a few years now had a time-boxed release schedule. Each year a new major release is shipped, and the previous two releases are supported with bugfixes. The danger that major Linux distributions remain on older releases has been averted. Python 3 was released in 2008, and still in 2016 Python 2.7 is what is usually used and shipped with major Linux distributions. It looks like Perl 5 is there to stay, not to be replaced by Perl 6, which is a quite different language that just shares the name and the community. The recent versions are actually adopted, and the incompatible changes are usually so small that they do not hurt too much. An advantage of Perl is the CPAN repository for libraries: it is possible to test new versions against a ton of such libraries and to find out where they might break, or even to provide fixes for the libraries.

An interesting issue is the testing of software. For continuous integration we can now find servers that run the tests against a configurable set of Perl versions. But using different Linux distributions or even non-Linux systems is a more elaborate issue. People willing to test new versions of Perl or of libraries on exotic hardware and OSes are still welcome, and often they discover a weakness that might be of interest even for the mainstream platforms in the long run.

I will leave it with this. You can find more information on the web site of the conference.

And some of the talks are on YouTube already.

It was fun to go there, I learned a lot and met nice people. It would be great to be able to visit a similar event again…


Error Messages in Web Applications

In applications that are used by non-technical end users, which these days are very often web applications, we have to deal with the issue that unexpected errors occur.

There are the two extremes:

We can just show a screen telling in a nice design that the application does not work, and that is all. In terms of user experience that is a good approach and often the recommended way. But there is no reasonable path to recover, other than retrying.

The other extreme, which we actually see quite a lot, is this:

Stack trace of an exception shown to the user in the browser

That may be OK in some rare cases, during a beta test or for an internal application that is only used by the development team.

This was a production application, apparently developed in some MS-dotnet-language, most likely C# or ASP-dotnet, but the same issues apply to PHP, Java/JSF/JEE, Ruby, Perl, Perl 6, Python, Scala, Clojure, C, C++ and whatever you like.

Imagine Google displayed such information because of a bug in their search engine, together with a contact mail address or phone number, and millions of people called their call center every day about such bugs… It is clearly better to hide the exception from the users, and to write the software well enough that such issues do not occur too often, because too many people rely on it every day.

The other side is that there are probably some log files and the exceptions can be found there. Now the log files can be monitored manually, which becomes a bad idea as soon as there is one full-time user or more, because the logs become huge. Tools like grep, simple Perl scripts or tools like Splunk can help to deal with this. Since applications tend to be distributed, we have to deal with the fact that the same single occurrence of an error, and even more so the same kind of error, will show up in many different logs, and we need to match these entries to make sense of them and to understand the problem.

Reading logs and especially stack traces is particularly hard in framework worlds, where hundreds of levels in the stack traces come from the frameworks. Sometimes that is where the error actually is, but more often it is in the application, and anyway we are more comfortable doing a workaround rather than fixing the framework, which we could do at least if it is open source. We should actually send a bug report with as much information as possible, but avoid interpretation of what the bug could be. Such interpretation can be added as a comment to the bug report, maybe even with a hint how to fix it or a patch, but it should be kept separate.

Anyway, usually we assume the error is not in the framework and this is usually true. So it is a challenge to read this and again tools or scripts should be used to do this. It is also possible and usually necessary to find occurrences of the same kind of error. Often this is hard, because the root cause does not manifest itself and we get consequential errors much later. That is why we are IT experts, so we can find even such hidden bugs. 🙂

Now it is possible to make life a little bit easier. We can give exceptions unique IDs that can be something like this:

eeeeee-cccc-hhhh-tttttttt-nnnnnnnnn

where each block consists of digits and upper case letters.

  • eeeeee encodes the type of the exception, for example by skipping all lower case letters of the class name. It does not have to be perfect, just give a hint.
  • cccc is an error code, if such codes are used. If and when exceptions should include error codes is an interesting issue by itself.
  • hhhh encodes the host where the exception originally occurred.
  • tttt… is the time stamp. If we use milliseconds since 1970 and encode them in base 36, it can be shorter.
  • nnnn… is a number from some counter.

This is just an idea. You could use UUIDs or do something along these lines, but different. Using base 36 is actually a good idea, it makes these codes shorter.
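A sketch of how such IDs could be generated in Java; the block layout follows the example scheme above, and the class and method names are made up:

import java.net.InetAddress;
import java.util.concurrent.atomic.AtomicLong;

public class ExceptionIds {
    private static final AtomicLong COUNTER = new AtomicLong();

    public static String idFor(Throwable t, int errorCode) {
        // eeeeee: class name with the lower case letters skipped
        String type = t.getClass().getSimpleName().replaceAll("[a-z]", "");
        String code = Integer.toString(errorCode, 36);                 // cccc
        String host = hostBlock();                                     // hhhh
        String time = Long.toString(System.currentTimeMillis(), 36);   // tttt...
        String count = Long.toString(COUNTER.incrementAndGet(), 36);   // nnnn...
        return String.join("-", type, code, host, time, count).toUpperCase();
    }

    // encode a hash of the host name in at most four base-36 digits
    private static String hostBlock() {
        try {
            int h = InetAddress.getLocalHost().getHostName().hashCode();
            return Integer.toString(Math.floorMod(h, 36 * 36 * 36 * 36), 36);
        } catch (Exception e) {
            return "0";
        }
    }
}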

Anyway, having such an ID in each logged exception makes it much easier to find which log entries belong to exactly the same exception. And yes, such duplicates do occur and that is OK. Such a code could also be displayed on the screen of the end user, if it is an application where users actually have access and contact to some support team; then they can read the code to the support team. Again, aim to make it short and unique, but don’t make the whole mechanism too fragile, otherwise we end up hunting for exceptions in the exception handling framework itself, and that is not desirable.

What is important: we should actually fix bugs when we find them. Free some time for it, write unit tests that prove the bug, fix it, and make sure it does not come up again by retaining the unit tests. Yes, it is work, but it is worth it. Whether the bug justifies an immediate deployment or whether we have to wait for a deployment window is another issue, but it should be fixed at least with the next deployment. Whether regular deployments should happen twice a year or daily is an interesting issue by itself. There should always be a way to do an „emergency deployment“ in case of a critical bug, but it is good to have a strong regular mechanism so that the emergency deployment can remain an exception.


Integers in Perl 6

The language Perl 6 has been announced to be production ready at the beginning of this year. Its implementation is Rakudo, while Perl 6 itself is an abstract language definition that allows any language implementation that passes the test suite to call itself a Perl 6 implementation. The idea is not totally new; we see the Ruby language being implemented more than once (Ruby, JRuby, Rubinius, IronRuby), but we can also learn from the Ruby guys that it is a challenge to keep this up to date and that eventually one implementation is likely to fall behind or go its own way at some point.

Perl 6 carries „Perl“ as part of its name, but it is quite different from its sister language Perl, which is sometimes called „Perl 5“ to emphasize the distinction. So it is absolutely necessary to call it „Perl 6“ or maybe „Rakudo“, but not just „Perl“.

Even though many things can be written in a similar way, a major change from Perl 5 is the way of dealing with numeric types. You can find an article describing the numeric types in Perl 5. So now we will see how to do the same things in Perl 6.

Dealing with numeric types in Perl 6 is neither like in Perl 5 nor like what we are used to in many other languages.

So when we just use numbers in a naïve way, we get long integers automatically:

my $f = 2_000_000_000;
my $p = 1;
loop (my Int $i = 0; $i < 10; $i++) {
    say($i, " ", $p);
    $p *= $f;
}

creates this output:

0 1
1 2000000000
2 4000000000000000000
3 8000000000000000000000000000
4 16000000000000000000000000000000000000
5 32000000000000000000000000000000000000000000000
6 64000000000000000000000000000000000000000000000000000000
7 128000000000000000000000000000000000000000000000000000000000000000
8 256000000000000000000000000000000000000000000000000000000000000000000000000
9 512000000000000000000000000000000000000000000000000000000000000000000000000000000000

This is a nice default, similar to what Ruby, Clojure and many other Lisps use, but most languages have made a choice that is weird for application development.

Now we can also statically type this:

my Int $f = 2_000_000_000;
my Int $p = 1;
loop (my Int $i = 0; $i < 10; $i++) {
    say($i, " ", $p);
    $p *= $f;
}

and we get the exact same result:

0 1
1 2000000000
2 4000000000000000000
3 8000000000000000000000000000
4 16000000000000000000000000000000000000
5 32000000000000000000000000000000000000000000000
6 64000000000000000000000000000000000000000000000000000000
7 128000000000000000000000000000000000000000000000000000000000000000
8 256000000000000000000000000000000000000000000000000000000000000000000000000
9 512000000000000000000000000000000000000000000000000000000000000000000000000000000000

Now we can actually use low-level machine integers, which do arithmetic modulo a power of two, usually 2^{32} or 2^{64}:

my int $f = 2_000_000_000;
my int $p = 1;
loop (my Int $i = 0; $i < 10; $i++) {
    say($i, " ", $p);
    $p *= $f;
}

and we get the same kind of results that we would get in Java or C with (signed) long, if we are on a typical 64-bit environment:

0 1
1 2000000000
2 4000000000000000000
3 -106958398427234304
4 3799332742966018048
5 7229403301836488704
6 -8070450532247928832
7 0
8 0
9 0

We can try it in Java. I was lazy and changed as little as possible; the "$" is allowed as part of variable names by the language, but of course not by the coding standards:

public class JavaInt {
    public static void main(String[] args) {
        long $f = 2_000_000_000;
        long $p = 1;
        for (int $i = 0; $i < 10; $i++) {
            System.out.println($i + " " +  $p);
            $p *= $f;
        }
    }
}

We get this output:

0 1
1 2000000000
2 4000000000000000000
3 -106958398427234304
4 3799332742966018048
5 7229403301836488704
6 -8070450532247928832
7 0
8 0
9 0

And we see, with C# we get the same result:

using System;

public class CsInt {

    public static void Main(string[] args) {
        long f = 2000000000;
        long p = 1;
        for (int i = 0; i < 10; i++) {
            Console.WriteLine(i + " " +  p);
            p *= f;
        }
    }
}

gives us:

0 1
1 2000000000
2 4000000000000000000
3 -106958398427234304
4 3799332742966018048
5 7229403301836488704
6 -8070450532247928832
7 0
8 0
9 0

If you like, you can try the same in C using signed long long (or whatever is 64 bits), and you will get the exact same result.

Now we can simulate this in Perl 6 also using Int, to understand what int is really doing to us. The idea has already been shown with Ruby before:

my Int $MODULUS = 0x10000000000000000;
my Int $LIMIT   =  0x8000000000000000;
sub mul($x, $y) {
    my Int $result = ($x * $y) % $MODULUS;
    if ($result >= $LIMIT) {
        $result -= $MODULUS;
    } elsif ($result < - $LIMIT) {
        $result += $MODULUS;
    }
    $result;
}

my Int $f = 2_000_000_000;
my Int $p = 1;
loop (my Int $i = 0; $i < 10; $i++) {
    say($i, " ", $p);
    $p = mul($p, $f);
}

and we get the same again:

0 1
1 2000000000
2 4000000000000000000
3 -106958398427234304
4 3799332742966018048
5 7229403301836488704
6 -8070450532247928832
7 0
8 0
9 0

The good thing is that the default has been chosen correctly as Int and that Int makes it easy to do integer arithmetic with arbitrary precision.

Now the question is how we actually get floating point numbers. This will be covered in another blog posting, because it is a longer story of its own interest.
