There is more than one way to do it

Quite often we need to do something (system engineering, software development, database,…) and when we let ten people do it, it will be done in eleven different ways.

Some organizations like to „standardize“ things…

And to some extent that is needed actually.

But the challenge is to find what needs to be standardized and what not.

It is absolutely clear that it will lead to chaos if nothing is standardized, because stuff will not work together well and people will have a hard time taking over somebody else’s work or helping each other.

But the other extreme can be bad also… It kills creativity, and it forces people to use inferior solutions that take a long time to implement. Just think of standardizing on hammers and then using screws as nails. Also, too much standardization freezes an approach too early, when there should be different approaches competing, with the more successful ones gaining ground.

What can be done and what works: for a big piece of software it is possible that different teams work on different components. They can use different programming languages. In some cases they should actually do that, if they need specific strengths of one language in one area and of another one in another area. Of course a lot of things like interfaces, security, character encodings, networking and network architecture need to be standardized. While it may or may not be perfectly legitimate to use for example PostgreSQL, MongoDB and Cassandra in the same application, there needs to be a good reason for using two different relational databases.

I have seen this happen: at some point in time one team using programming language A was quite successful and the team using programming language B was not. And there was no strong specific benefit of language B for what they were doing, other than that they knew language B better. Companies can replace teams or train people on language A. Or try to train them and replace some of them, depending on the success of the transition. Possibly it is not the language, but the team that was the reason… Not necessarily the people, but for some reason they got on the wrong track and were unsuccessful this time… In the end, the project decided to standardize on language A and have this part rewritten by people who knew language A quite well. Even though this meant writing stuff twice, in the end it brought a much better result. This „rewriting something with better technology“ can be a dangerous trap. Often the second solution takes ten times longer to write until it does what the „inferior old solution“ did. But in this case that problem did not occur at all.

Now we should look at the IT work from a more abstract point of view.

Some work is done and in the end there is a result. This can be source code, a running piece of software, a server setup, a document, a database setup, a network setup,…

As a general rule, we want to standardize some aspects of the result. How the result is achieved should be left to the engineer.

So for many ad-hoc tasks people can use different tools, like different editors and IDEs, different configurations for these, scripts in different scripting languages etc. Of course efficiency is a goal. Scripts can even be documented or checked into git for optional usage by those who like to use them. People can use Perl, Python, Ruby, Bash or something else for scripting. But the scripts can actually become results and then become candidates for standardization. For example, for managing a large number of servers, a lot can be done using Ansible, which can be seen as a kind of fourth generation (declarative) language for doing exactly this. It makes sense to agree on using it all over the place; then the Ansible playbook, and not only the result of its execution, is the result of the work and is probably checked into git.
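
To illustrate the difference, here is a small sketch (the inventory and playbook names are made up, and it assumes Ansible is installed on the machine you work from):

# ad-hoc: fine as a one-off action, nobody needs to keep or review this
ansible -i inventory.ini webservers -m ping

# a playbook is a result: it documents the intent, can be reviewed, reused and improved
ansible-playbook -i inventory.ini site.yml
git add inventory.ini site.yml
git commit -m "playbook that sets up the web servers"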

When working on Linux servers, vi, or more likely vim, will be installed on every server, with the default configuration, at least for root and other system users. It is possible for an organization to agree to install a second editor on every server without too much effort. But it is usually not done and it does not really solve the issue, so most of the time editing on servers is standardized to using vi. Get used to it, it will be like that anyway in the next organization you are working for.

When someone writes a super complex script that nobody else understands, then this can be a combination of unnecessary complexity and inherent complexity. If the inherent complexity is high, then it is quite likely that this script will and should be reused, and then it is no longer an ad-hoc solution but becomes a result. Then it becomes of interest to make it reasonably maintainable and readable, so others can work with it and improve it.

But on all levels there are often different ways to do something that are all legitimate and good and efficient for the purpose.


Configuration Files

Just a short story as a starter:

I went to a Covid-19 test and a colleague asked the very legitimate question why I was doing a test in spite of being vaccinated twice.

This has a lot to do with configuration of software.

The rules that local, regional, national or EU governments impose are like configuration files. They typically prepare rules, publish them and activate them on a given date, but sometimes they become active immediately. The first case is like using a more or less clean process for managing configurations, and the second case is like ad-hoc editing with vi. Both have their place, but the second case should be understood as a rare exception that is needed to keep the system running, and there should be steps afterwards to document the change and to feed it into the regular process.

So the rules set by governments configure, for example, border controls and controls at airports. At each border and at each airport or port there are natural checkpoints where a person can be stopped. When travelling by train, by bicycle, by car or by motorcycle, there are usually only the border controls.

Now these configurations change frequently and there are a lot of complex rules. Like configuration files of software in real life.

So it can happen that somewhere they ask for a test even if the person has been vaccinated. Or in other situations the vaccination is enough. Or the proof of vaccination has to be given in a certain format that is maybe still impossible to obtain. Or the vaccination only counts if it has been done using a certain vaccine. So it is advisable to equip oneself in such a way that it is possible to pass absolutely all checkpoints of the whole trip.

Now there can be errors in the system. For example, the persons who run a checkpoint might not know the rules valid on that particular day and stop someone who actually complies with the rules.

It is said that already 2000 years ago the configuration of customs controls was falsified in an ad-hoc manner. Not only by mistake, but actually deliberately.

I think that this describes quite well the challenges and complexities of managing software configuration.

So almost every software has some configuration that comes with it.

This can be flat files that are more or less specific to configuration.

It can be stored in a database. This can be a classical SQL database or a NoSQL-database or some kind of lightweight database that is accessed only via a library.

Configuration can be prepared in advance.

Or it is edited on the server.

Or it is edited using settings of the software via its GUI.

Now for a seriously managed software installation, it is a good idea to have well defined installation artifacts, which include the configuration. But less technical configuration can actually be changed by sophisticated end users, and this can actually be the right way to do it.

It is a good idea to keep some understanding of the configuration of the software. A way to do this is to have it in git, possibly using tags and/or branches. I do not recommend actually having git on all the machines directly, but rather having some configuration master server on which the files are edited, prepared and pushed into git. Then they can be pushed to the servers by some mechanism, via tools like Ansible or via simple scripts…
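
A minimal sketch of such a workflow could look like this (paths, file names and the server group are made up; it assumes an Ansible inventory and ssh access from the configuration master to the servers):

# on the configuration master: edit and record the change
cd /srv/config/nginx
vi nginx.conf
git add nginx.conf
git commit -m "increase proxy timeouts"

# push the recorded file out to the servers and reload the service
ansible -i inventory.ini webservers -m copy -a "src=nginx.conf dest=/etc/nginx/nginx.conf" --become
ansible -i inventory.ini webservers -m service -a "name=nginx state=reloaded" --become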


Remote Work (15 months later)

After more than 15 months of mostly remote work, some more experience has been gained on this issue than in April 2020.

It looks like conservative organizations still have a hard time getting used to this concept and instead insist that they are this rare special exception that absolutely needs to be in the office.

Usually privately owned companies are more innovative and small companies are more innovative. But I have observed the contrary… Some small companies are very conservative and really reject adopting technology and processes that allow working remotely efficiently. And some relatively big, state owned companies manage to do this really well.

In the end it depends on the people. If the company does not bully away good people and gives them some freedom, they can organize good stuff, just for the love of technology (and of course they get paid). If a small company has a really good business idea and has implemented this idea really well, it can afford to be a bit less efficient in some usually minor areas.

Now for Covid-19, countries had to make different choices. One strategy was to shut down as much as possible to get Covid-19 down to zero, which is what Australia, New Zealand and China tried. And which implies, if this strategy is retained, keeping the borders closed for several years in almost iron curtain style. Most countries have followed a strategy of just keeping the numbers at a level that can be dealt with, for example Sweden and Switzerland. And some countries have tried to get vaccinations very early and very fast… Now the question was whether the people or the companies have the stronger influence on the government. In Sweden and Switzerland companies were very strongly urged to let their people work remotely whenever possible, but these countries never imposed restrictions that constrained basic human rights, like France did in the first phase of Covid or Germany more recently. Anyway, responsible companies made remote work possible where it is reasonable, to at least reduce the density in the office and, not to forget, to reduce the density in the trains and public transport for the commute. For our American friends: in modern countries, a lot of people come to work by train, by bus and by bicycle. Which they do in the US as well in the Bay Area and in New York.

So now, what are the pros and cons of remote work? What can we learn from it?

After a year or so, people got used to working remotely. The technology works. Meetings work. So it has become a real option for many tasks.


Pros:

  • You do not have to spend time on the commute.
  • You can eat lunch at home.
  • You do not get distracted by working in a loud office.
  • You can have a nap after work.
  • If remote work from abroad is allowed and you can afford it: you can work from really cool places.

Cons:

  • When living alone, most of us really miss meeting people after some time…
  • Some (actually not so many) meetings are more efficient in person.
  • You get a cleaner separation between work and free time when working in the office.

So what are the right answers:

I think it is good to let people choose if and how much they work from home. The point is: if there is a task that can be done really better in the office, it is good to come to the office… If there is a task that can be done better from home, it is good to be able to just tell this to the team and do it from home. At the end of the day, some people will like to work 100% in the office, some will like to work almost 100% remotely and many will work some „mix“. Of course it is good to make sure the commute is not too long or at least not too unpleasant. Going by train allows making much better use of the time than driving. And commuting by bicycle has the side effect of being free exercise and for that reason being the only mode of transport that increases life expectancy. I am writing this blog post on my laptop in a train. My advice: if you like to work at home and are allowed to choose, I recommend going to the office a few times a month, like once a week or like a few adjacent days, depending on the distance.

A challenge will be: when nine people sit in the office and have a meeting and the tenth is participating remotely via a Jabra device, it requires a lot of discipline not to lose the remote guy. Those who have experienced it know what I am talking about.

So at the end of the day: we can all profit from what we have learned during the pandemic and use our experience to apply remote work even when there is no virus and no government that forces this on us.


Unit and Integration Test with Databases

From an ideological point of view, tests that involve the database may be unit tests or may be integration tests. For many applications the functional logic is quite trivial, so it is quite pointless to write „real unit tests“ that work without the database. Or it may be useful.

But there is a category of tests that we should usually write, that involve the database and the database access layer and possibly the service layer on top of that. If layers are separated in that way. Or if not, they still test the service that is heavily relying on a database. I do not care too much if these tests are called unit tests or integration tests, as long as these terms are used consistently within the project or within the organization. If they are called integration tests, then achieving a good coverage with these integration tests is important.

Now some people promote using an in-memory database for this. So at the beginning of the tests, this database is initialized and filled with empty tables and on top of that with some test data. This costs almost no time, because the DB is in memory.

While this is a valid way to go, I do not recommend it. First of all, different database products always behave differently. And usually it is a bad idea to support more than one, because it significantly adds to the complexity of the software or degrades the performance. So there is this database product that is used in production. For example PostgreSQL or Oracle or MS-SQL-Server or DB2. That is what needs to work. That means the tests need to run against this database product, at least sometimes. This may be slow, so there is a temptation to use H2 instead, but that means writing tests in such a way that they support both database products. And enhancing the code to support, at least to a minimal extent, the H2 database as well. This is additional work and additional complexity to maintain.

What I recommend is: Use the same database product as for production, but tune it specifically for testing. For example: You can run it in a RAM disk. Or you can configure the database in such a way that transactions do not insist on the „D“ of „ACID“. This does not break functionality, but causes data loss in case of a crash, which is something we can live with when doing automated tests. PostgreSQL allows this optimization for sure, possibly others as well.
Now for setting up the database installation, possibly with all tables, we can use several techniques that allow having a database instance with all table structures and some test data on demand. Just a few ideas that might work:

  • Have a directory with everything in place, copy it into the RAM disk and start the database from there
  • Just have a few instances ready to use, dispose of them after the tests and immediately prepare new ones for the next test
  • Have a docker image that is copied
  • Have a docker image that is immutable in terms of its disk image and use a RAM disk for all variable data. Much like Knoppix
  • Do it with VMware or VirtualBox instead…

At the end of the day, there are many valid ways. And in some way or other, a few master images have to be maintained and all tests start from such a master image. Whether it is done on classical servers, on virtual servers, on a private cloud, on a public cloud or on the developer’s machine does not really matter, as long as it works efficiently.

If there is no existing efficient infrastructure, why not start with docker images that contain the database?
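
As a sketch of how cheap this can be (the image version, port and password are just placeholders): the official PostgreSQL docker image can be started with its data directory on a tmpfs, i.e. in RAM, and with the durability-related settings relaxed, which is exactly the kind of tuning mentioned above.

docker run --rm -d --name testdb \
  --tmpfs /var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=test -p 5433:5432 \
  postgres:13 -c fsync=off -c synchronous_commit=off -c full_page_writes=off

# the test suite connects to port 5433, loads the schema and the test data and runs;
# afterwards the container is simply thrown away
docker stop testdb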

We need to leave behind the old, traditional thinking that a database instance is expensive and that there can only be few of them on the server…

We can use the database product that is used for production environments even for development and testing, and have separate, fresh images for testing. We should demand the right thing and work hard to get it instead of spending too much time on non-solutions that work only 80%. It can be done right. And laziness is a virtue, and doing it right in this case serves exactly this virtue.


Electronic Vaccination Certificates

It is becoming increasingly important to be able to easily prove a vaccination against Covid-19 in a way that works at least throughout Europe.

This can be a piece of paper and it can be an app.

People are very concerned about faking the certificate.

This can happen and it is of course a serious crime.

The problem is that this is needed really fast, and thinking of a typical software project, it is very ambitious to go live in two months if the project has not really started yet.

But leaving that timing issue aside, is it a reasonable idea?

Actually there are apps for railway tickets, boarding passes and of course for helping to prove one’s identity when logging into some e-banking systems.

In Switzerland railroad tickets are either bought for a specific ride, or people buy a „flat rate“ and can use any train within Switzerland as often as they want, having already paid for it with the annual payment. And there is a combination of both, where a much lower annual rate makes the individual tickets cheaper.

Now there is a mobile app and a chip card. Both provide a QR code. This allows the trainmen to scan it with the app on their phone and to see the photo of the person and the person’s subscriptions. The actual ticket for people without the flat rate is another QR code. This has been in use for a few years now and seems to work quite well.

Of course it is based on the fact that railroad trainmen are a relatively small, trusted group that has access to this kind of information about travelers. That is a different issue from having a terminal to scan the app at every cinema, restaurant etc. throughout Europe. The photo is a convenience, but it could be replaced by showing an ID. The QR code could be signed with the private key of a health agency, whose public key is available to the public and allows verifying that it is correct. This way it could just provide all the information without using a server. The other variant would be that the QR code is a key for a lookup on a server. But not depending on storing the important data on a server and instead having it all in the QR code is actually a good idea. If it is desired to provide test results as well, then it might be a bit easier to provide access to a server and store all the information there. But since vaccinations and tests can be done in any country, and it probably is not a reasonable or desirable idea to implement some mutual data exchange between the servers, and it is probably also not possible to tap the NSA servers for this information, it will probably end up with one QR code for each event anyway, where events are tests, vaccinations and maybe recoveries from the actual Covid-19 sickness. In this case they could work without accessing a server.
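
A rough sketch of the signature idea with standard tools (the file names are made up, and a real system would of course define an exact payload format and a way to pack payload and signature into the QR code, for example with a tool like qrencode):

# done once by the health agency
openssl genrsa -out agency-private.pem 3072
openssl rsa -in agency-private.pem -pubout -out agency-public.pem

# done when a certificate is issued
openssl dgst -sha256 -sign agency-private.pem -out certificate.sig certificate.json

# done by the verifying app, which only needs the public key
openssl dgst -sha256 -verify agency-public.pem -signature certificate.sig certificate.json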

The whole functionality is probably not too difficult to build, but these security related software projects tend to require working even more thoroughly than usual. Also it is probably important to look into the issue of data privacy, which might complicate things a lot, especially due to the fact that anybody in the world can read the certificate when it is shown and thereby access sensitive data of the user. And users have to show it every day on many occasions…


Spline Approximation (Mathematics II)

This is the third article of a series about spline approximation. If you have not done so, you should start by reading the previous articles.

So we are looking for a function g which approximates a given set of points \{(\xi_j, \eta_j)\}_{j=1\ldots N} and which is a linear combination

    \[(1)\thickspace g(x) = \sum_{i} a_i f_i(x)\]

of functions f_i, as described in the previous article with

    \[(2)\thickspace f(x)= \begin{cases} 0 &\text{for } x \le -2\\ (x+2)^3&\text{for } -2 < x \le -1\\ 1+3(x+1)+3(x+1)^2-3(x+1)^3&\text{for } -1 < x \le 0\\ 1+3(1-x)+3(1-x)^2-3(1-x)^3&\text{for } 0 < x \le 1\\ (2-x)^3&\text{for } 1 < x \le 2\\ 0&\text{for } x > 2 \end{cases}\]

and with h chosen such that x_i=x_0+i*h for all i

    \[(3)\thickspace f_i(x)=f(\frac{x-x_i}{h}) \text{ for } i=-1,\ldots,n+1\]

Now up to four f_i can overlap in one sub-interval. Assuming the subinterval is

    \[(4)\thickspace S = [x_i, x_{i+1}]\]
and the arithmetic mean of the borders is

    \[X = \frac{1}{2}(x_i+x_{i+1}) = x_i + \frac{h}{2} = x_{i+1} - \frac{h}{2}\]

This interval is touched by the base functions f_{i-1}, f_i, f_{i+1}, f_{i+2}, so the function g can be written for x\in S, using the abbreviations a=a_{i-1}, b=a_{i}, c=a_{i+1}, d=a_{i+2}, as

    \[(5)\thickspace g(x) = a f_{i-1}(x) + b f_{i}(x) + c f_{i+1}(x) + d f_{i+2}(x)\]

    \[= af\left(\frac{x-x_{i-1}}{h}\right)    +bf\left(\frac{x-x_{i}}{h}\right)    +cf\left(\frac{x-x_{i+1}}{h}\right)    +df\left(\frac{x-x_{i+2}}{h}\right)\]

    \[= a\left(2-\frac{x-x_{i-1}}{h}\right)^3\]

    \[+b\left(1+3\left(1-\frac{x-x_{i}}{h}\right)+3\left(1-\frac{x-x_{i}}{h}\right)^2-3\left(1-\frac{x-x_{i}}{h}\right)^3\right)\]

    \[+c\left(1+3\left(\frac{x-x_{i+1}}{h}+1\right)+3\left(\frac{x-x_{i+1}}{h}+1\right)^2-3\left(\frac{x-x_{i+1}}{h}+1\right)^3\right)\]

    \[+d\left(\frac{x-x_{i+2}}{h}+2\right)^3 \text{ using (2), (3) and (5)}\]

Now we have

    \[x_{i-1} = X - \frac{3\,h}{2},\quad x_{i} = X - \frac{h}{2},\quad x_{i+1} = X + \frac{h}{2},\quad x_{i+2} = X + \frac{3\,h}{2}\]

and thus

    \[g(x) =a\,\left(2-{{x+{{3\,h}\over{2}}-X}\over{h}}\right)^3\]

    \[+b\,\left(-3\,\left(1-{{x+{{h}\over{2}}-X}\over{h}}\right)^3+3\,\left(1-{{x+{{h}\over{2}}-X}\over{h}}\right)^2+3\,\left(1-{{x+{{h}\over{2}}-X}\over{h}}\right)+1\right)\]

    \[+c\,\left(-3\,\left({{x-{{h}\over{2}}-X}\over{h}}+1\right)^3+3\,\left({{x-{{h}\over{2}}-X}\over{h}}+1\right)^2+3\,\left({{x-{{h}\over{2}}-X}\over{h}}+1\right)+1\right)\]

    \[+d\,\left({{x-{{3\,h}\over{2}}-X}\over{h}}+2\right)^3\]

Now let

    \[y = x - X\]

and hence

    \[x = y + X\]

and substitute that:

    \[g(x) =a\,\left(2-{{y+{{3\,h}\over{2}}}\over{h}}\right)^3\]

    \[+b\,\left(-3\,\left(1-{{y+{{h}\over{2}}}\over{h}}\right)^3+3\,\left(1-{{y+{{h}\over{2}}}\over{h}}\right)^2+3\,\left(1-{{y+{{h}\over{2}}}\over{h}}\right)+1\right)\]

    \[+c\,\left(-3\,\left({{y-{{h}\over{2}}}\over{h}}+1\right)^3+3\,\left({{y-{{h}\over{2}}}\over{h}}+1\right)^2+3\,\left({{y-{{h}\over{2}}}\over{h}}+1\right)+1\right)\]

    \[+d\,\left({{y-{{3\,h}\over{2}}}\over{h}}+2\right)^3\]

    \[={{d\,y^3}\over{h^3}} -{{3\,c\,y^3}\over{h^3}} +{{3\,b\,y^3}\over{h^3}} -{{a\,y^3}\over{h^3}}\]

    \[+{{3\,d\,y^2}\over{2\,h^2}} -{{3\,c\,y^2}\over{2\,h^2}} -{{3\,b\,y^2}\over{2\,h^2}} +{{3\,a\,y^2}\over{2\,h^2}}\]

    \[+{{3\,d\,y}\over{4\,h}} +{{15\,c\,y}\over{4\,h}} -{{15\,b\,y}\over{4\,h}} -{{3\,a\,y}\over{4\,h}}\]

    \[+{{d}\over{8}} +{{23\,c}\over{8}} +{{23\,b}\over{8}} +{{a}\over{8}}\]

    \[=\frac{d-3\,c+3\,b-a}{h^3}\,y^3 +\frac{3\,d-3\,c-3\,b+3\,a}{2\,h^2}\,y^2\]

    \[+\frac{3\,d+15\,c-15\,b-3\,a}{4\,h}\,y +\frac{d+23\,c+23\,b+a}{8}\]

This will be used to actually calculate g in programs.
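
As a quick plausibility check: at the borders of the subinterval we have y=-\frac{h}{2} at x=x_i and y=\frac{h}{2} at x=x_{i+1}, and according to (2) the basis functions take the values f(\pm 1)=1, f(0)=4 and f(\pm 2)=0 there, so the formula has to reduce to

    \[g(x_i) = a + 4\,b + c \quad\text{and}\quad g(x_{i+1}) = b + 4\,c + d\]

which is indeed what comes out when y=\mp\frac{h}{2} is inserted into the polynomial above.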

Disclaimer: I am not an expert in numerical analysis. While I believe that the approach that this article comes up with is sound and useful, I do believe that an expert of numerical analysis could still improve the accuracy of the calculations.

How to actually program it will be covered in the upcoming article „Spline Approximation (Cookbook)“. The link will be added when it is available.


Git for Linux System Engineering

Git has now become the standard version control software used by software developers.

For system engineering and system administration purposes, the approach used to be to just log in, do something and remember it or maybe note it down somewhere.

Some people tried to just use RCS or SCCS on the server to create a local repository for the versions of a configuration file. Some thought is needed as to whether this is OK, because in the process of checking in a file, it might be temporarily unavailable or inconsistent, because RCS can modify files in the process. Usually this is not a problem, but if these kinds of problems occur, they are kind of weird to find and fix.

Today the approach has changed a lot. With virtualization it has become possible to build up thousands of servers. Or even millions, if you go to companies like Google. Typically servers perform a specific task and another server is used for another specific task, and since almost all of them are virtual, that is possible. Thus it becomes necessary to run and maintain thousands of servers, and that means that things have to be done really efficiently.

An important approach is to automate a lot of things. This can be done for example using Ansible or Perl, Ruby or Python. Or combinations of these. Or there can be a cloud that supports a lot of features out of the box. And tools that automate certain tasks…

So at the end of the day, it is no longer the typical approach to log in to a server and do certain things, but to work on some kind of master server, prepare things and then run them on multiple servers from there. So the scripts and configuration files are created on this master server, and now it becomes perfectly natural to use git to check them in.

Another engineer can clone the repository and perform the same task on a new server, for example.

Also it is a good idea to create branches and work with the branch on a test server until it looks good, and then merge it to the main branch. Even tags and release numbers make sense in some environments.
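
A sketch of how this can look on such a master server (the branch and file names are made up):

git checkout -b tighten-ssh-ciphers
vi files/sshd_config
git commit -a -m "tighten ssh ciphers"
# roll the branch out to a test server only, verify the behavior there, then:
git checkout main
git merge tighten-ssh-ciphers
git tag -a 2021-07-rollout -m "rolled out to all servers"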

Only one tiny difference: while software developers tend to use IDEs like IntelliJ IDEA, Eclipse, Visual Studio or editors like Atom or Emacs, system engineers tend to use vi. This has several advantages: first of all, it is installed on every Linux server (unless you create one and uninstall it to disprove this, of course), it runs well across an ssh session even with relatively slow internet connections, and it is powerful once you learn how it works. So I recommend learning just the basics to get simple stuff done. And I recommend avoiding heavy customization of vi, because that might cause problems when working on a server without the customization.

And even though today work is done mostly on these master servers, it does of course still happen that it becomes necessary to log in to a specific server to check whether the stuff worked as desired or to do tiny tasks that have not been automated. But remember: laziness is good. You should hate doing repetitive work on multiple servers and figure out how to automate it, if it happens for the third time or if it can be anticipated that it will happen more often. Automation not only creates efficiency, it also allows for better collaboration (via git), better consistency and better stability. It is the way to go.


Starting processes while booting (Linux)

When a Linux system is booted, we want certain processes to run immediately.

In the old days, that is 25 years ago or so, this was done in „BSD-style“ by having certain magical shell scripts that start everything in the right order. When adding another service, this just had to be added to the shell script and that was it.

Then this was replaced by System-V-style init. So the system had certain runlevels. The desired runlevel was configured somewhere. And for each runlevel there was a directory which contained a lot of scripts or mostly softlinks to scripts. The scripts starting with S were used for starting and the scripts with K were used for stopping. The ordering was achieved by adding a two digit number immediately after the letter S or K.

Important runlevels were:

  • single user mode, which is a very special way to boot the system for maintenance tasks that cannot otherwise be achieved. I have used this only a few times in my life when the system was really seriously messed up…
  • multi user mode with network and no graphical UI. This is what most servers are running
  • multi user mode with graphical UI. This is what most Laptops are running

It was possible to boot into a certain runlevel by configuration. And to switch to another runlevel…

Now this has given way to another, more versatile approach called systemd.

Processes that need to be started are configured by a so-called service file. It contains information about how to start and stop the process and about dependencies on other services or abstract groups called „targets“. Runlevels are expressed as targets and have names instead of numbers.
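
A minimal service file, for example /etc/systemd/system/data-import.service, might look like this (the service name and paths are made up):

[Unit]
Description=Hypothetical data import service
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/data-import --config /etc/data-import.conf
Restart=on-failure
User=dataimport

[Install]
WantedBy=multi-user.target

After creating or changing such a file, systemd has to be told to reread its configuration with systemctl daemon-reload.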

A service can be enabled by

systemctl enable something.service

which means it will be started automatically when booting.

It can be disabled with

systemctl disable something.service

And it can be started and stopped with

systemctl start something.service
systemctl stop something.service

The status is queried with

systemctl status something.service

There is a lot more to discover… For example, there are timers to run a certain task repeatedly (instead of putting it into crontab).

If you need to shut down or start a service at certain times, it is a good idea to always perform this via systemctl and to call systemctl from the script instead of going directly to the startup script of the process, because systemctl start/stop stores information that allows systemctl status to work.

It should be observed that systemd not only calls scripts to start and stop services, but also keeps track of the processes that are created by this. So it is very important to always start and stop them via systemctl and not to bypass this.

It is nice how beautiful and consistent this solution is compared to the previous solutions…


Perl & Java

How do we use Perl and Java together? Unlike with many other languages that, for example, run in the JVM, it is not particularly easy to combine them. But that is not the idea anyway.

A good starting point is thinking about houses and furniture. While it is perfectly possible to build houses of wood or furniture of concrete, the common practice is to build the house of concrete, stone (plus some other „hard“ materials) and to build furniture mostly of wood or materials that are somewhat like wood. There is a point behind this. The house should be durable and it remains the same for a long time. Changing the house is an expensive operation.

Furniture is individual and different people who use different parts of the house usually bring their own furniture. It is easy to change and it does not need to be as hard, because being inside the house it is protected from wind, snow, rain and to some extent even temperature changes.

And now think about the tools that we use to build the house. They are not made of concrete or stone at all.

Materials have their strength and weaknesses. We can use the common materials for the task. We can substitute them to some extent. Good wooden houses exist, but wooden skyscrapers are rare.

We also have different programming languages with certain strengths and weaknesses. We can substitute them to some extent as well. Good programs exist that are written in many different languages, even combinations of languages. Who likes PHP? And who likes Wikipedia?

There are many good JVM languages, like Java, Scala, Kotlin, Clojure, JRuby, Ceylon, Groovy… They are widely accepted by the people who manage the budgets. And the operations teams know how to work with software in JVM languages. It is easy to find developers, maybe a lot easier for Java than for Ceylon. Ceylon and Kotlin came out at the same time and with very similar goals. Simply said: Kotlin won and Ceylon lost. And the performance of JVM languages is close enough to that of native C/C++, which is an amazing success of the developers who wrote and improved the JVM. We kindly ignore that this comes at the expense of startup times and memory consumption, but even these issues are being worked on and we will see further progress with GraalVM. It runs on Linux servers, which is what we usually want to use, but it would also run on other common servers. This is excellent building material for houses.

The cons of JVM software: high memory footprint and startup times. The memory footprint can be solved by throwing some money at it, and usually that is a reasonable option when looking at the big picture. Startup times do not really matter, because we just let our server run for a long time.

So for scripts that do small to moderate size jobs on the command line, Perl seems to be superior to Java. When it comes to parsing text and using regular expressions, Perl profits from having the best regular expression engine, second maybe to that of Raku, which is the former Perl 6.

Perl can be used to edit Java programs. So you write a Perl script that performs simple or more complex transformations on a possibly large set of Java classes or even creates Java classes. This allows doing things that are not available in the „refactoring tab“ of the IDE.
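
A hypothetical example of such a transformation as a one-liner (the class and method names are made up, the point is the pattern):

perl -i.bak -pe 's/\bLogFactory\.getLog\(/LoggerFactory.getLogger(/g' $(find src/main/java -name '*.java')

The -i.bak switch keeps a backup of each changed file, which is a cheap safety net when running such a transformation over a whole code base.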

You can call Perl programs from Java to perform tasks that are impossible or too difficult to do in Java. Because Java chose to be „OS-independent“, certain low-level OS functionality cannot be accessed from Java, but this is possible from Perl.

And Perl can be used to start Java programs. Today we see really horrible shell scripts to start something like Tomcat. For the Windows variant they add a bat file that is even much worse. A Perl script can much more easily set up the environment for such a Java program and start it, and there is no need to write it twice. Or use Perl scripts to perform installation tasks for the software.

Another interesting application is testing. Perl can be used to create automated tests, to prepare test data or to parse result data. Or to anonymize data.

Most of these things can just as well be done with Python and Ruby and probably some other languages. For certain tasks Ruby or Python are probably even better than Perl. And you can very well think of other JVM languages instead of „Java“. Many of these things can be done with specific tools that serve the task well. Which can be a good idea, but it can also constrain you to what the tool supports, as in the refactoring case with the IDE as a tool.

It is sometimes a good idea to combine a language like Clojure, Java, C, C++ or Scala with a scripting language like Ruby, Python or Perl.


Happy Easter

Happy Easter!
