Functional Scala

I participated online in the conference „Functional Scala 2020“ in London. That it was in London mattered mostly for one thing: the time zone. There was no physical location and all talks were given online. An interesting idea was the virtual location. It consisted of rooms, and we could move a dot representing ourselves around. Each room was drawn as a beautiful landscape, a map of a different climate zone. When I moved my dot closer to other participants, I could hear what they said, as in real life, and have some nice conversations that way.

A lot of things were said about Scala 3, which will be a big step forward, but also a disruptive step, because it is not compatible with Scala 2. So some work will be necessary to move on to Scala 3, but we will gain a language that is better for beginners, intermediate and advanced Scala developers alike.

I am really looking forward to Functional Scala 2021, hopefully in London.


How to disable touchpad (on Linux/X11)

For me it is much better to use an external mouse than the touchpad, which I sometimes touch accidentally.

So, here is how to disable it with a short Perl script. A bash script with a bit of Perl would do the same, by the way.


#!/usr/bin/perl
# find the touchpad in the xinput device list (assumes exactly one match)
my $tp = `xinput list | egrep -i touch`;
chomp $tp;
# extract the numeric device id from something like "... id=12 ..."
$tp =~ s/.+id=(\d+).+/$1/;
# disable the device via its "Device Enabled" property
system "xinput set-prop $tp 'Device Enabled' 0";
print "Touchpad disabled\n";


Devoxx UA 2020 (talks)

I watched the conference online and picked the following talks:

And on the second day:


Devoxx UA

Most conferences have been cancelled, since it is difficult to hold a conference these days. The idea of moving a conference online has obviously been around, but it was rejected by most organizers, because it is not the same and the all-important chance to meet other people in person just gets lost.  So the Devoxx in Antwerp, which I like to visit every year, did not happen.  But Devoxx Ukraine decided to go online.

So how did it work?  There were three tracks.  Each track was represented by a YouTube channel, on which the live talks were streamed.  Before and after the talks, professional moderators appeared in these channels, announced the speakers and did what moderators do at normal conferences.  The Devoxx app worked on my cell phone and seemed to have the most up-to-date schedule.

Some talks were really excellent.  I enjoyed them a lot, even online.  For talks that were not so good, it required more discipline to stay tuned.

The discussions took place in Zoom channels that belonged to the three tracks.  So a discussion could technically last until the end of the next talk.  The discussions in Zoom also worked quite well, which was a surprise.

I think it is probably the right reaction to have fewer conferences than usual in a year, to cancel some and to move some online, exactly as it is happening.  I think DevoxxUA had around 10’000 visitors, so they absorbed the audiences of several conferences.  Also some regular speakers at Devoxx conferences were not giving talks, so I assume that part of the speakers also decided that the online format is not ideal for them.

I will write about the contents in another blog article.


How to rename files according to a pattern

We often encounter situations where a large number of files should be copied, renamed, moved or something like that.
This can be done on the Linux command line, and it should work in almost the same way on the Unix/Linux/Cygwin command line of newer MS-Windows or MacOS-X.

Now people routinely do that and they have developed several ways of doing it, which are all valid and useful.

I will show how I do things like that. It works and it is not the only way to do it.

So in the simplest case, all files in a directory ending in ‚.a‘ should be renamed to ‚.b‘.

What I do is:


ls *.a \
|perl -p -e 'chomp;$x = $_;s/\.a$/.b/;$y = $_; s/.+/mv $x $y\n/;' \
|egrep '^mv '\
|sh

You can run it without the last |sh, to check if it really does what you want.

So I use the file names as input to a short Perl script and create shell commands. It would actually be possible to do this in Perl itself, without piping it into a shell:


ls *.b \
|perl -n -e 'chomp;$x = $_;s/\.b$/.c/;$y=$_;rename $x, $y;'

You could also read the directory from Perl itself, which is quite easy, but for just quickly doing stuff, I prefer getting the input from some ls.
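
Reading the directory from Perl could look like this; just a sketch, doing the same ‚.a‘ to ‚.b‘ renaming as the first example:

#!/usr/bin/perl
# just a sketch: read the current directory from Perl and rename *.a to *.b
opendir(my $dh, '.') or die "cannot open directory: $!";
for my $x (readdir $dh) {
    next unless -f $x && $x =~ /\.a$/;   # only plain files ending in .a
    (my $y = $x) =~ s/\.a$/.b/;          # derive the new name
    rename $x, $y or warn "could not rename $x: $!";
}
closedir $dh;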

To go into sub directories, you can use find:


find . -name '*.c' -type f -print \
| perl -n -e 'chomp;$x = $_;s/\.c$/.d/;$y=$_;rename $x, $y;'

You can also rename all the files that contain a certain string:

find . -name '*.html' -type f -print \
|xargs egrep -l form \
|perl -n -e 'chomp; $x=$_;s/\.html$/.form/;$y=$_;rename $x, $y;'

So you can combine this with all kinds of shell commands and really do a lot of things in one line.

Of course you can use Raku, Ruby, Python or your favorite scripting language instead, as long as it allows some simple pattern matching and an efficient implicit iteration over the lines.

For such simple tasks there are also ways to do it directly in the shell, like this:

for f in *.d ; do mv $f `basename $f .d`.e; done

And you can always use sed, possibly in conjunction with awk instead of perl for such simple tasks.
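
Just as a sketch, a sed variant of the very first example could look like this:

ls *.a \
|sed 's/\(.*\)\.a$/mv \1.a \1.b/' \
|sh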

Another approach is to just pipe the file names into a text editor that is powerful enough and to create a one-time script using powerful editing commands.
On Linux and Unix servers we almost always use vi, even people like me, who prefer Emacs on their own computer:

ls *.e > tmpscript
vi tmpscript

and then in vi


:1,$s/\(.*\)\(\.e\)$/mv \1\2 \1.f/
ZZ

and then

sh tmpscript
rm tmpscript

So, there are many ways to achieve this goal, and they are flexible and powerful enough to really do a lot more than just such simple pattern renaming.

If you work in a team and put these things into scripts, it might be necessary to follow a team policy about which scripting languages are preferred and which patterns are preferred. And you need to know the stuff that you write yourself, but also the stuff that your colleagues write.

Please, do not do

mv *.a *.b

It won’t work for good reasons.
On Linux and Unix systems the shell (usually bash) expands the glob expressions (the stuff with the stars) into a list of strings and then starts mv with these strings as parameters. So when mv is called with a mixture of file names ending in .a and .b, it cannot know what to do. When called with more than two parameters, the last one needs to be a directory to move the files into, so usually it will just refuse to work.
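
You can see what mv would actually get by letting echo print the expanded command line first:

echo mv *.a *.b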


GIMP

In spite of working mostly on server software and server setup, using powerful non-graphical command line tools and scripting languages, it is sometimes fun to work with something very graphical. I did talk about Clojure Art, which is fun, creates interesting visual results and helps getting into the fantastic language Clojure. But more than twenty years ago I discovered GIMP, which is the main image editor on Linux computers. I keep hearing that Photoshop is even a bit better, but it does not work on my computers, so I do not really care too much about it.

To be clear, I am not a professional image editing specialist, I just do it a bit for fun and without the claim of putting in all the knowledge about colors and their visual appearance, the functionality of GIMP and image editing in general… I am just experimenting and finding out what looks interesting or good to me and how to work efficiently. Actually it brings together three of my interests, programming, photography and bicycle touring, the combination of the latter two being the major source of my input material.

Now you start working with layers and with tools that increase or decrease the brightness, contrast and saturation of an image or of the layer being worked on. Now I would like to explore how certain functions can be brought into this: either by putting them into my own fork of GIMP, into a plugin, into a script within GIMP, or into a standalone script or program.

Some things that I find interesting and would like to explore: additional functions for merging layers. Such a function takes as input n (for example n=2) pixels from the same position in different layers and produces another pixel. This is what the twenty or thirty layer modes do, which describe how a layer is seen on top of the next lower visible layer. So two images of the same size or two layers could be merged. It could be done as nicely as in GIMP or just destructively to make things easier. If it is worth anything for anybody else, making it work like the current modes would be a noble goal, but for the moment it is more interesting what to do. A very logical thing to do is taking just the average of the two layers. So for rgb it could be the arithmetic mean of the r, g and b values of the n layers (or images) belonging to the same x-y-position. Now what would alpha values mean? I would think that they weight the average, and the new alpha value could be the average of the input alpha values. Now we could use geometric, quadratic and cubic means and, with some care concerning the 0, even harmonic means. Very funny effects could be created by combining these byte values with functions like xor.
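
For two layers with color values and alpha values at the same position, such an alpha weighted arithmetic mean could, for example, look like this (the indices denote the two layers and c stands for one of the r, g and b values):

    \[c = \frac{\alpha_1 c_1 + \alpha_2 c_2}{\alpha_1 + \alpha_2}, \qquad \alpha = \frac{\alpha_1 + \alpha_2}{2}\]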

When working with any functions, it is always a bit annoying that the r, g and b values are integers between 0 and 255 (or something like that). So this can be changed to real numbers by doing something like

    \[s = \tan(\frac{(r-127.5)\pi}{256})\]

or to the non-negative real numbers by doing something like

    \[s = \tan(\frac{r\pi}{512})\]

Then some functions can be applied to these double values and in the end the inverse function will just be applied, resulting in a rounded and clamped integral value from 0 to 255.
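
For the first of the two transformations the way back would look something like this, before rounding and clamping to the range from 0 to 255:

    \[r = \frac{256 \arctan(s)}{\pi} + 127.5\]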

Do we need this? I do not know. But I think it would be fun to be able to play around with some functions on one pixel, a vicinity of pixels, the same pixel position in different layers and the like.

The nice thing is: we can see the result and like it or throw it away. Which function is correct or useful can be discussed and disagreed on. That is more fun than formally proven correctness, assuming of course that the functions themselves are implemented correctly.

I have not looked into the source code of plugins to tell what can reasonably be done without too much effort. But if someone reading this has some ideas, it would be interesting to hear about them.

And finally we have one more advantage of GIMP: because it is open source, it is possible to make changes to it.


How procurement can create value for IT projects

We all know this: in many IT projects we need to make use of services, software and hardware that need to be bought.

Actually it often makes a huge difference what kind of deals are made and how efficiently the projects can work on this basis.

I will just briefly tell a few stories, and by doing so tell a bigger story.

A common pattern is the „preferred supplier“. It is nice to be a preferred supplier, and in the phase when the partner is chosen and the contracts are made, companies often offer their best people and services to show how good it will be later to have them as preferred supplier. And then, when the deal has been fixed, they send the juniors for the same hourly rate and make a lot of profit. Or the price has been made so low that only the juniors can be sent. This can be a problem in the long run, because it might get really difficult to get enough really good people in order to progress with strategic long term projects and not just maintenance of the daily business.

Another interesting pattern can occur when the preferred supplier has a very strong presence with their employees in a project. They are getting some money from the hourly rates, and in order to make a profit the salaries should be significantly lower than this. In order for this to make sense for the employees, they can provide non-monetary incentives, like some kind of career steps. Being in a powerful position in the project, the preferred supplier has some possibilities to choose who is in the project and who gets the more responsible positions. So there is a temptation to kick out people who are not from their company and to give these attractive positions not to the person who would be the best choice for the customer, but to those whom they want to give an incentive. This is at the expense of their customer. So at the end of the day it is usually good to rely on multiple providers for „external“ people. There are serious companies who behave professionally and correctly even when they have become a „preferred supplier“, but this is not always the case.

When the preferred supplier is providing software, for example a database, it may be possible to get a really good deal for five years. Then in five years the deal needs to be extended and magically becomes more expensive. Especially if the company knows that it is in the position of a „preferred supplier“. And when this issue is discovered, maybe even a year before, it is already too late to migrate. And then the expensive software needs to be used for another four years until it is again too late… And being from a big, impressive company does not necessarily make the software good. Counterexamples exist. In the case of databases I have seen companies that follow a strategy of multiple databases and that require a good reason for using the more expensive solutions. And magically the position of the buyer becomes much stronger when the deal needs to be extended for another five years. Maybe the overly expensive database will even be kicked out at some point. And yes, this expensive database has some really cool and pretty unique features. Unfortunately they only come in some enterprise edition that would be even much more expensive than the regular one, while open source databases provide decent, but less sophisticated variants of these enterprise features for a price that is less than the base version of the expensive database product. But, since the DB product cannot easily be changed, it is important to make a wise choice and to consider different options, including the more expensive ones, when starting a project. And to pick what makes sense for the specific needs.

Some interesting observations were made when a preferred supplier made a really tempting offer for operating all the servers of a larger company. The annual price was really low. Much cheaper than doing it with their own team. Now suddenly the need arose to store some really large amount of log files. I am talking about what could be stored on a few USB disks that could have been bought in the supermarket for a few hundred Euros. But of course this was forbidden, because the servers had to be run by the preferred supplier, and even putting Linux on a few PCs that were no longer needed and attaching a few cheap disks would have been ruled out by the cheap overall contract. But this cheap solution would have been absolutely sufficient for the purpose. Now the disk space could be bought from this supplier. Or more precisely rented. It was not a few hundred Euros, but a few hundred thousand Euros a year. Yeah, they needed to make some money somehow… And a few hundred Euros once every few years, or maybe even a few thousand Euros every year, would have been totally acceptable for the project to pay. But there are limits. You cannot do certain things under such conditions. The deal kills important possibilities for the IT people. I am not going to write how this was resolved…

Another story, with a really cheap preferred supplier: They actually ran an important database for a stunningly low fixed base price. And on top of that it was paid per query. So what did they do? They designed the software in such a way that it used an Oracle database as a cache for the pay-per-query DB2 database. So the same query had to be made only once to DB2 as long as the data did not change. And when the data changed, the Oracle database just had to be cleaned up. Since this happened only a few times a year, this technically stupid architecture really saved a lot of money every year. Big money.

Yet another example: The management had already bought clearcase licenses. They were really expensive and the money was already gone. Now the setup that was used for clearcase and that was allowed by the licensing was not really optimized for part of the team working remotely. To do that efficiently would have required a much more expensive license that no one wanted to pay for. So every day synchronizing the software took like 30 to 45 minutes. And one team member had to work full time to maintain clearcase. There were some other pains, like it crashed when files contained only linefeeds instead of carriage return-linefeed and some other annoying details that I do not really remember. Just for the record, some of these issues have been fixed in later releases… And clearcase had a lot of really interesting features that were not at all used. The seriously useful features can all be found in git now, in a contemporary way, of course. But not in those days, when there was still neither git nor subversion. So some tests were performed and it looked like the free software CVS (which really sucks big time when compared with contemporary systems like git) would have worked much much better for the concrete project. But clearcase had to be used because it was so expensive and the money had already been paid.

So at the end of the day, when procurement makes good deals, this can create a lot of value for the project and allow for efficient and innovative work and for solutions that make sense technically, instead of finding tricks for how to bypass the worst parts of the contract.

So a good procurement team and good communication with the technical staff, who know what is needed for their work, are a big plus for everybody, for the project and for the company.


Unix and Linux

When Linux appeared in the first half of the 1990’s, I used to hear a lot: „yes, this is a nice thing, but it is not a real Unix“.

So why was there a different name, even though it behaved almost the same? Linux followed the POSIX standard, but Unix was a trademark that could not be applied to Linux.

Unix was very important in the 90’s and in the 2000’s, but now it has lost almost all of its relevance.

Newer systems almost always use Linux, and systems that still run Unix (Solaris or AIX) are usually considered as something that needs to be migrated to Linux sooner or later.

There is nothing wrong with Unix. It brought us great concepts and these concepts are relevant today. And inventing and standardizing these concepts was a good thing and a success story. But all the good stuff can now be found in Linux, and since the progress is happening there, it has surpassed the Unixes. Of course it was a factor that HP in the late 90’s or so announced that they saw no future for their HP/UX; they revoked that statement, but it had already created damage to that system. Oracle bought SUN and that weakened Solaris. Many other Unix variants lost their relevance long ago, while some, like BSD, are still lively and good niche systems.

The success story behind this is that standards that work across companies have been established and allow systems to work together and to behave similarly.


How bad can a bad IT be for a company?

Just a funny story that happened some years ago…

I wanted to buy some lamps in a store somewhere about 100 km away from where I lived.

So I went to the shop, ordered them and already bought something else.

Now I went there again when the lamps were there. I had ordered six lamps, but actually wanted to buy one more. A bit of the money I had already paid when ordering…
It was a bad day. They told me that the lamps were probably there, but they were not able to process the purchase or even find them, because the IT was not running. I was kind of upset about wasting so much time and money to get there, and so they paid me the train ticket…

The next time I went there, it took a long time until I was able to pay. It really took an hour… Six lamps ordered, plus one more, minus the sales tax from the previous sale, minus what I had already paid, plus some other stuff that I had actually bought during this visit. So many numbers all had to be added together with the right sign… After about an hour and many false attempts they got it right. It took an hour from the time when it was my turn to the time I had actually successfully paid, and my credit card did work correctly… Then I got a piece of paper and I had to go to another entrance of the building, quite far away from where I was. There they had a different understanding of how many lamps I should get than what I thought I had paid for. So I went back to the lady where I had paid and asked her to come with me to help me get the right number of lamps. She did not want to help me in that way, but I got her to write a note on the piece of paper that she had given me, stating that I was entitled to seven lamps, and she signed it. Then, after having spent at least two hours, I was able to go home with seven lamps and whatever else I had bought.

Now the question is, what is wrong here?

Obviously the IT did not work too well… It did not work at all on the second visit and it did not help getting the job done during the third visit.

But was that really the problem? Or just the symptom?

My impression is that the top management of the company was really bad. The processes were bad. And they were not able to find good employees, to train them and to motivate them. And then the IT was showing the same standard as the rest of the company.

Fixing the IT would not fix the problem. The business has to be fixed, the processes, the management, the employees need to be trained well, selected well and most of all motivated to work well… Then, when there are decent processes, it is a good time to improve the IT to support these processes instead of retaining the bad processes by implementing an IT before understanding the business well enough.


Just run it twice

Often we use some kind of „clustered“ environment to run our software.

This promises higher performance and better availability.

And the frameworks seem to suggest that it is just a matter of starting it twice and it will magically work correctly.

There is nothing wrong with investing some thought in this issue. Things can actually go quite wrong otherwise…

So some questions to think about:

Where are the data? Does each service have its own set of data? Or do they share the data? Or is there some kind of synchronization mechanism between the copies of data? Or is some data shared and some data as copy for each instance of the service?

Do we gain anything in terms of performance or is the additional power of the second instance eaten up by the overhead of synchronizing data? Or if data is only stored once, does this become the bottleneck?

Then there is an issue with sending the requests to the right service instance. Usually it is a good idea to use something like „sticky sessions“ to keep a whole session or a collection of related requests on one instance. Even if the protocol is „stateless“ and „restful“.

Is there some magic caching that happens automatically, for example in persistence frameworks like Hibernate? What does this mean when running two instances? Do we really understand what is happening? Or do we trust that hibernate does it correctly anyway? I would not trust hibernate (or any other JPA implementation) on this issue.

What about transactions? If storage is not centralized, we might need to do distributed transactions. What does that mean?

Now messaging can become fun. Modern microservice architectures favor asynchronous communication over synchronous communication where it can be applied. That means some kind of messaging or transmission of „events“ or whatever it is called. Now events can be subscribed to. Do we want them to be processed exactly once, at least once or by every instance? How do we make sure it is happening as we need it? Especially the „exactly once“ case is tricky, but of course it can be done.

How do we handle tasks that run once in a certain period of time, like cronjobs in Linux? Do we need to enforce that they run exactly once, or is it OK to run them on each instance? If so, is it OK to run them at the same time?
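
One way to handle this is to let each instance try to grab a shared lock and to run the job only on the instance that got the lock. Here is a minimal sketch in Perl, assuming a shared PostgreSQL database; the DSN, the credentials and the lock key 4711 are made up for the example:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# connect to the shared database (DSN and credentials are just placeholders)
my $dbh = DBI->connect('dbi:Pg:dbname=appdb;host=dbhost', 'appuser', 'secret',
                       { RaiseError => 1, AutoCommit => 1 });

# try to obtain an advisory lock; only one instance will get it
my ($got_lock) = $dbh->selectrow_array('SELECT pg_try_advisory_lock(4711)');

if ($got_lock) {
    # run the periodic job here; it runs on exactly one instance
    print "running the job on this instance\n";
    # release the lock when the job is done
    $dbh->selectrow_array('SELECT pg_advisory_unlock(4711)');
} else {
    print "another instance has the lock and runs the job\n";
}
$dbh->disconnect;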

Do we run the service multiple times on the productive system, but only a single instance on the test and development systems?

Running the service twice or more times is of course something we need to do quite often, and it will become more common. But it is not necessarily easy. Some thinking needs to be done. Some questions need to be asked. And we need to find answers to them. Consider this as a starting point for your own thinking processes for your specific application landscape. Get the knowledge, if it is not yet in the team, by learning or by involving specialists who have experience…
