Serialization


Serialization allows us to store objects losslessly or to transfer them over the network. This was possible before, too, but the serialization mechanism had to be programmed by hand, which was a lot of work. Of course, they were not yet called objects in those days…

Java suddenly provided such a serialization for (almost) all objects without any additional programming effort. This does not mean that automatic serialization did not exist before, but Java made it popular, because frameworks started to rely heavily on it. To use it ourselves, we just had to use ObjectOutputStream and ObjectInputStream, and objects could be stored, sent across the network or simply cloned. It was even able to handle circular references, which most serialization mechanisms cannot do. The idea was not really new, as other languages had something like this before, but hardly anybody had become aware of it.
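
To illustrate, here is a minimal round trip through a byte array; the class Point and its fields are just an example of mine:

import java.io.*;

public class SerializationDemo {

    // a trivial serializable class
    static class Point implements Serializable {
        private static final long serialVersionUID = 1L;
        final int x;
        final int y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException {
        Point p = new Point(3, 4);
        // serialize into a byte array; a file or a socket would work the same way
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(p);
        }
        // deserialize: the result is effectively a deep copy of p
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            Point q = (Point) ois.readObject();
            System.out.println(q.x + " " + q.y); // prints "3 4"
        }
    }
}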

But there are some drawbacks that were discovered when it was already too late and that should at least be mentioned.

  • Marking classes with a Serializable interface is conceptually quite a bad solution, because it assumes that serializability never gets lost in derived classes, which is just not true. An Unserializable interface would have been a much better, if not almost ideal, solution, because trivial objects are always serializable and they lose this property when something non-serializable is added. Then again, how about collections… Today an annotation could possibly be helpful as well.
  • The serialVersionUID creates a lot of pain. Should we change it whenever the interface changes? We are talking about the implicit serialization interface, not about an explicit interface that we can easily see. Should we trust automatic mechanisms? In any case, issues with incompatible versions remain that are not really solved well and cannot even easily be solved well.
  • Serialization introduces an additional invisible constructor.
  • Serialization undermines the idea of private and protected, because suddenly private and protected member attributes become part of the interface.
  • Funny effects happen with serializable non-static inner classes, because their serialized version bakes in the containing outer object. Yes, it has to…
  • The object identity gets lost when an object is serialized. It is easy to create several copies of the same object.
  • Sometimes it is still necessary to write serialization code manually, for example for singletons (see the sketch after this list). It is easy to forget this, because everything seems to work just fine automatically.
  • Java’s serialization is quite slow.
  • The format is binary and cannot easily be read. A pluggable serialization format that would allow more human-readable data files like JSON, XML, … would have been better…
  • Serialization creates a temptation to use this format for communication, which forces a tight coupling that might not be necessary otherwise.
  • Serialization creates a temptation to use it as a storage format instead of mature database technologies. Very bad: the second-level cache of Hibernate…
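
To illustrate the singleton point from the list: the usual fix is a readResolve method, which is easy to forget because everything seems to work without it. A minimal sketch:

import java.io.ObjectStreamException;
import java.io.Serializable;

public final class Singleton implements Serializable {
    private static final long serialVersionUID = 1L;

    public static final Singleton INSTANCE = new Singleton();

    private Singleton() {
    }

    // without this method each deserialization would quietly create
    // a second "singleton" instance
    private Object readResolve() throws ObjectStreamException {
        return INSTANCE;
    }
}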

There were some advantages in having this serialization in the past, and for some purposes it kind of works. But it is important to question it and to consider other, more solid approaches, even if they require slightly more work. Today introducing the serialization mechanism in this way is generally considered one of the largest mistakes of Java. There are now better ways to do serialization that require a bit more work, but avoid some of the terrible shortcomings of the native Java serialization.

For the serialVersionUID there are several approaches that can work. One way is a static method that extracts a version from an „$Id$“ string managed by svn. It will make even slightly different versions incompatible, which is probably the best we can get. With git it is a bit harder, but it can be done as well.
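
A sketch of the svn variant; the class and the concrete $Id$ expansion are hypothetical, but it relies only on the fact that serialVersionUID has to be a static final long field, whose value may be computed at class initialization time:

import java.io.Serializable;

public class VersionedRecord implements Serializable {

    // svn expands the $Id$ keyword to "$Id: filename revision date time author $"
    private static final String ID =
        "$Id: VersionedRecord.java 4711 2018-06-01 12:00:00Z someone $";

    private static final long serialVersionUID = extractRevision(ID);

    private static long extractRevision(String id) {
        return Long.parseLong(id.split("\\s+")[2]);
    }
}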

Usually it is by far the best choice to leave the serialVersionUID out entirely and to rely on Java’s automatic mechanism. It is not perfect, but better than 99% of the badly maintained manual serialVersionUIDs. If you want to manage your serialVersionUIDs yourself, there needs to be a checklist on what to do to release a new version of a file, a library or a whole software system. This is usually painful, because it creates a lot of work and even more errors, and it should really be done only with good reasons, good discipline and a very good concept. If you want to use a serialVersionUID anyway, or if your project forces you to do so, here is a script to create them randomly:

#!/usr/bin/perl
use bigint;
use Math::Random::Secure qw(irand);
# combine two cryptographically random 32-bit values into one 64-bit value
my $r = irand() * 2**32 + irand();
$r -= 2**64 if $r >= 2**63; # map into the signed range, because Java's long is signed
print $r, "\n";

This is still better than using the IDE-generated value and keeping it forever, or starting with 1 or 0 and keeping it forever, because updating the serialVersionUID is not really on our agenda. And it shouldn’t be.


Why am I learning Python?

To be honest, what can be done with Python can also be done with Perl or Ruby. I am not working in areas where there is much better library support for Python than for Perl or Ruby. And I like Perl and Ruby very much, while I am somewhat skeptical about Python. But there are some points that make it worth knowing Python in addition to Perl and Ruby, not instead of them.

I strongly recommend using real programming languages like Perl, Ruby or Python (and you can add some more) instead of Bash scripts, once a certain complexity is exceeded. Try reading the pure bash scripts that are used to start Maven, Tomcat or other useful software. Often there is a CMD script as well, which is the real pure horror. Python serves this purpose well enough, and of course the other two as well.

It is always good to learn new languages once in a while, because they extend our horizon and help us to be better even with our preferred languages. And why not challenge the preferences…

There is a good point in allowing for a toolbox of languages, not „only Java“ or „only C#“ or „only C“ or even „only Perl“, whatever you like… Combining a useful toolbox of several languages is the right way to go. This could be a toolbox containing A and B, where A ∈ { C, C++, Java, C#, F#, Scala, Clojure, … } and B ∈ { Perl, Perl 6, Ruby, Lua, Python, … }. Usually it is a good idea to make it slightly larger, but it is also good to find a consensus on which set of languages to concentrate on. I would, for example, discourage using sed and awk, because they can quite easily be replaced by Perl, and limit bash to very trivial scripts. There are some cases in which an awk or sed script is a bit shorter than the same thing in Perl or Ruby, but this does not justify maintaining the extra knowledge, while Perl on top of Java does justify it. So the toolbox should be big enough to cover everything, but it does not have to be too redundant, and there can be reasonable recommendations which tool to use for a certain class of purposes. This makes it easier to maintain each other’s code. Now there are many projects where the spot of B is taken by Python. So in order to be a good team player it might be useful to be able to work with the Python scripts, write in this language and contribute, instead of spending too much time talking about why Perl or Ruby or Lua or whatever is better. Which it might be. Or which might be more a matter of taste. Here is what big sites are using as A, B, C, …

Now, out of these scripting languages, Python is for sure a successful contender. This results in good libraries, but also in a higher likelihood of Python occupying spot B.

Now we have tools like Jenkins, Kubernetes, Docker, cloud computing, Spark and simply certain Linux distributions, which might come along with their preferred set of scripting languages that are well supported for performing certain tasks. This can be delegated to one or two people in the team or kept to a minimum, but it might become a factor of increasing importance. It might force us to have multiple „Bs“ or multiple „As“.

And there are certain areas where Python is simply strong and has become the language of choice. It seems to have become the successor of Fortran for many if not most areas of numerical calculation, even though there will probably always be a niche for powerful compiled languages like Fortran and C for the ultimate performance. But then the library is written in C with Python bindings, and we get most of the performance as well. Also, data science seems to mostly opt for Python as the general-purpose language besides R, SQL and SAS. Even bioinformatics, which was a stronghold of Perl for many years, now prefers Python… Yes, it hurts someone who likes Perl, but it is true… So to be able to work in many interesting areas, it is useful to know some Python. So I started learning it. I am using a Russian translation of the book Programming Python.

I might write a bit more about the language, once I have some more experience with it.


Meaningless Whitespace in Textfiles

We use different file formats that are more or less tolerant to certain changes. The best known example is white space in text files.

In some programming languages white space (space, newline, carriage return, form feed, tabulator, vertical tab) has no meaning, as long as some whitespace is present. Examples for this are Java, Perl, Lisp or C. Whitespace that is part of string content is always significant, but white space that is used within the program can be any combination of one or more of the white space characters from the lower 128 positions (ISO 646, often referred to as ASCII or 7-bit ASCII). It is of course recommended to have a certain coding standard, which gives some guidelines on when to use newlines, whether tabs or spaces are preferred (please spaces) and how to indent. But this is just about human readability; the compiler does not really care. Line numbers are meaningful in compiler and runtime error messages and stack traces, so putting everything into one line would harm beyond readability, but there is a wide range of ways that are all correct and equivalent.

Btw. many teams limit lines to 80 characters, which was a valid choice 30 years ago, when some terminals were only 80 characters wide and 132-character-wide terminals were just coming up. But as a hard limit it is a joke today, because not many of us would be able to work with a vt100 terminal efficiently anyway. Very long lines might be harder to read, so anything around 120 or 160 characters might still be a reasonable idea about line lengths…

Languages like Ruby and Scala put slightly more meaning into white space, because in most cases a semicolon can be skipped if it is followed by a newline rather than just horizontal white space. And Perl (Perl 5) is for sure so hard to parse that only its own implementation can properly format it or even recognize which white space is part of a literal string. Special cases like having the language in a string, parsing it and then executing it should be ignored here.

Now we put these program files into a source code management system, usually Git. Some teams still use legacy systems like Subversion, SourceSafe, ClearCase or CVS, and there are some newer systems that are probably about as powerful as git, but I never saw them in use. Git creates a SHA-1 hash of each file, which implies that any minor change will result in a new version, even if it is just white space. Now this does not hurt too much if we agree on the same formatting and on the same line ending (hopefully LF only, not CR LF, even on MS-Windows). But our tooling does not make any difference between significant changes and insignificant formatting-only changes. This gets worse if users have different IDEs, which they should have, because everyone should use the IDE or editor with which he or she is most efficient, and the formal description of the preferred formatting is not shared between editors or differs slightly.

I think that each programming language should come with a command line diff tool and a command line formatting tool that obey a standard calling interface and can be plugged into editors and into source code management systems like git. Then the same mechanisms would work for C, Java, C#, Ruby, Python, Fortran, Clojure, Perl, F#, Scala, Lua or your favorite programming language.

I can imagine two ways of working. Either we have a standard format and possibly individual formats for each developer. During „git commit“ the file is brought into the standard format before it is shown to git, and meaningless whitespace changes disappear. During checkout the file can optionally be brought into the preferred format of the developer. And yes, there are ways to deal with deliberate formatting that for some reason should be kept verbatim, and to deal differently with comments and of course all kinds of string literals. Remember, the formatting tool comes from the same source as the compiler and fully understands the language.
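
Git’s clean and smudge filters already provide the hook points for this first way of working. A sketch, where javafmt stands for a hypothetical formatting tool of the kind described above:

# .gitattributes in the repository: route Java sources through the filter
*.java filter=fmt

# per-developer configuration:
# "clean" runs when staging (the standard format goes into the repository),
# "smudge" runs at checkout (the personal format goes into the working copy)
$ git config filter.fmt.clean  "javafmt --style=standard"
$ git config filter.fmt.smudge "javafmt --style=personal"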

The other approach leaves the formatting up to the developer and only creates a new version when the diff tool of the language signals that there is a relevant change.

I think that we should strive for this approach. It is no rocket science; tools of this kind have been around for many decades as diff and formatting tools. It would just be necessary to go the extra mile, create sister diff and formatting tools for the compiler (or interpreter), and actually integrate them into build environments, IDEs, editors and git. It would save a lot of time and leave more time for solving real problems.

Is there any programming language that actually does this already?

How should we handle XML? Is XML just the new binary with a bit more bloat? Can we handle all XML generically, or should it depend on the schema?


Loops with unknown nesting depth

We often encounter nested loops, like

for (i = 0; i < n; i++) {
    for (j = 0; j < m; j++) {
        doSomething(i, j);
    }
}

This can be nested a few levels deeper without too much pain, as long as we observe that the iteration counts of all levels have to be multiplied to get the iteration count of the whole thing, and that total iteration counts beyond a few billion (10^9, German: Milliarden, Russian: миллиарды) become unreasonable no matter how fast doSomething(...) is. Just look at this example program,

public class Modular {
    public static void main(String[] args) {
        long n = Long.parseLong(args[0]);
        long t = System.currentTimeMillis();
        long m = Long.parseLong(args[1]);
        System.out.println("n=" + n + " t=" + t + " m=" + m);
        long prod = 1;
        long sum  = 0;
        for (long i = 0; i < n; i++) {
            long j = i % m;
            sum += j;
            sum %= m;
            prod *= (j*j+1) % m;
            prod %= m;
        }
        System.out.println("sum=" + sum + " prod=" + prod + " dt=" + (System.currentTimeMillis() - t));
    }
}

which measures its net run time: it runs in 0 msec for 1000 iterations and about five and a half minutes for 10 billion (10^10) iterations:

> java Modular 1000 1001 # 1'000
--> sum=1 prod=442 dt=0
> java Modular 10000 1001 # 10'000
--> sum=55 prod=520 dt=1
> java Modular 100000 1001 # 100'000
--> sum=45 prod=299 dt=7
> java Modular 1000000 1001 # 1'000'000
--> sum=0 prod=806 dt=36
> java Modular 10000000 1001 # 10'000'000
--> sum=45 prod=299 dt=344
> java Modular 100000000 1001 # 100'000'000
--> sum=946 prod=949 dt=3314
> java Modular 1000000000 1001 # 1'000'000'000
--> sum=1 prod=442 dt=34439
> java Modular 10000000000 1001 # 10'000'000'000
--> sum=55 prod=520 dt=332346

As soon as we do I/O, network access, database access or simply a bit more serious calculation, this of course easily becomes unbearably slow. But today it is cool to deal with big data, or at least to call what we are doing big data, even though conventional processing on a laptop can do it in a few seconds or minutes… And there are of course ways to process far more iterations than this, but then it becomes worth thinking about the system architecture, the hardware, parallel processing and of course algorithms and software stacks. Here, however, we stay in the „normal world“, which can be a „normal subuniverse“ of something really big: running on one CPU and using a normal language like Perl, Java, Ruby, Scala, Clojure, F# or C.

Now sometimes we encounter situations where we want to nest loops, but the depth is unknown, something like

for (i_0 = 0; i_0 < n_0; i_0++) {
  for (i_1 = 0; i_1 < n_1; i_1++) {
    ...
      for (i_m = 0; i_m < n_m; i_m++) {
        doSomething(i_0, i_1, ..., i_m);
      }
    ...
  }
}

Now our friends from the functional world help us to understand what a loop is, because in some of the more functional languages the classical C-style loop is either missing or at least not recommended as the everyday tool. Instead we view the set of values we iterate over as a collection and iterate through every element of it. Instantiating such a big collection could be a show stopper, but we do not have to: out of the many features of collections we just pick iterability, which can very well be accomplished by lazy collections. In Java we have the Iterable, Iterator, Spliterator and Stream interfaces to express such potentially lazy collections that are just used for iterating.
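
For plain numeric ranges the JDK has actually provided such a lazy construct since Java 8, without any boxing (doSomething is the placeholder used throughout this article):

import java.util.stream.LongStream;

// iterates lazily over 0 .. n-1; no collection is materialized
long n = 10_000_000_000L;
LongStream.range(0, n).forEach(i -> doSomething(i));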

So we could think of a library that provides us with support for ordinary loops, so we could write something like this:

Iterable<Integer> range = new LoopRangeExcludeUpper<>(0, n);
for (Integer i : range) {
    doSomething(i);
}

or even better, if we assume 0 as a lower limit is the default anyway:

Iterable<Integer> range = new LoopRangeExcludeUpper<>(n);
for (Integer i : range) {
    doSomething(i);
}

with the ugliness of boxing and unboxing in terms of runtime overhead, memory overhead and additional complexity for development. In Scala, Ruby or Clojure the equivalent solution would be elegant and useful and the way to go…
I would assume that a library that does something like LoopRangeExcludeUpper in the code example is easily available for Java, maybe even in the standard library or in some common public Maven repository…

Now the issue of loops with unknown nesting depth can easily be addressed by writing or downloading a class like NestedLoopRange, which might have a constructor of the form NestedLoopRange(int... ni) or NestedLoopRange(List<Integer> li) or something with collections that are more efficient with primitives, for example from Apache Commons. Consider using long instead of int, which will break some compatibility with Java collections. This should not hurt too much here, and it is a good thing to reconsider the 31-bit size field of Java collections as an obstacle for future development and to address how collections can grow beyond 2^31-1 elements, but that is just a side issue here. We broke this limit with the example iterating over 10'000'000'000 values of i already, and it took only a few minutes. Of course that was just an abstract way of dealing with a lazy collection, without the Java interfaces involved.

So, the code could just look like this:

Iterable<Tuple> range = new NestedLoopRange(n_0, n_1, ..., n_m);
for (Tuple t : range) {
    doSomething(t);
}

Btw, it is not too hard to write it in the classical way either:

        long[] n = new long[] { n_0, n_1, /* ... */ n_m };
        int m1 = n.length;
        int m  = m1-1; // just to have the math-m matched...
        long[] t = new long[m1];
        for (int j = 0; j < m1; j++) {
            t[j] = 0L;
        }
        boolean done = false;
        for (int j = 0; j < m1; j++) {
            if (n[j] <= 0) {
                done = true;
                break;
            }
        }
        while (! done) {
            doSomething(t);
            done = true;
            for (int j = 0; j < m1; j++) {
                t[j]++;
                if (t[j] < n[j]) {
                    done = false;
                    break;
                }
                t[j] = 0;
            }
        }

I have written this kind of loop several times in my life in different languages. The first time was in C64 BASIC when I was still in school, and the last one was written in Java and shaped into a library, where appropriate collection interfaces were implemented. That library remained in the project and the organization where it had been written, but it could easily be written again, maybe in Scala, Clojure or Ruby, if it is not already there. It might even be interesting to explore how to write it in C in a way that can be used as easily as such a library in Java or Scala. If there is interest, please let me know in the comments section; I might come back to this issue in the future…
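
To make the idea concrete, here is a sketch of my own of such a NestedLoopRange class, using long[] as a poor man’s tuple instead of a dedicated Tuple class:

import java.util.Arrays;
import java.util.Iterator;
import java.util.NoSuchElementException;

// iterates over all tuples (t[0], ..., t[m]) with 0 <= t[j] < n[j]
public class NestedLoopRange implements Iterable<long[]> {

    private final long[] n;

    public NestedLoopRange(long... n) {
        this.n = n.clone();
    }

    @Override
    public Iterator<long[]> iterator() {
        return new Iterator<long[]>() {
            private final long[] t = new long[n.length];
            // an empty range in any position means there is nothing to iterate
            private boolean done = Arrays.stream(n).anyMatch(x -> x <= 0);

            @Override
            public boolean hasNext() {
                return !done;
            }

            @Override
            public long[] next() {
                if (done) {
                    throw new NoSuchElementException();
                }
                long[] result = t.clone();
                // increment like an odometer, index 0 moving fastest
                done = true;
                for (int j = 0; j < n.length; j++) {
                    t[j]++;
                    if (t[j] < n[j]) {
                        done = false;
                        break;
                    }
                    t[j] = 0;
                }
                return result;
            }
        };
    }
}

With this, the nested loop of unknown depth becomes for (long[] t : new NestedLoopRange(n_0, n_1, n_2)) { doSomething(t); }, for any number of levels.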

In C it is actually quite possible to write a generic solution. An API like this might work:

struct nested_iteration {
  /* implementation detail */
};

/* all functions take a pointer, so that they can modify the iteration state */
void init_nested_iteration(struct nested_iteration *ni, size_t m1, const long *n);
void dispose_nested_iteration(struct nested_iteration *ni);
int nested_iteration_done(const struct nested_iteration *ni); /* returns 0=false or 1=true */
void nested_iteration_next(struct nested_iteration *ni);
const long *nested_iteration_tuple(const struct nested_iteration *ni); /* current index tuple */

and it would be called like this:

struct nested_iteration ni;
long n[] = { n_0, n_1, /* ... */ n_m };
for (init_nested_iteration(&ni, m+1, n);
     ! nested_iteration_done(&ni);
     nested_iteration_next(&ni)) {
  /* ... use nested_iteration_tuple(&ni) ... */
}

So I guess it is doable and reasonably easy to program and to use, but of course not quite as elegant as in Java 8, Clojure or Scala.
I would like to leave this as a rough idea and maybe come back with concrete examples and implementations in the future.


Not all projects are on ideal paths I (Tom Rocket)

It is nice to write about positive things, how things have been done well, how they should be done well and how good we are and how good we could be if we were just applying the right technology and methodology and process and management…

Tom Rocket

Let’s leave that for today and write about a project that did not go too well. I picked an example that is more than ten years old, and I am not going to mention any names. Maybe someone who knew this project will recognize something, maybe not. There would be more than one article to write about something like this, and I guess almost anybody who has been around in IT for a couple of years would agree. If not, enjoy being so lucky. 🙂 I have changed all the names, of course. Otherwise, I hope it is enjoyable to read about others who did funny stuff and to find some anti-patterns between the lines.

So project Dinosaur was taking place in the same rooms where I was working. I had a low-volume consulting task for this team, but was mostly absorbed by totally different projects.

Tom Rocket is the project manager. There are some more project managers who represent the customer’s side, but Tom Rocket is running the show and is present in the room. It is easy to recognize that he is the boss, because he talks to his team members the way somebody who, for some unintelligible reason, happens to have a dog without particularly liking dogs, talks to that dog. The team is very engaged; they work hard. Not every day very motivated, but who can blame them. Of course some parts of the development are done by another team of the same company at another location. The managers of the two locations are not really best friends. Donald Peak, the manager of this location, does not care too much, as long as he can sit in his office, smoke cigars, enjoy his newspaper, do some strategy work and manage his car collection. Having been a communist in his younger days, he still knows very well how to be a capitalist. He has his people to deal with the operational issues. They are the bosses of Tom Rocket. And they do not like the other office, where the people seem to do interesting and good work. Anyway, to put the pieces together, a complicated server and firewall infrastructure is needed. The other team can dump its code in some neutral zone (DMZ), from where it is fetched and integrated by Tom’s team. And the really cool stuff is done by Tom’s team.

It was a bit before web applications were the thing, or at least a time when people like Tom Rocket did not know and did not care about web applications. The client was developed in MS-Access Basic. This resulted in interesting opportunities concerning the data: it was possible to store some of the data in the client database and some of it in the server database.

The project was really important. A big company in the country where this happened ran an advertising campaign for its end users concerning the services provided by the product that Tom Rocket’s team was developing. It was seen a lot in cinemas, in newspapers, on the streets. Everybody knew about it. It was coming, it was there already.

We know that there are organizations that use IT and that actually perform backups. Tom Rocket and the company he was working for did not belong to this group. Backups were not yet introduced, but talking about the possibility of having backups had already begun.

Some IT development projects use source code management. We are talking about the times of SourceSafe and RCS; CVS was already on the horizon, but not yet widely adopted. Tom’s project, however, did not employ source code management. Whenever a bug occurred in production, they had to get their current development version ready for production, fix the bug there and try to get it installed. Certain end-of-month operations caused big pressure on the team each month, because they did not really work fully automatically. The data quality was rather poor, it was hard for developers to find anybody responsible for this, and Tom Rocket was not at all helpful. So they tried to find workarounds that allowed the software to kind of work with such poor data, but this was very hard and not really a success story. The introduction of source code management was on the radar, but since the whole team always worked overtime to get the most urgent things done, it never became a serious priority.

A new developer was added to the team. They tried to copy the sources to his home directory, but that did not work. So the solution was that the senior developer had to give his login and password to the new colleague in order to let him participate in the development.

There was a happy end to all of this. Most if not all people in Tom’s team left the company. The project was stopped, even though that was a bit embarrassing after all the advertisements. And the mother company bought another company in the same country and somehow merged the two operations. Since the newly acquired company was obviously managed much more professionally and was much larger, we can guess which of the two management teams took over.

Lessons

We can look at this story from different angles. As a distant spectator it is kind of funny, maybe even as a spectator working on something else in the same room, if you have that kind of humor.

As a team member: it might be hard to enjoy working in such an environment, and it would not get better by itself. So the options are to see whether something can be done about Tom becoming a better team leader or being replaced by a better team leader. Or, as most of the guys did, to just find a better job.

As a project leader of a project that is so deep in the mud? Treat the people working on it even worse, so they will be encouraged to work harder and solve the problems that cannot be solved at their level? I guess it is important to admit the situation and either get enough air to put the project on a good track or even risk that it is abandoned. In the end that happened anyway.

As a manager of the company or the office where this took place? I think that beyond all the cigars, car collections, stock options and strategy it is important to know what is going on in the company: how people are working, how people are feeling, how people are treating each other, and what the chances are of a success that will encourage customers to do another project in the same way. Just ignoring this, or demanding „just a bit more“ because the project is in danger and there is no time for addressing any long-term issues, did not work out. The teams are the real value of an IT company, not the hardware, the furniture and the rooms. Burning the real values like coal, as the word „resources“ might suggest, is not a recipe for long-term success. In favor of Donald Peak I would mention that I have good reasons to assume that he was a decent and probably good manager in earlier times, but the time of Tom’s project was probably not one of the best phases of his career.

As a customer? I think it is important to have an idea about the team: how they are working and what kind of people they are. In this case it was most likely the right decision to stop the whole thing.

So, as far as I know there was a happy end, at least for the team members and for the mother company.


Microsoft is buying Github

It seems that Microsoft is buying Github for about 7.5 billion USD worth of its own stock.

Is this a good thing or a bad thing? Probably no reason to celebrate, but Microsoft under Nadella seems to follow a totally different strategy than Microsoft under Steve Ballmer. Selling licenses of MS-Windows and MS-Office is probably still a good business if done efficiently, but it cannot be the future of a company the size of Microsoft. The dominating operating system is now Linux, mostly due to cell phones and tablets running Android, network devices and of course servers. The religious rejection of open source, as propagated by Steve Ballmer, is no longer visible. Some MS projects like C# and F# have in large parts or completely become open source, and on the MS-Azure platform virtual servers with Linux and PostgreSQL can be chosen, with the limitation that there seems to be a need for an MS-Windows box with dedicated software to manage the Azure cloud services. If we ask Richard Stallman, this is probably all tactics to achieve bad goals by harming the free software movement. This can be true or not.

But we no longer have to automatically assume that this acquisition is bad. Most likely Github will continue to operate as it did before. Nadella will not shut down Github, while Ballmer might have done so, maybe. He would not have bought Github for so much money anyway. The real asset of Github, as of LinkedIn and Skype before, might for Microsoft be the large collection of high-quality identity data. Today Google and Facebook have identity data for about two thirds of the adult world population, by my estimation. We see this in action when we find a „login with Google“ or „login with Facebook“ option on a web site. Yes, „login with Skype“, „login with Github“ and „login with LinkedIn“ would also be possible or even exist. This is the spot where these acquisitions might or might not be seriously negative. But increasing the pool of identity data makes a lot of sense for the new strategy of the company, and I would assume that it was one of the major reasons for this acquisition of Github, as well as of Skype and LinkedIn before. Is it worth so much money? The answer is not obvious now, but we will probably know in the future.


Scala Days 2018

In 2018 I visited Scala Days in Berlin with this schedule. Unlike in previous years, I missed the opening keynote, which is traditionally held on the evening before the main conference starts, because that is not so compatible with my choice of traveling by night train, especially since I did not want to miss more than two days of my current project.

I might provide links to videos of the talks if they become available, including the interesting keynote that I missed.

So I did visit the following talks on the first full day:

And on the second full day:


The little obstacles of interoperability


A lot of things in today’s IT landscape have been unified and interoperability is much better than 20 years ago.

Some examples:

  • Networking: Today networking is TCP/IP. Even the physical cables with RJ45/Ethernet and the wireless networks have been standardized, and all kinds of devices can use the same networks. In the old days there were tons of mutually incompatible proprietary network technologies like BITNET (IBM), NetBIOS (MS), DECnet (DEC), IPX (Novell), …
  • Character sets: Today we have Unicode and a few standard encodings. At least for the web and emails we have ways to provide meta information about the actual encoding. This area is not totally free of problems, but it is on a very good way. In earlier days we had to deal with different EBCDIC encodings or with character sets that only fully supported English and a few other languages using the same alphabet without additional letters like „ä“, „ö“, „ü“, „ø“, „ñ“, … So here we have seen great progress.
  • Numbers: For floating point numbers and integers a relatively small set of standardized numerical types has been established that are used more or less everywhere and behave almost in the same way. The issue of integer overflow remains problematic, though.
  • Software: In the old days software was written for a specific machine, one CPU architecture and one OS. Today we have common platforms on different kinds of hardware: Linux runs on almost any physical and virtual hardware, from mobile phones and routers to supercomputers, with practically the same kernel. Java, Ruby, Perl, Scala and some other programming languages are available on a range of platforms and provide some kind of abstract platform. And the web is often a good way to develop applications once for a large range of devices.
  • File systems: At least we now have a common understanding of what a file system looks like. There are some OS-specific specialties, but the general idea is still the same. It is possible to share file systems between different operating systems by using technologies like Samba.
  • GNU tools: The GNU tools (bash, ls, cp, mv, …) have become the standard on Linux boxes, and they are way superior to the traditional Unix tools of the same role and name that we can still find in Solaris, for example. You can (and should) install them on any Unix and even, via Cygwin, on MS-Windows.

For many of us, interoperability today means interoperability between Linux (and possibly some other, rarer Posix/Unix-like systems) and Win32/Win64 (MS-Windows).

Experienced Linux users are used to the forward slash („/“) as the file path separator. The backslash is used for escaping special characters. In the MS-Windows world we often see the backslash („\“) as the file path separator. That is enforced in the CMD window, because it does not pass the forward slash through. My experience with low-level Win32/Win64 libraries shows that they understand both variants equally well. Ruby, Perl, Java, Cygwin and others support the forward slash anyway. So there is hardly ever any need to check the OS and choose backslash or forward slash depending on that, other than for cmd/bat scripts, but who wants those for more than five lines? I strongly recommend just using the forward slash also under MS-Windows when writing programs in Java, Perl, Scala, Ruby, … It also makes life easier, because the backslash often has to be doubled, and it is sometimes hard to keep track of how many backslashes need to be written, and to read them.

The line ending is a bit tricky. Linux and Unix use just a „line feed“ („\n“ = Ctrl-J). For MS-DOS and MS-Windows the combination „carriage return + line feed“ („\r\n“ = Ctrl-M Ctrl-J) has become the default. Most of today’s programs do not care and understand both variants more or less equally well. Only Notepad gets confused with linefeed-only files, but Notepad is a bad choice anyway; better editors (gvim, emacs, UltraEdit, SciTE, …) exist. On the other hand, we get problems with the MS-Windows line termination in the case of executable scripts. Usually they contain a first line like „#!/usr/bin/ruby“. The OS uses that as a hint on how to execute them, in this case by calling /usr/bin/ruby. If the line ends with Ctrl-M Ctrl-J, then the OS tries to find a program „/usr/bin/ruby^M“ (^M = Ctrl-M = „\r“), which of course does not exist, and we just get an obscure error message.

It is easy to do the conversion:

$ perl -i~ -p -e 's/\r//g;' script

Or the other direction:

$ perl -i~ -p -e 's/\n/\r\n/g;' textfile

For those who use Subversion there are ways to enforce a certain style of line endings. Even git supports this.
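
In git, for example, a .gitattributes file can normalize line endings:

# .gitattributes: normalize text files to LF in the repository and
# check out scripts with LF even on MS-Windows
*     text=auto
*.sh  text eol=lf
*.rb  text eol=lf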


ScalaUA 2018

About a week ago I visited Scala UA in Kiev.

This was the schedule.

It was a great conference once again, as it was already in 2017 and I really enjoyed everything, including the food, which was great again… 🙂

I listened to the following talks on the first day:

And I gave this talk:

On the second day I listened to the following talks:

  • Advanced Patterns in Asynchronous Programming, Michael Arenzon & Assaf Ronen
  • Akka: Actors Design And Communication Techniques, Alex Zvolinskiy
  • Monad Stacks or: How I Learned to Stop Worrying and Love the Free Monad, and other stories, Harry Laoulakos
  • Scala on Wire: How event streams help us build Android apps, Maciej Gorywoda
  • Purely functional microservices with http4s and doobie, Jasper van Zandbeek
  • Tame Your Data with Reactive Streams and Monix, Jacek Kunicki
  • Future and issues of the Scala Ecosystem, Panel Discussion
  • Roll your own Event Sourcing, Lina Krutulytė-Kriščiūnė

And I gave this talk:

As always there was a lot of inspiration coming from the talks and a lot worth exploring in future posts. So there will be Scala posts once in a while, as in the past…

Links

  • Scala Days 2018
  • Scala UA 2017
  • Scala UA (official page)
  • NoSQL
  • Scala

Encryption of Disks

Today we should use disk encryption in many situations.

I recommend at least encrypting the disks of portable computers that contain the home directory, and portable USB disks. They can easily get stolen or lost, and it is better if the thief does not get easy access to the content. We should even consider encrypting swap partitions.

There are many ways to do this on different operating systems, but actually I only know how to do it for Linux. A possible approach for MS-Windows is to run it in a VirtualBox VM inside Linux and just profit from the Linux-based encryption. That is what I do, but I do not use my MS-Windows very much. About Apple computers I have no knowledge; please look elsewhere for information about disk encryption for them. I know that there is an option available for this, but I do not know how to use it or how good it is.

I prefer to rely on open source solutions for security-related issues, because it is harder (but not impossible) to put malicious components into the software, and it is easier to find and fix them. This is a general point that serious security specialists tend to make: it is better to rely on good and well-maintained open source software for security than on closed software whose intended and unintended backdoors and vulnerabilities we do not know.

The way it works on Linux is that we encrypt a disk partition. It can then be accessed only after providing a password. It is possible to provide several alternative passwords. The tools to use are the kernel module dm-crypt together with LUKS, cryptsetup and cryptmount.

It can be done like this (example session) for an external drive that appears as /dev/sdc. Please be extremely careful not to erase any data that you still need or to hide data behind a password that you do not know…


# check the partitions
$ fdisk -l /dev/sdc

# encrypt the partition and provide a password:
$ cryptsetup luksFormat /dev/sdc1

# access the partition
$ cryptsetup luksOpen /dev/sdc1 encrypted-external-drive-1

# format it with whatever file system you want to use
$ mkfs.ext4 /dev/mapper/encrypted-external-drive-1
# or
$ mkfs.btrfs /dev/mapper/encrypted-external-drive-1
# or whatever you prefer..

Now each time the disk is mounted, the password needs to be provided.
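
A later session with the encrypted drive might look like this (the mount point /mnt/external is just an example):

# open the encrypted partition (asks for the password) and mount it
$ cryptsetup luksOpen /dev/sdc1 encrypted-external-drive-1
$ mount /dev/mapper/encrypted-external-drive-1 /mnt/external

# ... work with the files ...

# unmount and close it again
$ umount /mnt/external
$ cryptsetup luksClose encrypted-external-drive-1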

Which file system is best might be worth writing about in the future; it is not covered in this article.
