Devoxx UA

Most conferences have been cancelled, since it is difficult to hold a conference these days. The idea of moving conferences online has obviously been around, but most organizers rejected it, because it is not the same and the all-important chance to meet other people in person just cannot be replicated online. So the Devoxx in Antwerp, which I like to visit every year, did not happen. But Devoxx Ukraine decided to go online.

So how did it work? There were three tracks. Each track was represented by a YouTube channel, on which the live talks were streamed. Before and after the talks, professional moderators appeared in these channels, announced the speakers and did what moderators do at normal conferences. The Devoxx app worked on my cell phone and seemed to be the most up-to-date schedule.

Some talks were really excellent. I enjoyed them a lot, even online. For talks that are not so good, it requires more discipline to stay tuned.

The discussions took place in Zoom channels that belonged to the three tracks. So a discussion could technically last until the end of the next talk. The discussions in Zoom also worked quite well, which was a surprise.

I think it is probably the right reaction to have fewer conferences than usual in a year, to cancel some and to move some online, exactly as it is happening. I think DevoxxUA had around 10’000 visitors, so it absorbed the audiences of several conferences. Also, some regular speakers at Devoxx conferences were not giving talks, so I assume that some speakers decided that the online format is not ideal for them.

I will write about the contents in another blog article.

How to rename files according to a pattern

We often encounter situations where a large number of files should be copied, renamed, moved or something like that.
This can be done on the Linux command line, and it should work in almost the same way on the Unix/Linux/Cygwin command line of newer MS-Windows or on MacOS-X.

Now people routinely do that and have developed several ways of doing it, which are all valid and useful.

I will show how I do things like that. It works, but it is not the only way to do it.

So in the simplest case, all files in a directory ending in ‚.a‘ should be renamed to end in ‚.b‘.

What I do is:


ls *.a \
|perl -p -e 'chomp;$x = $_;s/\.a$/.b/;$y = $_; s/.+/mv $x $y\n/;' \
|egrep '^mv ' \
|sh

You can run it without the last |sh to check if it really does what you want.

So I use the file names as input to a short Perl script and create shell commands. It would actually be possible to do this in Perl itself, without piping it into a shell:


ls *.b \
|perl -n -e 'chomp;$x = $_;s/\.b$/.c/;$y=$_;rename $x, $y;'

You could also read the directory from Perl itself, which is quite easy, but for just quickly doing stuff I prefer getting the input from some ls.
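
Reading the directory directly in Perl could look like this (a minimal sketch for the same ‚.a‘ to ‚.b‘ renaming as in the first example):

perl -e 'opendir(my $dh, ".") or die "cannot read directory: $!";
  for my $f (readdir($dh)) {
    next unless $f =~ /^(.*)\.a$/;
    rename $f, "$1.b" or warn "cannot rename $f: $!";
  }
  closedir($dh);'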

To go into subdirectories, you can use find:


find . -name '*.c' -type f -print \
| perl -n -e 'chomp;$x = $_;s/\.c$/.d/;$y=$_;rename $x, $y;'

You can also rename all the files that contain a certain string:

find . -name '*.html' -type f -print \
|xargs egrep -l form \
|perl -n -e 'chomp; $x=$_;s/\.html$/.form/;$y=$_;rename $x, $y;'

So you can combine this with all kinds of shell commands and really do a lot of things in one line.

Of course you can use Raku, Ruby, Python or your favorite scripting language instead, as long as it allows some simple pattern matching and an efficient implicit iteration over the lines.

For such simple tasks there are also ways to do it directly in the shell, like this:

for f in *.d ; do mv $f `basename $f .d`.e; done

And you can always use sed, possibly in conjunction with awk instead of perl for such simple tasks.

Another approach is to just pipe the files into a text editor that is powerful enough and create a one-time script using powerful editing commands.
On Linux and Unix servers we almost always use vi, even people like me, who prefer Emacs on their own computer:

ls *.e > tmpscript
vi tmpscript

and then in vi


:%s/\(.*\)\(\.e\)$/mv \1\2 \1.f/
ZZ

and then

sh tmpscript
rm tmpscript

So, there are many ways to achieve this goal, and they are flexible and powerful enough to do a lot more than just such simple pattern renaming.

If you work in a team and put these things into scripts, it might be necessary to follow a team policy about which scripting languages are preferred and which patterns are preferred. And you need to know the stuff that you write yourself, but also the stuff that your colleagues write.

Please, do not do

mv *.a *.b

It won’t work, for good reasons.
On Linux and Unix systems the shell (usually bash) expands the glob expression (the stuff with the stars) into a list of strings and then starts mv with these strings as parameters. So mv is called with some file names ending in .a and .b and cannot have any idea what to do. When called with more than two parameters, the last one needs to be a directory to move the stuff into, so usually it will just refuse to work.
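
To see what mv actually receives, you can prefix the command with echo. Assuming the directory contains x.a and y.a and no .b-files, bash leaves the unmatched glob as a literal:

$ echo mv *.a *.b
mv x.a y.a *.b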

Comparing Images

A practical problem that I have is to sort my digital photos. Some of them have been taken with analog cameras and they have been scanned. Some of them have been scanned several times, using different resolution, different providers or different technologies.

So one issue that occurs is having two directories which contain more or less the same images from different scans.

Some of them have been sorted or enriched with useful information or just rotated correctly, others have better resolution. Maybe it is good to use different scans to create a better image than the best scan.

For a few hundred films doing that manually is possible, but a lot of work. So I thought about automating it, at least partially.

There are several issues that make it more difficult. Scans are not very accurate, so they cut off a tiny random bit at the borders. So to match two images exactly, it is necessary to scale and crop them.

Colors look quite different. And then of course resolutions are different. And sometimes they have been turned by an angle of 90 or 270 degrees. Some scans miss a few images that another one has.

So, how to start?

First, all images are scaled to thumbnails of approximately 65536 pixels. In my case that turns out to be 204×307, but it could of course be anything around that size retaining the rough aspect ratio.

Now all thumbnail images from the two directories are read into memory. 80 thumbnail images are no big deal…

Images in portrait orientation are rotated by 90 and 270 degrees and both variants are used. So from here on all images are in landscape format. Upside down is not supported.

All images are scaled to exactly 204×307 in memory to allow comparison.

Then the average r, g and b values of all images from each directory are calculated. The r, g and b values of each pixel are multiplied or divided by a factor, which is the square root of the quotient of these averages, and constrained to the range 0..255. So this partially neutralizes the effect of different colors.

Now each thumbnail from the first directory is compared with each thumbnail from the second directory. This is done by calculating for each pixel the sum of the squares of the differences between the r, g and b values of the two images.

These values are added up and divided by the number of pixels (204×307 in my case). The square root of this is the comparison result. For images that are actually the same, comparison results between 30 and 120 are possible. Now some heuristics are used to find matches based on these values. Also an „interpolation“ is used if consecutive images occur in both directories with the middle one missing. This usually brings good results. In some cases manual interaction is needed. So it is possible to mark images in a web interface, and then the match that was found for them is revoked. Also it is possible to provide „hints“, which means that these images should be matched.
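
The metric itself is easy to express in any language. As a minimal sketch, this is how the comparison result of two thumbnails that have already been scaled to identical dimensions could be computed in Java (my actual program is written in Perl, see below):

import java.awt.image.BufferedImage;

public class ImageDistance {
    // root mean square of the differences of the r, g and b values of two
    // images that must already have identical dimensions
    static double distance(BufferedImage a, BufferedImage b) {
        int w = a.getWidth();
        int h = a.getHeight();
        long sum = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int p = a.getRGB(x, y);
                int q = b.getRGB(x, y);
                int dr = ((p >> 16) & 0xff) - ((q >> 16) & 0xff);
                int dg = ((p >> 8) & 0xff) - ((q >> 8) & 0xff);
                int db = (p & 0xff) - (q & 0xff);
                sum += dr * dr + dg * dg + db * db;
            }
        }
        return Math.sqrt((double) sum / (w * h));
    }
}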

It took about a day and a half to write this, and it works sufficiently well for the purpose.

But there are much more sophisticated algorithms that really recognize objects. I have a program called hugin that combines a few images into a panorama. Usually it works quite well, and it can even rotate, shift and scale images and do amazing things most of the time. Sometimes it just does not work with certain images. Also Google of course has very powerful software to recognize and compare images and even create panoramas. If we think about face recognition… There is really good stuff around. But the first approach with some improvements gave sufficiently good results, and the program will only run a couple of hundred times, then it will not be needed anymore.

This is heavily inspired by the Perl library Image::Compare, but since I am doing something slightly different, I do not use this library any more. It is still written in Perl, though. I will happily provide my source code, but I have baked into it assumptions and conventions concerning the file names and directory structures of the images that I have introduced myself and that are not universal.

Object Creation: Builder vs. Constructor vs. Setter

When we create new objects, we are basically confronted with the need to provide at least one construction pattern.

Of course, depending on the language, we have more or less three ways to go that are commonly available.

Traditionally in OO it was mandatory to write setters and getters. In C++ or Java they really have names like getXyz or setXyz, but in Ruby, C# or Scala they can be written in such a way that they behave as if the attribute were public and could be assigned and read, by just magically calling the setters and getters internally. Actually Java does that internally for public attributes, or more generally for attribute assignment, and Hibernate can be configured to go via the getters and setters or via the internal attribute assignment. This can be useful to apply some DB-specific conversion in the getters and setters and to bypass it for the DB access.

Why do we use these getters and setters at all? They were introduced to have the flexibility to change the internal implementation without changing the API, because getters and setters can actually become more complex. This can be useful for DB-specific conversions in Hibernate, but apart from that, in 25 years of OO-ish development this flexibility has hardly been used. Most of the time the set of attributes changes, and the set of setters and getters changes simultaneously. So one might ask the question why we go the extra mile of adding getters and setters by default, when we could just make the attributes public and save some time. I am not asking, because that would decrease my life expectancy. But moving on demand from plain accessible attributes to getters and setters would be just a refactoring, like changing the set of attributes.

Now which of these are preferred and why?

First of all, we do need to read all attributes in some way, otherwise they are just a waste of space. Let us forget for the moment about programming low-level APIs, where bits have to be counted and dummy attributes have to be added to move the useful ones to the right position. Very often it turns out that we do not need to change the attributes during the lifetime of the object. Now Ruby has a nice feature of setting everything up and then calling freeze, which makes the object, but not its sub-objects, immutable. I think it would be worth considering to add something like this to Scala and Java, for example. Clojure actually has something like this, but with a slightly different flavor.

There is some advantage in knowing that the state of objects does not change. It is easier to reason about the code. It really helps for creating thread safety and reentrance. And it even helps when passing around a reference to an object that still „lives“ somewhere else, for example when a sub-object comes out of a getter. In functional programming this is mandatory for all internal APIs, in other areas it is just something to make life easier where the mutation is not actually needed.

So the setters go away where they are not needed, and we end up constructing the object with a constructor that contains all attributes, or with variants that have reasonable default values. This makes sense when the attributes are few and there is no risk of mixing them up. The builder pattern helps by naming the attributes in languages that do not allow named parameters for the constructor out of the box. So it is useful in Java, but obsolete in many other languages. If attributes are final or constant or whatever it is called, the need for setters and getters is technically even lower, because nothing can go wrong, the attribute can only be read. But in Java it is best practice to write getters and we should comply.
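
As a sketch, a typical hand-written builder in Java might look like this (the class and attributes are made up for illustration):

public class House {
    private final int rooms;
    private final String name;

    private House(int rooms, String name) {
        this.rooms = rooms;
        this.name = name;
    }

    public int getRooms() { return rooms; }
    public String getName() { return name; }

    public static class Builder {
        private int rooms;
        private String name;

        public Builder withRooms(int rooms) { this.rooms = rooms; return this; }
        public Builder withName(String name) { this.name = name; return this; }
        public House build() { return new House(rooms, name); }
    }
}

// usage: House h = new House.Builder().withRooms(4).withName("cabin").build();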

Now the issue arises that there is only a private or public multi-argument constructor and a lot of getters, or whatever is used for reading the attributes in the specific language. And then a framework needs to create objects from XML, JSON or whatever automatically. These frameworks tend to need at least a no-argument constructor, often a public no-argument constructor, that should be only for framework use. And the attributes have to be made non-final. If the framework is smart, it bypasses getters and setters and the no-argument constructor is already enough.

Some frameworks actually require the setters. There we go back to the old-school world. We can impose a convention that the setters and the no-argument constructor are there ONLY for the purposes of the framework and should not be used otherwise. Maybe that is a good approach, it is somewhat cleaner. But the box is opened and mistakes with such a convention will happen, so the question arises why we need to deal with each attribute at least seven times: the attribute itself, its getter, its setter, the multi-argument constructor, the attribute of the builder, the with-method of the builder and the build-method.

Some things are easier, when moving to a new language and dumping all the garbage-traditions in that step.

But good developers can write good software in any reasonably good language that reasonably suits the purpose.

MapStruct

In the Java sphere we often develop the same data class several times. Each layer has its own variant, and they are named almost the same, with some prefix or suffix, or with just the package name to distinguish them. The set of attributes is the same (or almost the same), and they have setters and getters. Or maybe only getters.

Nobody wants to write business logic two or three or four times, no matter how much support we have for copying code between the layers. And there OO is gone. We have to use anemic data objects, which Martin Fowler clearly described as an antipattern some years ago.

But since we are now switching to new paradigms every couple of years, we no longer care and no longer know. OO was 25 years ago. Now we do FP and microservices. And new frameworks. And many layers.

So, where does this come from?

First of all, the database access layer is Hibernate. I do not know why, because I think that plain JDBC would be easier, but Hibernate is already there and cannot be removed by arguments. Now Hibernate came with the promise that we can just use plain objects (POJOs) for our data and mirror database tables to classes and columns to attributes. Some XML stuff had to be written and everything worked. Only writing the XML was such a pain that people immediately jumped to the annotation alternative once it was there. It was better and still is. But now the POJOs are obviously cluttered with Hibernate or JPA annotations. So they have to stay in the database layer. Actually there is a much stronger argument for this. Objects contain other objects and collections of objects. And possibly everything, if we go deeper recursively. Accessing the database should be a reasonably fast operation, so some attributes are loaded lazily. That means they are only really loaded when we need them. Which can go terribly wrong, because by then the transaction is no longer around and it is too late.

Also we have our own ideas of what data classes have to look like, so there are some layers where we want no-argument constructors and setters and getters, some layers where we want final attributes and constructors with all attributes and only getters, and again others where we prefer to use a builder. And yes, each layer has its rules that need to be followed.

So, we keep them in the DB layer and map them to almost identical service-layer objects without annotations. Then we work with these and write our business logic. Procedural programming, mostly, because the objects cannot have business logic. So we have classes with methods that kind of behave like static methods, but are non-static, because the framework wants it like that.

And then again, we build more and more layers, because each concern needs to be dealt with in its own layer. And each requires its own set of data objects, possibly with its own annotations for REST or SOAP or JSON or XML or whatever.

So, how do we move data between layers? At each layer boundary the data needs to be copied to the sister objects of the new layer. Now this is kind of stupid, programming something like

class HouseL1 {
    private final int a;
    private final String b;

    public HouseL1(HouseL2 l2) {
        this.a = l2.getA();
        this.b = l2.getB();
        // ....
    }
}

or with builders or with setters and getters, it is a lot of ugly work. And even worse, all sub-objects and collections of sub-objects have to be mapped. And their sub-objects. And we possibly have to stop somewhere.

So we would like to avoid doing all this tedious stuff.

What can we do?

Is Java really the right language? Of course, we are writing enterprise software and we need type safety. Not real type safety like in Scala, but a little bit of it feels good.

Reconsider our whole architecture and simplify it. Maybe it is possible to get rid of some layers and write much simpler software that does the same thing, only much faster and with fewer bugs. OK, I am only kidding here. We are talking about enterprise software. And yes, sometimes the layers do have real purposes and make sense.

Try to use the same data objects in all layers anyway? It has been tried. It works, but only in very simple settings.

Generate the source code for the mapping. You can write a script for this or find one or find a tool or whatever. Parse the data classes and create the source code for the transformation methods. Or just write the Hibernate classes and generate all other layers, with their preferred setup in terms of mutability, construction and annotations, from that.

But we are in Java, so why not use reflection and figure out at runtime how to map it? Find a library that does it for you or write your own. Performance? No problem. We use enterprise servers, of course.

So, in the end, it is a good option to have a Java tool that generates the source code for the transformation as part of the compile process.

MapStruct does exactly that. We write an interface for our transformations. With some annotations, non-obvious mapping behavior can be specified. Keep that list small and try to make it possible to recognize the mappings automatically, where possible. Then an extension to the Maven compiler plugin is added, which invokes MapStruct to create an implementation of this interface at compile time and of course compiles the implementation. And voilà, it works. Even for classes with builders, as long as the builder uses method names that are identical to the corresponding attribute name without a „with“ prefix. So deal with it, name the methods of the builder like that.
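
A minimal sketch of such a mapper interface, reusing the HouseL1/HouseL2 classes from the example above (the attribute names are assumed to match, which is exactly the point):

import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

@Mapper
public interface HouseMapper {
    HouseMapper INSTANCE = Mappers.getMapper(HouseMapper.class);

    // attributes with identical names are mapped automatically;
    // only non-obvious mappings need an annotation, for example:
    // @org.mapstruct.Mapping(source = "b", target = "description")
    HouseL1 toL1(HouseL2 l2);
}

// usage: HouseL1 l1 = HouseMapper.INSTANCE.toL1(l2);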

And yes, we should get rid of setters where we do not really need them. And we should not write constructors with 20 parameters, because the parameters will get messed up in a language like Java, which does not yet have named parameters. And we do not want to couple layers, so constructors that use the sister class from another layer are not a good idea either, if we have more than two layers or so. So there we go with a builder.

At the end of the day, we can write good software with any reasonably good language and framework. But it is worth investigating how to do certain things. And it is really worth asking the question why we are doing this, at times when it is possible to make such choices.

Perl Scripts for editing

Even though we do have IDEs with quite powerful refactoring mechanisms for many languages, it is still sometimes useful to have another automated editing mechanism.

Why would that be the case?

Some examples:

In some cases there was an SQL script to create the database tables, which had been written first. From that, classes and even CRUD operations in something like JDBC or DBI can be generated. Even though most Java projects that I have seen recently prefer using Hibernate (or more generally JPA2), there are some benefits in doing just plain old JDBC. This requires some discipline in writing the SQL in a uniform way, and it was sometimes even necessary to have „magical comments“ that were ignored by the DB, but used by the script. This can save a lot of work and avoid errors.

In some cases there were large numbers of HTML files, and a similar kind of change had to be applied to each of them. Using a script to parse the HTML files and to apply the changes can save a lot of work and provide consistency that is often badly needed. And if the whole thing happens in the context of an agile project, then the stakeholders might want to see the outcome and come up with further ideas. No big deal, just change the transformation scripts and check it out again. I recommend thinking in terms of larger transformation steps. Then I would retain the original files and apply the script for the step until the outcome is OK. Then this can be committed to git and the next step can be worked on. If the intermediate results are not useful for anybody else, just wait with the push or, better, work on a branch.

Sometimes refactorings have to be done that are not easily supported by the IDE. For example, a whole bunch of classes needs to be moved to different packages according to some new naming rule. Just find all the classes with their old package names, then move them to the new directory structure and rename imports and package declarations to the new structure. This can easily be done with a script. Always remember to be careful if there is reflection involved, this will most likely break all refactorings, no matter if done by IDE or by script.

Also it is often useful to use scripts to analyze code and to find occurrences of certain patterns.

These things can be done with scripts in Ruby, Python, Perl or Raku. All of these are valid options. Ruby and Raku are somewhat more sophisticated languages than the other two. And Perl and Raku have the most sophisticated regex capabilities. I would assume that Raku is the best choice if you start from scratch; it even supports grammars out of the box, which might be a way to address such issues. Perl has them as add-on libraries. Of course it is also useful to work with what you know. But sometimes it is worth learning the right tool instead of just using the golden hammer to put in screws.

So in my case the tool for this is currently Perl. It does the job, is available more or less everywhere and I know it well enough. It might be worth moving to Raku in the future.

Some things that often work for simple scripts are the following:

Try to start with normalizing the input files; this might be the first transformation step. Remove trailing spaces, replace tabs with spaces, maybe normalize indentation, replace line endings by LF only without CR. In HTML, replace HTML entities for Unicode characters by the appropriate Unicode characters, convert everything to UTF-8 etc.
If you have data like phone numbers and dates in the file that are meaningful for your further steps, bring them to a standard format. Of course this always depends on your local circumstances, but it is usually the right way to go. This might be partially done with an external tool like xmllint.
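
A rough sketch of such a normalization step as a Perl one-liner (replacing each tab by four spaces is only an approximation of real tab expansion):

perl -i -p -e 's/\r\n?/\n/g; s/\t/    /g; s/ +$//;' *.html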

For the next steps we need to consider the issue that we have multiple lines. There are regex variants that do not stop at line boundaries. But if we read the content line by line, it can still be a bit more work. Sometimes it is easier to just replace line feeds by some marker string that does not otherwise occur in the files (check it before!) and then apply the usual regex to this longer line.

What we often need is a regex that finds the shortest and not the longest match. If we write something like /a.*b/, this will look for a sequence that starts with an „a“ and ends with the last „b“ that can be found. Often we want to end with the first „b“ instead. This can be achieved by /a.*?b/.

Another pattern that is often useful for such scripts is to work with a state machine. If a certain pattern is discovered, we react to it depending on the state and possibly change the state. So we can apply changes to something only if it occurs in a certain context.
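
As a small sketch of this pattern, the following one-liner applies a substitution only inside form elements of HTML files, using the opening and closing tags to switch the state (s/foo/bar/g stands for whatever change is actually needed and the sketch assumes the tags are on their own lines):

perl -i -p -e 'if (/<form/) { $in = 1 } elsif (/<\/form>/) { $in = 0 } s/foo/bar/g if $in;' *.html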

JSON instead of Java Serialization: The solution?

We are starting to recognize that Java serialization is not such a good idea.

It is cool and can really work on a wide range of objects, even including complex and cyclic reference graphs. And it was essential for some older Java frameworks like EJB and RMI, which allowed remote access to Java objects and classes.

But it is no longer the future; Oracle will soon deprecate it and later remove it. And it will happen this time, even though they really keep stuff around for a long time due to compatibility requirements.

Just to recap: it opens up security issues, it introduces hidden behavior and makes it harder to reason about code, it creates tight coupling between remote components and it can result in bugs that only occur at runtime and cannot be discovered at compile time. In short, it is not resilient.

So we need something else. Obvious candidates are XML, YAML and JSON. XML is of course an option and is powerful enough to do many things, but often a bit too clumsy and too much boilerplate, so we try to move away from it. YAML and JSON kind of do the same thing, but it seems that JSON is winning the race, so we all need to know JSON and many of us tend to skip YAML.

So why not use JSON? It is easy, it has good libraries and we can even find databases that work with JSON.

What JSON can express very well are scalars, lists and maps and combinations of these. This is quite exactly what we have in Perl, JavaScript or Clojure as basic building blocks. These languages support object oriented programming, but for simple stuff we go with these basic building blocks. And objects can be modelled as (hash-)maps, with the attribute names as keys. Actually JSON is valid JavaScript code.

We do have to change our thinking when moving from Java Serialization to JSON. JSON does not store any serializable object but just data. Maybe that is enough and that is what we actually want. It totally works in heterogeneous environments, where we are using different programming languages or different implementations.

There are good libraries. I have tried two, Jackson and GSON, which both work well, recently mostly Jackson. It is important to think in terms of Clojure, JavaScript or Perl, without objects. So we lose type information, which can be considered good or bad, but if we can arrange ourselves with it, we avoid the tight coupling. A JavaBean is expressed exactly the same way as a HashMap with the attribute names as keys. We can provide the top-level class when deserializing, but at the child levels the library will not be able to figure the classes out, if it relies on runtime information.

Example Code

Here it has been tried out. Find the full example code on GitHub.

A class that contains all kinds of stuff. It is not prepared for really putting in nulls, but it is just experimental code…


package net.itsky.jackson;

import java.util.List;
import java.util.Map;
import java.util.Set;

public class TestObject {
    private Long l;
    private String s;
    private Boolean b;
    private Set set;
    private List list;
    private Map map;

    public TestObject(Long l, String s, Boolean b, Set set, List list, Map map) {
        this.l = l;
        this.s = s;
        this.b = b;
        this.set = set;
        this.list = list;
        this.map = map;
    }

    public TestObject() {
        // only for framework purposes
    }

    public Long getL() {
        return l;
    }

    public String getS() {
        return s;
    }

    public Boolean getB() {
        return b;
    }

    public Set getSet() {
        return set;
    }

    public List getList() {
        return list;
    }

    public Map getMap() {
        return map;
    }

    @Override
    public String toString() {
        return getClass().getSimpleName() + "("
                + "l=" + l + " (" + l.getClass() + ") "
                + " s=\"" + s + "\" (" + s.getClass() + ") "
                + " b=" + b + " (" + b.getClass() + ") "
                + " set=" + set  + " (" + set.getClass() + ") "
                + " list=" + list + " (" + list.getClass() + ") "
                + " map=" + map + " (" + map.getClass() + "))";
    }
}

And this is used for running everything. To play around more, it should probably be moved to tests…

package net.itsky.jackson;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectWriter;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;

import java.io.StringReader;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class App {

    public static void main(String[] args) {
        try {
            Set s1 = ImmutableSet.of(1, 2, 3);
            Set s2 = ImmutableSet.of(1, 2, 3);
            Map m1 = ImmutableMap.of("A", "abc", "B", 3L, "C", s1);
            List l1 = ImmutableList.of("i", "e", "a", "o", "u");
            TestObject t1 = new TestObject(30303L, "uv", true, s2, l1, m1);
            Map m2 = ImmutableMap.of("r", "r101", "s", 202, "t", t1);
            List l2 = ImmutableList.of("ä", "ö", "ü", "å", "ø");
            Set s3 = ImmutableSet.of("x", "y", "z");
            TestObject t2 = new TestObject(40404L, "ijk", false, s3, l2, m2);
            ObjectMapper mapper = new ObjectMapper();
            ObjectWriter writer = mapper.writerWithDefaultPrettyPrinter();
            System.out.println("t2=" + t2);
            String json = writer.writeValueAsString(t2);
            System.out.println("json=" + json);
            StringReader stringReader = new StringReader(json);
            TestObject t3 = mapper.readValue(stringReader, TestObject.class);
            System.out.println("t3=" + t3);
        } catch (Exception ex) {
            RuntimeException rex;
            if (ex instanceof RuntimeException) {
                rex = (RuntimeException) ex;
            } else {
                rex = new RuntimeException(ex);
            }
            throw rex;
        }
    }
}

And here is the output:

t2=TestObject(l=40404 (class java.lang.Long)  s="ijk" (class java.lang.String)
  b=false (class java.lang.Boolean)  
set=[x, y, z] (class com.google.common.collect.RegularImmutableSet)
  list=[ä, ö, ü, å, ø] (class com.google.common.collect.RegularImmutableList)
  map={r=r101, s=202, 
t=TestObject(l=30303 (class java.lang.Long)
  s="uv" (class java.lang.String)  b=true (class java.lang.Boolean)
  set=[1, 2, 3] (class com.google.common.collect.RegularImmutableSet)
  list=[i, e, a, o, u] (class com.google.common.collect.RegularImmutableList)
  map={A=abc, B=3, C=[1, 2, 3]} (class com.google.common.collect.RegularImmutableMap))}
 (class com.google.common.collect.RegularImmutableMap))
json={
  "l" : 40404,
  "s" : "ijk",
  "b" : false,
  "set" : [ "x", "y", "z" ],
  "list" : [ "ä", "ö", "ü", "å", "ø" ],
  "map" : {
    "r" : "r101",
    "s" : 202,
    "t" : {
      "l" : 30303,
      "s" : "uv",
      "b" : true,
      "set" : [ 1, 2, 3 ],
      "list" : [ "i", "e", "a", "o", "u" ],
      "map" : {
        "A" : "abc",
        "B" : 3,
        "C" : [ 1, 2, 3 ]
      }
    }
  }
}
t3=TestObject(l=40404 (class java.lang.Long)  s="ijk" (class java.lang.String)  
b=false (class java.lang.Boolean)  
set=[x, y, z] (class java.util.HashSet)  
list=[ä, ö, ü, å, ø] (class java.util.ArrayList)  
map={r=r101, s=202, 
t={l=30303, s=uv, b=true, 
set=[1, 2, 3], 
list=[i, e, a, o, u], 
map={A=abc, B=3, C=[1, 2, 3]}}}
  (class java.util.LinkedHashMap))

Process finished with exit code 0

So the immediate object and its immediate attributes were deserialized properly to what we provided. But everything inside went to maps, lists and scalars.

The intermediate JSON does not carry the type information at all, so this is the best that can be done.
Often this is exactly what we want. If not, we need to find something else or see if we can tweak JSON to carry type information.
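
If we do need the type information, Jackson can actually be tweaked to include it, at the price of coupling the JSON to Java class names again. A sketch of one possible variant:

import com.fasterxml.jackson.annotation.JsonTypeInfo;

// writes an additional "@class" property into the JSON, which Jackson then
// uses during deserialization to reconstruct the concrete classes
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, property = "@class")
public class TypedTestObject {
    // ... attributes as in TestObject above
}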

It will be interesting to explore other serialization protocols…

Perl and Scala: what can they learn from each other?

Ironically, Scala first drew my interest because I discovered about ten years ago that there was no really good understanding of how to do a good multithreading concept for Perl 6. I thought exploring how they do it in Scala, which was already known to be good at that at the time, would give a more general understanding of this issue. At that time Perl 6 (now named „Raku“) was intended to rather go without multithreading capabilities than to do them badly. In the end I got dragged into Scala and found it by itself more interesting than the original issue. And the Perl 6 community eventually found good answers for providing multithreading capabilities anyway.

So while there are technical concepts in both of these languages that are interesting and could possibly in some way be applied to the other one, there is also an interesting parallel between them.

Both Scala and Perl have been „cool languages“ that were really strong in one area or even in a broader range of application areas. Both of them found a competitor that was kind of an „inferior clone“ of them. PHP in its early versions was very similar to Perl, but „simplified“ and kind of a subset of what Perl provided. At that time Perl had a real boom, because the first web applications came up and the only reasonable way to go was CGI, and of course it was done with Perl. There were some early alternatives like Cold Fusion and ASP, but they never really became mainstream, at least not outside of their respective communities. Now PHP eventually took over most of Perl’s CGI territory and has become a major building block of our current WWW. Wikipedia and this blog run on PHP. Perl eventually also lost its leading position as a system administration scripting language to Ruby and even more to Python and some others, but it is still there and has strong string parsing capabilities and a very useful ecosystem of libraries called CPAN.

Now Scala has found in Kotlin a similar competitor. Besides being somewhat simpler, Kotlin also shines with good tooling support. It comes from the same organization as IntelliJ IDEA, which is the usual IDE for most JVM languages, for people who rely neither on Emacs nor on vi. So Kotlin support in IntelliJ is always going to be a high priority. And Kotlin is officially supported by Google as a programming language for Android apps. It seems to work well, allows for more modern development than the supported Java versions and has conceptually a lot of similarity with Swift, which is the most modern programming language supported by Apple for iOS apps. There have been heroic and admirable approaches to allow Android app development using other JVM languages, especially Scala. But they all suffer from the same set of problems. In order to avoid installing too much language-specific code in the app, dynamic language features that would require a compile capability, as commonly used in Groovy or Clojure, have to be avoided. And the excessive use of the language’s libraries has to be avoided, because they are not on the phone already, so a copy of them has to be shipped with each app. So the storage usage is much higher than for Kotlin and Java apps. And then we see attempts to reduce the size of the libraries by only including what is needed. That is necessary, but it looks too fragile to really trust it. So, for mobile apps, it is Kotlin. Period. And then Kotlin is already there, so why not use it on the server as well? Yes, I do believe Scala is better, but that is not what everyone thinks, and it needs to be much better to justify an additional language, where app development for Android is already happening.

Now both Perl and Scala had some problems. To some extent they even share exactly the same problem: the possibility to write really „cool“ code that is very smart, very short and cannot be read by anybody else without very much time and very much knowledge. This can be done in any language, but Perl is the number one for this, and I would put Scala as number two and C++ and C as numbers three and four, of the languages that I know. It is a good idea to use some coding standards that allow for clean Scala or Perl code. But please remain reasonable and do not let bureaucrats come in charge of the coding standards and create a monster that drains all creativity. Allow using powerful features, but use them in a decent and readable way.

Now in both cases there was an effort to write a new version of the language that was meant to be slightly incompatible, cleaning up some of the weaknesses and bringing some improvements. In the case of Perl this was Perl 6. It was developed for around 20 years and came out a few years ago. Eventually it turned out too different, so it was renamed to Raku. For Scala, a new language called „Dotty“ was developed. It was decided to make this the next major version, Scala 3. Even though it is much closer to Scala 2 than Raku is to Perl 5, it is still incompatible and requires an effort to rewrite code. It can already be seen that large Perl 5 projects are hardly moving to Raku, so Perl 5 is there to stay and Raku is just a second language within the same community. This will probably not happen like that with Scala, and the core language team will probably at some point in time concentrate on Scala 3. But large organizations that invested heavily into Scala cannot easily migrate, simply because it needs a lot of time and money. So we will probably also see some long-term coexistence of Scala 2 and Scala 3. Maybe Scala 2 will be forked by major adopters. Or it will be supported for money by Lightbend.

Phone Numbers and E-Mail Addresses

Most data that we deal with are strings or numbers or booleans, and combinations of these into classes and collections. Dates can be expressed as strings or numbers, but have enough specific logic to be seen as a fourth group of data. All of these have interesting aspects, some of which have been discussed in this blog already.

Now phone numbers are, by a naïve approach, numbers or strings, but very soon we see that they have their own specific aspects. The same applies to email addresses, which can be represented as strings.

Often projects go by their own „simplified“ specification of what an email address or a phone number is and how to parse, compare and render them. At the end of the day the simplification is harder to tame than the real solution, because it needs to be maintained and specified by the project team rather than being based on a proven library. And once in a while „edge cases“ occur that cannot be ignored and that make the „home grown“ library even more complex.

Behind phone numbers and email addresses there are well defined and established standards, and they are hard to understand thoroughly within the constrained time budget of a typical „business project“, because the time should be allocated to enhancing the business logic and not to reinventing the basics. Unless there is a real need to do so, of course.

Just to give an idea: When phone numbers are parsed from user input, they can start with a „+“ sign or use some country-specific logic to express to which country they belong. And then „+1“, for example, does not stand for the United States alone, but also for Canada and some smaller countries that are in some way associated with the United States or Canada. Further analysis of the number is required to know about that. The prefix for international numbers is often „00“, but in the United States it is „011“, and there were and are some other variants that are still frequently used. Some people like to write something like „+49(0)431 77 88 99 11 1“ instead of „+49 431 77 88 99 11 1“. We can constrain the input to the variants we happen to think of and force the supplier of data to comply, but why bother? Why not accept all legitimate formats, as long as they are correct and unambiguous?

Now for email addresses there is the famous one-page regular expression to recognize correct email addresses, which is even by itself not totally complete. Find it at the bottom of the article…

Of course it includes some rarely used variants of email addresses that were once used and have not been completely abolished officially, but it is hard to draw an exact border for this.

So the general recommendation is to find a good library for working with email addresses and phone numbers. Maybe the library can even, to some extent, eliminate input strings that formally comply with the format, but are known to be incorrect, by knowing about numbering schemes worldwide or about email domains, or even by performing lookups.

Another strong recommendation is to store data like email addresses and phone numbers in a technical format, which in the example of phone numbers always starts with a „+“ followed by digits only. For input, any positioning of spaces is accepted; for output, the library knows how to format the number correctly. This allows selecting by number without dealing with complex formatting, by just using the technical format in the query as well.

For Java (and thus for many JVM languages), C++ and JavaScript there is an excellent library from Google for dealing with phone numbers. For emails, something like the Apache Commons email validator is a way to go.
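
Just to give an idea how that looks, here is a minimal sketch using Google's libphonenumber (the phone number and the region are arbitrary examples):

import com.google.i18n.phonenumbers.NumberParseException;
import com.google.i18n.phonenumbers.PhoneNumberUtil;
import com.google.i18n.phonenumbers.PhoneNumberUtil.PhoneNumberFormat;
import com.google.i18n.phonenumbers.Phonenumber.PhoneNumber;

public class PhoneExample {
    public static void main(String[] args) throws NumberParseException {
        PhoneNumberUtil util = PhoneNumberUtil.getInstance();
        // "CH" is the default region for numbers given without a "+" prefix
        PhoneNumber number = util.parse("044 668 18 00", "CH");
        System.out.println(util.isValidNumber(number));
        // the technical format for storage: "+" followed by digits only
        System.out.println(util.format(number, PhoneNumberFormat.E164));
    }
}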

Keep in mind that for email addresses and phone numbers, the ultimate way of verification is to send the user a link or a code that they need to enter. At the end of the day it is insufficient to rely only on formal verification without this final step.

But still, issues remain for transforming data into a canonical technical format for storing it, formatting data for display etc. And there is a huge added value if we can reliably recognize formally false entries early, when the user can still easily react to it, rather than waiting for an email/SMS/phone call to be processed, which may fail when the user is no longer on our „registration site“. And we can process data which has already been verified by a third party, but still we want to parse it to recognize obvious errors.

The concrete libraries may be outdated by the time you are reading this, or they may not be applicable for the language environment that you are using, but please make an effort to find something similar.

So, please use good libraries, which are likely to be found for the environment that you are using, and write yourself what creates value for your project or organization. Unless your goal is really to write a better library. Better invest the time into areas where there are still no good libraries around.

And as always, you may understand email addresses and phone numbers as an example for a more general idea.

E-Mail Regex

Source: https://emailregex.com/:

(?:(?:\r\n)?[ \t])*(?:(?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t] )+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?: \r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:( ?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\0 31]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\ ](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+ (?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?: (?:\r\n)?[ \t])*))*|(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z |(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n) ?[ \t])*)*\<(?:(?:\r\n)?[ \t])*(?:@(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\ r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n) ?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t] )*))*(?:,@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])* )(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t] )+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*) *:(?:(?:\r\n)?[ \t])*)?(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+ |\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r \n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?: \r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t ]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031 ]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\]( ?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(? :(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(? :\r\n)?[ \t])*))*\>(?:(?:\r\n)?[ \t])*)|(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(? :(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)? [ \t]))*"(?:(?:\r\n)?[ \t])*)*:(?:(?:\r\n)?[ \t])*(?:(?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]| \\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<> @,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|" (?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t] )*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\ ".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(? 
:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[ \]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*|(?:[^()<>@,;:\\".\[\] \000- \031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|( ?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)*\<(?:(?:\r\n)?[ \t])*(?:@(?:[^()<>@,; :\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([ ^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\" .\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\ ]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*(?:,@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\ [\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\ r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\] |\\.)*\](?:(?:\r\n)?[ \t])*))*)*:(?:(?:\r\n)?[ \t])*)?(?:[^()<>@,;:\\".\[\] \0 00-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\ .|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@, ;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(? :[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])* (?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\". \[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[ ^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\] ]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*\>(?:(?:\r\n)?[ \t])*)(?:,\s*( ?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\ ".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:( ?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[ \["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t ])*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t ])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(? :\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+| \Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*|(?: [^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\ ]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)*\<(?:(?:\r\n) ?[ \t])*(?:@(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\[" ()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n) ?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<> @,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*(?:,@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@, ;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t] )*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\ ".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*)*:(?:(?:\r\n)?[ \t])*)? (?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\". 
\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(?: \r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\[ "()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t]) *))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t]) +|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\ .(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z |(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*\>(?:( ?:\r\n)?[ \t])*))*)?;\s*)

Ranges of Dates and Times

In Software we often deal with ranges of dates and times.

Let us look at it from the perspective of an end user.

When we say something like „from 2019-03-07 to 2019-03-10“ we mean the set of all timestamps t such that

    \[\text{2019-03-07} \le t < \text{2019-03-11}\]

or more accurately:

    \[\text{2019-03-07T00:00:00}+TZ \le t < \text{2019-03-11T00:00:00}+TZ\]

What is important: we mean to include the whole 24-hour day of 2019-03-10. Btw., please try to get used to the ISO date format even when writing normal human readable texts, it just makes sense…

Now when we are not talking about dates, but about times or instants of time, the interpretation is different.
When we say something like „from 07:00 to 10:00“ or „from 2020-03-10T07:00:00+TZ to 2020-04-11T09:00:00+TZ“, we actually mean the set of all timestamps t such that

    \[givenDate\text{T07:00:00}+TZ \le t < givenDate\text{T10:00:00}+TZ\]

or

    \[\text{2020-03-10T07:00:00}+TZ \le t < \text{2020-04-11T09:00:00}+TZ,\]

respectively. It is important that we have to add one day in the case of date-only information (accuracy of one day) and that we do not in the case of finer grained date/time information. The question whether the upper bound is included or not is not so important in our everyday life, but it turns out that the most useful convention is usually not to include the upper bound. If you prefer to have all options, it is a better idea to employ an interval library, i.e. to find one or to write one. But for most cases it is enough to exclude the upper limit. This guarantees disjoint adjacent intervals, which is usually what we want. I have seen people write code that adds 23:59:59.999 to a date and compares with ≤ instead of <, but this is an ugly hack that needs a lot of boilerplate code and a lot of time to understand. Use the exclusive upper limit, because we have it.

Now the requirement is to add one day to the upper limit to get from the human readable form of date-only ranges to something computers can work with. It is a good thing to agree on where this transformation is made. And to do it in such a way that it even behaves correctly on those dates where daylight saving starts or ends, because adding one day might actually mean „23 hours“ or „25 hours“. If we need to be really very accurate, sometimes leap seconds need to be added.

Just another issue has come up here: local time is much harder than UTC. We need to work with local time on all kinds of user interfaces for humans, with very few exceptions like for pilots, who actually work with UTC. But local date and time is ambiguous for one hour every year and at least a bit special to handle on those two days where daylight saving starts and ends. So convert dates to UTC and work with that internally, and convert them to local date and time on all kinds of user interfaces where it makes sense, including documents that are printed or provided as PDFs, for example. When we work with dates without time, we need to add one day to the upper limit and then round it to the nearest midnight (00:00:00) of our timezone TZ, or know when to add 23, 24 or 25 hours, respectively, which we do not want to know. So we need to use modern time libraries like java.time in Java, for example.
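
A minimal sketch in Java of this one transformation with java.time (the time zone is an arbitrary example):

import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

public class DateRangeExample {
    public static void main(String[] args) {
        ZoneId tz = ZoneId.of("Europe/Zurich");
        LocalDate from = LocalDate.parse("2019-03-07");
        LocalDate to = LocalDate.parse("2019-03-10");
        // lower bound inclusive, upper bound exclusive: add one day and take
        // the start of that day; atStartOfDay deals with the days that have
        // 23 or 25 hours
        Instant lower = from.atStartOfDay(tz).toInstant();
        Instant upper = to.plusDays(1).atStartOfDay(tz).toInstant();
        Instant t = Instant.now();
        boolean inRange = !t.isBefore(lower) && t.isBefore(upper);
        System.out.println(inRange);
    }
}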

Working with date and time is hard. It is important to avoid making it harder than it needs to be. Here are some recommendations:

  • Try to use UTC internally in the software as much as possible
  • Use local date or time or date and time in all kinds of user interfaces (with few exceptions)
  • Add one day to the upper limit and round it to the nearest midnight of local time exactly once in the stack
  • Exclude the upper limit in date ranges
  • Use ISO date formats even in the user interfaces, if possible
