Java Properties Files and UTF-8

Java uses a nice pragmatic file format for simple configuration tasks and for internationalization of applications. It is called the Java properties file or simply „.properties file“. It contains simple key-value pairs. For most configuration tasks this is useful and easy to read and edit. Nested configurations can be expressed simply by using dots („.“) as part of the key. This was already introduced in Java 1.0. For internationalization there is a simple way to create properties files with almost the same name, but with a language code just before the .properties suffix. The concept is called „resource bundle“. Whenever a language-specific string is needed, the program just knows a unique key and performs a lookup.
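
As a minimal sketch of such a lookup (the key and texts are made up for illustration), a resource bundle can be built from properties-format content and queried by key:

```java
import java.io.StringReader;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

public class BundleDemo {
    public static void main(String[] args) throws Exception {
        // Properties-format content with simple key-value pairs:
        String props = "greeting=Hello\nfarewell=Goodbye\n";
        ResourceBundle bundle = new PropertyResourceBundle(new StringReader(props));
        // The program only knows the unique key and performs a lookup:
        System.out.println(bundle.getString("greeting")); // prints "Hello"
    }
}
```

In a real application the content would come from files like Messages.properties and Messages_de.properties on the classpath, looked up via ResourceBundle.getBundle("Messages", locale).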

The unpleasant part is that these files are, in the style of the 1990s, encoded in ISO-8859-1, which covers only a few languages of western, central and northern Europe. For other languages there is a workaround: a \u followed by the four-digit hex code of the UTF-16 code unit. But this is not in any way readable or easy to edit. Usually we want to use UTF-8 or in some cases real UTF-16, without this \u-hack.
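
To illustrate the workaround (key and text are made up), the \uXXXX escapes are decoded when the properties content is loaded:

```java
import java.io.StringReader;
import java.util.Properties;

public class EscapeDemo {
    public static void main(String[] args) throws Exception {
        // The ISO-8859-1-era workaround: non-Latin-1 characters written
        // as \uXXXX escapes of their UTF-16 code units.
        // The string below contains the literal characters \u00fc and \u00df:
        Properties p = new Properties();
        p.load(new StringReader("greeting=Gr\\u00fc\\u00df Gott"));
        System.out.println(p.getProperty("greeting")); // prints "Grüß Gott"
    }
}
```

This works, but nobody wants to read or edit „Gr\u00fc\u00df Gott“ by hand.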

A way to deal with this is the native2ascii converter, which can convert UTF-8 or UTF-16 to the format of properties files. By keeping some .uproperties files, which are UTF-8, and converting them to .properties files with native2ascii as part of the build process, this can be addressed. It is still a hack, but properly done it should not hurt too much, apart from the work it takes to get it working. I would strongly recommend making sure that the converted and unconverted files never get mixed up. This is extremely important, because a mix-up is not easily detected in the case of UTF-8 with typical central European content, but it creates the ugly errors that we are used to seeing, like „sch�ner Zeichensalat“ instead of „schöner Zeichensalat“. And we only discover it when the files are already quite messed up, because at least in German the umlaut characters are only a small fraction of the text, but still annoying when broken. So I would recommend another suffix to make the distinction clear.

The bad thing is that most JVM languages have been kind of „lazy“ (which is usually a good thing) and have used Java’s infrastructure for this, thus inheriting the problem from Java.

Another way to deal with this is to use XML files, which are UTF-8 by default and can be configured to be UTF-16. With some development work or a search for existing implementations, there should be ways to do internationalization this way.

Typically some process needs to be added, because translators are often non-IT people who use some tool that displays the texts in the original language and accepts the translations. For good translations the translator should actually use the software to see the context, but this is another topic for the future. Possibly there needs to be some conversion from the data provided by the translator into XML, .uproperties, .properties or whatever is used. This should be automated by scripts or even by the build process and should merge new translations properly with existing ones.

Anyway, Java 9 will be helpful with this issue. Finally, properties files that are used as resource bundles for internationalization can be UTF-8.
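
Since Java 9, ResourceBundle reads .properties files as UTF-8 by default (falling back to ISO-8859-1). A sketch, feeding raw UTF-8 bytes directly (requires Java 9 or later; key and text are made up):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

public class Utf8BundleDemo {
    public static void main(String[] args) throws Exception {
        // Raw UTF-8 bytes, no \uXXXX escapes needed:
        byte[] utf8 = "greeting=Grüß Gott\n".getBytes(StandardCharsets.UTF_8);
        // On Java 9+, PropertyResourceBundle(InputStream) interprets
        // the bytes as UTF-8 by default:
        ResourceBundle bundle = new PropertyResourceBundle(new ByteArrayInputStream(utf8));
        System.out.println(bundle.getString("greeting")); // prints "Grüß Gott"
    }
}
```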


PostgreSQL

Almost every non-trivial application uses a database in some way.

For many years this meant Oracle, DB2 or MS-SQL-Server, depending mostly on the habits and on the religious orientation of the organization that developed or ran the application. These days all three are available for Linux and MS-Windows, and DB2 is also available on z/OS. The „home platforms“ of these three are probably Linux, z/OS and MS-Windows, respectively (2018).

We saw Teradata as an alternative to DB2 and Oracle for data warehouses. They run on huge amounts of data, but are really invisible to most of us. Maybe the data warehouse is the old „big data“, before the invention of the term.

We saw a big hype about NoSQL databases, and some interesting DB products from this group could successfully establish themselves.

We saw MySQL (and its fork MariaDB) mostly for database-installations that had somewhat lower requirements on the DB-product in terms of features or in terms of performance. Actually Wikipedia runs on MySQL or MariaDB and that is quite a big installation with heavy user load, but it is mostly about reading.

PostgreSQL was often positioned „somewhere between Oracle and MySQL“.

PostgreSQL 10 just came out. The most important new features were replication on a per-table basis, better partitioning of large tables and better support for clustering.

I have worked with all of the database technologies listed here and have even given trainings for MongoDB, Oracle and PostgreSQL.

So where is PostgreSQL positioned really in this landscape?

It is a good database product for a large and growing class of applications. I find it slightly more pleasant to work with than the other four SQL databases mentioned here, because the SQL implementation and its extensions are powerful, clean and behave more or less as expected. Some minor positive points are the default usage of the ISO date format, the distinction between NULL and the empty string, and on the other hand the fact that most stuff that works in Oracle at the SQL level can easily be transferred to PostgreSQL. The psql shell works like typical Linux shells in terms of command line editing and history. So a lot of minor details are just pleasant or as they should be.

Comparing to the three groups of contenders:

NoSQL

NoSQL databases leave the mainstream of transactional relational SQL databases and provide us either some interesting special features or the promise of performance gains or support for huge database sizes. The price for this is that we lose an extremely mature, clever and powerful query language, which SQL is. I would go for a NoSQL product if its additional features cannot reasonably be duplicated in PostgreSQL or other SQL DBs and if they are really useful for the job. I would also go for a NoSQL product if the required data sizes and performance cannot reasonably be achieved using an SQL product like PostgreSQL with good tuning of hardware, OS, database and application logic, but can actually be achieved with the NoSQL product. These applications exist, and it is important to pick the right NoSQL DB for the project. It should be observed that PostgreSQL has a lot of features beyond what normal SQL databases have, so looking into this area might be useful. A typical strength of some NoSQL databases (like Cassandra and MongoDB) is that powerful replication is kind of trivial to set up, while it is a really big story for typical transactional SQL databases. This is due to the transactional features, which add complexity, difficulty and a performance penalty to some kinds of replication.

MariaDB/MySQL

I do not hold it against MySQL that it belongs to Oracle, because MariaDB is an independent fork outside of Oracle and can be used instead of MySQL.
I do think that MySQL does not have quite the level of PostgreSQL in terms of features and cleanness. So we can get PostgreSQL for the same price as MySQL or MariaDB. Why not go for the better product? Even if MariaDB perfectly fits today, the application will grow, and at some point it will prove useful to be based on PostgreSQL. I came across the issue of nested transactions some years ago: they were easily supported by PostgreSQL, but not at all by MariaDB. Issues like that come up more likely in this direction than the other way around.

Oracle, DB2, MS-SQL-Server

Especially Oracle makes many long-term loyal customers run away due to their pricing and licensing practices. While it is extremely hard to change the database of a non-trivial database-based application, at least new applications in many organizations are discouraged from using Oracle, unless they can make a point why they really need it. MS-SQL-Server might absorb some of these, especially since it is now available on Linux servers. But what Oracle does now might very well be the policy of Microsoft or IBM in a few years, so it makes perfect sense to have a serious look at PostgreSQL. A reasonably well tuned PostgreSQL will work pretty much as well as a reasonably well tuned Oracle, DB2 or MS-SQL-Server. Features that are missing now are being added with new releases. Some interesting features make it just a bit more pleasant to use than for example Oracle. It just feels more modern and more Linux-like.

Btw., there were some more contenders in the space of commercial transactional SQL databases, like Adabas D, Sybase and Informix. SAP bought the database product Adabas D in 1997 and the whole Sybase company in 2010, in two more or less unsuccessful attempts to have their own database and not have to use a competitor’s product; they seem to have some success with HANA now. Informix was bought by IBM and is still offered as an alternative to DB2. I would say that these products have lost their relevance.

PostgreSQL

So I do recommend seriously considering PostgreSQL as a DB product. It is currently my favorite in this space, but there is no universal tool that fits everything.

Some random aspects to keep in mind when moving from Oracle to PostgreSQL are mentioned here…

The types CLOB and BLOB do not exist. They can mostly be replaced by the types TEXT and BYTEA, but it is not exactly the same. The type TEXT, which is a more or less unlimited variable-length string, can easily be used for columns where we would use VARCHAR2 in Oracle, which gives us the advantage that we do not have to worry about defining a maximum length or exceeding the 4k limit that Oracle imposes on VARCHAR2.

Empty strings are not the same as NULL in PostgreSQL; in Oracle they are.

PostgreSQL has a boolean type. Please use it and get rid of the workaround using CHAR, VARCHAR2 or NUMBER as replacement.

Oracle only ever had one kind of transaction isolation that was really well supported, and I think this is still the way to go there. It is an excellent choice and is very close to „repeatable read“, while PostgreSQL uses „read committed“ by default, but can be brought to use „repeatable read“. Please keep this in mind to avoid very unpleasant surprises, and use the transaction isolation level appropriately.
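
In JDBC the isolation level can be requested per connection; in the sketch below the URL, user and password are placeholders for a real configuration:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection parameters:
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "myuser", "secret")) {
            conn.setAutoCommit(false);
            // PostgreSQL defaults to READ COMMITTED; request REPEATABLE READ
            // to come closer to what Oracle applications are used to:
            conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
            // ... queries and updates within this transaction ...
            conn.commit();
        }
    }
}
```

Alternatively the default can be changed on the PostgreSQL side, e.g. with ALTER DATABASE mydb SET default_transaction_isolation = 'repeatable read'.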

The structure of PostgreSQL consists of DB instances, usually only one per virtual or physical server, which somewhat resembles what a database is in Oracle. Within a DB instance it is possible to define a number of databases without much pain. This was totally not the case with Oracle in earlier years, where it was best practice to rely on schemas instead, but now we can easily afford to run more virtual servers, each running Oracle (or PostgreSQL), if the licensing does not prohibit it in the case of Oracle. And since Oracle 12 there is the concept of the pluggable database, which splits an Oracle database into sub-databases that behave somewhat like separate databases without the overhead of DB instances. This seems to be quite equivalent to what PostgreSQL does, apart from the naming and many details about how to set it up and how to use it. Schema and user are more separate concepts in PostgreSQL: a schema can be defined totally independently of users, but there is a way to define schema names that match the user names to support this way of working. So we can do pretty much what we want, but the details of how to work it out are quite different.

Each database has its own programming language for writing triggers, stored procedures and the like. They seem to be somewhat similar between different DB products (we are talking about MS-SQL-Server, Oracle, PostgreSQL and DB2), but different enough that triggers and stored procedures need to be rewritten from scratch. This is not as painful as it used to be, since the approach of accessing DB tables for read access only via views and for write access only via stored procedures seems to have lost some popularity. Having written a lot of the business logic in PL/SQL, the pain of migrating to another DB product is really enormous, while business logic in Java, Scala, C, C++, Perl, Ruby, C# or Clojure can be ported more easily to a different OS and a different DB. But it is in no way for free.

One remark for development: some teams like to use in-memory databases for development and then trust that deployment on PostgreSQL or Oracle or whatever will more or less work. I strongly recommend not following this route. It is totally non-trivial to support one more DB product (usually a second DB product), and it is quite easy to set up a virtual OS with the DB product that is actually being used and with test data. PostgreSQL, Oracle, MS-SQL-Server, MongoDB and whatever you like can be configured to use more memory and perform pretty much like these in-memory DBs, if we set them up for development and are willing to risk data loss. This is no problem, because the image can be trivially copied from the master image when needed. Yes, a really good network and SSDs of sufficient size, speed and quality are needed to work efficiently like this, and it is possible and worthwhile to have that.

I can give training about PostgreSQL and MongoDB and about SQL in different dialects. Find contact information here.

And please: comments, corrections and additional information are always welcome…


The magic trailing space

When comparing strings, spaces of course count as well, and they should count. To ignore them, we can normalize the strings. Typical whitespace normalization includes the following (Perl regular expressions):

  • s/[ \t]+/ /g replaces any sequence of tabs and spaces used to separate content by one space.
  • s/\r\n/\n/g replaces carriage return + linefeed by linefeed only.
  • s/\s+$// removes trailing whitespace.
  • s/^\s+// removes leading whitespace.
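
The four rules above can be sketched in Java like this (the method name and the exact rule set are just one possible choice):

```java
public class WhitespaceNormalizer {

    // Apply the four normalization rules from the list above, in order:
    static String normalize(String s) {
        return s.replaceAll("[ \t]+", " ")   // collapse runs of spaces and tabs
                .replaceAll("\r\n", "\n")    // CRLF -> LF
                .replaceAll("\\s+$", "")     // strip trailing whitespace
                .replaceAll("^\\s+", "");    // strip leading whitespace
    }

    public static void main(String[] args) {
        System.out.println(normalize("  hello \t  world \r\n")); // prints "hello world"
    }
}
```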

It is often useful to do something like this when comparing strings that originally come from outside sources and are not normalized, but where only „the content“ counts. There can be more sophisticated rules to deal with no-break spaces, with control characters, with trailing spaces at the end of each line or only at the end of the whole thing, or with replacing multiple empty lines by just one empty line. The general idea is to think about the right normalization.

In some cases, like long numbers, spaces or other symbols are used to group digits. These should also be removed. Sometimes more specific rules apply, for example for phone numbers, web sites, email addresses etc., which need to be handled specifically for the type in question, hopefully using an adequate library.
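
For example, grouping spaces in an IBAN or credit card number can be stripped before validation. A minimal sketch (not a full validation, which should be left to a proper library):

```java
public class NumberInput {
    // Remove all whitespace used for digit grouping and normalize case:
    static String compact(String input) {
        return input.replaceAll("\\s+", "").toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(compact("de89 3704 0044 0532 0130 00"));
        // prints "DE89370400440532013000"
    }
}
```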

More often than not we see that web sites do not do this properly. Quite often information has to be entered and it is not normalized prior to further processing. So credit card numbers or IBANs are rejected because of grouping spaces, or other input because of trailing spaces, of course with an error message that does not give us a hint about what the problem was.

For a serious application there needs to be a serious processing step for data coming from outside anyway, for security reasons. Even though SQL injection should not work due to sound usage of SQL placeholders, it is good practice to check the data anyway and to reject it early and with a meaningful message. Should I trust a site that cannot deal with spaces in a credit card number enough to give them my card number? I am not sure.

It is about time that UI developers get into the habit of doing proper processing, normalization and checks of user input. Beware that any security-relevant checks need to be done on the server, or on the server as well as in the UI.


2018 — Happy New Year

Godt nytår — Gott nytt år — Щасливого нового року — Un an nou fericit — Feliĉan novan jaron — Feliz año nuevo — Καλή Χρονια — Godt nytt år — Bonne année — Akemashite omedetô — Onnellista uutta vuotta — Felice anno nuovo — laimīgu jauno gadu — Bon any nou — Ath bhliain faoi mhaise — Gullukkig niuw jaar — Happy new year — Срећна нова година — bun di bun an — عام سعيد — С новым годом — Frohes neues Jahr — Felix sit annus novus

This was created with a Lua-program:

#!/usr/bin/lua
tab = { "Frohes neues Jahr", "Happy new year", "Gott nytt år", "Feliz año nuevo",
        "Bonne année", "Felix sit annus novus", "С новым годом", "عام سعيد",
        "Felice anno nuovo", "Godt nytt år", "Gullukkig niuw jaar", 
        "Feliĉan novan jaron", "Onnellista uutta vuotta", "Godt nytår",
        "Akemashite omedetô", "Ath bhliain faoi mhaise", "Bon any nou", 
        "Срећна нова година", "laimīgu jauno gadu", "Un an nou fericit", 
        "bun di bun an", "Щасливого нового року", "Καλή Χρονια" }
math.randomseed(os.time())
shuffled = {}
for i = 1, #tab do
    -- insert each greeting at a random position
    table.insert(shuffled, math.random(#shuffled + 1), tab[i])
end
for i, line in ipairs(shuffled) do
    if i > 1 then
        io.write(" — ")
    end
    io.write(line)
end
io.write("\n")


Christmas — Weihnachten — Рождество 2017

Bon nadal! — Priecîgus Ziemassvçtkus — З Рiздвом Христовим — Buon Natale — Bella Festas daz Nadal! — С Рождеством — Срећан Божић — καλά Χριστούγεννα — God Jul! — Feliĉan Kristnaskon — ميلاد مجيد — Feliz Navidad — Glædelig Jul — Fröhliche Weihnachten — Joyeux Noël — Hyvää Joulua! — クリスマスおめでとう ; メリークリスマス — Merry Christmas — Natale hilare — God Jul — Crăciun fericit — Prettige Kerstdagen — Nollaig Shona Dhuit!

Christmas Tree in Olten 2017

This message has been generated by a program again, this time I decided to use Scala:

import scala.util.Random
object XmasGreeting {
  val texts : List[String] = List( "Fröhliche Weihnachten",
    "Merry Christmas", "God Jul", "Feliz Navidad", "Joyeux Noël",
    "Natale hilare", "С Рождеством", "ميلاد مجيد", "Buon Natale", "God Jul!",
    "Prettige Kerstdagen", "Feliĉan Kristnaskon", "Hyvää Joulua!",
    "Glædelig Jul", "クリスマスおめでとう ; メリークリスマス",
    "Nollaig Shona Dhuit!", "Bon nadal!", "Срећан Божић",
    "Priecîgus Ziemassvçtkus", "Crăciun fericit", "Bella Festas daz Nadal!",
    "З Рiздвом Христовим", "καλά Χριστούγεννα")
  val shuffledTexts : List[String] = Random.shuffle(texts)
  def main(args: Array[String]) : Unit = {
    println(shuffledTexts.mkString(" — "))
  }
}


Scala Exchange 2017

I visited Scala Exchange („#ScalaX“) in London on 2017-12-14 and 2017-12-15. It was great, better than 2015 in my opinion. In 2016 I missed Scala Exchange in favor of Clojure Exchange.

This time there were really many talks about category theory and of course its application to Scala. Spark, big data and Slick were less heavily covered this time. Lightbend (formerly Typesafe), the company behind Scala, did show some presence, but less than in other years. But 800 attendees is a number by itself, and some talks about category theory were really great.

While I have always had a hard time accepting why we need this „Über-mathematics“ like category theory for such a finite task as programming, I am starting to see its point and usefulness. While functors and categories provide a meta layer that is actually accessible in Scala, there are quite rich theories that can even be useful when constrained to a less infinite universe. This even helps with understanding things in Java. I will leave the details to another post. Or forget about them until the next Scala conference.

So the talks that I visited were:

  • Keynote: The Maths Behind Types [Bartosz Milewski]
  • Free Monad or Tagless Final? How Not to Commit to a Monad Too Early [Adam Warski]
  • A Pragmatic Introduction to Category Theory [Daniela Sfregola]
  • Keynote: Architectural patterns in Building Modular Domain Models [Debasish Ghosh]
  • Automatic Parallelisation and Batching of Scala Code [James Belsey and Gjeta Gjyshinca]
  • The Path to Generic Endpoints Using Shapeless [Maria-Livia Chiorean]
  • Lightning talk – Optic Algebras: Beyond Immutable Data Structures [Jesus Lopez Gonzalez]
  • Lightning Talk – Exploring Phantom Types: Compile-Time Checking of Resource Patterns [Joey Capper]
  • Lightning Talk – Leave Jala Behind: Better Exception Handling in Just 15 Mins [Netta Doron]
  • Keynote: The Magic Behind Spark [Holden Karau]
  • A Practical Introduction to Reactive Streams with Monix [Jacek Kunicki]
  • Building Scalable, Back Pressured Services with Akka [Christopher Batey]
  • Deep Learning data pipeline with TensorFlow, Apache Beam and Scio [Vincent Van Steenbergen]
  • Serialization Protocols in Scala: a Shootout [Christian Uhl]
  • Don’t Call Me Frontend Framework! A Quick Ride on Akka.Js [Andrea Peruffo]
  • Keynote: Composing Programs [Rúnar Bjarnason]

Collection Initialization in Java

There is this so-called „double brace“ pattern for initializing collections. We will see later on whether it should be a pattern or an anti-pattern…

The idea is that we should consider the whole initialization of a collection one big operation. In other languages we write something like
[element1 element2 element3]
or
[element1, element2, element3]
for array-like collections and
{key1 val1, key2 val2, key3 val3}
or
{key1 => val1, key2 => val2, key3 => val3}.
Java could not do it so well until Java 9, but actually there was a way to construct sets and lists:
Arrays.asList(element1, element2, element3);
or
new HashSet<>(Arrays.asList(element1, element2, element3));.
Do not ask about immutability (or unmodifiability), which is not very well solved in the standard Java library until now, unless you are willing to take a look at Guava, which we will do in another article… Let us stick with Java’s own facilities for today.

So the double brace pattern would be something like this:

import java.util.*;

public class D {
    public static void main(String[] args) {
        List<String> l = new ArrayList<String>() {{
                add("abc");
                add("def");
                add("uvw");
            }};
        System.out.println("l=" + l);

        Set<String> s = new HashSet<String>() {{
                add("1A2");
                add("2B707");
                add("3DD");
            }};
        System.out.println("s=" + s);

        Map<String, String> m = new HashMap<String, String>() {{
                put("k1", "v1");
                put("k2", "v2");
                put("k3", "v3");
            }};
        System.out.println("m=" + m);
    }
}

What does this do?

First of all, having an opening brace after the new XXX() creates an anonymous class extending XXX. Then we open the body of the extended class. What is well known to many is that there can be a static { ... } section that is called exactly once for each class. The same applies for a non-static section, which is achieved by omitting the static keyword. This is of course called once for each instance of the class, in this case after the constructor of the base class, and serves kind of as a replacement for the constructor. To make it look cooler, the two pairs of braces are placed together.

It is not so magic, but it creates a lot of overhead by creating anonymous classes with no real additional functionality, just for the sake of an initialization. It is even worse, because these anonymous inner classes are not static, so they can actually refer to their surrounding instance. They do not make use of this, but they still carry a reference to their surrounding class, which might be a very serious problem for serialization, if that is used, and for garbage collection. So please consider the double-brace initialization an anti-pattern. Others have blogged about this too…

There are more legitimate ways to group the initialization together. You can put the initialization into a static method and call that. Or you could group it with single braces, just to indicate the grouping. This is a bit unusual, but at least correct:

import java.util.*;

public class E {
    public static void main(String[] args) {
        List<String> l = new ArrayList<String>();
        {
            l.add("abc");
            l.add("def");
            l.add("uvw");
        }
        System.out.println("l=" + l);

        Set<String> s = new HashSet<String>();
        {
            s.add("1A2");
            s.add("2B707");
            s.add("3DD");
        }
        System.out.println("s=" + s);

        Map<String, String> m = new HashMap<String, String>();
        {
            m.put("k1", "v1");
            m.put("k2", "v2");
            m.put("k3", "v3");
        }
        System.out.println("m=" + m);
    }
}

While the first two can somehow be written using Arrays.asList(...), in Java 9 there are now nicer ways to write all three: List.of("abc", "def", "uvw");, Set.of("1A2", "2B707", "3DD"); and Map.of("k1", "v1", "k2", "v2", "k3", "v3");. These are recommended over any other way, because there are some additional runtime and compile-time checks and because these are efficient immutable collections. This has been blogged about too.
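
The Java 9 variant of the examples above could look like this (requires Java 9 or later); the try block just demonstrates that the factory methods return immutable collections:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class F {
    public static void main(String[] args) {
        // Java 9 factory methods create compact immutable collections:
        List<String> l = List.of("abc", "def", "uvw");
        Set<String> s = Set.of("1A2", "2B707", "3DD");
        Map<String, String> m = Map.of("k1", "v1", "k2", "v2", "k3", "v3");
        System.out.println("l=" + l);
        System.out.println("s=" + s);
        System.out.println("m=" + m);
        // Any attempt to modify them throws at runtime:
        try {
            l.add("xyz");
        } catch (UnsupportedOperationException e) {
            System.out.println("l is immutable");
        }
    }
}
```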

The aspect of immutability, which we should consider today, is not very well covered by the Java collections (apart from the new internal ones behind the new factory methods). Wrapping in Collections.unmodifiableXXX(...) is a bit of overhead in terms of code, memory and CPU usage, and it does not give a guarantee that the wrapped collection is not being modified elsewhere.
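
To illustrate the last point: the unmodifiable wrapper only protects the view, not the backing collection, so changes to the backing list show through:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class G {
    public static void main(String[] args) {
        List<String> backing = new ArrayList<>();
        backing.add("abc");
        List<String> view = Collections.unmodifiableList(backing);
        // view.add("def"); // would throw UnsupportedOperationException
        // ...but whoever holds the backing list can still change it,
        // and the change is visible through the view:
        backing.add("def");
        System.out.println(view); // prints "[abc, def]"
    }
}
```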


VoIP and Landline Telephony

Some may have noticed, some not, but landline telephony is actually being shut down in the next few years, if it has not happened already. This is being done in Germany and Switzerland, and I assume other countries will follow or even do it earlier. In some countries and some age groups the landline telephone does not exist any more. Younger people have only the cell phone and use flat rates for mobile telephony or VoIP services on the cell phone to make calls. And actually asynchronous communication mechanisms like email and messaging are more popular now than talking on the phone. So the technologies that are relevant for phone companies now are internet and mobile telephony, and it is a logical step to stop supporting what has become an expensive niche technology. It looked like phone companies wanted customers to actually move their infrastructure to VoIP. That means the black phone with the dialing wheel from the 1950s would no longer work, and customers would have to buy new devices, which would eventually allow them to make calls like before, just using keys instead of the dialing wheel. Or it would even be necessary to buy a computer or a tablet or a smartphone to do telephony at all. It seems that this approach was too ambitious, because there is a large group of customers who are unwilling or unable to move in this direction, or simply unwilling to invest a lot of time in changing their habits and learning how to use VoIP, and a lot of money in buying devices that they actually do not want.

So the challenge now is to provide adapters that support all historical phone technology and map it to VoIP without forcing the customer to get used to a new device or a new way of using it. Some impacts can probably not be avoided. The adapter needs electricity, while the phone got its power from the landline and even worked during an electricity outage. The adapter can be small, but it will need some space. And there will be patterns of how making a call can fail that did not exist before: more components are involved and all of them can fail. As a fallback for emergency calls when the electricity has failed, we will have to rely on cell phones. Hopefully their batteries are charged, but people are getting used to that. And really almost everyone has a cell phone, even in poorer countries. Or at least a neighbor with a cell phone.

If this approach succeeds that will be quite impressive. But probably it is the only reasonable way to do that. And supporting only one technology, which is internet, is cost efficient. So the question who should pay for the adapters has to be answered in each country where this transition is being made.

Btw., I think that television is also a technology that will disappear. While in the old days half a dozen TV stations were on the air, in some countries financed by fees, advertising or taxes, we got alternative access via cable, satellite dishes and now the internet. So the local fee-financed TV stations are becoming less relevant, because we can watch content from all over the world. So instead of imposing the fees on everybody who dares to live in the country (like in Germany or Switzerland), it is time to either abolish the TV fees, to cut them way down, or to constrain them to those who actually register as users of the national TV stations. The national TV stations could then make their content available on the internet only to those who pay, and generate revenue like that. And of course compete with others all over the world who can do the same, if they just manage to provide content in a language that is comfortably understood. As long as the internet is open and we can view content from other countries without censorship, this is great progress compared to national TV, even if that disappears due to the lack of funding and the lack of efficiency.


Devoxx 2017

I was lucky to get a chance to visit Devoxx in Antwerp for the sixth time in a row. As always there were interesting talks to listen to. Some issues were visible across different talks:

Java 9 is now out and Java 8 will soon go into the first steps of deprecation. The step of moving to Java 9 is probably the hardest in the history of Java. There were features in the past that brought very significant changes, but they were usually kind of optional, so adoption could be avoided or delayed. Java 9 brings the module system and with it a new level of abstraction: classes of a module can be made accessible to other modules selectively or globally, or they can be declared public and still only be visible within their own module. This actually applies to internal classes of the standard library, which were always meant to be private to it but could not be effectively hidden from external usage. Now they suddenly do not work any more, and a lot of software has some difficulty and needs to be adjusted to avoid these internal classes. Beyond that, a lot of talks were about Java 9, for example also covering the somewhat convenient methods for writing constant collections in code. Future releases will follow a path that is somewhat similar to that of Perl 5: releases will be created roughly every half year and will include whatever is ready for inclusion at that time. Some releases will be supported for a longer time than others.

In the arena of non-Java JVM languages the big winner seems to be Kotlin, while Groovy, Clojure, JRuby and Ceylon were not visible at the conference. Scala has retained its position as an important JVM language besides Java at this conference. The rise of Kotlin may be explained by the fact that IntelliJ IDEA has become much more important as an IDE than Eclipse and NetBeans, which already brings Kotlin onto every JVM-language developer’s desktop. And Google has moved from Eclipse to IDEA as the recommended and supported IDE for Android development and is now officially supporting Kotlin besides Java as a language for Android development. There were heroic efforts to do Android development in Scala, Clojure or Groovy without support from Google, which is quite possible, but having to deploy the libraries with each app instead of having them already on the phone is a big disadvantage. The second largest mobile OS has added support for Swift as an alternative to Objective-C; Swift and Kotlin are different languages, but they are sufficiently similar in terms of concepts and possibilities to ease development of apps targeting the two most important mobile platforms in mixed teams at least a bit. And Kotlin gives developers many of the cool and interesting features of Scala, while remaining a bit easier to learn and to understand, because some of the most difficult parts of Scala are left out. Anyway, Scala is not yet heavily challenged by Kotlin and remains important, and I think that Clojure, JRuby and Groovy retain their importance and live in somewhat different niches than Scala and Kotlin. I would think that they are just a bit too small to be present at each Devoxx. Or it was just a random effect of how much news there was about the languages and what kind of talks had been proposed for them. On the other hand, I would assume that Ceylon has become a dead end, because it came out at the same time as Kotlin and tries to cover the same niche. It is hard to stay relevant in the same niche as a strong winner.

Then there was of course security security security… Even more important than in the past.

And a lot more…

I listened to the following talks:
Wednesday, 2017-11-08

Thursday, 2017-11-09

Friday, 2017-11-10

Links:

Previous years:

Btw. I always came to Devoxx by bicycle, at least part of the way…


Usability of Ticket Vending Machines

Most of us know the ticket vending machines used for public transport. While people who often buy a similar ticket usually have no trouble using them, it can become quite hard for functions that we use rarely or for people who only rarely use ticket vending machines at all. Why is it so hard to program them well?

What we actually want to achieve is searching an item in a multidimensional space. There might be many parameters relevant to the actual ticket and we need to find it through the navigation mechanisms that are offered to us by the software. Sometimes it is straightforward. For local transport some people want to buy a ticket for zone 1B and then they even find this offered on the home screen. Other people have no idea what this zone 1B is and actually do not even care. They should know where they want to go and the system should then know if it is „Kurzstrecke“, „Zone 1A“, „Zone 3D“ or whatever. But the zones might be a useful shortcut for the experienced user, because it can be known that in this case many different travel destinations can be covered by the same ticket, which can be found in a second on the home screen. Making it fast for the experienced user is actually a very good idea, because it saves the time of the customer and of the machine. But the more natural approach of just entering the destination should be provided as well. Up to this point they have actually done it quite well in some vending machines that I have seen.
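The "enter the destination, let the machine figure out the zone" approach could be sketched as a simple lookup. This is a minimal illustration; the station names and zones below are invented, and a real machine would use the transport authority's tariff database:

```java
import java.util.Map;

public class ZoneLookup {
    // Hypothetical mapping from stations to fare zones, for illustration only.
    private static final Map<String, String> ZONE_BY_STATION = Map.of(
        "Central Station", "Zone 1A",
        "Airport", "Zone 1B",
        "Suburb West", "Zone 3D"
    );

    // The experienced user may pick the zone directly on the home screen;
    // everyone else just names the destination and gets the zone resolved.
    public static String zoneFor(String destination) {
        String zone = ZONE_BY_STATION.get(destination);
        if (zone == null) {
            throw new IllegalArgumentException("Unknown destination: " + destination);
        }
        return zone;
    }

    public static void main(String[] args) {
        System.out.println(zoneFor("Airport")); // Zone 1B
    }
}
```

Both entry points can coexist: the zone shortcut on the home screen for the experienced user, and the destination search for everyone else.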

But then there are some issues that are hard to deal with. I just give some examples that I have experienced without claiming to be complete.

In some cases there is a different ticket universe for local destinations in some area around the location and for destinations outside of it. For example the region of „Schleswig-Holstein“ in Germany has a „Schleswig-Holstein-Tarif“ which applies to certain more local trains. The first step when buying a ticket was selecting between „Schleswig-Holstein-Tarif“ and something else, which is something the customer should not need to care about. It is bad to require knowing for which destinations, and even worse for which trains, the „Schleswig-Holstein-Tarif“ has to be applied and for which not. In this case „Hamburg“, which is a different region and not part of „Schleswig-Holstein“, was as a destination still covered by the „Schleswig-Holstein-Tarif“, which can be found out by trial and error, by asking someone or by using Google on the smart phone. Hopefully the train is not leaving in 5 minutes. But again, depending on the train, it may not be „Schleswig-Holstein-Tarif“ anyway. Yes, the ideas behind it are not too hard to understand and many people know them, like many people know what „Zone 1B“ means. But some don’t and they are having a hard time. The right way would be to ask for the destination and then offer some train connections, or some choices of useful and accessible parameters for train connections, to pick the right ticket.
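Instead of asking the customer upfront, the machine could derive the applicable tariff from the chosen connection. A hypothetical sketch follows; the rule that regional trains to Hamburg or to stations within the region use the regional tariff is a simplification invented for illustration, not the actual tariff logic:

```java
public class TariffChooser {
    public enum Tariff { SCHLESWIG_HOLSTEIN, LONG_DISTANCE }

    // Simplified, invented rule: regional trains to destinations within the
    // region (including Hamburg) use the regional tariff, everything else
    // falls back to the long-distance tariff. A real system would check the
    // tariff database for the concrete connection.
    public static Tariff tariffFor(String destination, boolean regionalTrain) {
        boolean inTariffArea = destination.equals("Hamburg")
            || destination.startsWith("SH:"); // placeholder for region membership
        return (inTariffArea && regionalTrain) ? Tariff.SCHLESWIG_HOLSTEIN
                                               : Tariff.LONG_DISTANCE;
    }

    public static void main(String[] args) {
        System.out.println(tariffFor("Hamburg", true));  // SCHLESWIG_HOLSTEIN
        System.out.println(tariffFor("Hamburg", false)); // LONG_DISTANCE
    }
}
```

The point is that the branching happens after the customer has named destination and train, not before, so the customer never has to know the tariff boundaries.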

The next issue is that it is usually possible to buy the ticket in advance for the next day, which is a good idea, because it eliminates the stress of having to buy the ticket or having to wait in a line while the train is due to leave. When I tried that, it was not possible with the „Schleswig-Holstein-Tarif“, which was again something that took some time to find out. The information that this is not possible should at least be easily available, or better still, it should actually be possible to buy all tickets at least a few days in advance.

The pricing system should be logical. While it is annoying that flight ticket pricing can only be understood by discarding all logic, this approach does not work at all for train tickets, because there are many more stations and thus many more possible tickets to calculate. A funny example was once constructed with three towns. A, B and C lie on a line. A and B are far apart and form an important connection. C is very close to B. It was possible to buy a return ticket from B to A. But it was cheaper to buy a ticket from B to C via A, because this was considered as one path and a discount applied because of the longer distance; this discount did not apply to the path from B to A or from A to B, and the return ticket was calculated like a combination of two one-way tickets. This was resolved by disallowing the ticket from B to C via A. A better solution would be to make the tariff system more logical. If it is hard to distinguish between a return ticket and a one-way via-something ticket, it would be better to unify the discount in such a way that the same kind of discount is given depending on the total distance covered by the whole ticket.
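The anomaly can be illustrated with a toy fare calculation. The distances, price per kilometer and discount threshold below are invented; the point is only that a discount applied per single ticket, but not to a return trip as a whole, can make the longer via ticket cheaper than the shorter return ticket:

```java
public class FareAnomaly {
    // Invented tariff: 0.20 per km, with a 25% discount on any single
    // ticket covering more than 300 km.
    public static double oneWayFare(int km) {
        double fare = km * 0.20;
        return km > 300 ? fare * 0.75 : fare;
    }

    public static void main(String[] args) {
        int ab = 290; // B to A, just below the discount threshold
        int bc = 20;  // C is very close to B

        // Return ticket B -> A -> B: priced as two one-way fares,
        // so the discount applies to neither leg.
        double returnBA = 2 * oneWayFare(ab);

        // One-way B -> C via A: one long ticket, so the discount applies.
        double viaTicket = oneWayFare(ab + ab + bc);

        System.out.printf("return B-A: %.2f%n", returnBA);  // 116.00
        System.out.printf("B-C via A:  %.2f%n", viaTicket); //  90.00
    }
}
```

Applying the discount to the total distance of the whole purchase, return tickets included, removes the incentive to buy the absurd via ticket without having to forbid it.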

When taking a bicycle on the train, a bicycle ticket needs to be bought. Since this is a rarer case, it is not found on the home screen and has to be found through some navigation. I do not have an easy answer for this, but it should be done in such a way that an inexperienced customer can actually find it without too much difficulty. I had to use Google on my smart phone for the last bicycle ticket that I bought, which is not how it should be.

Many railroad companies offer a ticket for frequent travelers which is paid once a year and then provides a discount on most tickets. It is sometimes not clear when this discount applies and when it does not. This information should be easily accessible during the ticket buying process.

I think that ticket vending machines are an interesting example of usability and a lot can be learned from them for other applications. And a lot of things have actually been done right, but that is not enough.

See also:
