How far to go with internationalization?

I had some experience with mobile apps in Sweden. Since I know Swedish, I did not bother to find out whether they support other languages. But while they were potentially important and useful, they implicitly required a domicile in Sweden.

More concretely, the apps that I was interested in using were „Dalatrafik“ and „Swish“.

Swish is a mobile app for payments. I think that is the way to go and the future. Credit cards are just not quite as good, in many ways. I use Twint a lot in Switzerland and I like it very much. You either have NFC between the phone and the payment device or there is a QR code to scan. But I think Swish goes much further, because the entry barrier for becoming a receiver of a payment is much lower. It can be used for payments between individuals or at flea markets, where a credit card terminal is not an option or too expensive. Now Sweden is generally more advanced in terms of cashless payment than other European countries. Cash is very rarely required, and surprisingly often only credit card, debit card and Swish work and cash is not accepted. Swish can be used to pay to a number; for example, a camp ground can be paid when the reception is closed, by just sending the amount according to the price list to the Swish number. So I installed Swish. And then the big disappointment: a Swedish bank account is needed to back it. Maybe it is eventually worth having one, but ad hoc it is not an option and only works for residents of Sweden or frequent visitors to Sweden who go the extra mile to open a Swedish bank account as a non-resident. This is difficult even in Switzerland. I have not tried it in Sweden.

Dalatrafik provides information about public transport connections within the Swedish region Dalarna. This really works well. But it also allows buying tickets for public transport. In times of Corona this is actually the only possible way, because the driver does not sell tickets. In larger towns there are a few shops that sell tickets as well, but that is neither convenient nor helpful in most cases. So the app has two payment modes: Swish and credit card. Swish has already been covered above. Unlike in almost all other cases, it turns out that this app only works with Swedish credit cards. So the way to go by bus as a tourist is to just try to buy a ticket and, if somebody checks it, to say that it did not work… Why not accept foreign credit cards?

So please think a bit more about who your user base is and about ways of including non-residents or other „non-obvious“ groups of users that are actually important. Starting with something that works and omitting the parts that are really difficult to achieve is good. But it is worth exploring how to make it work for non-residents, especially for apps that are so crucial. Dalatrafik could obviously allow foreign credit cards, as almost everybody in the world does. And Swish could come up with a way to add a foreign bank account or a foreign credit card to back the payments. We will see what they come up with in the future. On the other hand it is amazing that almost all credit card terminals and ATMs in the world support 6-digit PIN codes for credit cards and debit cards, although they seem to exist only in Switzerland. A 6-digit code that you may choose yourself is much better than a 4-digit code that is imposed on you by the bank.

An example of a funny constraint that has eventually been removed is that users of iPhones and iPads needed a Windows computer (or Apple computer) somewhere to run their devices, because updates were only possible together with a computer of either of these two types. So Linux users could not buy i-things. Nor could people who did not have any computer at all and just wanted to use the i-device as their computer. Then Apple came up with the possibility to do the update just with the device itself, without needing any computer whatsoever. This is what all my Android devices do and always did, and it is what makes sense.



Some thoughts about Usability

Just from the user’s perspective, some thoughts and experiences concerning usability…

I have to pay my phone bill every month. It is one bill for phone, internet, and everything, but it needs to be paid. It is mostly used for the company, so it is paid by my company and I need a proper invoice and the bits of information for how to pay it via e-banking.

Now I get an email every month telling me they want money and I want to pay it. That is not too hard, because the email already contains the information and it is also relatively easy to find on the web site.

But then I need the invoice. When I go to the web site, there are different worlds, like private, small business, big business. It is necessary to log into the right one, and if for historical reasons an account exists for „private“, that is a dead end.

Then it is finally possible to find the amount that needs to be paid. But then again, the obvious path leads to the payment information without the link to the PDF of the invoice. It is a dead end.

The web site does have a beautiful design and provides a lot of information and services.

But why don’t they make it really a matter of seconds to pay my bill and print the invoice? That is the only thing I really want and need to do every month. And it is a bad experience each time, because I forget how to do it after a month and it is not easy and obvious enough. The invoice could be attached to the email or there could be a link that leads directly to the invoice. Plus another link for whatever else they want to show me, if they like… I can easily ignore that second link most of the time.


Available for new projects

I am available for new projects from the beginning of September 2020.

On the website of my company IT Sky Consulting you can see what I offer…


Combining multiple scans

When images have been scanned multiple times, there may be a way to construct from them an image that is better than any single scan.

In this case it is assumed that one scan has a higher resolution, but another scan got the colors better.

It has already been found out which two scans belong to the same original.

Also one of them has already been rotated to the desired position.

The rotation can easily be found. Since images are roughly rectangular and the width is roughly 1.5 times the height, only two rotations have to be tried. The images are compared pixel by pixel and the orientation that has less deviation is probably the right one.

Now the orientation is already the same. And when scaled to the same size, the images should be exactly the same, apart from inaccuracies. But that is not the case. They are scaled slightly differently and they are shifted a bit.

Now the shift and scaling can be expressed by four numbers as x=a*u+b and y=c*v+d, where (x,y) and (u,v) are coordinates of the two images. It would have to be a matrix if rotation were also involved, and this would work the same way, but professional scans do have shift and scaling problems while rotation problems are much rarer.

So to estimate a, b, c and d, it is possible to go through all the pixels (u,v) of one image. Now the pixels (x,y) = (u+du, v+dv) are compared for combinations of small du and dv. The result is a sum of squares of differences s in a range between 0 and 3*65536. Low values are good; they mean that the coordinates (u,v,x,y) are a good match and need to be recorded with a high weight. This is achieved by weighting them with (3*65536-s)^2. Then it is just two linear regressions to calculate a, b, c and d. Of course, the points near the borders need special care, but it is possible to omit a bit near the border to avoid this issue.
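
A minimal sketch of one of these weighted regressions, assuming Java (the actual tool was written in Perl, and all names here are illustrative):

import java.util.List;

class ShiftScaleEstimator {

    // Each sample is { u, x, w }: a coordinate u in one image, the matching
    // coordinate x in the other image and the weight w = (3*65536 - s)^2.
    // Returns { a, b } for the model x = a*u + b; the same code is run again
    // for the vertical direction to get c and d.
    static double[] fit(List<double[]> samples) {
        double sw = 0, su = 0, sx = 0, suu = 0, sux = 0;
        for (double[] p : samples) {
            double u = p[0], x = p[1], w = p[2];
            sw  += w;
            su  += w * u;
            sx  += w * x;
            suu += w * u * u;
            sux += w * u * x;
        }
        // Weighted least squares solution for slope a and intercept b.
        double a = (sw * sux - su * sx) / (sw * suu - su * su);
        double b = (sx - a * su) / sw;
        return new double[] { a, b };
    }
}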

Now the points (u,v) of one image are iterated over and the approximate match (x,y) in the other image is found. For (r,g,b) there can be a function to transform the colors. To start, it can be linear again, so this means three linear regressions, one each for r, g and b. Or it could involve a matrix multiplied with the input vector. Or some function, then a matrix multiplication, then a vector addition and then the inverse function. The result has to be constrained to RGB values between 0 and 255.
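
As a hedged sketch of the simplest variant, a separate linear map per channel, applied with the constraint to 0..255 (gain and offset would come from the per-channel regressions described above):

class ChannelMapper {

    // Apply a fitted linear map value' = gain*value + offset to one of the
    // r, g or b channels and clamp the result to the valid 8-bit range.
    static int map(int value, double gain, double offset) {
        long mapped = Math.round(gain * value + offset);
        return (int) Math.max(0, Math.min(255, mapped));
    }
}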

Now in the end, for each pixel in one image there is a function that changes the colors more towards what the other image has. This can be applied to the full resolution image. Also, a weighted average of the original values and the calculated values can be used to find the best results. Then this can be applied to the whole series.

A few experiments with the function and a bit of research on good functions for this might be done to improve the result. At the end of the day there is a solution that is good enough and works. Or maybe a few variants and then some manual work to pick the best.


Processes

Once in a while we all encounter people in our teams who really love processes.

Now processes are a good thing, because they can help us to work, clarify certain things and improve efficiency.

There are even processes that are absolutely mandatory, for security reasons, for example. It should be carefully chosen where to impose such a mandatory process, but then it should never be broken. Other processes are more like a tool that we use because it helps us. Or because the majority of us or simply the boss think so. Or because they are just there. And have always been there in the last 9 months or so.

A well known example is the work of airplane pilots. When they take over a plane there is a checklist of things to do. Each item of the checklist has to be followed. Each time.

I have introduced such a checklist for creating a release of a software into many projects. It proved wrong to shortcut this or just do it somehow, because in the end the wrong or an unknown version of the software was installed or something else went wrong. I also introduced it for creating USB sticks that were used by non-IT people to install software on around 1000 devices physically located in different places throughout the country, or by hardware people in their workshop. We were not stupid: what could be done via the internet was done via the internet, but machines could get hardware problems or get messed up or simply be set up for the first time. It took a bit more than an hour to do everything by the book and it never became significantly faster. But then again a failure was potentially much, much more expensive than a few hours of work. And after this process had been introduced there were no failures at all with any USB sticks prepared according to this process.

But of course we should always remember that processes are there to help us do our work better. So they should not become so extensive that they use up all the time or that they make work too hard, unless necessary for reasons like the ones mentioned above. So a good understanding is that a lot of things can be done if all who are involved or simply the boss agree to do so after thorough thinking. Do not stop using your brain because of processes.


Comparing Images

A practical problem that I have is to sort my digital photos. Some of them have been taken with analog cameras and they have been scanned. Some of them have been scanned several times, using different resolution, different providers or different technologies.

So one issue that occurs is having two directories which contain more or less the same images from different scans.

Some of them have been sorted or enriched with useful information or just rotated correctly, others have better resolution. Maybe it is good to use different scans to create a better image than the best scan.

For a few hundred films doing that manually is possible, but a lot of work. So I thought about automating it, at least partially.

There are several issues that make it more difficult. Scans are not very accurate, so they cut off a tiny random bit at the borders. So to match two images exactly it is necessary to scale and crop them.

Colors look quite different. And then of course resolutions are different. And sometimes they have been turned by an angle of 90 or 270 degrees. Some scans miss a few images that another one has.

So, how to start?

First all images are scaled to thumbnails of approximately 65536 pixels. It turns out to be 204×307, but could of course be anything around that size retaining the rough aspect ratio.

Now all thumbnail images from the two directories are read into memory. 80 thumbnail images are no big deal…

Images in portrait orientation are rotated by 90 and 270 degrees and both variants are used. So from here on all images are in landscape format. Upside down is not supported.

All images are scaled to exactly 204×307 in memory to allow comparison.
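
A hedged sketch of this scaling step, assuming Java’s standard imaging classes (the actual tool was written in Perl; class and method names here are my own):

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

class ThumbnailScaler {

    // Scale an already loaded image to the common comparison size.
    static BufferedImage scaleTo(BufferedImage original, int width, int height) {
        BufferedImage scaled = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = scaled.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(original, 0, 0, width, height, null);
        g.dispose();
        return scaled;
    }
}

For the size mentioned above this would be called as scaleTo(image, 307, 204), reading 204×307 as a landscape format.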

Then the average r, g and b values over all images from each directory are calculated. The r, g, b values of each pixel are multiplied or divided by a factor, which is the square root of the quotient of these averages, and the result is constrained to the range 0..255. So this partially neutralizes the effect of different colors.
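
Written out for one channel, as a hedged Java sketch with made-up names:

class ColorNormalizer {

    // Scale a channel value by the square root of the ratio of the other
    // directory's average to its own directory's average, which moves both
    // directories half way towards each other; clamp the result to 0..255.
    static int normalize(int value, double avgOwnDir, double avgOtherDir) {
        double factor = Math.sqrt(avgOtherDir / avgOwnDir);
        long adjusted = Math.round(value * factor);
        return (int) Math.max(0, Math.min(255, adjusted));
    }
}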

Now each thumbnail from the first directory is compared with each thumbnail from the second directory. This is done by calculating for each pixel the sum of the squares of the differences between the r, g and b values of the two images.

These values are added up and divided by the number of pixels (204×307 in my case). The square root of this is the comparison result. For images that are actually the same, comparison results between 30 and 120 are possible. Now some heuristic is used to find matches based on these values. Also an „interpolation“ is used if consecutive images in both directories occur with the middle one missing. This usually brings good results. In some cases manual interaction is needed. So it is possible to mark images in a web interface and then the match that was found for them is revoked. Also it is possible to provide „hints“, which means that these images should be matched.
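
The core of this comparison is quite small; a hedged Java sketch (again, the real implementation is in Perl and the names are invented):

import java.awt.image.BufferedImage;

class ThumbnailComparator {

    // Sum of squared channel differences over all pixels of two thumbnails
    // that have already been scaled to the same size, divided by the number
    // of pixels; the square root of that is the comparison result.
    static double difference(BufferedImage a, BufferedImage b) {
        long sum = 0;
        int pixels = a.getWidth() * a.getHeight();
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                int rgbA = a.getRGB(x, y);
                int rgbB = b.getRGB(x, y);
                for (int shift = 0; shift <= 16; shift += 8) {
                    int ca = (rgbA >> shift) & 0xFF;
                    int cb = (rgbB >> shift) & 0xFF;
                    sum += (long) (ca - cb) * (ca - cb);
                }
            }
        }
        return Math.sqrt((double) sum / pixels);
    }
}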

It took about a day and a half to write this and it works sufficiently well for the purpose.

But there are much more sophisticated algorithms that really recognize objects. I have a program called hugin that combines a few images into a panorama. Usually it works quite well and it can even rotate, shift and scale images and do amazing things most of the time. Sometimes it just does not work with certain images. Also Google of course has very powerful software to recognize images and compare them and even create panoramas. If we think about face recognition… There is really good stuff around. But the first approach with some improvements gave sufficiently good results and the program will only run a couple of hundred times, then it will not be needed anymore.

This is heavily inspired by the Perl library Image::Compare, but since I am doing something slightly different, I do not use this library any more. As a consequence, though, the tool has been written in Perl. I will happily provide my source code, but I have baked into it assumptions and conventions concerning the file names and directory structures for the images that I have introduced myself and that are not universal.


Ketchup or Milestone

Bicycles have potentially very accurate speedometers. They measure distances to an accuracy of usually ten meters and they do this by counting the rotations of the front wheel. Now we can go into physics and see if the front wheel slips significantly, but I would not expect that to be relevant. What is a bit relevant is how much air there is in the tire. But bicycle tires usually take pressures between 4 and 6 bars and using this ensures that the resistance is lowest, the tire lasts longer and of course the measurement is more accurate. Only for cycling offroad and especially on snow or sand, lower pressures might be helpful.

Now usually the circumference of the front wheel can be configured to an accuracy of one millimeter. For this there are two methods and each method has its fans, who say that it is totally impossible to use the other method.

The milestone method, or more accurately the kilometer stone method, because we should of course use the metric system by default in an international context, works like this: The wheel circumference is roughly estimated and configured. Now we cycle long distances on roads with kilometer stones. This is not the case in every country and on every road, but in some countries national highways and even regional highways have them. Germany used to have them in the past, but moved to a system that numbers sections and measures only within the section, which is useless for this purpose. Sweden and Switzerland do not have them. So it requires making use of the kilometer stones in a country that has them. Distances on road signs are very inaccurate in most countries, so they are absolutely useless for this purpose and no better than an estimation of the circumference. So now traveling 10 or better 20 kilometers on a road without steep slopes and without strong head wind allows us to see the deviation and adjust the circumference accordingly. Repeating this a few times will yield the best result.

The other method is called the ketchup method. It is easy. One spot on the wheel is marked with a lot of ketchup. Then the bicycle is carefully walked in a straight line for a few meters and the distance between the ketchup spots on the ground is measured.

Now what is the difference between the two methods?

Cyclists do not exactly follow a straight line, but go in a little bit of a zig-zag. This increases with head wind or slopes, but usually it is a value that is individual for the cyclist. So going from town A to town B, which is 100 kilometers, will yield (almost) exactly 100 kilometers with the speedometer configured by the kilometer stone method. But it will yield a bit more when configured by the ketchup method.

Both values have their relevance. The ketchup kilometers are the distance that was really traveled along the exact line. And the milestone kilometers are the distance that has been achieved in terms of the usefulness as a means of transport. The zig-zag is just a tiny, inevitable inefficiency that does not bring us anywhere.

Now there are people who insist that one of the two methods is superior. And people who just do not care to measure to that accuracy.

But we can learn something from this for IT solutions. Often we have the choice which information to show to the user. Quite often the ketchup value is chosen, which is a technical internal value that is the most correct from some technical point of view. But it is good to think twice which value really matters for the user. And it is usually worth going through a lot more effort to present that „milestone value“ and leave the „ketchup value“ for internal use and maybe advanced detail GUIs that show both values.

I do not care how you configure your bicycle speedometer, but I prefer the milestone method myself. But for user interfaces of software we should all think a bit and understand what the equivalent of the ketchup and the milestone method is. At least if there is a significant difference, we should prefer milestone and make more useful user interfaces instead of „correct“ ones in some technical way that is irrelevant to the user.


Object Creation: Builder vs. Constructor vs. Setter

When we create new objects, we are basically confronted with the need to provide at least one construction pattern.

Of course depending on the language we have more or less three ways to go that are commonly available.

Traditionally in OO it was mandatory to write setters and getters. In C++ or Java they really have names like getXyz or setXyz, but in Ruby or C# or Scala they can be written in such a way that they behave as if the attribute were public and could be assigned and read, by just magically calling the setters and getters internally. Actually Java does that internally for public attributes or more generally for attribute assignment, and Hibernate can be configured to go via the getters and setters or via the internal attribute-assignment getters and setters. This can be useful to apply some DB-specific conversion in the getters and setters and to bypass it for the DB access.

Why do we use these getters and setters at all? They were introduced to have the flexibility to change the internal implementation without changing the API, because getters and setters can actually become more complex. This can be useful for DB-specific conversions in Hibernate, but apart from that, in 25 years of OO-ish development this flexibility has hardly been used. Most of the time the set of attributes changes and the set of setters and getters changes simultaneously. So one might ask the question why we go the extra mile of adding getters and setters by default, when we could just make the attributes public and save some time. I am not asking, because that would decrease the life expectancy. But moving on demand from plain accessible attributes to getters and setters would be just a refactoring like changing the sets of attributes.

Now which of these are preferred and why?

First of all, we do need to read all attributes in some way, otherwise they are just a waste of space. Just forget for the moment programming low level APIs where bits have to be counted and dummy attributes have to be added to move the useful ones to the right position. But very often it turns out that we do not need to change them during the life time of the object. Now Ruby has a nice feature of setting everything up and then calling freeze, which makes the object, but not its sub-objects, immutable. I think it would be worth considering to add something like this to Scala and Java, for example. Clojure has something like this, actually, but with a slightly different flavor.

There is some advantage in knowing that the state of objects does not change. It is easier to reason about the code. It really helps for creating thread safety and reentrance. And it even helps when passing around a reference to an object that still „lives“ somewhere else, for example when a sub-object comes out of a getter. In functional programming this is mandatory for all internal APIs; in other areas it is just something to make life easier, where the mutation is not actually needed.

So the setters go away where they are not needed, and we end up constructing with a constructor that contains all attributes, or variants with reasonable default values. This makes sense when the attributes are few and there is no risk of mixing them up. The builder pattern helps by naming the attributes in languages that do not allow named parameters for the constructor out of the box. So it is useful in Java, but obsolete in many other languages. If attributes are final or constant or whatever it is called, the need for setters and getters is technically even smaller, because nothing can go wrong, the attribute can only be read. But in Java it is best practice to write getters and we should comply.
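
As a hedged illustration (class and attribute names are made up), a builder in Java typically looks like this:

class House {
    private final int a;
    private final String b;

    private House(Builder builder) {
        this.a = builder.a;
        this.b = builder.b;
    }

    public int getA() { return a; }
    public String getB() { return b; }

    static class Builder {
        private int a;
        private String b;

        Builder withA(int a) { this.a = a; return this; }
        Builder withB(String b) { this.b = b; return this; }
        House build() { return new House(this); }
    }
}

Construction then reads new House.Builder().withA(42).withB("example").build(), so the with-methods take over the role of named parameters.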

Now the issue arises that there is only a private or public multi-argument constructor and a lot of getters or whatever is used for reading the attribute in the specific language. And then a framework needs to create objects from XML, JSON or whatever automatically. And these frameworks tend to need at least a no-argument-constructor, often a public no-argument-constructor, that should be only for framework use. And the attributes have to be made non-final. If the framework is smart, it bypasses getters and setters and the no-arg-constructor is already enough.

Some frameworks actually require the setters. There we go back to the old school world. We can impose a convention that the setters and the no-arg-constructor are there ONLY for the purposes of the framework and should not be used otherwise. Maybe that is a good approach, it is somewhat cleaner. But the box is opened and mistakes with such a convention will happen, so the question arises why we need to deal with each attribute at least seven times: the attribute itself, its getter, its setter, the multi-arg-constructor, the attribute of the builder, the with-method of the builder and the build-method.

Some things are easier, when moving to a new language and dumping all the garbage-traditions in that step.

But good developers can write good software in any reasonably good language that reasonably suits the purpose.



MapStruct

In the Java sphere we often develop the same data class several times. Each layer has its own variant and they are named almost the same, with some prefix or suffix or just the package name to distinguish. The set of attributes is the same (or almost the same), they have setters and getters. Or maybe only getters.

Nobody wants to write business logic two or three or four times, no matter how much support we have for copying code between the layers. And there OO is gone. We have to use anemic data objects, which Martin Fowler clearly described as an antipattern some years ago.

Since we are now using new paradigms every couple of years, we no longer care and no longer know. OO was 25 years ago. Now we do FP and Microservices. And new frameworks. And many layers.

So, where does this come from?

First of all, the database access layer is Hibernate. I do not know why, because I think that plain JDBC would be easier, but Hibernate is already there and cannot be removed by arguments. Now Hibernate came with the promise that we can just use plain objects (POJOs) for our data and mirror database tables to classes and columns to attributes. Some XML stuff had to be written and everything worked. Only writing the XML was such a pain that people immediately jumped to the annotation alternative once it was there. It was better and still is. But now the POJOs are obviously cluttered with Hibernate or JPA annotations. So they have to stay in the database layer. Actually there is a much stronger argument for this. Objects contain other objects and collections of objects. And possibly everything, if we go deeper recursively. Since accessing the database should be a reasonably fast operation, some attributes are loaded lazily. That means they are only really loaded when we need them. Which can go terribly wrong, because the transaction is no longer around and it is too late.
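
For illustration (the entity names HouseEntity and RoomEntity are invented), a lazily loaded collection in JPA annotations looks roughly like this; touching it once the transaction is gone is what goes terribly wrong:

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class HouseEntity {

    @Id
    @GeneratedValue
    private Long id;

    // Loaded only when accessed; outside of an open session this typically
    // fails with a LazyInitializationException in Hibernate.
    @OneToMany(fetch = FetchType.LAZY)
    private List<RoomEntity> rooms;

    public List<RoomEntity> getRooms() {
        return rooms;
    }
}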

Also we have our idea what data classes have to look like, so there are some layers where we want no-argument-constructors and setters and getters, some layers where we want final attributes and constructors with all attributes and only getters and again others where we prefer to use a builder. And yes, each layer has its rules that need to be followed.

So, we keep them in the DB layer and map them to almost identical service layer objects without annotations. Then we work with these, write our business logic. Procedural programming mostly. Because the objects cannot have business logic. So we have classes with methods that are kind of behaving like static methods, but are non-static, because the framework wants it like that.

And then again, we build more and more layers, because each concern needs to be dealt with in its own layer. And requires its own set of data objects, possibly with its own annotations for REST or SOAP or JSON or XML or whatever.

So, how do we move data between layers? At each layer boundary the data needs to be copied to the sister objects in the new layer. Now this is kind of stupid, programming something like

class HouseL1 {
    private final int a;
    private final String b;

    public HouseL1(HouseL2 l2) {
        this.a = l2.getA();
        this.b = l2.getB();
        // ...
    }
}

or with builder or with setters and getters, it is a lot of ugly work. And even worse, all sub-objects and collections of sub-objects have to be mapped. And their sub objects. And we possibly have to stop somewhere.

So we would like to avoid doing all this tedious stuff.

What can we do?

Is Java really the right language? Of course, we are writing enterprise software and we need type safety. Not real type safety like Scala, but a little bit of it feels good.

Reconsider our whole architecture and simplify it. Maybe it is possible to get rid of some layers and write much simpler software that does the same thing, only much faster and with less bugs. Ok, I’m only kidding here. We are talking enterprise software here. And yes, sometimes the layers do have real purposes and make sense.

Try to use the same data objects in all layers anyway? It has been tried. It works, but only in very simple settings.

Create the source code for the mapping. You can write a script for this or find one or find a tool or whatever. Parse the data classes and create the source code for the transformation methods. Or just write Hibernate classes and create all other layers with their preferred setup in terms of mutability and construction and annotations from that.

But we are in Java, so why not use reflection and figure out at runtime how to map it. Find a library to do it for you or write your own. Performance? No problem. We use enterprise servers, of course.

So, in the end it is a good possibility to have a Java-tool that creates the source code for the transformation as part of the compile process.

MapStruct does exactly that. We write an interface for our transformations. With some annotations, non-obvious mapping behavior can be specified. Keep that list small and try to make it possible to automatically recognize the mappings where possible. Then an extension to the Maven compiler plugin is added, which invokes MapStruct to create an implementation for this interface at compile time and of course compile the implementation. And voilà, it works. Even for classes with builders, as long as the builder uses method names that are identical to the corresponding attribute name without a „with“-prefix. So deal with it, name the methods of the builder like that.
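
A minimal sketch of such a mapper interface, reusing the invented HouseL1/HouseL2 classes from above:

import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

@Mapper
public interface HouseMapper {

    HouseMapper INSTANCE = Mappers.getMapper(HouseMapper.class);

    // Attributes with identical names in HouseL2 and HouseL1 are mapped
    // automatically by the generated implementation; non-obvious mappings
    // would get an additional @Mapping(source = "...", target = "...") annotation.
    HouseL1 toL1(HouseL2 l2);
}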

And yes, we should get rid of setters where we do not really need them. And we should not write constructors with 20 parameters, because the parameters will get messed up, in a language like Java that does not yet have named parameters. And we do not want to couple layers, so constructors that use the sister class from another layer are not a good idea either, if we have more than two layers or so. So there we go with a builder…

At the end of the day, we can write good software with any reasonably good language and framework. But it is worth investigating how to do certain things. And it is really worth asking the question why we are doing this, at times when it is possible to make such choices.


Remote Work

A lot of IT guys have to work at home now (in German, but not in English, it is called „Homeoffice“) or are at least encouraged to do so. Btw. „home schooling“ is a different concept: it not only replaces the school building by the home, but also the teachers by the parents, often for religious reasons. But we have enough „theology“ in IT already, so we don’t have to add real theology to this blog. That would be a different blog anyway.

Remote work is nothing new, because some companies have been entirely working like this for years. And people live anywhere in the world. They meet maybe once in a year for a company gathering.

There is some difference, though. These „remote only“ companies can choose their employees, their working area and their technologies in such a way, that it fits this model.

Some people feel more comfortable with going to work and going home, hopefully not with a very long commute, and having these two areas of life clearly separated. This is gone for the moment. But we IT people are lucky, because most of us can continue working with low risk of being infected with COVID-19.

Another important aspect is that in-person meetings are often better than just talking on the phone. Of course, once the working process is established, smaller issues can easily and efficiently be handled by phone. But for larger issues that may be more complex, controversial or just require not only words, but also a white board or something like that, it is usually better to meet in person. At least if it is not too far to travel.

Also it is very easy to ask colleagues who sit nearby a question, and it is nicer to drink coffee together…

So apart from the advantage of saving travel time, remote work has its disadvantages.

But now we are learning to work like that and maybe that will benefit us, because the possibility to use remote work a couple of times a month may be useful even in the future.

Some interesting technical aspects are worth noting:

There are different ways to do the work. Some companies have the policy „bring your own device“. The work is mostly done on this device and probably VPN is already in place. So working at home just works immediately.

Also companies that use company owned laptops are often quickly set up, because they just need to setup VPN and maybe rules about the home network security.

But even for companies that use desktop computers, there are ways to deal with this. One approach is to just leave that computer running and redirect the display. This is built into the X Window System of Linux and Unix. It was there already 30 years ago. In conjunction with Cygwin this is also possible for MS-Windows, but somewhat limited and difficult with non-Cygwin applications and somewhat hard to set up. But other technologies like VNC, RDP and probably some others exist. It just requires enough bandwidth and gives a little bit of a delay, but it is absolutely possible to work with this. It may be useful to consider moving some things to the local computer, for example the IDE, by just accessing the disk remotely, if that is possible with the company policies and the licenses for the IDE. Also Confluence, JIRA and these web applications might be accessed with the local browser instead of the browser on the remote computer.
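
For example, with the X Window System a single program running on the office machine can be displayed at home via SSH X forwarding (user and host names are placeholders):

    ssh -X developer@workstation.example.com xterm

VNC and RDP work at the level of the whole desktop rather than single windows, but the idea is the same.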

Now for meetings it is good to find a good tool. Maybe use different tools simultaneously. We need to talk with good voice quality. In a group. It is better to have video. We want screen sharing. And there are even tools that do something like shared white boards. A lot can be done. Sometimes it is good to use different software simultaneously, because one has good voice and video, the other one good screen sharing and collaboration features. It is possible to use the mobile phone for one part and the computer for the other part. Good earphones can be helpful, so you can talk with your hands free. Unfortunately they are sold out in many online stores.
