Do we still need Experts when everything is on the Internet?

We can find information about pretty much everything on the internet. We do not have to remember things, because we are always online and always able to find what we are looking for. It is true: I do it, you do it, everybody does it. Wikipedia, Google, forums and of course specialized sites…

Just to give an example: I once met a doctor, a physician. A patient had a problem that was unclear to him, and he actually told the patient that he would google for it. In the end he came up with a very helpful solution for the guy, much better than what many of us would get in the same situation. I will not disclose any details.

Now why do we need an expert at all, if the expert did not know the answer and found a more helpful one than the other experts only by using a search engine that all of us can use?

Actually it is the combination of the expert and the online information that became so helpful. At least since the last presidential election in some country on the North American continent we have learned that media (and probably even the internet) may be unreliable and that truth is relative, if we accept the concept of truth at all.

Or to become more tangible: there are numerous sites that promise easy solutions to questions that really many find important, like diets, raising money easily (and legally), and a lot more; we all know them. It is quite easy to put a site online and publish any information on it. It is a bit harder, but possible, to get found. The author of the site can sell something that he would never buy himself, or something that he believes, even though it is not true. I have met a person in Switzerland who seriously told me that eating is unnecessary for humans and that he practiced it only for the joy of eating. In the case of medical advice it is quite obvious that this might be dangerous, but in almost any area we have more or less the same problem. A government agency that enforces that only the truth is written on web pages would be a nightmare. Just think of your favorite politician being in charge of such an agency…

But for the expert it is easy to recognize which information is serious and useful. And even easier to use the right keywords for searching.

And there is more. The expert knows the situation: consciously and subconsciously he combines experience with what he sees, hears, … to solve the problem. And he builds in the input from the search.

We should also consider that searching is extremely efficient, but knowing 99% without searching is so much more efficient. Just think of languages. I speak a couple of them, and it is often useful to use online dictionaries or even translators. But needing them for every other word is inefficient and will sometimes lead to wrong understanding.

The information on the internet will become better. There will be new concepts implemented by sites for providing reliable content in certain areas. We already see Wikipedia, which is not 100% reliable, but probably about as good as a printed encyclopedia in this respect.

Anyway, the experts will not become useless; we will need them in the future as well.


ä ö … in HTML

In the old days of the web, more than 20 years ago, we found a way to write German umlaut letters and a lot of other letters and symbols using pure ASCII. These transcriptions are called „entities“, btw.

Many people, including myself, started writing web pages using these transcriptions, assuming that they were required. Actually, in the early days of the web there were rumors that some browsers did not properly understand the character encodings that contained these letters. This was kind of plausible, because the late 80s and the 90s were the transition period in which people discovered that computers are useful outside of the United States, and at least for non-IT people it was, or should have been, a natural requirement that computers understand their language, or at least can process, store and transmit texts using the proper characters of that language. In the case of German this was not so terrible, because there were transcriptions for the special characters (ä→ae, ö→oe, ü→ue, ß→ss) that were somewhat ugly, but widely understandable to native German speakers. Other languages like Russian, Greek, Arabic or East Asian languages were in more trouble, because they consist of „special characters“ only.

Anyway, this „&auml;“-transcription for web pages, which is actually superior to the „ae“-transcription, because the reader of the web page will see the correct „ä“, was part of the HTML standard to support writing characters that are not on the keyboard. This was a useful feature in those days, but today we can find web pages that help us with the transliteration, or we can just look up the word with the special characters on the web in order to write it correctly. Then we can as well copy it into our HTML code, including all the special characters.

There could be some argument about UTF-8 vs. UTF-16 vs. ISO-8859-x as possible encodings of the web page. But in the area of the web this was never really an issue, because web pages have headers that should be present and inform the browser about the encoding. Now I recommend using UTF-8 as the default, because it includes all the potential characters that we might want to use sporadically. And then the umlaut can just directly be part of the HTML content. I converted all my web pages to use umlaut letters properly, where needed, without using entities, in the mid 90s.
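As a sketch, a page header that declares UTF-8 could look like this (the title and text are just placeholders), so that umlauts can be written directly:

```html
<!DOCTYPE html>
<html lang="de">
  <head>
    <!-- tells the browser to interpret the bytes of this page as UTF-8 -->
    <meta charset="utf-8"/>
    <title>Beispielseite</title>
  </head>
  <body>
    <!-- umlauts can now be written directly, no entities needed -->
    <p>Grüße aus Zürich</p>
  </body>
</html>
```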

Some entities are still useful:

  • „&lt;“ for „<“, because „<“ is used as part of the HTML-syntax
  • „&amp;“ for „&“ for the same reason
  • „&gt;“ for „>“ for consistency with „<“
  • „&nbsp;“ for no-break-space, to emphasize it, since most editors make it hard to tell the difference between regular space and no-break-space.


This blog is now using https, so there is a new URL, while the old URL will continue to work for some time.

If you would like to read more about changing the links within the blog, you can find information on Vladimir’s blog, including a recipe, both in German.


Web Pages for Mobile Devices


Many web pages are still created for desktop devices and optimized for a certain screen format. Often an additional effort is spent on adding some mobile capability on top of that.

This tends to neglect the fact that viewing web pages with a mobile device is no longer an edge case, but quite a common practice. Some pages do not work at all on mobile phones; you just have to give up trying to view them. Some are just unpleasant. Vertical scrolling is generally accepted: we are used to it and it is on par with our reading style. But having to scroll horizontally for each line is just too annoying, and we tend to give up soon, unless the content is really very interesting.

What needs to be observed?

  • The font has to be large enough to be readable. It would be ideal to be able to change the font size at least between a number of choices.
  • Horizontal scrolling is probably ok for some large tables, images or maps, but never for multiline text.
  • We need the whole width of the tiny display for reading. Navigation bars on the left or right side need to disappear or move to the top or bottom when we are reading the main part.
  • It is bad to have buttons and links too close to each other, because it is harder to hit them with a finger than with a mouse. We simply can’t see any feedback on what is happening underneath our finger.

Maybe there are more topics.

Of course having more screen real estate is always better and it is a good idea to make use of it.
But we all know that some web pages work extremely well even on cell phones. And some do not at all.

Basically the web page is built up in the browser: it gets transmitted from the server as HTML, and the browser settings provide font sizes, formatting preferences etc. That is what typical web pages from the 90s were like. They all looked the same and they all worked reasonably well. Forget about the pages doing crap with blinking elements and other useless toys of the early web.

Anyway, something like a broken web evolved from that. Web designers wanted to impose their design on a web that was not ready for it. They started to heavily use crappy tricks like nested tables, transparent 1×1 images, images that contained text, frames (really bad!!), and formatting information within HTML, like font tags. We had those famous web pages that were „best viewed with browser xyz version uvw“. The HTML source was totally unreadable and could, at best, be processed with tools like FrontPage, Dreamweaver etc. With the wrong browser the pages appeared empty or totally messed up. JavaScript added even more possibilities to mess around and to become even more browser-specific.

It was great, especially for a web designer who could charge for different variants of the same web page in order to actually support different browsers. This was exactly what the web was not meant to be, and I think that the basic ideas of the inventors of the web were actually very sound and deserve an evolutionary enhancement.

A good step was the introduction of CSS. It put formatting on a cleaner basis, because formatting information could now be kept in CSS and separated from the content. Of course CSS and HTML need to be compatible with each other, but HTML could be kept readable and editable, even with a common text editor, while the CSS was maintained separately. I am aware of CSS successors like SASS and SCSS; from a more abstract point of view they are the same.

Another change came up, because web pages are more often generated dynamically on demand. I think that we are spending the vast majority of our time on such dynamic web pages. Google, Wikipedia, Youtube, Facebook, online shops, schedule information, map services, e-banking… you name it. Most of what we do is on dynamically generated web pages. Even this blog article is part of a dynamic web page generated on demand for you by WordPress, based on the contents that I have provided. I think that too many web pages are dynamically generated these days that should actually be static, but that is another discussion. Actually even the early days of the web knew CGI for creating dynamic web pages, but it was an exceptional case used whenever it was really necessary.

Another class of web applications uses JavaScript (like Angular JS) and is a revival of the classical client server architecture. Some see this as the successor for all server generated web pages. I actually think that both approaches should coexist. Some stuff that we are doing now would not be possible without the rich JS-based clients. Think about Google-Docs, modern Wiki-Editors, modern web mail clients, chats, twitter, facebook, google+ and many more. They all use something like this. But there are a lot of advantages in having applications based on server generated HTML with very little JavaScript. This could be covered in a future article…

The interesting question is how we can support mobile devices in a reasonable way.

In the late 90s we had the solution: WAP. You just had to write the pages in WML instead of HTML. That was optimized for mobile devices in many ways: the pages needed only very small amounts of data to be transferred over the wireless networks, which were very slow in those days. It was possible to view them on really tiny displays. In those days it was cool to have the tiniest cell phone in the team. And navigation was possible with a few simple buttons of the phone. Decent touch screens were not available to the mass market. So it was an ideal solution for the devices that were possible in those days.

Unfortunately it was quite uncommon to set up the same web page a second time in order to offer WAP, and even worse to keep that variant up to date. Some did, but it was only a small fraction of the web. Today server-generated web pages could do that more easily. WordPress, MediaWiki or Google could provide their content in WAP format as well. But in those days static web pages were more common, and dynamic web pages were programmed very specifically for a certain output format, usually HTML. The HTML code was usually hard coded in the program.

The salvation came from the smartphones that Nokia and Ericsson provided. They could just display „normal“ web pages. Suddenly cell phone users were no longer locked into the stagnating WAP universe and could access everything. And web pages could drop the ugly second variant for WAP, if they ever had it. And yes, I assume that some WAP pages are still around now, even if almost nobody cares.

The same web pages now worked for mobile devices, but not always well. The reasons have been mentioned in the beginning.

How can web pages be provided universally?

1. The WAP approach can be revived by creating a different variant of the web page with HTML that is optimized for mobile devices. We actually find this quite often with two variants. It is possible to maintain these two variants manually, but laziness is actually good in IT. In this case it leads to writing the web pages once in some input format and generating the www-variant and the m-variant automatically from the same source. That can be a script that is run once after each change to generate two sets of static pages. Or it can be software that generates the requested variant dynamically just in time for each request. As long as this avoids having to maintain two or more variants in parallel, this is acceptable. Maintaining the two variants manually should be a no-go.

2. Another approach is to have static HTML pages (or dynamically generated HTML that does not take the output device into account), but CSS offered in two or more variants. I find this more elegant than the first approach and I am confident that it will cause fewer problems in the long term. And it is for sure the more appropriate approach according to the HTML philosophy. It can be done by having the different variants encoded in one CSS file or by generating the CSS file dynamically for the different output devices. Maybe it is a little bit too original for reality to combine static HTML pages with CSS that is generated by Rails, CGI or a servlet, but if encoding different variants in the same CSS really does not work out, why not.

3. Even more radical is the idea of responsive design. In the ideal case, just one HTML and one CSS file are enough for each page. They are written in such a way that the page works well with a wide range of display sizes and adapts itself to that size. I find this more beautiful than the second approach, because the variety of devices is large and still growing, and it is less accessible to a limited number of fixed setups, which will become inaccurate or even wrong at some point.

Some simple elements of responsive design are already useful by themselves:

  • <meta name="viewport" content="width=device-width, initial-scale=1"/> in the header part of the page
  • ideally no absolute sizes in CSS
  • min-width and min-height are possible, but should only be used when really needed.
  • for large images max-width: 100%; height: auto; in CSS
  • we need to remove the width and height attributes from the img-tags for large images, even though at some point we learned the opposite, for optimizing rendering speed.
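The items above can be combined into a minimal sketch (the class names and the 40em breakpoint are made up for illustration):

```html
<!DOCTYPE html>
<html>
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1"/>
    <style>
      /* relative units instead of absolute sizes */
      body { font-size: 1em; margin: 0; }
      /* large images scale down instead of forcing horizontal scrolling */
      img.large { max-width: 100%; height: auto; }
      /* hypothetical sidebar: hide it on narrow displays to free the whole width */
      @media (max-width: 40em) {
        nav.sidebar { display: none; }
      }
    </style>
  </head>
  <body>
    <nav class="sidebar">…</nav>
    <main>…</main>
  </body>
</html>
```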

There is a lot more to do. Doing it really well or transforming an existing page to responsive design is going to be a big deal.

When using a CMS like Joomla, Drupal, Typo3, WordPress or MediaWiki, these issues are abstracted away. It is interesting to check whether the pages already work well on mobile devices or if work needs to be done. I might look into these issues and write about them in the future.

Just to avoid questions: I am in the process of transforming my own pages to responsive design, but far from finished.


Devoxx 2014 in Belgium

In 2014 I visited the Devoxx conference in Antwerp in Belgium.

Here are some notes about it:

What is Devoxx?

  • Devoxx is a conference organized by the Belgian Java User Group.
  • Belgium is trilingual (French, Flemish and German), but the conference is 100% in English.
  • The location is a huge cinema complex, which guarantees great sound, comfortable seats and excellent projectors. It is cool.
  • 8 tracks, overflow for keynotes
  • Well organized (at least this year), more fun than other conferences…
  • sister conferences:
    • Devoxx FR
    • Devoxx PL
    • Devoxx UK
    • Voxxed (Berlin, Ticino,….)

Topics & main Sponsors

  • Java / Oracle
  • Android / Oracle
  • Startups, Business, IT in enterprises / ING-Bank
  • Java-Server, JBoss, Deployment / Redhat
  • JVM-languages
  • Web
  • SW-Architecture
  • Security
  • Whatever roughly fits into these lines and is considered worth being talked about by the speaker and the organizers…

These are some of the talks that I have attended:

Scala and Java8

  • Many conceptional features of Scala have become available in Java 8 with lambdas.
  • Problem: different implementations and interoperability issues between Java and Scala.
  • Development of Scala will make the lambdas of Scala and Java interoperable.


Monads

  • Concept from category theory. (5% of mathematicians do algebra, 5% of algebraists do category theory, but this very abstract and very theoretical piece of math suddenly becomes interesting for functional programming. Of course our functional programming world lacks the degree of infiniteness that justifies the theory at all, but the concepts can be applied anyway.)
  • Monoid (+, *, concat,…)
  • Functor
  • Monad
  • Wikipedia de

  • (T, η, μ)
  • Example: List with a functor F: (A → B) → (List[A] → List[B]);
    μ is flatten: List[List[A]] → List[A]; η: A → List[A]
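For reference, the standard definition behind this notation (textbook material, not from the talk notes): a monad is a functor T together with the natural transformations η and μ satisfying the usual laws; for T = List, μ is flatten and η builds a one-element list.

```latex
T : \mathcal{C} \to \mathcal{C}, \qquad
\eta : \mathrm{Id} \Rightarrow T, \qquad
\mu : T^2 \Rightarrow T
\]
The monad laws (associativity and unit):
\[
\mu \circ T\mu = \mu \circ \mu T,
\qquad
\mu \circ T\eta = \mu \circ \eta T = \mathrm{id}_T
```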

Probability & Decisions

  • Example: software for automatic control of home infrastructure
  • Heuristics and probability theory
  • False positives / false negatives: what hurts? (usually both)
  • Very good explanation of probability theory and its use


Clojure

  • Clojure is another JVM language
  • It is a Lisp dialect, recognizable by its source having an abundance of opening and closing parentheses: (+ (* 3 4) (* 5 6))…
  • strong support for functional programming.
  • Dynamically typed (for us: just think of everything being declared as „Object“ and implicit casts being performed prior to method calls).
  • After Java itself, Scala, Groovy and JavaScript it appears to me to be the fifth most common JVM language


Google Cloud Dataflow

  • „No one at Google uses MapReduce anymore“
  • Google has replaced it with more general and more performance sensitive concepts and implementations.
  • Optimized: save steps, combine them etc.
  • Can be used as cloud service (Cloud Dataflow)

Key Note ING

  • ING considers itself to be an „open bank“
  • Not that the money is lying around openly for burglars to play with, but they claim to be open for new ideas.
  • Mobile app is the typical interface to the bank.
  • IT has a lot of influence („IT driven business“)
  • Feasibility from the IT side is considered important
  • Agile processes (Scrum) vs. enterprise IT
  • IT has slowly moved to these agile processes.
  • „Enterprise IT is what does not work“

Material Design

  • GUI-Design with Android and Web Material Design
  • Visual ideas available for both platforms
  • Polymer design for Web

SW-Architecture with Spring

  • Spring 4.1
  • „works with WebSphere“
  • DRY
  • Lambdas from Java 8 can simplify many APIs out of the box by just replacing one-method anonymous inner classes.
  • Generic messaging interface (wasn’t that JMS already???)
  • Caching: be careful when testing, but it can be disabled.
  • Test on
  • Spring works well with Java, and also with Groovy, which comes from the same shop as Spring. The combination with Scala is „experimental“.


  • High-level testing framework
  • Uses Java 8 features (lambdas etc.)
  • Descriptions in natural language.
  • Failure messages are human-readable
  • Like Cucumber…
  • The source of randomness can be configured. This is very important for Monte Carlo testing, simulations and the like.

Builtin Types of Scala and Java

  • In Java we find „primitive types“ (long, int, byte, char, short, double,…)
  • Problems with arithmetic with int, long & Co: overflow happens unnoticed
  • With float and double: rounding errors
  • With BigInteger, BigDecimal, Complex, Rational: error-prone, clumsy and unreadable syntax
  • In Scala we can write a=b*c+d*e even for newly defined numerical types.
  • Remark: Oracle-Java-guys seem to consider the idea of operator overloading for numerical types more interesting than before, as long as it is not used for multiplying exceptions with collections and the like.
  • Spire library
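The point about clumsy syntax can be illustrated with a small Java sketch (the formula and the values are made up):

```java
import java.math.BigDecimal;

public class NumericSyntax {

    // with primitive types the formula is readable, but overflow and
    // rounding happen unnoticed
    static double formulaDouble(double b, double c, double d, double e) {
        return b * c + d * e;
    }

    // with BigDecimal the same formula becomes method-call soup,
    // because Java has no operator overloading for numerical types
    static BigDecimal formulaBig(BigDecimal b, BigDecimal c,
                                 BigDecimal d, BigDecimal e) {
        return b.multiply(c).add(d.multiply(e));
    }

    public static void main(String[] args) {
        System.out.println(formulaDouble(2, 3, 4, 5)); // 26.0
        System.out.println(formulaBig(new BigDecimal(2), new BigDecimal(3),
                new BigDecimal(4), new BigDecimal(5))); // 26
    }
}
```

In Scala both versions read like the first one, even for a user-defined numerical type.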

Future of Java (9, 10, …)

Part I

  • Q&A-meeting (questions asked via twitter)
  • Numerical types are an issue. That primitive types behave as they do and are kind of the default won’t change.
  • Generics and type erasure (where is the problem?)
  • Jigsaw vs. Jars vs. OSGi still open how that will fit together, but jar is there to stay.
  • Jigsaw repository: Could well be located with maven central. Oracle does not work in such a way that this being hosted directly by Oracle is likely to happen, if third party software is there as well.

Part II

  • Benchmarking with Java is hard because of hot spot optimization
  • JMH is a good tool
  • New ideas are always hard to introduce because of the requirement of remaining compatible with old versions.
  • Java might get a „repl“ some day, like irb for Ruby…

Part III

  • Collection literals (promised once for Java 7!!!) did not make it into Java 8 and are unlikely for Java 9
  • With upcoming value types it might become more reasonable to find a clean way for doing that.
  • For Set and List something like
    new TreeSet<>(Arrays.asList(m1, m2, m3, …, mn))
    works already now
  • For maps something like a pair would be useful. Tuples should come and they should be based on value types. The rest remains as an exercise for the reader.
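As a sketch of the workaround mentioned above (the names and values are made up; in Java 8 there was no map literal, so put() remains):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class CollectionLiterals {
    public static void main(String[] args) {
        // workable today: a sorted set from a list "literal"
        Set<String> set = new TreeSet<>(Arrays.asList("b", "a", "c"));
        System.out.println(set); // [a, b, c]

        // for maps there is no such shortcut; we are stuck with put()
        Map<String, Integer> map = new HashMap<>();
        map.put("one", 1);
        map.put("two", 2);
        System.out.println(map.get("two")); // 2
    }
}
```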

Part IV

  • Tail recursion can be optimized in an upcoming version.
  • Because of the security manager, which analyzed stack traces, this was impossible for a long time. (weird!!!)
  • C and Lisp have been doing this for decades now…
  • Statement: Generics are hard, but having understood them once they become easy. So keep trying….
  • Covariance and contravariance (solved the wrong way for arrays)

Part V

  • Arrays 2.0: indexing with long could become an issue. Some steps towards list, but with array syntax. (studies and papers only)
  • Lists have two extreme implementations: ArrayList and LinkedList. We would love to see more „sophisticated“ Lists, maybe some hybrid of both
  • Checked exceptions: critical issue, it was a problem with generics and lambda. And way too many exceptions are checked, just think of whatever close()-methods can throw, that should not be checked.

Semantic source code analysis

  • Useful for high level testing tools
  • Static and dynamic analysis
  • Dataflow analysis: unchecked data from outside, think of SQL-injection, but also CSS, HTML, JavaScript, JVM languages and byte code

Functional ideas in Java


  • Functions or methods are „first class citizens“
  • Higher-order functions (C could already do that)
  • Closures
  • Immutability (a function always returns the same result for the same arguments)
  • „lazy“ constructions are still possible
  • For big structures we always have the question of immutability vs. performance
  • But functional code is much more thread-friendly

50 new things in Java8

Part I

  • Lambda (see below)
  • Streams (see below)
  • Default implementations in interfaces
  • Date/Time (like Joda-Time)
  • Optional (better than null)
  • Libraries can work with lambda
  • Parallel (use with care and only when needed and useful)

Part II

  • String.join()
  • Something like „find“ in Unix/Linux
  • Writing comparators is much easier
  • Maps of Maps, Maps of Collections easier
  • Sorting is better: quicksort instead of mergesort, can be parallelized
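Two of the items above, String.join() and the easier comparators, as a short sketch (the method names and the word list are made up for illustration):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class Java8Part2 {

    // String.join(): no more hand-written loops for delimiters
    static String joined() {
        return String.join(", ", "red", "green", "blue");
    }

    // a comparator built from a method reference instead of an
    // anonymous Comparator class
    static List<String> sortedByLength(List<String> words) {
        words.sort(Comparator.comparing(String::length));
        return words;
    }

    public static void main(String[] args) {
        System.out.println(joined()); // red, green, blue
        System.out.println(sortedByLength(
                Arrays.asList("pear", "fig", "banana"))); // [fig, pear, banana]
    }
}
```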

Groovy for Android

  • Problem with JVM languages other than Java: library has to be included in each app. 🙁
  • Solution: jar optimization tool helps
  • Second problem: dynamic languages have to be compiled on demand on the device
  • Solution: „static“ programming, dynamic features possible but not performing well


Lambdas

  • Lambdas are anonymous functions
  • Idea: given an interface XY with one method uvw()
  • Instead of

    XY xy = new XY() {
        public long uvw(long x) { return x * x; }
    };

    we can write

    XY xy = x -> x * x;
  • shorter, more readable, easier to maintain; the interface becomes superfluous in many cases.
  • Closure means that final variables from the surrounding context can be included
  • Instance methods can be seen as closures as well; they include the instance in a closure-like way.
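A compilable version of the sketch above (the names XY and uvw are taken from the notes):

```java
public class LambdaDemo {

    // a functional interface: exactly one abstract method
    interface XY {
        long uvw(long x);
    }

    public static void main(String[] args) {
        // old style: anonymous inner class
        XY anonymous = new XY() {
            public long uvw(long x) {
                return x * x;
            }
        };

        // Java 8 style: a lambda expression for the same thing
        XY lambda = x -> x * x;

        System.out.println(anonymous.uvw(5)); // 25
        System.out.println(lambda.uvw(5));    // 25
    }
}
```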


Streams

  • Streams are somewhere in the vicinity of Collection, Iterable, Iterator and the like, but something new.
  • They have methods that allow a function to be applied on all elements
  • Elegant for programming stuff like
    • Sum
    • Product
    • Maximum
    • Minimum
    • First / last / any element with a certain property
    • all elements with a certain property
    • all transformed elements…
  • It turns out to be a wise decision to make it different from Iterable, Iterator, Collection,… and rather provide wrapping capabilities where needed.
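Some of the calculations listed above, sketched with streams (the method names and the numbers are made up):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamDemo {

    // sum of all elements
    static int sum(List<Integer> numbers) {
        return numbers.stream().mapToInt(Integer::intValue).sum();
    }

    // maximum element (throws if the list is empty)
    static int max(List<Integer> numbers) {
        return numbers.stream().max(Integer::compare).get();
    }

    // all elements with a certain property, transformed
    static List<Integer> squaresOfOdd(List<Integer> numbers) {
        return numbers.stream()
                .filter(n -> n % 2 == 1)
                .map(n -> n * n)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(3, 1, 4, 1, 5, 9);
        System.out.println(sum(numbers));          // 23
        System.out.println(max(numbers));          // 9
        System.out.println(squaresOfOdd(numbers)); // [9, 1, 1, 25, 81]
    }
}
```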



Apps or HTML5


The idea of having apps for cell phones is not so new. Quite simple phones offered this and the apps were often developed using Java ME, a „reduced“ Java. This may not have been the best possible solution, but at least development could be made for a variety of cell phones with the same source code, but some additional testing effort.
Then Nokia smartphones added the option to use Qt together with C++ for the development of apps. The promise of being device-independent could still be maintained, because Qt is open source and has been ported to several popular desktop operating systems as well as Symbian, Maemo and Meego for cell phones. Qt is now developed by Digia and will become available even for Android in the future.
With the introduction of Apple’s iPhone and Android-based cell phones two more variants for developing apps appeared: Objective-C for Apple’s cell phones and Java running on Dalvik for Android. Microsoft also tries to spread their cell phone OS, whose apps are, of course, to be developed differently again, maybe with C#?

Thus app developers should really think twice whether it is really a good idea to develop the same app in about 6-8 almost completely independent implementations (for Android, Qt/Symbian, Qt/Maemo/Meego, Objective-C/iOS, MS-WinPhone, Blackberry, JavaME,…) in order to support a large part of the potential user base. For very important apps that may well be a reasonable investment, but the question quickly arises whether the cost is justifiable. Leaving out many potential users by just doing one or two or three implementations is not a good idea for an app that is important. And we do not know exactly which systems will become common in a few years or at least occupy relevant niches. Possibly new systems will at least have an Android Dalvik compatibility layer, so they will be able to run Android apps even if they are not Android. Sailfish from Jolla promises to do this. But it can very well happen that a new mobile OS becomes popular that requires one more implementation for its apps. So native apps installed on the mobile device will become available with a significant delay, while mobile web applications will work from the first day on new smartphones that we do not even know today. No one is going to provide a mobile device without a decent web browser.

The idea of making money by getting some percentage from the sales of apps via the preferred app stores was great a few years ago. But now there are so many apps around that it is becoming harder to achieve significant download figures in order to make more than a few cents. Until recently apps were justified by functionality that was not readily available in web applications. However this is now changing rapidly. With HTML5, JavaScript, Ajax, WebSockets, and some other new features added to the web technology stack, almost everything that could be done by apps can now be done by web applications as well. And the web application can be developed once and used on a multitude of devices. I therefore assume that these apps will survive only for a few applications that are so important that multiple development does not hurt and that need more interaction than usual applications or access to special device hardware. It is increasingly difficult to find such cases. Just some examples:

  • Users should pay for using the functionality: this is possible online as well, and many sites have paywalls.
  • Games should also work offline, for example in railroad tunnels: HTML5 promises a local storage that can be used for this purpose.
  • Appearance: HTML5 is quite powerful for that.
  • Interactivity: JavaScript, Ajax and HTML5 are quite powerful for that too.

In short, the business of running app stores might very well become obsolete or at least a niche business for a small number of apps very soon.


Automatic Test of Links


When running a web site with hundreds or thousands of links, it is essential to have automatic mechanisms for testing these links. Many tools are available for this; I am using the Perl library LWP. My page has only about 130 links, but of course I want to establish processes that scale.

The problem is that in some cases naïve automatic tests do not work very well, mostly because web servers react differently to test scripts than to a real browser. Some examples:

  • Language settings of the browser sometimes lead to language specific pages. It would be best to test with several language settings.
  • Some pages result in an error code (usually 500) when accessed by a script, but work fine in a browser.
  • Some servers avoid returning the error code 404 (or maybe 403) for pages that no longer exist. Instead they forward to a human readable error page with code 200, which looks ok to the script. The page forwarded to contains a friendly description of the error, which is hard (but not totally impossible) to recognize by a script. Often the name of the error page contains „404“.
  • Some domains are actually given up. Usually some company grabs these domains and puts their own content on them, hoping to gather some part of the traffic of the former web site. This is often commercial, but might even be x-rated content.
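I use the Perl library LWP for this myself; as a hedged sketch, the same checks could look like this in Java, using only the JDK. The soft-404 heuristic is the one described above (an error page whose URL contains „404“ but answers 200); the class and method names are made up, and a real checker would need more care with languages and user agents:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class LinkChecker {

    // heuristic from the text: servers often redirect dead links to a
    // friendly error page whose URL contains "404" and still answer 200
    static boolean looksLikeSoftErrorPage(String finalUrl) {
        return finalUrl.contains("404");
    }

    // returns the HTTP status code, following redirects like a browser;
    // language headers are set because some servers serve language-specific
    // pages depending on them
    static int statusOf(String link) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(link).openConnection();
        conn.setRequestProperty("Accept-Language", "de, en");
        conn.setInstanceFollowRedirects(true);
        int code = conn.getResponseCode();
        if (code == 200 && looksLikeSoftErrorPage(conn.getURL().toString())) {
            return 404; // treat soft error pages as broken
        }
        return code;
    }

    public static void main(String[] args) throws Exception {
        for (String link : args) {
            System.out.println(statusOf(link) + " " + link);
        }
    }
}
```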

So automatic checking of web links remains difficult and still requires some manual work. 5% of the links cause about 95% of the work.

I am interested in improving these processes in order to increase the quality of the tests and to decrease the effort.
