Powerful API Functions or Specific API Functions?

When designing APIs we should ask ourselves what they should look like, what they should contain and what they should not. This is not primarily a question of development effort, but of creating a good API that can be used well and saves us development effort elsewhere.

There are always simple answers, but in the end we have to balance partly contradictory goals to create something great.

One aspect will be discussed here. Some of us know functions in the libc of certain systems that we use when programming in C. Favorite candidates are ioctl and fcntl. These functions cover a wide range of functionality and actually do quite different things depending on their parameters. There is primarily one parameter that selects the operation, and then, depending on this parameter, there are several additional parameters whose meaning depends entirely on the first one.

I truly admire the libc and the POSIX-API, because of what it can do, how it is accomplished and how clever the concepts are. But putting loosely related stuff into one catch-all function and using a parameter that selects which function to actually execute is just wrong, and it was wrong even in the days when it was created. Now there is possibly some argument in favor of this design, because these functions are system calls, which are special, because they go directly into the OS-kernel. Depending on the implementation of the OS there might be limits on the total number of system calls that the OS can support, and it might be hard to change the interface between OS and libc too often, so a flexible system call comes in handy. In the concrete example, it is impossible to change it directly, because the POSIX-API has been standardized, and this is one of the few standards that has remained relatively stable for 25 years and still offers great functionality. Linux, which follows this standard closely, is by far the most widespread operating system today, especially on servers, mobile devices (Android) and devices that we perceive as just hardware, like network routers, firewalls, … It is too valuable that programs written against the POSIX-API, and of course using only the defined functionality, keep running on newer Linux versions.

But there is a lesson to learn for our own APIs. We should avoid putting too many different things into one API-function. I do not think that many of us will try to write a universal API-function like ioctl, but more subtle examples are quite common.

A typical pattern is this:

findPerson(name, email, phone_number)

We can provide a name, a phone number or an email address or a combination and then search for entries that match all of the criteria that we have provided. This is still quite clear, but now we could also provide a list of phone numbers, a list of email addresses etc…

Independent of the actual preference, it should be considered that these are really seven functions. We can include or exclude any of the parameters, but the case where all of them are null is probably not supported. Or it is the eighth case, which finds everything.

When we are talking about 1, 2, 3, or maybe 4 parameters, it is still possible to create API-functions for all the combinations, like

findPersonByName(name)
findPersonByEmail(email)
findPersonByPhoneNumber(phone_number)
findPersonByNameAndEmail(name, email)
...
findPersonByNameAndEmailAndPhoneNumber(name, email, phone_number)

This will be clearer. When writing exhaustive automatic tests, which will probably be „integration tests“, not „unit tests“, they have to be written against these seven variants anyway, no matter if it is one function or seven. The implementation might also internally use „if“s or do the equivalent at query level by doing something like

SELECT * FROM PERSON P
WHERE
(:name IS NULL OR P.NAME = :name)
AND (:email IS NULL OR P.EMAIL = :email)
AND (:phone_number IS NULL OR P.PHONE_NUMBER = :phone_number);

which actually has eight paths that need to be covered by tests, including the case where all three parameters are null, if that is not blocked by application code.
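For comparison, the „if“-based variant could look like the following minimal sketch in Java, assuming plain JDBC, a DataSource and the PERSON table from above; the class and method names are made up for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

public class PersonDao {
    private final DataSource dataSource;

    public PersonDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Builds the WHERE clause only from the criteria that are actually provided.
    public List<String> findPersons(String name, String email, String phoneNumber) throws SQLException {
        StringBuilder sql = new StringBuilder("SELECT NAME FROM PERSON WHERE 1 = 1");
        List<String> params = new ArrayList<>();
        if (name != null) {
            sql.append(" AND NAME = ?");
            params.add(name);
        }
        if (email != null) {
            sql.append(" AND EMAIL = ?");
            params.add(email);
        }
        if (phoneNumber != null) {
            sql.append(" AND PHONE_NUMBER = ?");
            params.add(phoneNumber);
        }
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(sql.toString())) {
            for (int i = 0; i < params.size(); i++) {
                statement.setString(i + 1, params.get(i));
            }
            try (ResultSet resultSet = statement.executeQuery()) {
                List<String> result = new ArrayList<>();
                while (resultSet.next()) {
                    result.add(resultSet.getString("NAME"));
                }
                return result;
            }
        }
    }
}

This variant has the same seven or eight logical cases, but they are explicit in the code rather than hidden inside the query.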

This also shows the limits of the classical approach when the multitude of queries gets really complex. That might require a more generic approach, which is actually quite well exemplified by SQL or its embedded forms like JDBC. For typical IT projects, I would recommend not to go there and develop such a generic query DSL as part of the project. This usually leads to disaster, because the skills for designing a good language or a good generic framework are usually not available in the team, and in terms of budget, quality and schedule it will usually blow up anyway. So the reasonable approaches are either to use an existing, well proven solution for the generic API or to just find out which functionalities are actually needed and to provide exactly those.
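To illustrate the first option, here is a minimal sketch using the JPA Criteria API as an existing generic query facility; it assumes a hypothetical JPA entity Person with fields name, email and phoneNumber:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;

public class PersonRepository {
    private final EntityManager entityManager;

    public PersonRepository(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    // One method covers all seven (or eight) combinations without a home-grown query DSL.
    // Person is a hypothetical JPA entity with fields name, email and phoneNumber.
    public List<Person> findPersons(String name, String email, String phoneNumber) {
        CriteriaBuilder cb = entityManager.getCriteriaBuilder();
        CriteriaQuery<Person> query = cb.createQuery(Person.class);
        Root<Person> person = query.from(Person.class);
        List<Predicate> predicates = new ArrayList<>();
        if (name != null) {
            predicates.add(cb.equal(person.get("name"), name));
        }
        if (email != null) {
            predicates.add(cb.equal(person.get("email"), email));
        }
        if (phoneNumber != null) {
            predicates.add(cb.equal(person.get("phoneNumber"), phoneNumber));
        }
        query.select(person);
        query.where(predicates.toArray(new Predicate[0]));
        return entityManager.createQuery(query).getResultList();
    }
}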

Some examples show the opposite, like Ruby on Rails, which was developed as part of a project effort. Another example is a relatively big company that developed a framework quite similar to Spring before Spring was available. But these successes cannot easily be duplicated in our projects.


WannaCry or better learn from it?

The malware WannaCry became quite well known, especially because it manifested itself on the displays of the German federal railroad and it even blocked most of the hospital infrastructure in the UK. Find some discussion on Bruce Schneier’s blog; there is a more elaborate article on his blog as well: https://www.schneier.com/blog/archives/2017/05/did_north_korea_1.html. Read Bruce’s blog article, he knows more about security than I do… 🙂

We might have observed that this attack was targeting MS-Windows computers. The argument that this is just because MS-Windows computers are more common is no longer true. But the argument that the MS-Windows developers just did a lousy job does not hold either. It was true 10, 15, 20, 25 years ago. We have seen it. But today I would assume that they have improved and are doing a good job.

There is an argument to favor open source over closed source for security reasons. If software is open source, it is much more difficult to incorporate malicious features like backdoors into it or to leave security holes open by mistake, because the source code can be analyzed and fixed by anybody who has access to the internet and the capabilities. This is no guarantee, but it is a good thing.

The other argument is more like a question. How close are US companies to US government agencies? Do they do each other little favors? We do not know.

In any case, the people who did this malware attack are criminals and I regret that it has caused so much damage. Fortunately criminals are relatively rare, so the frequency of encountering them in daily life is usually not so high, unless we live in especially crime-infested areas. But the internet connects us with criminals all over the world and allows them to damage us. So it might be ok in a good neighborhood not to lock the door or not to lock the bicycle. It gives a good feeling to trust our neighbors. But on the internet, the bad guys are there for sure and they will discover our unlocked virtual door. We can rely on that.


ScalaUA 2017

About a month ago I visited the conference ScalaUA in Kiev.

This was the schedule.

It was a great conference and I really enjoyed everything, including the food, which is quite unusual for an IT-conference… 🙂

I listened to the following talks:
First day:

  • Kappa Architecture, Juantomás García Molina
  • 50 shades of Scala Compiler, Krzysztof Romanowski
  • Functional programming techniques in real world microservices, András Papp
  • Scala Refactoring: The Good the Bad and the Ugly, Matthias Langer
  • ScalaMeta and the Future of Scala, Alexander Nemish
  • ScalaMeta semantics API, Eugene Burmako

I gave these talks:

  • Some thoughts about immutability, exemplified by sorting large amounts of data
  • Lightning talk: Rounding

Day 2:

  • Mastering Optics in Scala with Monocle, Shimi Bandiel
  • Demystifying type-class derivation in Shapeless, Yurii Ostapchuk
  • Reactive Programming in the Browser with Scala.js and Rx, Luka Jacobowitz
  • Don’t call me frontend framework! A quick ride on Akka.Js, Andrea Peruffo
  • Flavors of streaming, Ruslan Shevchenko
  • Rewriting Engine for Process Algebra, Anatolii Kmetiuk

Find recordings of all the talks here:
https://www.scalaua.com/speakers-speeches-at-scalaua2017/


Using non-ASCII-characters

Some of us still remember the times when it was recommended to avoid „special characters“ when writing on the computer. In Germany some keyboards did not contain the „Umlaut“ characters and we fell back to the ugly, but generally understandable way of replacing the German special characters like this: ä->ae, ö->oe, ü->ue, ß->sz or ß->ss. This was due to the lack of decent keyboards and decent entry methods, but also due to transmission methods that stripped the upper bit. It did happen to emails that they were „enhanced“ like this: ä->d, ö->v, ü->|, … So we had to know our ways and sometimes use ae oe ue ss. Similar issues applied to other languages like the Scandinavian languages, Spanish, French, Italian, Hungarian, Croatian, Slovenian, Slovak, Serbian, the Baltic languages, Esperanto, … in short to all languages that could regularly be written with the Latin alphabet but required some additional letters to be written properly.

When we wrote on paper, the requirement to write properly was more obvious, while email and other electronic communication via the internet in those days could be excused as being something like short wave radio: it worked globally, but with some degradation of quality compared to other media of the time. So for example with TeX it was possible to write the German special letters (and others in a similar way) like this: ä->\"a, ö->\"o, ü->\"u, ß->\ss and later even like this: ä->"a, ö->"o, ü->"u, ß->"s, which some people, including myself, even used for email and other electronic communication when the proper way was not possible. I wrote Emacs-Lisp software that could put my Emacs into a mode where the Umlaut keys actually produced these combinations when typed, and I even figured out how to tweak an xterm window for that, for the sake of using IRC, where Umlaut letters did not work at all and quick online typing was the way to go, so the Umlaut characters were produced automatically while I typed, because with ten-finger touch typing I type words, not characters.

On the other hand TeX could be configured to process Umlaut characters properly (more or less, up to the issue of hyphenation) and I wrote macros to do this and provided them to CTAN, the repository for TeX-related software on the internet, around 1994 or so. Later a better and more generic solution became part of standard TeX and superseded this, which was a good thing. So TeX guys could type ä ö ü ß and I strongly recommended (and still recommend) to actually do so. It works, it is more user friendly and in the end the computer should adapt to the humans, not vice versa.

The web could process Umlaut characters (and everything else) from day one. The transfer was not an issue, it could handle binary data like images, so no stripping of the high bit was happening and the Umlaut characters just went through. For people having problems finding them on the keyboard, transcriptions like these were created: ä->&auml;, ö->&ouml;, ü->&uuml;, ß->&szlig;. I used them, not knowing that they were not actually needed, but I relied on a Perl script to do the conversion, so it was possible to actually type the characters properly.

Now some languages like Russian, Chinese, Greek, Japanese, Georgian, Thai and Korean use totally different alphabets, so they had to solve this earlier, but others might know better how it was done in the early days. Probably it helped develop the technology. Even harder are languages like Arabic, Hebrew and Farsi, which are written right to left. It is still ugly when editing a Wikipedia page and the switching between left-to-right and right-to-left occurs correctly, but magically and unexpectedly.

While ISO-8859-x seemed to solve the issue for most European languages and ISO-8859-1 became the de-facto standard in many areas, this was eventually a dead end, because only Unicode provided a way of hosting all living languages, which is what we eventually wanted, because even in quite closed environments, excluding some combinations of languages in the same document will at some point in time prove to be a mistake. This has its issues. The most painful one is that files and badly transmitted content do not have clear information about their encoding attached to them. The same applies to strings in some programming languages. We need to know from the context what it is. And now UTF-8 is becoming the predominant encoding, but in many areas ISO-8859-x or the weird cp1252 still prevail, and when we get the encoding or the implicit conversions wrong, the Umlaut characters or whatever we have get messed up. I would recommend to work carefully, to keep IT-systems, configurations and programs well maintained and documented and to move to UTF-8 whenever possible. Falling back to ae oe ue for content is sixties- and seventies-technology.
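As a small illustration of being explicit about encodings instead of relying on platform defaults, here is a minimal sketch in Java; the file names are made up and the assumption is that the input file is known to be ISO-8859-1:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class EncodingConversion {
    public static void main(String[] args) throws IOException {
        Path input = Paths.get("legacy-iso-8859-1.txt");
        Path output = Paths.get("converted-utf-8.txt");
        // State the encoding explicitly when decoding the legacy file ...
        String text = new String(Files.readAllBytes(input), StandardCharsets.ISO_8859_1);
        // ... and again when writing the UTF-8 version.
        Files.write(output, text.getBytes(StandardCharsets.UTF_8));
    }
}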

Now I still see an issue with names that are both technical and human readable, like file names, names of attachments or variable and function names in programming languages. While these very often do allow Umlaut characters, I would still prefer the ugly transcriptions in this area, because a mismatch of encoding becomes more annoying and ugly to handle than when it just concerns text, where we as human readers are a bit more tolerant than computers, so tolerant that we would even read the ugly replacements of the old days.

But for content, we should insist on writing it with our alphabet. And move away from the ISO-8859-x encodings to UTF-8.



Tablet Computers

The idea of tablet computers is actually quite old and it has been tried a couple of times, at least up to prototypes. Probably a certain level of hardware and software was needed to make them both useful and affordable for enough people to become a mass product. This is actually a quite common thing: some person, group or company has invented something really good, but they were not able to provide a sufficiently reliable, useful and affordable product to the market, or just were not able to leave their home market efficiently. Here are a few examples of this that I have observed:

  • Tilting trains have been tried in Germany, the UK, Italy, Spain, Sweden, Switzerland, Canada, France and Japan, in some countries several times. Many efforts became dead ends, because the technology could not easily be built in an affordable, reliable and maintainable way, so the mechanism was disabled or the trains were put out of service way too early. Italy actually made this technology work, but some of the train sets suffered serious deficiencies in quality, reliability and maintenance. Spain did the Talgo, which is less ambitious, because it uses gravity instead of an active mechanism and provides a weaker effect. Sweden developed the X2000 trains, which seemed to work more or less well, but were quite expensive. But finally it seems that companies are able to produce good trains with this technology, like the relatively new Swiss ICN trains.
  • A British company had already produced trailer bikes for children in the 1930s. They have one wheel and are attached to a parent’s bike. These were hard to get and they remained almost unknown, even though the idea is great. In the 1990s German companies started to adopt the concept and actually produce them in good quality and sell them internationally, which was of course easier than 60 years earlier. They are now a common concept.
  • In the 1970s many bicycles had three-speed hub gears. Derailleur gears already existed, but they were hard to use and fragile. For steeper roads it was possible to use a larger sprocket and to be able to climb slopes, at the expense of lacking higher gears for flat sections. A British company produced a five-speed hub gear, but it was extremely difficult to get and the quality was so poor that for a more active cyclist it would be in repair almost half of the time. Today we see mature hub gears with more than ten gears, but the derailleur technology has also become mature enough for the mainstream.

So there are several requirements for success.

Another interesting aspect is that the actual usage might turn out differently than anticipated. I understand that tablet computers were sold as a „better replacement“ for PCs and laptops in certain areas. I do not think that this is reasonable. Having a keyboard and a larger screen is usually better and it makes sense to transport a small or even a larger laptop. I have often had an external keyboard on top of the laptop, when I could afford to transport it and anticipated heavy use. The netbook was so small that it did not hurt to have it in the luggage, but it was eventually hard to expand the memory and to get a replacement. A relatively small laptop still serves the purpose when a real computer is needed, but luggage is constrained.

The tablet computer does have some features that make it worth having one on top of a good phone and different sizes of laptops. I am using an Android tablet; Android is the most common OS for tablets, but there are of course some others, which I do not know well enough to write about.

It is easier to switch between keyboard types. I am using the Cyrillic keyboard a lot and the computer with which I am writing this text has two external keyboards attached. I can switch with a key sequence, but this approach has its limitations. Probably buying a laptop in Russia would also work for me, since I know the German layout by heart without relying on the symbols on the keys. But the tablet makes this work with very little setup, while buying a physical Cyrillic keyboard in Switzerland is a bit harder, but still easy, and buying a laptop with a Cyrillic keyboard layout does need some effort.

When doing small stuff, mostly reading or writing some smaller emails, this is much better than the phone, and it can be used in the train, in the park, anywhere where it is possible to sit. A laptop requires some kind of table to be reasonably useful. There are seats with tables in the train, but it is a matter of luck to get one.

Finally, we currently have a lot of Android apps. They could be written for „normal“ desktop Linux as well, or as web applications. Maybe that will happen. But currently some of them are available for Android, but not at all or not in a useful way for desktop Linux. This may change and it heavily depends on what we are actually using. But in my case it is true, and it proved helpful to have a larger screen than on the Android phone.

Concerning the SIM card, I actually went the extra mile in terms of higher price and more effort when buying the tablet in order to get a SIM card slot. I have not used it very much, because the extra SIM card is kind of expensive, moving SIM cards between devices is inconvenient and using the Android phone as a WiFi router seems to work well enough. But maybe when travelling a lot with SIM cards from many countries it is useful to fill all the slots in older and newer phones and tablets and to use the device with the currently preferred SIM card as the WiFi router for all the others.

And finally it can be said that we can now buy fairly affordable, good tablet computers. What I miss is that tools from desktop Linux are usually not available on Android, or only in a limited version. But the most common applications, a web browser and an email client, of course work on both…


User Friendliness

I have made an interesting observation in terms of user friendliness. Let’s call it an anti-pattern…

I had booked a flight on the internet. Now the ticket was a number, which they sent to me by SMS. I went to the page of the airline and tried to do the check-in, in order to get a better seat, a lower chance of being an overbooking victim and to save some time and nerves at the airport.

Now it is necessary to enter some information each time, like passport number, date of birth, my name, validity of the passport and citizenship. By mistake I used the wrong citizenship without noticing, and then the page asked me if I have a visa.

Now it was impossible to go back to fix this, so I had to cancel the whole process and enter all the information once again. At least I thought so. It was worse, because the wrong citizenship had put the whole booking into a weird status which could not be fixed anymore in the web application. It just refused to deal with this ticket at all.

It was possible to use the app of the airline and to redo the whole thing on my tablet.

If there is information that is so important to get right, there are some suggestions:

  • Allow the user to go back to any form in the dialog and revisit the entries
  • Show the data that has been entered to make it easy to recognize the error
  • Give the user who messed up a second chance to fix it
  • The question is why I have to enter my birthday, passport number, name etc. each time. They do not change frequently. Privacy is really a good thing, but I guess in the case of traveling by air all privacy issues are just a joke. Why not give at least some convenience?
  • Anybody who happens to get the relatively short ticket number can mess around with it, which could be very annoying, if he for example canceled the flight.

Eatable Devices

Wearable devices have become normal and it is hard to buy clothes these days that do not actually belong to this category. If you don’t know, the NSA knows.

The new trend is eatable devices. It has been quite a challenge to create chips and batteries that can be chewed and swallowed without pain and without exploding while being eaten. But the newest chips are made of Bio-Silicon, which is 100% eatable and risk-free to eat. Batteries have been replaced by capacitors, which proved to be the better choice.

Now this opens a huge range of applications. We can write apps that work on the cluster of eatables and support our diet, because they register exactly when, how and by whom they are eaten. Newer pricing models allow us to buy food and pay some part of the price only when we actually eat it. We can see when food is expired and discard it. Of course not in the regular garbage, but finally each household will now get a separate garbage can for electronic devices. Remember to put your clothes in there, even though you were not aware of the chips inside when you bought them. It will be better for the environment.

The labs of IBM and Coca Cola have jointly developed microchips that actually float in the drink and are so small that we usually do not see them. This will support the usual functions of eatable devices and will equip the drinker with so many chips inside the body that permanent tracking will become easy even without a cell phone.

Apple finally found their core business. Eatables. Apples. They almost taste like old style apples, are nicely designed and we buy them in the apple store instead of the supermarket. The first eaters waited half a week in front of the Apple stores to get the taste of the real Apple-apple as early as possible.


Do we still need Experts when everything is on the Internet?

We find a lot of information about pretty much everything on the internet. We do not have to remember things because we are always online and always able to find the information we are looking for. It is true. I do it, you do it, everybody does it. Wikipedia, Google, Forums and of course specific sites…

Let me just give an example: I once met a doctor, a physician. A patient had a problem that was unclear to him, and he actually told the patient that he would google for the problem. In the end he came up with a very helpful solution for the guy, much better than what many of us would get in the same situation. I will not disclose any details.

Now why do we need an expert at all, if the expert did not know and found a more helpful answer than the other experts only by using a search engine that all of us can use?

Actually it is the combination of the expert and the online information that became so helpful. At least since the last presidential election in some country on the North American continent we have learned that media (and probably even the internet) may be unreliable and that truth is relative, if we accept the concept of truth at all.

Or to become more tangible: there are numerous sites that promise us easy solutions, at least to questions that really many people find important, like diets, raising money easily (and legally), and a lot more; we all know it. It is quite easy to put a site online and put any information on there. It is a bit harder, but possible, to get found. The author of the site can sell something that he would never buy himself or something that he believes, even though it is not true. I have met a person in Switzerland who seriously told me that eating is unnecessary for humans and that he or she just practiced it for the joy of eating. In the case of medical advice it is quite obvious that this might be dangerous, but in almost any area we have more or less the same problem. A government agency that enforces that only the truth is written on web pages would be a nightmare. Just think of your favorite politician being in charge of such an agency…

But for the expert it is easy to recognize which information is serious and useful. And even easier to use the right keywords for searching.

And there is more. The expert knows the situation; consciously and subconsciously he combines experience with what he sees, hears, … to solve the problem. And builds in the input from the search.

We should also consider that searching is extremely efficient, but knowing 99% without searching is so much more efficient. Just think of languages. I speak a couple of them, and it is often useful to use online dictionaries or even translators. But needing them for every other word is inefficient and will actually sometimes lead to misunderstandings.

The information on the internet will become better. There will be new concepts implemented by sites for providing reliable content in certain areas. We already see Wikipedia, which is not 100% reliable, but probably about as good as a printed encyclopedia in this aspect.

Anyway, the experts will not become useless, but we will need them in the future as well.


Forest roads

In forests we usually find unpaved roads like this:

Forest Road

Now we should observe how they are constructed. Usually they need to somehow allow access to areas in the forest. The road does not have to be the shortest connection, it does not have to be as flat as possible, it is not required to be built for high speed and it is not at all required to be built for high capacity.

Often it is necessary that trucks and heavy machinery can use the road. So it needs to be constructed in a way that it withstands occasional usage by heavy vehicles, for example to extract wood from the forest or in bad cases to fight a forest fire. And the network usually provides a reasonable way for a truck with a trailer to get out again by driving forward only.

Apart from this there is one very important requirement: the construction must be cheap. It should usually be so cheap that it can easily be paid for by a relatively small fraction of the money that is obtained by selling the wood from the forest. There can be a significant secondary use for recreational purposes or even as a route through the forest, typically for MTBs and pedestrians, which might justify using money from other sources than the revenue from the forest. Interestingly, forests in Switzerland have much denser road networks than forests in Norway or Sweden.

What we never see is forest roads that are somehow prepared for the case that it might be decided in the future to expand them to eight lanes or to transform them into high speed highways. That would mean using a route that allows this, already passing hills with cuts or tunnels and already building bridges much wider than currently necessary. Real highways usually run on an embankment, or at least they include a thick sequence of layers that often adds up to a few meters below the upper asphalt layer. There are „best practices“ about building highways, even relatively narrow highways like this one:

Highway E45 in the Taiga in northern Sweden 2014

For forest roads these best practices are mostly ignored, apart from very universal principles that apply to any kind of construction.

One kilometer of forest road costs a very tiny fraction of one kilometer of such a highway with two lanes.

We should learn from this for our IT solutions. We should think about how big our IT solution might actually become. How many customers do we need to serve? What kind of sophisticated functionality needs to be added? Very often we see the mistake that IT solutions are built too small. They do not scale, cannot easily be expanded or simply cannot serve the load or the availability requirements.

Companies like Google, Facebook, Twitter, Netflix or VK serve millions of users 7×24. There is even an implicit promise that there will be no downtimes and people start relying on this.

We expect banking software to be accurate to the cent. Not sometimes, but always.

Running a device in a chemical plant or steering a rocket requires absolutely reliable software. Errors can be very expensive and cost human lives.

On the other hand there is plenty of software that is useful. Of course it needs to work properly. But the requirements are much lower. A typical app for a mobile phone does not need to be able to run on millions of servers simultaneously. Downtimes during software updates are no problem. Usually a small development team is sufficient to build them. And when it comes to money, the real business logic for this is usually on the server. And we just should not control a chemical plant with a mobile phone app. But mobile phone apps usually have to come at negligible prices to the users. They either have to pay for themselves through the business that they indirectly promote or through a very small amount of money per user, usually combined with a very small number of users.

It is important to really understand the requirements well and build the right size of application. Building too small is very bad, but building too big can be as bad for the project and the organization. And even if the company is lucky and discovers that the software is so attractive that a bigger solution is necessary, then maybe this is a good moment to rewrite it and consider the first version as proof of concept.

Whenever a new highway is built on a route that was previously only covered by a forest road, the highway will usually be built from scratch, ignoring the routing of the old forest road. If the forest road becomes obsolete by that, then the money spent on the original construction of the forest road is negligible. Transforming roads into highways in many small steps did happen over the centuries to create part of our current road network, but it is usually not a recommended approach.


ä ö … in HTML

In the old days of the web, more than 20 years ago, we found a possibility to write German Umlaut letters and a lot of other letters and symbols using pure ASCII. These are called „entities“, btw.

Many people, including myself, started writing web pages using these transcriptions, in the assumption that they were required. Actually in the early days of the web there were some rumors that some browsers did not properly understand the character encodings that contained these letters, which was kind of plausible, because the late 80s and the 90s were the transition period in which people discovered that computers are useful outside of the United States, and at least for non-IT-guys it was, or should have been, a natural requirement that computers understand their language or at least can process, store and transmit texts using the proper characters of the language. In the case of the German language this was not so terrible, because there were transcriptions for the special characters (ä->ae, ö->oe, ü->ue, ß->ss) that were somewhat ugly, but widely understandable to native German speakers. Other languages like Russian, Greek, Arabic or the East-Asian languages were in more trouble, because they consist of „special characters“ only.

Anyway, this „&auml;“-transcription for web pages, which is actually superior to the „ae“, because the reader of the web page will read the correct „ä“, was part of the HTML standard to support writing characters that are not on the keyboard. This was a useful feature in those days, but today we can find web pages that help us with the transliteration, or we can just look up the word with the special characters on the web in order to write it correctly. Then we can just as well copy it into our HTML code, including all the special characters.

There could be some argument about UTF-8 vs. UTF-16 vs. ISO-8859-x as possible encodings of the web page. But in the area of the web this was never really an issue, because web pages have headers that should be present and inform the browser about the encoding. Now I recommend to use UTF-8 as the default, because that includes all the potential characters that we might want to use sporadically. And then the Umlaut can just directly be part of the HTML content. I converted all my web pages to use Umlaut letters properly, where needed, without using entities, in the mid 90s.
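As a small illustration, here is a minimal sketch of a servlet that declares UTF-8 explicitly and writes Umlaut letters directly into the HTML content; it assumes the Java Servlet API and the class name is made up:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UmlautServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        // Declare the encoding in the HTTP header, so the browser knows how to interpret the bytes.
        response.setContentType("text/html; charset=UTF-8");
        response.setCharacterEncoding("UTF-8");
        PrintWriter out = response.getWriter();
        // The Umlaut letters go directly into the content, no entities needed.
        out.println("<html><body><p>Grüße mit ä ö ü ß</p></body></html>");
    }
}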

Some entities are still useful:

  • „&lt;“ for „<“, because „<“ is used as part of the HTML-syntax
  • „&amp;“ for „&“ for the same reason
  • „&gt;“ for „>“ for consistency with „<“
  • „&nbsp;“ for no-break-space, to emphasize it, since most editors make it hard to tell the difference between regular space and no-break-space.
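
A minimal sketch in Java that escapes only these few characters for text content (not attribute values) and leaves everything else, including Umlaut letters, untouched; the class and method names are made up:

public class HtmlEscaper {
    // Escapes only the characters that collide with the HTML syntax.
    public static String escapeHtmlText(String text) {
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            switch (c) {
                case '<': result.append("&lt;"); break;
                case '>': result.append("&gt;"); break;
                case '&': result.append("&amp;"); break;
                default:  result.append(c); // ä, ö, ü, ß etc. go through unchanged
            }
        }
        return result.toString();
    }

    public static void main(String[] args) {
        // Prints: x &lt; y &amp; Grüße
        System.out.println(escapeHtmlText("x < y & Grüße"));
    }
}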