MS-Windows-Encodings with CMD: Bug or Feature?


Anyone working with MS-Windows should know these black windows with CMD running in them, even though they are not really popular. The Unix and Linux guys hate them, because they are really primitive compared to their shells. Windows guys like to work graphically, or they prefer PowerShell or bash with cygwin. Linux and Unix have the equivalent of these windows, but usually they are white. Since the colors can be configured in any way on both systems, this is of no relevance.

NT-based MS-Windows systems (NT 3.x, 4.x, 2000, XP, Vista, 7, 8, 10) have several subsystems in which programs run, for example Win64, Win32 (or Wow64 on 64-bit systems), Win16, cygwin (if installed), DOS… Because programs for the DOS subsystem are typically started in a CMD window, and because some of the DOS commands have equally named and similarly behaving counterparts in the CMD window, the CMD window is sometimes called a DOS window, which is just incorrect. Actually this black window comes into existence in many situations: whenever a program is started that has input or output (stdin, stdout, stderr), a black window is provided around it, if no redirection is in place. This applies to CMD. Under Linux (and Unix) with X11 it is the other way round: you start the program that provides the window and it automatically starts the default shell within that window, unless something else is specified.

Now I recommend an experiment. You just need an MS-Windows installation with any graphical editor like emacs, gvim, ultraedit, textpad, scite, or even notepad. And a cmd-window.

  • Please type these commands, do not use copy/paste
  • In the cmd-window cd into a directory you may write in.
  • echo "xäöüx" > filec.txt. Yes, there are ways to type these letters even with an American keyboard. 🙂
  • Open the file with a graphical editor. How do the Umlauts look?
  • Use the editor to create a second file fileg.txt in the same directory with contents like this: yäöüy.
  • view it in CMD:
  • type fileg.txt
  • How do the Umlauts look?

It is a feature or a bug that all common MS-Windows versions put the umlauts at different positions than the graphical editors do. If you know how to fix this, let me know.

What has happened? In the early 80s MS-DOS came into existence. At that time standards for character encoding were not very good. Only ASCII or ISO-646-IRV existed, which was at least a big step ahead of EBCDIC. But this standardized only the lower 128 characters (7 bit) and lacked characters for almost any language other than English. Some tried to put a small number of these additional letters into the positions of less important characters like „@“, „[“, „~“, „$“ etc. And software vendors started to make use of the upper 128 characters. Commodore, Atari, MS-DOS, Apple, NeXT, TeX and „any“ software came up with their own specific way of doing that, often specific to a language region.

These solutions were incompatible with each other between different software systems, sometimes even between versions or language versions of the same software. Remember that at that time networks were unusual, and when they existed, they were proprietary to the operating system, with bridge solutions being extremely difficult to implement. Even floppy disks (the three-dimensional incarnations of the save button) had proprietary formats. So it did not hurt so much to have incompatible encodings.

But relatively early, X11, which became the typical graphical system for Unix and later Linux, started to use standard encodings like the ISO-8859-x family, UTF-8 and UTF-16. Linux was already on ISO-8859-1 in version 0.99 in the early 90s and never tried to invent its own character encoding. Thank god for that….

Today all relevant systems have moved to the Unicode standard and standardized encodings like ISO-8859-x, UTF-8, UTF-16… But MS-Windows has done that only partially. The graphical system is using modern encodings or at least Cp1252, which is a decent approximation. But the text based system with the black window, in which CMD is running, is still using encodings from the MS-DOS times more than 30 years ago, like Cp850. This results in a break within the system, which is at least very annoying when working with cygwin or CMD windows.
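The difference can be made visible in numbers. Here is a minimal sketch, assuming a Python 3 installation (which is not part of the experiment above), that prints the byte values of the umlauts in the old DOS code page Cp850, in Cp1252 and in UTF-8:

# Minimal sketch: the same characters map to different byte values in the
# DOS code page (cp850), the Windows ANSI code page (cp1252) and UTF-8.
for ch in "äöü":
    print(ch,
          "cp850:",  ch.encode("cp850").hex(),
          "cp1252:", ch.encode("cp1252").hex(),
          "utf-8:",  ch.encode("utf-8").hex())

So a file written as Cp1252 and displayed as Cp850, or the other way round, shows the umlauts as completely different characters, which is exactly what the experiment above demonstrates.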

Those who have a lot of courage can change this in the registry. Just change the entries for OEMCP and OEMHAL in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\CodePage simultaneously. One of them is for input, the other one for output, so if you change only one, you will even get inconsistencies within the window… Sleep well with these nightmares. 🙂
Research on the internet has revealed that some have tried to change to UTF-8 (Cp65001) and got a system that could not even boot as a result. Try it with a copy of a virtual system without too much risk, if you like… I have not verified this, so maybe it is just a bad rumor meant to damage a great company that has brought us this interesting zoo of encodings within the same system. But anyway, try it at your own risk.
Maybe something like chcp and chhal can work as well. I have not tried that either…
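For those who only want to look at these settings without changing anything, the relevant registry values can be read, for example with a small Python script on the Windows machine (a sketch, assuming a local Python installation; ACP is the code page used by the graphical subsystem, OEMCP the one used by the CMD window):

# Read-only peek at the code page settings; this does not modify the registry.
import winreg

key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                     r"SYSTEM\CurrentControlSet\Control\Nls\CodePage")
for name in ("ACP", "OEMCP"):
    value, _ = winreg.QueryValueEx(key, name)
    print(name, "=", value)   # typically ACP = 1252 and OEMCP = 850 in western Europe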

It is up to you if you consider this whole issue a bug or a feature.


PDF and PDF/A formats

The PDF format has experienced a success story on its way from being a quasi-proprietary format that could only be dealt with using Adobe tools to a format that is specified and standardized and can be dealt with using open source tools and tools from different vendors. It has become accepted that PDF is primarily a print format and that HTML is the better choice for web content. This was not clear 15 years ago, when people coming from print layout, who considered themselves trivially capable of adding web to their portfolio, wanted to build whole web pages using PDF instead of HTML.

Now the format did change over time and there are always PDF files that use specific features that do not work in certain PDF viewers.

But there are requirements for maintaining documents over a long period of time. Just consider long term contracts that have a duration of 50-100 years. The associated documents usually need to be retained for that duration plus ten years. The issue of storing data for such a long time and being able to read it physically is a challenge in itself, but assuming that this issue is addressed and files can still be read in 110 years, the file format should still be readable as well.

Now companies disappear. A lot of them in 100 years, probably even big ones like Adobe, Apple, Microsoft, Oracle and others. We do not know which companies will disappear, only that it is very likely that some companies that are big now will disappear. Proprietary software may be passed on to another vendor when a company shuts down, to pay the salaries of the former employees for a few more days. But it might eventually disappear as well. Open source software has a better chance of being available in 100 years, but that cannot be absolutely guaranteed either, unless special attention is given to that software over such a long time. And if software is not maintained, it is highly unlikely that it will be able to run on the platforms that are common in 100 years.

So it is good to create a stable and simplified standard for long term archiving. Software for accessing that can be written from scratch based on that specification. And it is more likely to remain available, if continuous need for it can be seen.

The idea is a format called PDF/A, where A stands for „archive“, which is an option for storing PDF files over a very long period of time. Many cool features of PDF have been removed from PDF/A, which makes it more robust and easier to handle. It is also important not to rely on additional data sources, for example for passwords of encrypted PDF files or for fonts. Encryption with password protection is a bad thing, because it is quite likely that the password is gone in 100 years. Fonts need to be included, because finding them in 100 years might not be trivial. This usually means that proprietary fonts have to be avoided, unless the licensing allows inclusion of the fonts into the PDF file and unlimited reading. Including JavaScript, video, audio or forms is also a bad idea. Video should be archived separately, and it has the same issues as PDF for long term archiving.


Random-Access and UTF-8


It is a nice thing to be able to use random access files and to have the possibility to efficiently move to any byte position for reading or writing.

This is even true for text files that have a fixed number of bytes per character, for example exactly one, exactly two or exactly four bytes per character. Maybe we should prefer the term „code point“ here.

Now the new standard for text files is UTF-8. In many languages each character is usually just one byte. But when moving to an arbitrary byte position this may be in the middle of a byte sequence comprising just one character (code point) and not at the beginning of a character. How should that be handled without reading the whole file up to that position?

It is not that bad, because UTF-8 is self-synchronizing. It can be seen whether a particular byte is the first byte of a byte sequence or a continuation byte: first bytes start with 0 or 11, continuation bytes start with 10. So by moving forward or backward a little bit, that can be handled, and going to a rough position is quite possible, if the average number of bytes per character for that language is known.
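As a small illustration, here is a sketch in Python (not part of any particular library) of how to step back from an arbitrary byte position to the first byte of the code point it belongs to:

# Sketch of UTF-8 self-synchronization: starting from an arbitrary byte offset,
# step backwards over continuation bytes (pattern 10xxxxxx) until a byte is
# found that starts a code point (0xxxxxxx or 11xxxxxx).
def start_of_code_point(data: bytes, pos: int) -> int:
    while pos > 0 and (data[pos] & 0b11000000) == 0b10000000:
        pos -= 1
    return pos

text = "xäöüx".encode("utf-8")        # b'x\xc3\xa4\xc3\xb6\xc3\xbcx'
print(start_of_code_point(text, 2))   # 1: byte 2 is the continuation byte of 'ä'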
But when we do not want to go to a rough character position or to an almost exact byte position, but to an exact character position, which is usually the requirement that we have, then things get hard.

We either need to read the file from the beginning to be sure or we need to use an indexing structure which helps starting in the middle and only completely reading a small section of the file. This can be done with extra effort as long as data is only appended to the end, but never overwritten in the middle of the file.
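Such an indexing structure could, as a rough sketch under the assumption that the whole file fits into memory, record the byte offset of every 1000th code point; an exact character position is then reached by jumping to the nearest recorded offset and walking forward over a few code points:

# Sketch: index[k] is the byte offset of code point number k*STEP.
STEP = 1000

def is_lead(byte):
    # First bytes of a code point start with 0 or 11, continuation bytes with 10.
    return (byte & 0b11000000) != 0b10000000

def build_index(data, step=STEP):
    index, chars = [], 0
    for offset, byte in enumerate(data):
        if is_lead(byte):
            if chars % step == 0:
                index.append(offset)
            chars += 1
    return index

def char_to_byte(data, index, char_pos, step=STEP):
    offset = index[char_pos // step]      # jump close to the target position
    remaining = char_pos % step
    while remaining > 0:                  # walk forward over whole code points
        offset += 1
        if is_lead(data[offset]):
            remaining -= 1
    return offset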

But dealing with UTF-8 and random access is much harder than with bytes. Indexing structures need to be maintained in memory when accessing the file or even as part of the file.


Unicode, UTF-8, UTF-16, ISO-8859-1: Why is it so difficult?


For about 20 years we have been kept busy with the change to Unicode.

Why is it so difficult?

The most important problem is that it is hard to tell how the content of a file is to be interpreted. We do have some hacks that often allow recognizing this:
The suffix is helpful for common and well defined file types, for example .jpg or .png. In other cases the content of the file is analyzed and something like the following is found in the beginning of the file:

#!/usr/bin/ruby

From this it can be deduced that the file should be executed with ruby, more precisely with the ruby implementation that is found under /usr/bin/ruby. If it should be the ruby that comes first in the path, something like

#!/usr/bin/env ruby

could be used instead. On MS-Windows this works as well when using cygwin and cygwin’s Ruby, but not with native Win32-Ruby or Win64-Ruby.

The next thing is quite annoying. Which encoding is used for the file? It can be a useful agreement to assume UTF-8 or ISO-8859-1, but as soon as one team member forgets to configure the editor appropriately, a mess can be expected, because files appear that mix UTF-8 and ISO-8859-1 or other encodings, leading to obscure errors that are often hard to find and hard to fix.

Maybe it was a mistake, when C and Unix and libc were defined and developed, to understand files just as byte sequences without any meta information about the content. On the internet, MIME headers have proved to be useful for email, web pages and some other content. They allow the recipient of the communication to know how to interpret the content. It would have been good to have such meta information for files as well, allowing files to be renamed to anything with any suffix without losing readability. But in the seventies, when Unix and C and libc were initially created, such requirements were much less obvious, and it was part of the beauty to have a very simple concept of an I/O stream universally applicable to devices, files, keyboard input and some other ways of I/O. MS-Windows has probably also been developed in C and has inherited this flaw. Efforts were made to keep MS-Windows runnable on FAT file systems, which made it hard to benefit from the NTFS feature of having multiple streams in a file, where the second stream could have been used for the meta information. So as a matter of fact, suffixes are still used, text files are analyzed for guessing the encoding, and magic bytes at the beginning of binary files are used to assume a certain type.
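The guessing based on magic bytes can be illustrated with a small sketch in Python; the signatures for PNG, JPEG and PDF are well known, anything else is simply left undecided here:

# Sketch: guess a file type from well known magic bytes at the beginning.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"%PDF": "pdf",
}

def guess_type(path):
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, name in MAGIC.items():
        if head.startswith(magic):
            return name
    return None                # unknown: fall back to the suffix or give up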

Of course some text formats like XML have ways of writing the encoding within the content. That requires iterating through several assumptions in order to read up to that encoding information, which is not as bad as it sounds, because usually only a few encodings have to be tried in order to find it out. It is a little bit annoying to deal with this when reading XML from a network connection instead of a file, which requires some smart caching mechanism.
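A simplified sketch of this iteration, covering only ASCII-compatible encodings like UTF-8 and ISO-8859-x (UTF-16 would additionally need the byte order mark check described below):

# Sketch: look for an encoding declaration in the first bytes of an XML file.
import re

def xml_encoding(path, default="utf-8"):
    with open(path, "rb") as f:
        head = f.read(200)                 # the declaration is at the very beginning
    match = re.search(br'encoding=["\']([A-Za-z0-9._-]+)["\']', head)
    return match.group(1).decode("ascii") if match else default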

This is most dangerous with UTF-8 and ISO-8859-x (x=1,2,3,…), which are easy to mix up. The lower 128 characters are the same, and for many languages the vast majority of the content consists of these characters. So it is easy to combine two files with different encodings and not recognize that until the file is already somewhat in use and has undergone several conversion attempts to „fix“ the problem. Eventually this can lead to byte sequences that are not allowed in the encoding. Since spoken languages are usually quite redundant, it usually is possible to really fix such mistakes, but it can become quite expensive for large amounts of text. For UTF-16 this is easier, because files have to start with FFFE or FEFF (two bytes in hex notation), so it is relatively reliable to tell that a file is UTF-16 with a certain endianness. There is even such a magic sequence of three bytes to mark UTF-8, but it is not known by many people, not supported by the majority of the software and not at all commonly used.
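Checking for these byte order marks is straightforward; here is a sketch (utf-8-sig is Python’s name for UTF-8 with the three byte mark EF BB BF):

# Sketch: guess the encoding from a byte order mark at the start of a file.
def encoding_from_bom(first_bytes):
    if first_bytes.startswith(b"\xef\xbb\xbf"):
        return "utf-8-sig"
    if first_bytes.startswith(b"\xff\xfe"):
        return "utf-16-le"
    if first_bytes.startswith(b"\xfe\xff"):
        return "utf-16-be"
    return None                # no BOM: guessing or configuration is needed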

In the MS-Windows world things are even more annoying, because the whole system is working with modern encodings, but these black CMD windows are still using CP-850 or CP-437, which contain the same characters as ISO-8859-1, but at different positions. So an „ä“ might be displayed as a sigma character, for example. This incompatibility within the same system does have its disadvantages. In theory there should be ways to fix this by changing some settings in the registry, but actually almost nobody has done that, and messing with the registry is not exactly risk-free.


Date Formats


It is quite common that we need to enter a date into a software or read a date displayed by a software.  Often it is paired with a time, forming a timestamp.  This might be a birthday or the deadline of an IT project nobody likes to be reminded of or whatever.  We deal with dates all the time.

It is a good idea to distinguish the internal representation of dates from the representation used for user interfaces. Unless legacy stands against it, internal String representation of dates (for example in XML) should follow ISO 8601.  Often purely binary or numerical representations are used internally and make sense.  If legacy stands against this, it is a good moment to question and hopefully discard this legacy.

ISO 8601 is this date format <year>-<month>-<day>, for example 2012-11-16 (variants: 20121116, 121116, 12-11-16). I have been using this for the last 20 years, even on paper. What I like about it is that it is immediately obvious which part of the string stands for the year, the month and the day. How do we interpret the date 03/04/05? I also like that the ordering follows established standards: for writing numbers, the most significant digit comes first, which would be the year or the most significant digit of the year. For sorting, this format is also obviously better than any other string representation. Another advantage is that this is not the legacy date format of a major country, but is kind of international and neutral, thus acceptable to everybody. Since more and more software and web pages are using this date format even in their user interface, I would assume that everybody has seen it and is able to recognize and interpret it. By the way, I prefer writing it with the „-“ around the month, so it is immediately clear to a human reader that this is a date, and it is easier to decompose; 8 digits are too much for most humans to grasp at once.
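For illustration, here is a small sketch of how ISO 8601 dates look when produced and parsed programmatically, with Python (fromisoformat needs Python 3.7 or newer; the dates are just examples):

# ISO 8601: most significant part first, fixed field widths, sortable as strings.
import datetime

d = datetime.date(2012, 11, 16)
print(d.isoformat())                               # 2012-11-16
print(datetime.date.fromisoformat("2012-11-16"))   # parses back to the same date
print("2012-11-16" < "2012-12-01")                 # True: string order matches date order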

For the user interface, good applications should recognize user preferences. Most of the time a language setting is around, and applications should follow the conventions of this language setting. Now the official date format for the German language in Switzerland, Austria and Germany is ISO 8601, i.e. 2012-11-10 for today, but people are still using the legacy format 10.11.2012 more often, and even most software is still clinging to the legacy formats. I would really let the user choose between the legacy format and the standard format, rather than imposing the legacy.

In principle, current operating systems provide mechanisms to set preferences, and applications should follow them. So it should be sufficient to set the preference of a certain date format together with the language settings once and retain this as long as the user account is valid. But these mechanisms are somehow flawed.

  • it can be hard to change the default date format
  • is it a setting of the user account, of the specific computer or of the combination of the two?
  • Most software of today ignores these settings anyway, at least partly.
  • It is quite common that some software uses the localized date format internally and only works if the user has chosen this format in his or her account settings. This one really hurts, but I have seen it.

I recommend testing software with really distant language settings, such as Arabic, Chinese or Uzbek (if that is not your native language), to see if it works with any settings, and also whether it is possible to use data produced with Chinese settings when running with Arabic settings, for example. This might help to eliminate such dependencies, not only those concerning the date format. I consider this useful even if the software is meant to be used only by local users who are supposed to have the same native language. Is that true, no foreigner around who runs his computer with his native language settings? No future plans to use the software abroad, that come up several years later?

For entering dates, it has become a best practice to provide a calendar where the right year and month can be chosen and the day of the month can be clicked. Everybody has used that with good (web) applications already. But it is useful to provide a shortcut by allowing the user to enter the date as a string. Several string formats can be recognized, and nothing is wrong with recognizing the local legacy date format, but ISO 8601 should always be recognized. A good example of this is the Linux date command, where you can provide a date in ISO 8601 formats.

In any case the date needs to be translated internally into a canonical format, to ensure that 2012-11-16 and 16.11.2012 are the same.
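A sketch of such a canonicalization in Python, which accepts ISO 8601 first and a legacy format as a fallback (the list of formats is just an example and would follow the user’s locale in a real application):

# Sketch: try ISO 8601 first, then legacy formats; return one canonical date object.
import datetime

FORMATS = ("%Y-%m-%d", "%Y%m%d", "%d.%m.%Y")

def parse_date(text):
    for fmt in FORMATS:
        try:
            return datetime.datetime.strptime(text, fmt).date()
        except ValueError:
            pass
    raise ValueError("unrecognized date: " + text)

print(parse_date("2012-11-16") == parse_date("16.11.2012"))   # True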

A suggestion I would make for the Posix, Unix and GNU tools: the „date“ program has been around in Unix and Linux (and probably MacOS X) systems for decades. Unfortunately the default is to use an obscure US format for output, ignoring locale settings. I am afraid that it is too late to change this, because too much software relies on this (mis-)behavior. But at least a companion like idate (for „international date“) could be provided that has the same functionality and the same options, but uses an ISO 8601 format as the default for its output. fgrep, grep and egrep are examples of providing slightly different default behaviors of basically the same program by providing three names or three variants for it. So I would like to have this:

$ idate
2012-11-16T17:33:12

Maybe I will suggest this to the developers of coreutils, which contains the GNU variant of date commonly used in the Linux world. ls is more friendly, because it recognizes an environment variable TIME_STYLE.

For now I suggest the following for .bashrc (even in cygwin, if you use MS-Windows):

export TIME_STYLE=long-iso
alias idate='date "+%F %T"'

Or if you are using tcsh the following for your .tcshrc:

setenv TIME_STYLE long-iso
alias idate 'date "+%F %T"'

A useful link to this topic: The Best of Dates, The Worst Of Dates
 
