Many web pages are still created for desktop devices and optimized for a certain screen format. Often additional effort is spent on adding some mobile capability on top of that.
This neglects the fact that viewing web pages on a mobile device is no longer an edge case but quite common practice. Some pages do not work on mobile phones at all; you simply have to give up trying to view them. Others are just unpleasant to use. Vertical scrolling is generally accepted: we are used to it and it is in line with our reading style. But having to scroll horizontally for each line is just too annoying, and we tend to give up quickly unless the content is really very interesting.
What needs to be observed?
- The font has to be large enough to be readable. Ideally the font size can be changed, at least between a number of predefined choices.
- Horizontal scrolling is probably OK for some large tables, images or maps, but never for multiline text.
- We need the whole width of the tiny display for reading. Navigation bars on the left or right side need to disappear or move to the top or bottom while we are reading the main content.
- It is bad to have buttons and links too close to each other, because it is harder to hit them with a finger than with a mouse; we simply cannot see any feedback on what is happening underneath our finger. (A small CSS sketch for this follows the list.)
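As a minimal CSS sketch for the last point, something like the following could help; the selector names and the concrete sizes are just my assumptions for illustration, not an official recommendation:

/* give links and buttons a finger-sized touch target */
a.nav-link, button {
  display: inline-block;
  min-height: 2.75em;    /* roughly the size of a fingertip */
  padding: 0.5em 1em;
  margin: 0.25em;        /* keep neighbouring targets apart */
}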
There are probably more points like these.
Of course having more screen real estate is always better and it is a good idea to make use of it.
But we all know that some web pages work extremely well even on cell phones. And some do not at all.
Today the web page is built up in the browser: it gets transmitted from the server as HTML, and the browser settings provide font sizes, formatting preferences, etc. That is how typical web pages from the '90s worked. They all looked the same, and they all worked reasonably well. Forget about the pages that messed around with blinking elements and other useless toys of the early web.
Anyway, something like a broken web evolved from that. Web designers wanted to impose their design on a web that was not ready for it. They started to heavily use crappy tricks like nested tables, transparent 1×1 images, images that contained text, frames (really bad!!) and formatting information within HTML, like font tags. We had those famous web pages that were "best viewed with browser xyz version uvw". The HTML source was totally unreadable and could, in the best case, only be processed with tools like FrontPage, Dreamweaver, etc. With the wrong browser the pages appeared empty or totally messed up. JavaScript added even more possibilities to mess around and to become even more browser-specific.
It was great, of course, if you were a web designer and could charge for different variants of the same web page just to support different browsers. But this was exactly what the web was not meant to be. I think the basic ideas of the inventors of the web were actually very sound and deserve an evolutionary enhancement.
A good step was the introduction of CSS. It put formatting on a cleaner basis, because formatting information could now be kept in CSS and separated from the content. Of course CSS and HTML need to be compatible with each other, but the HTML could be kept readable and editable, even with a common text editor, while the formatting was maintained separately in the CSS. I am aware of CSS successors like SASS and SCSS; from a more abstract point of view they follow the same idea.
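As a minimal illustration of this separation (the file names and class names are my own, purely for this example): the HTML carries only structure and content, while all formatting lives in the stylesheet.

article.html:
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8"/>
    <title>Example article</title>
    <!-- all formatting is kept in the separate stylesheet -->
    <link rel="stylesheet" href="article.css"/>
  </head>
  <body>
    <article class="post">
      <h1>Readable HTML</h1>
      <p>The markup stays editable with a plain text editor.</p>
    </article>
  </body>
</html>

article.css:
/* formatting only, no content */
.post    { font-family: sans-serif; line-height: 1.5; }
.post h1 { font-size: 1.6em; }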
Another change came up because web pages are more and more often generated dynamically on demand. I think we spend the vast majority of our time on such dynamic web pages: Google, Wikipedia, YouTube, Facebook, online shops, schedule information, map services, e-banking… you name it. Most of what we do happens on dynamically generated web pages. Even this blog article is part of a dynamic web page, generated on demand for you by WordPress, based on the contents that I have provided. I think that too many web pages are dynamically generated these days that should actually be static, but that is another discussion. Actually, even the early days of the web knew CGI for creating dynamic web pages, but it was an exceptional case, used only where it was really necessary.
Another class of web applications uses JavaScript heavily (for example with Angular JS) and is a revival of the classical client-server architecture. Some see this as the successor of all server-generated web pages. I actually think that both approaches should coexist. Some things that we are doing now would not be possible without rich JS-based clients: think of Google Docs, modern wiki editors, modern web mail clients, chats, Twitter, Facebook, Google+ and many more. They all use something like this. But there are also a lot of advantages in applications based on server-generated HTML with very little JavaScript. This could be covered in a future article…
The interesting question is how we can support mobile devices in a reasonable way.
In the late '90s we had the solution: WAP. You just had to write the pages in WML instead of HTML. That was optimized for mobile devices in many ways: the pages needed only very small amounts of data to be transferred over the wireless networks, which were very slow in those days. They could be viewed on really tiny displays; in those days it was cool to have the tiniest cell phone in the team. And navigation was possible with the few simple buttons of the phone. Decent touch screens were not yet available to the mass market. So it was an ideal solution for the devices that were possible in those days.
Unfortunately it was quite uncommon to set up the same web page a second time in order to offer a WAP variant, and even less common to keep that variant up to date. Some did, but it was only a small fraction of the web. Today server-generated web pages could do that more easily: WordPress, MediaWiki or Google could provide their content in WAP format as well. But in those days static web pages were more common, and dynamic web pages were programmed very specifically for a certain output format, usually HTML. The HTML code was usually hard-coded in the program.
The salvation came from the super smart phones that Nokia and Ericsson provided. They could simply display "normal" web pages. Suddenly cell phone users were no longer locked into the stagnating WAP universe and could access everything. And web pages could drop the ugly second WAP variant, if they ever had one. And yes, I assume that some WAP pages are still around today, even if almost nobody cares.
The same web pages now worked on mobile devices, but not always well, for the reasons mentioned at the beginning.
How can web pages be provided universally?
1. The WAP approach can be revived by creating a separate variant of the web page in HTML that is optimized for mobile devices. We actually find this quite often, with two variants like http://www.xyz.com/ and http://m.xyz.com/. It is possible to maintain these two variants by hand, but laziness is actually a virtue in IT. Here it leads to writing the web pages once, in whatever source format, and generating the www variant and the m variant automatically from that same source. That can be a script that runs after each change and generates two sets of static pages, or software that generates the requested variant dynamically, just in time for each request. As long as this avoids having to maintain two or more variants in parallel, it is acceptable. Maintaining the two variants manually should be a no-go.
2. Another approach is to have static HTML pages (or dynamically generated HTML that does not take the output device into account), but CSS offered in two or more variants. I find this more elegant than the first approach and I am confident that it will cause fewer problems in the long term. It is certainly the more appropriate approach according to the HTML philosophy. It can be done by encoding the different variants in one CSS file or by generating the CSS file dynamically for the different output devices. Combining static HTML pages with CSS generated by Rails, CGI or a servlet may be a little too exotic for the real world, but if encoding the different variants in the same CSS really does not work out, why not.
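As a minimal sketch of the first option, two variants can be encoded in a single CSS file with media queries; the class names and the 600px breakpoint are just assumptions for illustration:

/* default: narrow screens, navigation stacked above the content */
.navigation { width: 100%; }
.main       { width: 100%; }

/* wider screens: navigation becomes a sidebar next to the content */
@media (min-width: 600px) {
  .navigation { float: left;  width: 25%; }
  .main       { float: right; width: 75%; }
}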
3. Even more radical is the idea of responsive design. In the ideal case just one HTML and one CSS file are enough for each page. They are written in such a way that the page works well with a wide range of display sizes and adapts itself to the size at hand. I find that more beautiful than the second approach, because the variety of devices is large and still growing, and it cannot really be covered by a limited number of fixed setups, which will become inaccurate or simply wrong at some point.
Some simple elements of responsive design are already useful by themselves (a combined sketch follows the list):
- <meta name="viewport" content="width=device-width, initial-scale=1"/> in the head section of the page
- ideally no absolute sizes in CSS
- minimum sizes via min-width and min-height are possible, but should only be used when really needed.
- for large images max-width: 100%; height: auto; in CSS
- the width and height attributes need to be removed from the img tags of large images, even though we once learned the opposite in order to optimize rendering speed.
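Put together, a minimal page skeleton using these elements could look like this; the image name, the 50em text width and the style rules are only illustrative assumptions:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8"/>
    <!-- let mobile browsers use the real device width instead of a zoomed-out desktop layout -->
    <meta name="viewport" content="width=device-width, initial-scale=1"/>
    <style>
      /* relative sizes only, no absolute pixel widths for the text */
      body { font-size: 100%; margin: 0 auto; max-width: 50em; padding: 0 1em; }
      /* large images shrink with the display instead of forcing horizontal scrolling */
      img  { max-width: 100%; height: auto; }
    </style>
  </head>
  <body>
    <h1>Responsive example</h1>
    <!-- note: no width and height attributes on the img tag -->
    <img src="large-picture.jpg" alt="a large picture"/>
  </body>
</html>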
There is a lot more to it. Doing it really well, or transforming an existing page to responsive design, is a big effort.
When using a CMS like Joomla, Drupal, TYPO3, WordPress or MediaWiki, these issues are largely abstracted away. It is interesting to check whether the generated pages already work well on mobile devices or whether some work still needs to be done. I might look into these issues and write about them in the future.
Just to avoid questions: I am in the process of transforming my own pages to responsive design, but I am far from finished.