Bring your own Device

This issue is quite controversial and it applies to laptops, tablets and smartphones.
Usually the „bringing“ itself is not really an issue: you can have anything in your bags and connect it via the mobile phone network, as long as it does not eat into the working time.
But usually this implies a bit more.
There are some advantages in having company email and calendar on a smartphone. This is convenient and useful. But there are some security concerns that should be taken seriously. How are the calendar and the emails accessed? How confidential are the emails? Do they pass through servers that we do not trust? What happens if a phone gets lost?
This is an area where security concerns are often not taken too seriously, because it is cool for top managers to have such devices. And they can just override any worries and concerns, if they like.
This can be compensated by being more restrictive in other areas. 😉
Anyway, the questions should be answered. In addition, personal preferences for a certain type of phone are very strong. The phone provided by the company might not be the one that the employee prefers, so there is a big desire to use one's own phone or one similar to it. That in turn depends on who pays the bills, how much private telephony is allowed on the company phone and whether there are work-related calls at unsociable hours.
Generally the desirable path is to accept this and to find ways to make this secure.

The other issue is about the computer we work with. For some kinds of jobs it is clear that the computer of the company is used, for example when selling railroad tickets, working in the post office or serving customers in a bank.

It turns out that more creative and more IT-oriented people like to have more control over the computer they work with.
We like to have hardware that is powerful enough to do the job. We like to be able to install software that helps us do our job. We like to use the OS and the software that we are skilled with. Sometimes it already helps to be able to install this on the company computer or in a virtual machine on the company computer. Does the company allow this? It should, with some reasonable guidelines.

Some companies allow their employees to use their own laptops instead. They might give some money to pay for this and expect a certain level of equipment in return. Or they just allow the employees to buy a laptop with their own money and use it instead of the company computer. Employees will do so and happily spend the money, even though it is wrong and the company should pay for it. But for many people the pain of spending some of their own money is less than the pain of having to use crappy company equipment.

This raises the question of the network drive Q:, Outlook, MS-Word, MS-Excel, …
Actually this is not so much of an issue, at least for the group we are talking about here. Or it is becoming less of one.
Drive Q: can be accessed quite well from Linux, if the company policies allow it. But modern working patterns do not really need this any more.
We can use a Wiki like MediaWiki or Confluence for documentation. This is actually a bit better in many cases and I see a trend in this direction, at least for IT-oriented teams.
Office formats and email are more and more accessible through web applications that can be used on Linux, for example. And MS-Office is already available for Linux, at least for Android, which is a Linux variant. It might or might not come to desktop Linux. LibreOffice is most of the time a useful replacement. Maybe better, maybe almost as good, depending on the perspective… And there is always the possibility of a virtual machine running MS-Windows for the absolutely mandatory MS-Windows programs, if they actually exist. Such an image could be provided and maintained by the company instead of a company computer.

It is better to let the people work. To allow them to use useful tools. To pay them for bringing their own laptop or to allow them to install what they want on the company laptop. I have seen people who quit their job because of issues like this. The whole expensive MS-Windows-oriented universe that has been built in companies for a lot of money proves to be obsolete in some areas. A Wiki, a source code repository, … these things can be accessed over the internet using ssh or https. They can be hosted by third parties, if we trust the third party. Or they can be hosted by the company itself. Some companies work with distributed teams…

It is of course important to figure out a good security policy that allows working with „own“ devices and still provides a sufficient level of security. Maybe we just have to get used to other ways of working and learn how to solve the problems that they bring us. At the end of the day we will see which companies are more successful. It depends on many factors, but the ability to provide an innovative and powerful IT environment and to have good people working there and actually getting stuff done is often an important one.


tmp-directories

On all computers we have some concept of a tmp-directory. Typically it is /tmp on Linux and Unix systems and something like C:/TEMP plus some subdirectory in each user's home directory on MS-Windows.

In terms of software development this tends to be a bit of a dark area. Programs like to create some files there, store some stuff and then maybe remove it, maybe not. And we do not know for sure when we can delete these files, and we actually do not want to care. Linux and Unix systems sometimes clear their tmp-directories on reboot, while providing an additional /var/tmp-directory that survives reboots. Sometimes the tmp-directory is backed by shared memory (like tmpfs on Linux), so it is a kind of RAM-disk, whose contents may be paged out to the swap partition (or swap file) of our OS. Now this cleanup on reboot does not help too much when we want to keep our system running for a long time.

These days most computers are somehow dedicated. Either they are virtual machines that run exactly one server application or a set of closely related server applications. Or it is a mobile phone, tablet or desktop computer that is typically used by only one person. But still we should not forget that the system should allow being used by several applications and by several users. So sharing the same tmp-directory between everyone can cause some conflicts. The Unix and Linux family has a way of setting file permissions for the tmp-directory itself (the sticky bit) and for its entries that stops users from reading, changing or deleting each other's files, but still there is some contention for the namespace of this one directory, which each piece of software usually bypasses quite elegantly by using smart naming or by having the OS create unique names. I would still not consider it ideal. On the other hand, sometimes we might actually want to use the tmp-directory to share something between users or between processes, and then this one shared tmp-directory might come in handy.
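Most languages let the standard library or the OS generate such unique names in the shared tmp-directory. A minimal sketch in Java; the prefix, suffix and content are just examples:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TmpFileDemo {
    public static void main(String[] args) throws IOException {
        // The library picks a unique name in the default tmp-directory
        // (system property java.io.tmpdir, typically /tmp on Linux).
        // On POSIX systems the file is typically readable and writable
        // only by the owner.
        Path scratch = Files.createTempFile("report-", ".tmp");
        System.out.println("created " + scratch);

        // Use it and clean up explicitly instead of relying on a reboot
        // or a cleanup daemon.
        Files.writeString(scratch, "intermediate data\n");
        Files.delete(scratch);
    }
}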

The approach of having a separate tmp-directory in each home directory and in a subdirectory of each server application's installation is tempting, because it separates namespaces, makes it possible to keep others from reading the directory entries and does not mix totally unrelated stuff in one directory. There is a drawback to this. We usually have different storage technologies. Some are optimized for reading, maybe even avoiding redundancy, because the system can be reinstalled. Some see sporadic writing, some are strictly read-only. And some see a lot of reading and writing. Some data is transient, some can easily be restored and some data needs to be stored redundantly to be safe. Depending on that, we should aim to put it on flash disks or on a different RAID setup of hard disks. This is getting harder with virtualization, but eventually we can get to the point where virtual machines have disks of different characteristics that are mapped to the appropriate hardware.

So there is no real good answer to this question, but I think that a tmp-directory that is separate from the home directory, but specific to each user, would be the best approach. Will this change? Probably not so easily. But maybe in some distant future.
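Where a platform already provides something in this direction, a program can honour it today. A small sketch in Java, assuming we prefer XDG_RUNTIME_DIR (which many Linux systems provide as a per-user directory like /run/user/1000) and fall back to the shared default otherwise:

import java.nio.file.Path;
import java.nio.file.Paths;

public class TmpDirChooser {

    // Prefer the per-user runtime directory if the environment provides one,
    // otherwise fall back to the shared default tmp-directory.
    static Path tmpDir() {
        String xdgRuntimeDir = System.getenv("XDG_RUNTIME_DIR");
        if (xdgRuntimeDir != null && !xdgRuntimeDir.isEmpty()) {
            return Paths.get(xdgRuntimeDir);
        }
        return Paths.get(System.getProperty("java.io.tmpdir"));
    }

    public static void main(String[] args) {
        System.out.println("using tmp-directory " + tmpDir());
    }
}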


Powerful API Functions or Specific API Functions?

When designing APIs we should confront ourselves with the question of what they should look like, what they should contain and what not. This is mostly not a question of development effort, but of creating a good API that can be used well and saves us development effort elsewhere.

There are always simple answers, but in the end we should balance certain partially contradicting desires to create something great.

One aspect will be discussed here. Some of us know functions in the libc of certain systems that we use to program in C. Favorite candidates are ioctl and fcntl. These functions include a wide range of functionality and actually do quite different things depending on the parameters. Primarily there is one parameter that selects the function. And then depending on this parameter there are several additional parameters, whose meaning totally depends on the first parameter.

I truly admire the libc and the POSIX-API, because of what they can do, how it is accomplished and how clever the concepts are. But putting loosely related stuff into one catch-all function and using a parameter to select which function to actually execute is just wrong, and it was wrong even in the days when it was created. Now there is possibly some argument in favor of this design, because these functions are system calls, which are special because they go directly into the OS kernel. Depending on the implementation of the OS there might be limits on the total number of system calls that it can support, and it might be hard to change the interface between OS and libc too often, so a flexible system call comes in handy. In the concrete example it is impossible to change this directly, because the POSIX-API has been standardized, and this is one of the few standards that has remained relatively stable for 25 years and still offers great functionality. Linux, which closely follows this standard, is by far the most widespread operating system today, especially on servers, mobile devices (Android) and devices that we perceive as just hardware, like network routers, firewalls, … It is too valuable that programs written for the POSIX-API, and of course using only the defined functionality, run on newer Linuxes.

But there is a lesson to learn for our own APIs. We should avoid putting too many different things into one API-function. I do not think that many of us will try to write a universal API-function like ioctl, but more subtle examples are quite common.
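To make the contrast concrete, here is a crude Java sketch of the two styles; the interface and its operations are purely made up for illustration and not taken from any real library:

// Purely illustrative, not a real API.
public interface DeviceApi {

    // Catch-all style: one method, a selector parameter and untyped arguments.
    // The compiler cannot check the arguments, and the documentation has to
    // describe every operation in one place.
    Object control(int operation, Object... args);

    // Specific style: one method per operation, with typed parameters.
    void setBaudRate(int baudRate);
    void flushBuffers();
    boolean isReady();
}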

A typical pattern is this:

findPerson(name, email, phone_number)

We can provide a name, a phone number or an email address, or a combination, and then search for entries that match all of the criteria that we have provided. This is still quite clear, but we could now also provide a list of phone numbers, a list of email addresses etc…

Independent of the actual preference, it should be considered that these are really seven functions. We can include or exclude each of the three parameters, but the case where all of them are null is probably not supported. Or it is the eighth case, which finds everything.

When we are talking about 1, 2, 3, or maybe 4 parameters, it is still possible to create API-functions for all the combinations, like

findPersonByName(name)
findPersonByEmail(email)
findPersonByPhoneNumber(phone_number)
findPersonByNameAndEmail(name, email)
...
findPersonByNameAndEmailAndPhoneNumber(name, email, phone_number)

This will be clearer. When writing exhaustive automatic tests, which will probably be „integration tests“, not „unit tests“, they have to be written against these seven variants anyway, no matter whether it is one function or seven. The implementation might internally use „if“s or do the equivalent at the query level with something like

SELECT * FROM PERSON P
WHERE
(:name IS NULL OR P.NAME = :name)
AND (:email IS NULL OR P.EMAIL = :email)
AND (:phone_number IS NULL OR P.PHONE_NUMBER = :phone_number);

which actually has eight paths that need to be covered by tests, including the case where all three parameters are null, if that is not blocked by the application code.
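One way to get the clearer API without writing the query logic seven times is to let the specific functions delegate to a single private implementation. Here is a sketch in Java under the assumption of the finder functions above; the class, the nested Person record and the empty result are just placeholders for illustration:

import java.util.List;

public class PersonRepository {

    // Placeholder type for the example.
    public record Person(String name, String email, String phoneNumber) {}

    public List<Person> findPersonByName(String name) {
        return find(name, null, null);
    }

    public List<Person> findPersonByEmail(String email) {
        return find(null, email, null);
    }

    public List<Person> findPersonByNameAndEmail(String name, String email) {
        return find(name, email, null);
    }

    // ... the remaining combinations follow the same pattern ...

    public List<Person> findPersonByNameAndEmailAndPhoneNumber(String name, String email, String phoneNumber) {
        return find(name, email, phoneNumber);
    }

    // One private implementation, e.g. running the SQL shown above.
    // Rejecting the all-null case here keeps the eighth path (find everything)
    // out of the public API.
    private List<Person> find(String name, String email, String phoneNumber) {
        if (name == null && email == null && phoneNumber == null) {
            throw new IllegalArgumentException("at least one criterion required");
        }
        return List.of(); // placeholder instead of a real query
    }
}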

This also shows the limits of the classical approach when the multitude of queries gets really complex. That might require a more generic approach, which is actually quite well exemplified by SQL or its embedded forms like JDBC. For typical IT projects I would recommend not going there and not developing such a generic query DSL as part of the project. This usually leads to disaster, because the skills for designing a good language or a good generic framework are usually not available in the team, and in terms of budget, quality and schedule it will usually blow up anyway. So the reasonable approaches are either to use an existing, well proven solution for the generic API or to just find out what functionalities are actually needed and to provide those.

Some examples show the opposite, like Ruby on Rails, which was developed as part of a project effort. Another example is a relatively big company that developed a framework quite similar to Spring itself, before Spring was available. But these successes cannot easily be duplicated in our projects.


WannaCry or better learn from it?

The malware WannaCry became quite well known, especially because it manifested itself on the displays of the German federal railroad and it even blocked much of the hospital infrastructure in the UK. Find some discussion on Bruce Schneier’s blog… You find a more elaborate article on his blog as well: https://www.schneier.com/blog/archives/2017/05/did_north_korea_1.html. Read Bruce’s blog article, he knows more about security than I do… 🙂

We might have observed that this attack targeted MS-Windows computers. The argument that this is just because MS-Windows computers are more common is no longer true. But the argument that the MS-Windows developers just did a lousy job does not hold either. It was true 10, 15, 20, 25 years ago. We have seen it. But today I would assume that they have improved and are doing a good job.

There is an argument for favoring open source over closed source for security reasons. If software is open source, it is much more difficult to incorporate malicious features like backdoors into it or to leave security holes open by mistake, because the source code can be analyzed and fixed by anybody who has access to the internet and the capabilities. This is no guarantee, but it is a good thing.

The other argument is more like a question. How close are US companies to US government agencies? Do they do each other little favors? We do not know.

In any case, the people who ran this malware attack are criminals and I regret that it has caused so much damage. Fortunately criminals are relatively rare, so the frequency of encountering them in daily life is usually not very high, unless we live in especially crime-infested areas. But the internet connects us with criminals all over the world and allows them to damage us. So it might be ok in a good neighborhood not to lock the door or not to lock the bicycle. It gives a good feeling to trust our neighbors. But on the internet the bad guys are there for sure and they will discover our unlocked virtual door. We can rely on that.


ScalaUA 2017

About a month ago I visited the conference ScalaUA in Kiev.

This was the schedule.

It was a great conference and I really enjoyed everything, including the food, which is quite unusual for an IT-conference… 🙂

I listened to the following talks:
First day:

  • Kappa Architecture, Juantomás García Molina
  • 50 shades of Scala Compiler, Krzysztof Romanowski
  • Functional programming techniques in real world microservices, András Papp
  • Scala Refactoring: The Good the Bad and the Ugly, Matthias Langer
  • ScalaMeta and the Future of Scala, Alexander Nemish
  • ScalaMeta semantics API, Eugene Burmako

I gave these talks:

  • Some thoughts about immutability, exemplified by sorting large amounts of data
  • Lightning talk: Rounding

Day 2:

  • Mastering Optics in Scala with Monocle, Shimi Bandiel
  • Demystifying type-class derivation in Shapeless, Yurii Ostapchuk
  • Reactive Programming in the Browser with Scala.js and Rx, Luka Jacobowitz
  • Don’t call me frontend framework! A quick ride on Akka.Js, Andrea Peruffo
  • Flawors of streaming, Ruslan Shevchenko
  • Rewriting Engine for Process Algebra, Anatolii Kmetiuk

Find recordings of all the talks here:
https://www.scalaua.com/speakers-speeches-at-scalaua2017/


Using non-ASCII-characters

Some of us still remember the times when it was recommended to avoid „special characters“ when writing on the computer. Some keyboards in Germany did not contain „Umlaut“ characters and we fell back to the ugly, but generally understandable way of replacing the German special characters like this: ä->ae, ö->oe, ü->ue, ß->sz or ß->ss. This was due to the lack of decent keyboards and decent entry methods, but also due to transmission methods that stripped the upper bit. It did happen in emails that they were „enhanced“ like this: ä->d, ö->v, ü->|,… So we had to know our way around and sometimes use ae oe ue ss. Similar issues applied to other languages like the Scandinavian languages, Spanish, French, Italian, Hungarian, Croatian, Slovenian, Slovak, Serbian, the Baltic languages, Esperanto,… in short to all languages that are regularly written with the Latin alphabet but require some additional letters to be written properly.

When we wrote on paper, the requirement to write properly was more obvious, while email and other electronic communication via the internet of those days could be seen as something like short wave radio: it worked globally, but with some degradation of quality compared to other media of the time. So for example with TeX it was possible to write the German special letters (and others in a similar way) like this: ä->\"a, ö->\"o, ü->\"u, ß->\ss, and later even like this: ä->"a, ö->"o, ü->"u, ß->"s, which some people, including myself, even used for email and other electronic communication when the proper way was not possible. I wrote Emacs-Lisp software that could put my Emacs into a mode where the Umlaut keys actually produced these combinations when typed, and I even figured out how to tweak an xterm window for that, for the sake of using IRC, where Umlaut letters did not work at all and quick online typing was the way to go; there the Umlaut characters were produced automatically, because I touch-type with ten fingers and type words, not characters.

On the other hand, TeX could be configured to process Umlaut characters properly (more or less, up to the issue of hyphenation), and I wrote macros to do this and provided them to CTAN, the repository for TeX-related software on the internet, around 1994 or so. Later a better and more generic solution became part of standard TeX and superseded this, which was a good thing. So TeX users could type ä ö ü ß and I strongly recommended (and still recommend) actually doing so. It works, it is more user friendly, and in the end the computer should adapt to the humans, not vice versa.

The web could process Umlaut characters (and everything else) from day one. The transfer was not an issue; it could handle binary data like images, so no stripping of the high bit was happening and the Umlaut characters just went through. For people who had problems finding them on the keyboard, transcriptions like this were created: ä->&auml;, ö->&ouml;, ü->&uuml;, ß->&szlig;. I used them, not knowing that they were not actually needed, but I relied on a Perl script to do the conversion, so it was possible to actually type them properly.

Now some languages like Russian, Chinese, Greek, Japanese, Georgian, Thai and Korean use a totally different alphabet, so they had to solve this earlier, but others might know better how it was done in the early days. Probably it helped develop the technology. Even harder are languages like Arabic, Hebrew and Farsi that are written right to left. It is still ugly when editing a Wikipedia page and the switching between left-to-right and right-to-left occurs correctly, but magically and unexpectedly.

While ISO-8859-x seemed to solve the issue for most European languages and ISO-8859-1 became the de-facto standard in many areas, this was eventually a dead end, because only Unicode provided a way to host all living languages, which is what we eventually wanted; even in quite closed environments, excluding some combinations of languages in the same document will at some point in time prove to be a mistake. This has its issues. The most painful one is that files and badly transmitted content do not carry clear information about their encoding. The same applies to strings in some programming languages. We need to know from the context what it is. And now UTF-8 is becoming the predominant encoding, but in many areas ISO-8859-x or the weird cp1252 still prevail, and when we get the encoding or the implicit conversions wrong, the Umlaut characters or whatever we have get messed up. I would recommend working carefully, keeping IT systems, configurations and programs well maintained and documented, and moving to UTF-8 whenever possible. Falling back to ae oe ue for content is sixties and seventies technology.
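In code, the practical consequence is to never rely on the platform default encoding but to name the charset explicitly. A small Java sketch; the file name and text are just examples, and the ASCII file name follows the advice on names below:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class Utf8Demo {
    public static void main(String[] args) throws IOException {
        // ASCII file name, Umlaut characters only in the content.
        Path file = Path.of("gruesse.txt");

        // Name the encoding explicitly instead of relying on the platform
        // default, which may still be ISO-8859-1 or cp1252.
        Files.writeString(file, "Grüße aus Zürich\n", StandardCharsets.UTF_8);
        String text = Files.readString(file, StandardCharsets.UTF_8);
        System.out.print(text);
    }
}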

Now I still do see an issue with names that are both technical and human readable, like file names, names of attachments, or variable and function names in programming languages. While these very often do allow Umlaut characters, I would still prefer the ugly transcriptions in this area, because a mismatch of encoding becomes more annoying and ugly to handle there than when it just concerns text, where we as human readers are a bit more tolerant than computers, so tolerant that we would even read the ugly replacements of the old days.

But for content, we should insist on writing it with our alphabet. And move away from the ISO-8859-x encodings to UTF-8.



Tablet Computers

The idea of tablet computers is actually quite old and it has been tried a couple of times, at least up to prototypes. Probably a certain level of hardware and software was needed to make them both useful and affordable for enough people to become a mass product. This is actually quite a common thing: some person, group or company invented something really good, but was not able to bring a sufficiently reliable, useful and affordable product to the market, or just was not able to leave the home market efficiently. Here are a few examples of this that I have observed:

  • Tilting trains have been tried in Germany, the UK, Italy, Spain, Sweden, Switzerland, Canada, France and Japan, in some countries several times. Many efforts became dead ends because the technology could not easily be built in an affordable, reliable and maintainable way, so the mechanism was disabled or the trains were put out of service way too early. Italy actually made this technology work, but some of the train sets suffered serious deficiencies in quality, reliability and maintenance. Spain did the Talgo, which is less ambitious, because it uses gravity instead of an active mechanism and provides a weaker effect. Sweden developed the X2000 trains, which seemed to work more or less well, but were quite expensive. But finally it seems that companies are able to produce good trains with this technology, like the relatively new Swiss ICN trains.
  • A British company had already produced trailer bikes for children in the 1930s. They have one wheel and are attached to a parent’s bike. These were hard to get and they were almost unknown, even though the idea is great. In the 1990s German companies started to adopt the concept and actually produce them in good quality and sell them internationally, which was of course easier than 60 years earlier. They are now a common concept.
  • In the 1970s many bicycles had three-speed hub gears. Derailleur gears already existed, but they were hard to use and fragile. For steeper roads it was possible to use a larger sprocket and be able to climb slopes, at the expense of lacking higher gears for flat sections. A British company produced a five-speed hub gear, but it was extremely difficult to get and the quality was so poor that for a more active cyclist it would be in repair almost half of the time. Today we see mature hub gears with more than ten gears, but the derailleur technology has also become mature enough for the mainstream.

So there are several requirements for success.

Another interesting aspect is that the actual usage might turn out different than anticipated. I understand that tablet computers were sold as a „better replacement“ for PCs and laptops in certain areas. I do not think that this is reasonable. Having a keyboard and a larger screen is usually better, and it makes sense to transport a small or even a larger laptop. I have often carried an external keyboard on top of the laptop, when I could afford to transport it and anticipated heavy use. The netbook was so small that it did not hurt to have it in the luggage, but it eventually became hard to expand the memory and to get a replacement. A relatively small laptop still serves the purpose when a real computer is needed but luggage is constrained.

The tablet computer does have some features that make it worth having one on top of a good phone and laptops of different sizes. I am using an Android tablet, Android being the most common OS for tablets, but there are of course some others, which I do not know well enough to write about.

It is easier to switch between keyboard types. I am using the Cyrillic keyboard a lot and the computer with which I am writing this text has two external keyboards attached. I can switch with a key sequence, but this approach has its limitations. Probably buying a laptop in Russia and just knowing the German keyboard layout, without relying on the symbols on the keys, would work for me. But the tablet makes this work with very little setup, while buying a physical Cyrillic keyboard in Switzerland is a bit harder, but still easy, and buying a laptop with a Cyrillic keyboard layout does need some effort.

When doing small stuff, mostly reading or some smaller emails, this is much better than the phone, and it can be used in the train, in the park, anywhere where it is possible to sit. A laptop requires some kind of table to be reasonably useful. There are seats with tables in the train, but getting one is a matter of luck.

Finally, we currently have a lot of Android apps. They could be written for „normal“ desktop Linux as well, or as web applications. Maybe that will happen. But currently some of them are available for Android, but not, or not in a useful way, for desktop Linux. This may change, and it heavily depends on what we are actually using. But in my case it holds, and it proved helpful to have a larger screen than on the Android phone.

Concerning the SIM card, I actually went the extra mile in terms of higher price and more effort when buying the tablet in order to get a SIM card slot. I have not used it very much, because the extra SIM card is kind of expensive, moving SIM cards between devices is inconvenient, and using the Android phone as a WiFi router seems to work well enough. But maybe this is useful when travelling a lot with SIM cards from many countries: just use all the slots in older and newer phones and tablets and use the device with the currently preferred SIM card as the WiFi router for all the others.

And finally it can be said that we can now buy fairly affordable, good tablet computers. What I am missing is that tools from desktop Linux are usually not available on Android, or only in a limited version. But the most common applications, a web browser and an email client, of course work on both…


User Friendliness

I have made an interesting observation in terms of user friendliness. Let’s call it an anti-pattern…

I had booked a flight on the internet. The ticket was a number, which they sent to me by SMS. I went to the page of the airline and tried to do the check-in, in order to get a better seat, a lower chance of being an overbooking victim, and to save some time and nerves at the airport.

Now it is necessary to enter some information each time, like passport number, date of birth, name, validity of the passport and citizenship. By mistake I entered the wrong citizenship without noticing, and then the page asked me if I have a visa.

Now it was impossible to go back to fix this, so I had to cancel the whole process and enter all the information once again. At least I thought so. It was worse, because the wrong citizenship had put the whole booking into a weird status that could not be fixed on the web application anymore. It just refused to deal with this ticket at all.

It was possible to use the app of the airline and to redo the whole thing on my tablet.

If there is information that is so important to get right, here are some suggestions:

  • Allow the user to go back to any form in the dialog and revisit the entries
  • Show the data that has been entered to make it easy to recognize the error
  • Give the user who messed up a second chance to fix it
  • The question is why I have to enter my birthday, passport number, name etc. each time. They do not change frequently. Privacy is really a good thing, but I guess in the case of traveling by air all privacy issues are just a joke. Why not offer at least some convenience?
  • Anybody who happens to get hold of the relatively short ticket number can mess around with it, which could be very annoying if, for example, he canceled the flight.

Eatable Devices

Wearable devices have become normal and it is hard to buy clothes these days that do not actually belong to this category. If you don’t know, the NSA knows.

The new trend is eatable devices. It has been quite a challenge to create chips and batteries that can be chewed and swallowed without pain and without exploding while being eaten. But the newest chips are made of bio-silicon, which is 100% eatable and risk-free to eat. Batteries have been replaced by capacitors, which proved to be the better choice.

Now this opens a huge range of applications. We can write apps that work on the cluster of eatables and support our diet, because they register exactly when, how and by whom they are eaten. Newer pricing models allow us to buy food and pay part of the price only when we actually eat it. We can see when food has expired and discard it. Of course not in the regular garbage: finally each household will get a separate garbage can for electronic devices. Remember to put your clothes in there too, even though you were not aware of the chips inside when you bought them. It will be better for the environment.

The labs of IBM and Coca Cola have jointly developed microchips that actually float in the drink and are so small that we usually do not see them. This will support the usual functions of eatable devices and will equip the drinker with so many chips inside the body that permanent tracking will become easy even without a cell phone.

Apple finally found their core business. Eatables. Apples. They almost taste like old style apples, are nicely designed and we buy them in the apple store instead of the supermarket. The first eaters waited half a week in front of the Apple stores to get the taste of the real Apple-apple as early as possible.


Do we still need Experts when everything is in the Internet

We find a lot of information about pretty much everything on the internet. We do not have to remember things because we are always online and always able to find the information we are looking for. It is true. I do it, you do it, everybody does it. Wikipedia, Google, Forums and of course specific sites…

Let me give just one example: I once met a doctor, a physician. A patient had a problem that was unclear to him, and he actually told the patient that he would google for the problem. In the end he came up with a very helpful solution for the guy, much better than what many of us would get in the same situation. I will not disclose any details.

Now why do we need an expert at all, if the expert did not know and found a more helpful answer than the other experts only by using a search engine that all of us can use?

Actually it is the combination of the expert and the online information that became so helpful. At least since the last presidential election in some country on the North American continent we have learned that media (and probably even the internet) may be unreliable and that truth is relative, if we accept the concept of truth at all.

Or, to become more tangible, there are numerous sites that promise us easy solutions, at least to questions that really many people find important, like diets, raising money easily (and legally), and a lot more; we all know it. It is quite easy to put a site online and put any information on it. It is a bit harder, but possible, to get found. The author of the site can sell something that he would never buy himself, or something that he believes, even though it is not true. I have met a person in Switzerland who seriously told me that eating is unnecessary for humans and that he or she just practiced it for the joy of eating. In the case of medical advice it is quite obvious that this might be dangerous, but in almost any area we have more or less the same problem. A government agency that enforces that only the truth is written on web pages would be a nightmare. Just think of your favorite politician being in charge of such an agency…

But for the expert it is easy to recognize which information is serious and useful. And even easier to use the right keywords for searching.

And there is more. The expert knows the situation; consciously and subconsciously he combines experience with what he sees, hears, … to solve the problem, and builds in the input from the search.

We should also keep in mind that searching is extremely efficient, but knowing 99% without searching is so much more efficient. Just think of languages. I speak a couple of them, and it is often useful to use online dictionaries or even translators. But needing them for every other word is inefficient and will actually sometimes lead to misunderstandings.

The information on the internet will become better. There will be new concepts implemented by sites for providing reliable content in certain areas. We already see Wikipedia, which is not 100% reliable, but probably about as good as a printed encyclopedia in this aspect.

Anyway, the experts will not become useless, and we will need them in the future as well.
