Devoxx 2017

I was lucky to get a chance to visit Devoxx in Antwerp for the sixth time in a row. As always there were interesting talks to listen to. Some themes were visible across several talks:

Java 9 is now out and Java 8 will soon go through the first steps of deprecation. The move to Java 9 is probably the hardest in the history of Java. There were features in the past that brought very significant changes, but they were usually more or less optional, so adoption could be avoided or delayed. Java 9 brings the module system and with it a new level of abstraction: classes of a module can be made visible to other modules selectively or globally, or they can be declared public and still only be visible within their own module. This also applies to classes of the standard library that were always meant to be internal, but that could never be effectively hidden from external usage. Now they suddenly do not work any more, and a lot of software runs into difficulties and needs to be adjusted to avoid these internal classes. Beyond that a lot of talks were about Java 9 in general, for example covering the new convenience methods for writing constant collections in code. Future releases will follow a path that is somewhat similar to that of Perl 5: releases will be created roughly every half year and will include whatever is ready for inclusion at that time, and some releases will be supported for a longer time than others.

In the arena of non-Java JVM languages the big winner seems to be Kotlin, while Groovy, Clojure, JRuby and Ceylon were not visible at the conference. Scala retained its position as an important JVM language besides Java at this conference. The rise of Kotlin may be explained by the fact that IntelliJ IDEA has become much more important as an IDE than Eclipse and NetBeans, which already brings Kotlin onto every JVM-language developer’s desktop. Also, Google has moved from Eclipse to IDEA as the recommended and supported IDE for Android development and is now officially supporting Kotlin besides Java as a language for Android development. There were heroic efforts to do Android development in Scala, Clojure or Groovy without support from Google, which is quite possible, but having to deploy the libraries with each app instead of having them already on the phone is a big disadvantage. The second largest mobile OS has added support for Swift as an alternative to Objective-C. Swift and Kotlin are different languages, but they are sufficiently similar in terms of concepts and possibilities to ease the development of apps targeting the two most important mobile platforms in mixed teams, at least a bit. And Kotlin gives developers many of the cool and interesting features of Scala, while remaining a bit easier to learn and to understand, because some of the most difficult parts of Scala are left out. Anyway, Scala is not yet heavily challenged by Kotlin and remains important, and I think that Clojure, JRuby and Groovy retain their importance and live in somewhat different niches than Scala and Kotlin. I would think that they are just a bit too small to be present at every Devoxx. Or it was just a random effect of how much news there was about the languages and what kind of talks had been proposed for them. On the other hand, I would assume that Ceylon has become a dead end, because it came out at the same time as Kotlin and tries to cover the same niche. It is hard to stay important in the same niche as a strong winner.

Then there was of course security, security, security… Even more important than in the past.

And a lot more…

I listened to the following talks:
Wednesday, 2017-11-08

Thursday, 2017-11-09

Friday, 2017-11-10

Links:

Previous years:

Btw. I always came to Devoxx by bicycle, at least part of the way…


ScalaUA

About a month ago I visited the conference ScalaUA in Kyiv.

This was the schedule.

It was a great conference and I really enjoyed everything, including the food, which is quite unusual for an IT conference… 🙂

I listened to the following talks:
First day:

  • Kappa Architecture, Juantomás García Molina
  • 50 shades of Scala Compiler, Krzysztof Romanowski
  • Functional programming techniques in real world microservices, András Papp
  • Scala Refactoring: The Good the Bad and the Ugly, Matthias Langer
  • ScalaMeta and the Future of Scala, Alexander Nemish
  • ScalaMeta semantics API, Eugene Burmako

I gave these talks:

  • Some thoughts about immutability, exemplified by sorting large amounts of data
  • Lightning talk: Rounding

Day 2:

  • Mastering Optics in Scala with Monocle, Shimi Bandiel
  • Demystifying type-class derivation in Shapeless, Yurii Ostapchuk
  • Reactive Programming in the Browser with Scala.js and Rx, Luka Jacobowitz
  • Don’t call me frontend framework! A quick ride on Akka.Js, Andrea Peruffo
  • Flawors of streaming, Ruslan Shevchenko
  • Rewriting Engine for Process Algebra, Anatolii Kmetiuk

Find recordings of all the talks here:
https://www.scalaua.com/speakers-speeches-at-scalaua2017/


Slick

Almost every serious programming language has to deal with database access, if not out of love, then at least out of practical necessity.

The theoretical background of a functional programming language is somewhat hostile to this, because pure functional languages tend to dislike state, and the exact purpose of a database is to preserve state for a long time. Treating the database as something external and describing its access with monads will at least please theoretical purists to some extent.

In the case of Scala things are not so strict; it is allowed to leave the purely functional path where adequate. But there are reasons to follow it and to leave it only in well defined, well known, well understood and well contained exceptions… So the database itself may be acceptable.

Another intellectual and practical challenge is the fact that modern Scala architectures like to be reactive and asynchronous and to decouple things. This is possible with databases, but it is very unusual for database drivers to support it. And it is actually quite contrary to the philosophy of database transactions. It can be addressed with future database drivers, but I do not expect this to be easy to use properly in conjunction with transactions.

It is worthwhile to think about immutability and databases.
For now we can assume that the database is there and can be used.

So the question is, why not integrate existing and established persistence frameworks like JPA, Hibernate, EclipseLink and other OR-mapping-based systems into Scala, with a Scala language binding?

Actually I am quite happy they did not go down this path. Hibernate is broken and buggy, but beyond that, the generic concept behind it is broken in my opinion. I do not think that it is theoretically correct: are we supposed to program with waterproof transactions and then actually defeat them through JPA caching without even being aware of it? And for practical purposes, I actually doubt that taming JPA for non-trivial applications is really easier than good old JDBC.

Anyway, Scala took the approach of building something along the lines of JDBC, but making it nicer. It is called Slick, currently Slick 3. Some links:
* Slick
* Slick 3.1.1
* Documentation
* Github
* Asynchronous DB access
* Recording of the Slick Workshop at ScalaX

What Slick basically does is build queries using the Scala language and pack the results into Scala structures. Importantly, the DB is not hidden. So we do the same as with JDBC, but in a more advanced way.
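To give an idea of how this looks, here is a minimal sketch, assuming Slick 3.1 with an in-memory H2 database; the USERS table, its columns and the inserted row are made up purely for illustration:

import slick.driver.H2Driver.api._
import scala.concurrent.Await
import scala.concurrent.duration._

// table definition for a hypothetical USERS table, mapped to tuples
class Users(tag: Tag) extends Table[(Int, String)](tag, "USERS") {
  def id   = column[Int]("ID", O.PrimaryKey)
  def name = column[String]("NAME")
  def *    = (id, name)
}

object SlickSketch extends App {
  val users = TableQuery[Users]
  val db = Database.forURL("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1", driver = "org.h2.Driver")
  // the query is written in Scala and translated to SQL by Slick
  val namesStartingWithA = users.filter(_.name.startsWith("A")).map(_.name)
  val actions = users.schema.create andThen
    (users += ((1, "Anna"))) andThen
    namesStartingWithA.result
  // db.run returns a Future; for this small sketch we simply wait for the result
  println(Await.result(db.run(actions), 10.seconds))
}

So the query is an ordinary Scala value that can be composed and reused before it is ever sent to the database.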

This article is inspired by my visit to ScalaX.


How to create ISO Date String

It is an increasingly common task to need a date, or a date with time, as a String.

There are two reasonable ways to do this:
* We may want the date formatted in the user's locale, whatever that is.
* We want to use a generic date format, for a broader audience or for usage in data exchange formats, log files etc.

The first issue is interesting, because it is not always trivial to teach the software to get the right locale and to use it properly… The mechanisms are there and they are often used correctly, but quite often this only works fine for the locale that the software developers were asked to support.

So now the question is how we get today's ISO date in different environments.

Linux/Unix-Shell (bash, tcsh, …)

date "+%F"

TeX/LaTeX


\def\dayiso{\ifcase\day \or
01\or 02\or 03\or 04\or 05\or 06\or 07\or 08\or 09\or 10\or% 1..10
11\or 12\or 13\or 14\or 15\or 16\or 17\or 18\or 19\or 20\or% 11..20
21\or 22\or 23\or 24\or 25\or 26\or 27\or 28\or 29\or 30\or% 21..30
31\fi}
\def\monthiso{\ifcase\month \or
01\or 02\or 03\or 04\or 05\or 06\or 07\or 08\or 09\or 10\or 11\or 12\fi}
\def\dateiso{\def\today{\number\year-\monthiso-\dayiso}}
\def\todayiso{\number\year-\monthiso-\dayiso}

This can go into a file isodate.sty, which can then be included by \include or \input. Then using \todayiso in your TeX document will use the current date. To be more precise, it is the date when TeX or LaTeX is called to process the file. This is what I use for my paper letters.

LaTeX

(From Fritz Zaucker, see his comment below):

\usepackage{isodate} % load package
\isodate % switch to ISO format
\today % print date according to current format

Oracle


SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD') FROM DUAL;

This function is documented on Oracle Docs.
It can be chosen as a default for the whole session using ALTER SESSION, or it can be configured in SQL Developer. Then it is OK to just call

SELECT SYSDATE FROM DUAL;

Btw. Oracle allows adding numbers to dates; the numbers are interpreted as days. Use fractions of a day to add hours or minutes.

PostgreSQL

(From Fritz Zaucker, see his comment):

select current_date;
—> 2016-01-08


select now();
—> 2016-01-08 14:37:55.701079+01

Emacs

In Emacs I like to have the current date available immediately:

;; month-name to two-digit-number mapping needed by the function below
(defvar cmode-month-alist
  '(("Jan" . "01") ("Feb" . "02") ("Mar" . "03") ("Apr" . "04")
    ("May" . "05") ("Jun" . "06") ("Jul" . "07") ("Aug" . "08")
    ("Sep" . "09") ("Oct" . "10") ("Nov" . "11") ("Dec" . "12")))

(defun insert-current-date ()
  "Insert the current date in ISO format."
  (interactive)
  (insert
   (let ((x (current-time-string)))
     (concat (substring x 20 24)
             "-"
             (cdr (assoc (substring x 4 7) cmode-month-alist))
             "-"
             (let ((y (substring x 8 9)))
               (if (string= y " ") "0" y))
             (substring x 9 10)))))
(global-set-key [S-f5] 'insert-current-date)

Pressing Shift-F5 will put the current date into the cursor position, mostly as if it had been typed.

Emacs (better Variant)

(From Thomas, see his comment below):

(defun insert-current-date ()
"Insert current date."
(interactive)
(insert (format-time-string "%Y-%m-%d")))

Perl

In the Perl programming language we can use a command line call

perl -e 'use POSIX qw/strftime/;print strftime("%F", localtime()), "\n"'

or, to use it in larger programs,

use POSIX qw/strftime/;
my $isodate_of_today = strftime("%F", localtime());

I am not sure if this works on MS-Windows as well, but Linux, Unix and MacOS X users should see this working.

If someone has tried it on Windows, I will be interested to hear about it…
Maybe I will try it out myself…

Perl 5 (second suggestion)

(From Fritz Zaucker, see his comment below):

perl -e 'use DateTime; use 5.10.0; say DateTime->now->strftime("%F");'

Perl 6

(From Fritz Zaucker, see his comment below):

say Date.today;

or

Date.today.say;

Ruby

This is even more elegant than Perl:

ruby -e 'puts Time.new.strftime("%F")'

will do it on the command line.
Or if you like to use it in your Ruby program, just use

d = Time.new
s = d.strftime("%F")

Btw. like in Oracle SQL it is possible to add numbers to this. In the case of Ruby, you are adding seconds.

It is slightly confusing that Ruby has two different types, Date and Time. Not quite as confusing as Java, but still…
Time is ok for this purpose.

C on Linux / Posix / Unix


#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv) {

char s[12];
time_t seconds_since_1970 = time(NULL);
struct tm local;
struct tm gmt;
localtime_r(&seconds_since_1970, &local);
gmtime_r(&seconds_since_1970, &gmt);
size_t l1 = strftime(s, 11, "%Y-%m-%d", &local);
printf("local:\t%s\n", s);
size_t l2 = strftime(s, 11, "%Y-%m-%d", &gmt);
printf("gmt:\t%s\n", s);
exit(0);
}

This speaks for itself…
But if you like to know: time() gets the seconds since 1970 as some kind of integer.
localtime_r or gmtime_r convert it into a structure that has seconds, minutes etc. as separate fields.
strftime formats it. Depending on your C library it is also possible to use %F.

Scala


import java.util.Date
import java.text.SimpleDateFormat
...
val s : String = new SimpleDateFormat("yyyy-MM-dd").format(new Date())

This uses the ugly Java-7-libraries. We want to go to Java 8 or use Joda time and a wrapper for Scala.
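For comparison, here is a minimal sketch of the java.time way, which is available since Java 8 and works from Scala without any wrapper:

import java.time.LocalDate
import java.time.format.DateTimeFormatter

// DateTimeFormatter is immutable and thread safe, unlike SimpleDateFormat
val s: String = LocalDate.now.format(DateTimeFormatter.ISO_LOCAL_DATE)
// LocalDate.toString already yields the ISO-8601 form, so this works as well:
val t: String = LocalDate.now.toString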

Java 7


import java.util.Date
import java.text.SimpleDateFormat

...
String s = new SimpleDateFormat("yyyy-MM-dd").format(new Date());

Please observe that SimpleDateFormat is not thread safe. So do one of the following:
* initialize it each time with new
* make sure you run only single threaded, forever
* use EJB and have the format as instance variable in a stateless session bean
* protect it with synchronized
* protect it with locks
* make it a thread local variable (see the sketch below)
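
As a minimal sketch of the thread-local option, shown here in Scala for brevity (the Java version is analogous):

import java.text.SimpleDateFormat
import java.util.Date

object IsoDate {
  // each thread gets its own SimpleDateFormat instance, so no synchronization is needed
  private val isoFormat = new ThreadLocal[SimpleDateFormat] {
    override def initialValue(): SimpleDateFormat = new SimpleDateFormat("yyyy-MM-dd")
  }
  def today(): String = isoFormat.get().format(new Date())
}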

In Java 8, or in Java 7 with Joda time, this is better. And the toString() method should yield ISO 8601 by default, but of course including the time part.

Summary

This is quite easy to achieve in many environments.
I could provide more, but maybe I leave this to you in the comments section.
What could be interesting:
* better ways than the ones that I have provided
* other databases
* other editors (vim, sublime, eclipse, idea,…)
* Office packages (Libreoffice and MS-Office)
* C#
* F#
* Clojure
* C on MS-Windows
* Perl and Ruby on MS-Windows
* Java 8
* Scala using better libraries than the Java-7-library for this
* Java using better libraries than the Java-7-library for this
* C++
* PHP
* Python
* Cobol
* JavaScript
* …
If you provide a reasonable solution I will make it part of the article with a reference…
See also Date Formats


Conversion of ASCII-graphics to PNG or JPG

Images are usually some obscure binary files. Their most common formats, PNG, SVG, JPEG and GIF, are well documented and supported by many software tools. Libraries and APIs exist for accessing these formats, and there is also fantastic free interactive software like Gimp. The compression rate that can reasonably be achieved when using these formats is awesome, especially when picking the right format and the right settings. Tons of good examples can be found of how to manipulate these image formats in C, Java, Scala, F#, Ruby, Perl or any other popular language, often by using language bindings for Image Magick.

There is another approach worth exploring. You can use a tool called convert to just convert an image from PNG, JPG or GIF to XPM. The other direction is also possible. Now XPM is a text format, which basically represents the image in ASCII graphics. It is by the way also valid C code, so it can be included directly in C programs and used from there, when an image needs to be hard coded into a program. It is not generally recommended to use this format, because it is terribly inefficient, as it uses no compression at all, but as an intermediate format for exploring additional ways of manipulating images it is of interest.
An interesting option is to create the XPM file using ERB in Ruby and then convert it to PNG or JPG.


ScalaX 2014

Deutsch

I have been visiting the conference Scala eXchange ( #scalaX ) organized by Skillsmatter in London.

Here is some information about the talks that I have attended and the highlights:

The Binary Compatibility Challenge

Martin Odersky

Examples can be constructed for each of the four combinations of binary and source code compatibility.
We wish for more of both. The talk is mostly about binary compatibility.
For Scala the conflict is innovation vs. compatibility. Java chose one extreme, retaining compatibility at any cost, which is considered more of a constraint than a restriction by the Java developers. It is easier for them because they control the JVM, which is in turn close to the Java language. Clojure, Ruby, JRuby, Perl, Python, PHP and some others have fewer problems with this issue because the software is distributed as source code and compiled when run, just in time.
Reproducible builds are hard to achieve in Scala; just think of the many build tools like ivy, gradle, maven, sbt, ant, make (yes, I have seen it),…
The idea is to split the around 30 steps of Scala compilation into two groups. The first group could yield an intermediate format after around 10 internal compilation steps, which might be stored as a tree of the program in a clever binary format. This could be a good compromise and address some of the issues, if kept stable. Programs are more likely to be combinable compatibly with this format than with binary or source only. It would also save time during compilation, which is a desirable improvement for Scala.

REST on Akka: Connect to the world

Mathias Doenitz

Akka-http is the spray 2.0, or the successor of spray. It follows the lines of spray, but improves on some of the shortcomings.
It should be used for reactive streams in Akka and is important enough to be part of core Akka.
TCP flow control can be used to implement "back pressure".

Bootstrapping the Scala.js Ecosystem

Haoyi Li

Scala shall be compiled to JavaScript as a second target besides the JVM. The target is the browser, not so much server-side JavaScript, where the JVM is available and better.
Advantage for applications: some code can be used on both sides, for example HTML tag generation, validation etc. This is more elegant than using two languages. Also Scala might be considered a more sane language for the browser than JavaScript, although JavaScript is not such a bad language; it just suffers, like PHP and VBA, from being used by non-developers who come from the web design side and try a little JavaScript as part of their work, tripping into each of the pitfalls that we developers already hit some time ago when we had our first experiences.
Libraries prove to be hard. They are huge and it is hard to transfer them. Optimization is needed to deal with this, as for Android development.
Reflection is not available in Scala.js. Many things do not work because of that, but enough things do work to make it useful.
Serialization is another challenge, because many frameworks rely on reflection, but there seems to be a solution for that.
Integer types are a little bit crappy. JS only has double, which can represent 53-bit integers. Long has to be built from two doubles.

Introduction to Lambda Calculus

Maciek Makowski

A very theoretical talk. Lambda calculus is pure math, or even more theoretical, pure theoretical computer science, but it can be made a complete programming language with some thinking. It can be used for dealing with issues like computability. Many nice proofs are possible. The theoretical essence of functional programming languages is there. Some keywords: "Church-Rosser theorem", "programming with lambda calculus", "numbers as lambda expressions" (Church encoding), "Y combinator", "fixed point combinator", "lambda cube", "fourth dimension for subtypes", …
Very small language, great for proofs, not relevant or applicable for practical purposes.

State of the Typelevel

Lars Hupel

Typelevel is inspired by Haskell. Libraries from them are scalaz, shapeless, spire, scalaz-stream, monocle and more.
We should strive to get correct programs and optimize where the need arises.
The JVM integers are not good. Think of the silent overflow. Floats (float and double) are good for numerical mathematicians and scientists with knowledge in this area, who can deal with error propagation and numerical inaccuracy.
Of course numbers can be seen as functions, like this:
$x = f(\cdot)$ with $\bigwedge_{n \in \Bbb N, n>0} \frac{f(n)-1}{n} < x < \frac{f(n)+1}{n}$.
Equality of real numbers cannot be decided in finite time.
What is a "costate comonad coalgebra"?
Monocle provides "lenses" and similar stuff known from Haskell…
Good binary serialization formats are rare in the JVM world.
How should the fear of scalaz and monads be overcome? Remember: "A monad is a monoid in the category of endofunctors. So what is the problem?", as could be read on Bodil Stokke’s T-shirt.

Slick: Bringing Scala’s Powerful Features to Your Database Access

Rebecca Grenier

Slick is a library that generates and executes SQL queries. The conceptual weaknesses of JPA and Hibernate are avoided.
It has drivers for the five major DBs (PostgreSQL, MySQL/MariaDB, Oracle, MS SQL Server and DB2) and some minor DBs, but it is not free for the three commercial DBs.
Inner and outer joins are possible and can be written in a decent way.
With database dictionaries Slick is now even able to generate code, which I have, btw., done a lot myself using Perl scripts running on the DDL SQL script. But this is better, of course…

Upcoming in Slick 2.2

Jan Christopher Vogt

Monads haunt us everywhere, even here. Time to learn what they are. I will be giving a talk in the Ruby-on-Rails user group in Zürich, which will force me to learn it.
Here they come: monadic sessions….
Sessions in conjunction with futures are the best guarantee for all kinds of nightmares, because the SQL is sometimes executed when the session is no longer valid or the transaction is already over. When dealing with transactional databases a lot of cool programming patterns become harder. Just think of the cool Java guys who execute stuff by letting an EJB method return an instance of an inner class with the DB session implicitly included there, and calling a method which via JPA indirectly and implicitly does DB access long after the EJB method is over. Have fun when debugging this stuff. But we know about it and address it here.
At least Slick is theoretically correct, unlike JPA, which I conjecture to be theoretically incorrect, apart from the shortcomings of the concrete implementations.
Several statements can be combined with andThen or a for-comprehension. Be careful: by default this uses separate sessions and transactions, with very funny effects. But threads and sessions are expensive and by default should not be held while no SQL is running. Reactiveness is so important. Futures and thread pools look promising, but this fails miserably when blocking operations are involved, which occupy our threads for nothing.
We would like to have asynchronous SQL access, which can be done at the DB and OS level, but JDBC cannot do it. So we have to work around this on top of JDBC. Apart from using a reasonably low number of additional threads this approach seems to be viable.
Statically type checked SQL becomes possible in the future.

No more Regular Expressions

Phil Wills

I love the regexes of Perl. Really. So why make the effort to give up something so cool, even in non-Perl languages?
It is not as bad as it sounds. We retain regular expressions as a concept, just do not call them that (for marketing reasons, I assume) and write them differently. Writing them as strings between // is very natural in Perl, but it breaks the flow of the language in Scala. A typical programmatic Scala-like approach is more natural and more type safe. And more robust in case of errors. org.parboiled2 can be used. Capture is unfortunately positional, unlike in newer Perl regexes, where captures can be named. But it hurts less here.

Scala eXchange – Q&A Panel

Jon Pretty, Kingsley Davies, Lars Hupel, Martin Odersky, and Miles Sabin

Interesting discussions…

Why Scala is Taking Over the Big Data World

Dean Wampler

'"Hadoop" is the "EJB" of our time.'
MapReduce is conceptually already functional programming. So why use Java and not Scala?
Some keywords: "Scalding", "Storm", "Summingbird", "Spark".
Scala can be more performant than Python, which is so popular in this area, but migration has to be done carefully.

Case classes a la carte with shapeless, now!

Miles Sabin

Example: a tree structure with backlinks. Hard to do with strict immutability. Shapeless helps.

Reactive Programming with Algebra

André Van Delft

Algebra can be fun and useful for programming. Algebraic interpretations were introduced.
The page is subscript-lang.org.
Algebra of communicating processes: it is very powerful and can even be applied to other targets, for example the operation of railroad systems.
Every program that deals with input is in its way a parser, so ideas from yacc and bison apply to it.

High Performance Linear Algebra in Scala

Sam Halliday

Linear algebra has been addressed extremely well already, so the wheel should not be reinvented.
TL;D, Netflix and Breeze.
Example for usage of that stuff: Kalman Filter.
netlib has a reference implementation in Fortran, a well defined interface and a reliable set of automatic tests. How do we take this into the Scala world?
Fortran with a C wrapper for JNI (cblas).
Compile Fortran to Java. Really.
Alternate implementations of the same test suite in C.
High performance is not about speed and memory alone, but about both under the given requirements concerning correctness, precision and stability.
Hardware is very interesting. The CPU manufacturers are talking with the netlib team.
Can the GPU be used? Of course, but some difficulties are involved, for example the transfer of data.
FPGA? Maybe soon? Or something like a GPU, without graphics and operating on the normal RAM?
We will see such stuff working in the future.

An invitation to functional programming

Rúnar Bjarnason

Referential transparency, something like compatibility with the memoize pattern.
Pure functions…
Parallelization…
Comprehensiveness… the all-time promise; will it be kept this time?
Functional programming is not good for the trenches of real-life projects, but for getting out of the trenches. This should be our dream. Make love not war, get rid of trenches…

Building a Secure Distributed Social Web using Scala & Scala-JS

Henry Story

SPARQL is like SQL for the "semantic web".
Developed with support from Oracle.
We can have relativity of truth while retaining the absolute truth. The talk was quite philosophical, in a good way.
Graphs can be isomorphic, but have a different level of trust, depending on where the copy lies.
The linked data protocol becomes useful.
How are spam and abuse avoided? WebID?
We are not dealing with "Big Data" but with massively parallel and distributed "small data".

TableDiff – a library for showing the difference between the data in 2 tables

Sue Carter

What is the right semantics for a diff?
What do we want to see? How are numbers and strings compared when occurring in fields?
Leave out timestamps that obviously differ but do not carry much information.

Evolving Identifiers and Total Maps

Patrick Premont

The idea is to have a map whose get always returns something non-null. Smart type concepts prevent the compilation of a get call that would not return something.
A very interesting idea, but I find it more useful as a theoretical concept than for practical purposes. The overhead seems to be big.

Summary

Overall it was a great conference.


Visit to Scala Days in Berlin 2014

Deutsch

Around mid-June 2014 I visited the Scala Days in Berlin. As usual such an event contains a lot of talks, which were distributed over four tracks, apart from the keynotes. The event location was a cinema, like the Devoxx in Antwerp, but this time one that had been transformed into something else many years ago; good projectors were available nevertheless. A major topic was compiler construction, which is a hard task, but looked at from the right functional perspective it can be derived from the simple task of writing an interpreter. This helps with understanding language constructs in Scala, but the idea was applied to many other areas as well, for example for compiling and optimizing SQL queries and for analyzing source code.

Another major topic was "streams", which can be useful for web services. Unlike traditional web services, which usually receive the whole request before starting to process it, concepts were discussed for dealing with large requests whose size makes it painful or impossible to keep them in memory at once, or which are even unlimited in size. These concepts can also be applied to websockets. This demands processing the data as soon as useful parts have arrived.

Another minor, but very interesting topic was the development of Android apps with Scala. The commonly known approach is of course Scaloid, but an alternative, Macroid, was presented. It looked quite promising, because it allows writing nice Android apps with less code. A major worry is that Scala apps consume too much memory due to their additional libraries. Because Scala uses its own libraries on top of the usual preinstalled Java libraries, and these are about 5 MB in size, this can easily annihilate the attractiveness of Scala for modern smartphone development, unless we assume rooted devices which have the Scala libs preinstalled. But that would seriously limit the reach of the app. This is not as bad as it sounds, because the build process contains a step in which unnecessary classes are removed, so that we only install what is really needed. When going as crazy as running Akka on the cell phone it becomes quite a challenge to configure this step, because Akka uses a lot of reflection, so all these reflective entry points need to be configured, leaving a wide door open for bugs that occur at runtime when some class is not found.

API design was another interesting talk. Many ideas were quite similar to what I had heard in an API design training for the Perl programming language by Damian Conway a couple of years ago, but of course there are many interesting Scala-specific aspects to the topic. It is surprisingly hard to obtain binary compatibility of classes, and this even forces ugly compromises. So always recompiling everything looks tempting, but is not always reasonable. So ugly compromises remain part of our world, even when we are working with Scala.


RISC and CISC

Twenty years ago there was a strong trend towards building microprocessors with RISC technology (reduced instruction set computer). Every larger manufacturer wanted to build something like that: Sun with Sparc, Silicon Graphics with MIPS, HP with PA-RISC, IBM very early on with the RS6000, which was later rebranded as PowerPC when the idea came up to market it, DEC with Alpha and so on. At a time when programming in assembly language was at least still considered conceivable (even if it was hardly done any more), this really hurt. And software was still very CPU-dependent, because platform-independent languages, which of course existed long before Java, were simply too slow for many purposes due to the overhead of interpretation. So people made do with C programs containing veritable orgies of ifdefs, which with some luck could be compiled for the particular platform consisting of a CPU and a UNIX derivative. A change of a manufacturer's CPU architecture was a big thing back then, for example Sun's move from the Motorola 680x0 to Sparc. And assembly programming of the RISC CPUs was a nightmare that even experienced assembly programmers hardly dared to approach. Fortunately very good optimizing compilers for C and Fortran already existed back then, so the topic could be left to a very small group of people developing small parts of the operating system kernel and small high-performance parts of large libraries.

RISC was actually supposed to make it possible to get more computing power out of the same amount of silicon, and thus to save money and perhaps even power overall. Above all, people wanted to finally be able to buy the right machine for the really cool server application, which unfortunately was always a bit too resource-hungry for real hardware. RISC was the right way, because it gives the compiler an optimization step that maps everything optimally onto a machine language which itself is optimized for running fast. I would not know what is wrong with that, even if this theory temporarily did not prevail. Intel could throw so much money at the problem that they were still faster despite the less favorable CISC architecture. Of course RISC was then used internally and translated somehow transparently at runtime instead of at compile time, as would actually be better. The fact is, however, that the CISC CPUs of Intel, AMD and co. were ultimately superior to the RISC CPUs, and so they largely prevailed.

In the meantime, the CPU dependence has softened a lot. Today assembly is hardly needed any more. At least at the OS level the platforms have converged among the Unix variants, so that C programs can be compiled everywhere more easily than 20 years ago, with cygwin often even under MS-Windows, if one cares about that. Application development today indeed happens largely with programming languages like Java, C#, Scala, F#, Ruby, Perl, Python and the like, which are platform-independent, F# and C# even by means of Mono. And a change of CPU architecture is no longer a big thing for a hardware manufacturer, as could be seen when one manufacturer switched from PowerPC to the Intel architecture. With Linux it is even possible to have the same operating system on an incredible variety of hardware. The large majority of the fastest supercomputers, the vast majority of newly sold smartphones and all kinds of CPU architectures run with the same operating system kernel, compiled for the respective hardware. Even Microsoft slowly seems to be capable of supporting different CPU architectures at the same time, something that for a long time seemed to lie beyond the abilities of this company, as could be seen with NT for Alpha, which could not be kept up without large financial contributions from DEC.

But now, at a time when the CPU architecture actually should not matter any more, everything seems to bet on Intel, fortunately not only on Intel itself but also on some competing vendors of the same architecture such as AMD. A closer look, however, shows that RISC is not dead, but is quietly celebrating its triumph in the form of the ARM CPU. Mobile phones frequently have ARM CPUs, which for the reasons mentioned above interests almost nobody today, because the apps run on them anyway. Tablet computers, netbooks and laptops can also increasingly be seen with ARM CPUs. The advantage of the RISC architecture manifests itself there not in higher computing power, but in lower power consumption.

Is the time ripe, and will CISC really be displaced by RISC within the next ten years?

Or will RISC stay strong in the niche of portable, power-saving devices, while CISC remains dominant on servers and powerful workstations? We will see. I think that sooner or later the advantage of the RISC architecture will gain relevance on powerful servers and workstations as well, because the possibilities of increasing CPU performance by putting more electronic elements per unit of chip area and per chip will hit physical limits. Combining the best possible hardware architecture with good compilers then seems to be a promising approach.

The less technically interested users will hardly be affected by this development, though, because just like mobile phones with Android, workstations with whatever CPU architecture will work and run the applications that are installed on them. Whether that is achieved in a platform-independent way, whether compiled programs come in a binary format that supports several CPUs by simply containing several variants, or whether the installer installs the right one, does not interest most users as long as it works.
