Devoxx UA 2020 (talks)

I watched the conference online and picked the following talks:

And on the second day:


Devoxx UA

Most conferences have been cancelled, since it is difficult to hold a conference these days. The idea of moving conferences online has obviously been around, but it was rejected by most organizers, because it is not the same and the all-important chance to meet other people is missing. So the Devoxx in Antwerp, which I like to visit every year, did not happen. But Devoxx Ukraine decided to go online.

So how did it work? There were three tracks. Each track was represented by a YouTube channel, on which the live talks were transmitted. Before and after the talks, professional moderators appeared in these channels, announced the speakers and did what moderators do at normal conferences. The Devoxx app worked on my cell phone and seemed to have the most up-to-date schedule.

Some talks were really excellent. I enjoyed them a lot, even online. For talks that were not so good, it took more discipline to stay tuned.

The discussions took place in Zoom channels that belonged to the three tracks. So discussions could technically last until the end of the next talk. The discussions in Zoom also worked quite well, which was a surprise.

I think it is probably the right reaction to have fewer conferences than usual in a year, to cancel some and to move some online, exactly as is happening. I think Devoxx UA had around 10’000 visitors, so they absorbed the audiences of several conferences. Also, some regular speakers at Devoxx conferences were not giving talks, so I assume that part of the speakers decided that the online format is not ideal for them.

I will write about the content in another blog article.


Object Creation: Builder vs. Constructor vs. Setter

When we create new objects, we are basically confronted with the need to provide at least one construction pattern.

Of course, depending on the language, there are more or less three commonly available ways to go.

Traditionally in OO it was mandatory to write setters and getters. In C++ or Java they really have names like getXyz or setXyz, but in Ruby, C# or Scala they can be written in such a way that they behave as if the attribute were public and could be assigned and read, by just magically calling the setters and getters internally. Actually Java does something like that internally for public attributes, or more generally for attribute assignment, and Hibernate can be configured to go via the getters and setters or via the internal attribute access. This can be useful to apply some DB-specific conversion in the getters and setters and to bypass it for the DB access.

Why do we use these getters and setters at all? They were introduced to provide the flexibility to change the internal implementation without changing the API, because getters and setters can actually become more complex. This can be useful for DB-specific conversions in Hibernate, but apart from that, in 25 years of OO-ish development this flexibility has hardly been used. Most of the time the set of attributes changes and the set of setters and getters changes simultaneously. So one might ask why we go the extra mile of adding getters and setters by default, when we could just make the attributes public and save some time. I am not really asking, because that would decrease my life expectancy. But moving on demand from plain accessible attributes to getters and setters would be just a refactoring, like changing the set of attributes.
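
As an illustration of that flexibility argument, here is a small hypothetical example, where the internal representation has changed but the accessors keep the old API intact:

import java.time.Instant;

public class Event {
    // internal representation changed from Instant to a plain long,
    // but the accessors keep the old API, so callers are unaffected
    private long timestampMillis;

    public Instant getTimestamp() {
        return Instant.ofEpochMilli(timestampMillis);
    }

    public void setTimestamp(Instant timestamp) {
        this.timestampMillis = timestamp.toEpochMilli();
    }
}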

Now which of these are preferred and why?

First of all, we do need to read all attributes in some way, otherwise they are just a waste of space. Let us forget for the moment about programming low-level APIs where bits have to be counted and dummy attributes have to be added to move the useful ones to the right position. But very often it turns out that we do not need to change the attributes during the lifetime of the object. Now Ruby has a nice feature of setting everything up and then calling freeze, which makes the object, but not its sub-objects, immutable. I think it would be worth considering to add something like this to Scala and Java, for example. Clojure actually has something like this, but with a slightly different flavor.

There is some advantage in knowing that the state of objects does not change. It is easier to reason about the code. It really helps for creating thread safety and reentrance. And it even helps when passing around a reference to an object that still „lives“ somewhere else, for example when a sub-object comes out of a getter. In functional programming this is mandatory for all internal APIs; in other areas it is just something to make life easier, where the mutation is not actually needed.

So the setters go away where they are not needed, and we end up constructing with a constructor that contains all attributes, or variants with reasonable default values. This makes sense when the attributes are few and there is no risk of mixing them up. The builder pattern helps by naming the attributes in languages that do not allow named parameters for the constructor out of the box. So it is useful in Java, but obsolete in many other languages. If attributes are final or constant or whatever it is called, the need for setters and getters is technically even smaller, because nothing can go wrong: the attribute can only be read. But in Java it is best practice to write getters and we should comply.

Now the issue arises that there is only a private or public multi-argument constructor and a lot of getters, or whatever is used for reading the attributes in the specific language. And then a framework needs to create objects from XML, JSON or whatever automatically. These frameworks tend to need at least a no-argument constructor, often a public no-argument constructor, that should be there only for framework use. And the attributes have to be made non-final. If the framework is smart, it bypasses getters and setters and the no-arg constructor is already enough.

Some frameworks actually require the setters. There we go back to the old-school world. We can impose a convention that the setters and the no-arg constructor are there ONLY for the purposes of the framework and should not be used otherwise. Maybe that is a good approach, it is somewhat cleaner. But the box is opened and mistakes with such a convention will happen. So the question arises why we need to deal with each attribute at least seven times: the attribute itself, its getter, its setter, the multi-arg constructor, the attribute of the builder, the with-method of the builder and the build-method.
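
To make the count concrete, here is a minimal sketch with a single attribute (class and method names are made up), showing all seven places:

public class House {
    private int a;                      // (1) the attribute

    public House() {
        // only for framework purposes
    }

    public House(int a) {               // (4) the multi-arg constructor
        this.a = a;
    }

    public int getA() {                 // (2) the getter
        return a;
    }

    public void setA(int a) {           // (3) the setter, only for frameworks
        this.a = a;
    }

    public static class Builder {
        private int a;                  // (5) the builder's attribute

        public Builder withA(int a) {   // (6) the with-method
            this.a = a;
            return this;
        }

        public House build() {          // (7) the build-method
            return new House(a);
        }
    }
}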

Some things are easier when moving to a new language and dumping all the garbage traditions in that step.

But good developers can write good software in any reasonably good language that reasonably suits the purpose.



MapStruct

In the Java sphere we often develop the same data class several times. Each layer has its own variant, and they are named almost the same, with some prefix or suffix or just the package name to distinguish them. The set of attributes is the same (or almost the same), and they have setters and getters. Or maybe only getters.

Nobody wants to write business logic two or three or four times, no matter how much support we have for copying code between the layers. And there OO is gone. We have to use anemic data objects, which Martin Fowler clearly described as an antipattern some years ago.

Since we are now switching to new paradigms every couple of years, we no longer care and no longer know. OO was 25 years ago. Now we do FP and microservices. And new frameworks. And many layers.

So, where does this come from?

First of all, the database access layer is Hibernate. I do not know why, because I think that plain JDBC would be easier, but Hibernate is already there and cannot be removed by arguments. Now Hibernate came with the promise that we can just use plain objects (POJOs) for our data and mirror database tables to classes and columns to attributes. Some XML stuff had to be written and everything worked. Only writing the XML was such a pain that people immediately jumped to the annotation alternative once it was there. It was better and still is. But now the POJOs are obviously cluttered with Hibernate or JPA annotations. So they have to stay in the database layer. Actually there is a much stronger argument for this. Objects contain other objects and collections of objects. And possibly everything, if we go deeper recursively. Accessing the database should be a reasonably fast operation, so some attributes are loaded lazily. That means they are only really loaded when we need them. Which can go terribly wrong, because the transaction is no longer around and it is too late.
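
A minimal sketch of how this looks, assuming hypothetical Invoice and InvoiceItem entities (and that InvoiceItem has an invoice attribute for the mappedBy):

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

import java.util.List;

@Entity
public class Invoice {
    @Id
    @GeneratedValue
    private Long id;

    // loaded from the DB only on first access; touching it after the
    // transaction is gone fails in Hibernate with a LazyInitializationException
    @OneToMany(mappedBy = "invoice", fetch = FetchType.LAZY)
    private List<InvoiceItem> items;

    public Long getId() {
        return id;
    }

    public List<InvoiceItem> getItems() {
        return items;
    }
}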

Also, we have our ideas about what data classes have to look like. So there are some layers where we want no-argument constructors and setters and getters, some layers where we want final attributes and constructors with all attributes and only getters, and again others where we prefer to use a builder. And yes, each layer has its rules that need to be followed.

So we keep them in the DB layer and map them to almost identical service layer objects without annotations. Then we work with these and write our business logic. Procedural programming, mostly, because the objects cannot have business logic. So we have classes with methods that kind of behave like static methods, but are non-static, because the framework wants it like that.

And then again, we build more and more layers, because each concern needs to be dealt with in its own layer. And each requires its own set of data objects, possibly with its own annotations for REST or SOAP or JSON or XML or whatever.

So, how do we move data between layers? At each layer boundary the data needs to be copied to the sister objects in the new layer. Now this is kind of stupid, programming something like

class HouseL1 {
    private final int a;
    private final String b;

    public HouseL1(HouseL2 l2) {
        this.a = l2.getA();
        this.b = l2.getB();
        ....
    }
}

or with a builder or with setters and getters, it is a lot of ugly work. And even worse, all sub-objects and collections of sub-objects have to be mapped. And their sub-objects. And we possibly have to stop somewhere.

So we would like to avoid doing all this tedious stuff.

What can we do?

Is Java really the right language? Of course it is, we are writing enterprise software and we need type safety. Not real type safety like in Scala, but a little bit of it feels good.

Reconsider our whole architecture and simplify it. Maybe it is possible to get rid of some layers and write much simpler software that does the same thing, only much faster and with fewer bugs. OK, I am only kidding. We are talking enterprise software here. And yes, sometimes the layers do have real purposes and make sense.

Try to use the same data objects in all layers anyway? It has been tried. It works, but only in very simple settings.

Generate the source code for the mapping. You can write a script for this, find one, find a tool or whatever. Parse the data classes and generate the source code for the transformation methods. Or just write the Hibernate classes and generate all other layers, with their preferred setup in terms of mutability, construction and annotations, from that.

But we are in Java, so why not use reflection and figure out at runtime how to map it? Find a library to do it for you or write your own. Performance? No problem. We use enterprise servers, of course.
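
A very stripped-down sketch of the reflection approach could look like this; it only copies flat, same-typed bean properties and ignores everything that real libraries like Dozer or Spring’s BeanUtils take care of:

import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.HashMap;
import java.util.Map;

public final class ReflectionMapper {

    // creates a target instance via its no-arg constructor and copies all
    // bean properties whose names match and that are readable on the source
    // and writable on the target
    public static <T> T map(Object source, Class<T> targetClass) {
        try {
            T target = targetClass.getDeclaredConstructor().newInstance();
            Map<String, PropertyDescriptor> targetProps = new HashMap<>();
            for (PropertyDescriptor pd
                     : Introspector.getBeanInfo(targetClass).getPropertyDescriptors()) {
                targetProps.put(pd.getName(), pd);
            }
            for (PropertyDescriptor pd
                     : Introspector.getBeanInfo(source.getClass()).getPropertyDescriptors()) {
                PropertyDescriptor targetPd = targetProps.get(pd.getName());
                if (pd.getReadMethod() != null
                        && targetPd != null && targetPd.getWriteMethod() != null) {
                    targetPd.getWriteMethod()
                            .invoke(target, pd.getReadMethod().invoke(source));
                }
            }
            return target;
        } catch (ReflectiveOperationException | IntrospectionException ex) {
            throw new RuntimeException(ex);
        }
    }
}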

So, in the end, a good possibility is to have a Java tool that generates the source code for the transformation as part of the compile process.

MapStruct does exactly that. We write an interface for our transformations. With some annotations, non-obvious mapping behavior can be specified. Keep that list small and try to make it possible to recognize the mappings automatically, where possible. Then an extension to the Maven compiler plugin is added, which invokes MapStruct to create an implementation of this interface at compile time and, of course, compiles the implementation. And voilà, it works. Even for classes with builders, as long as the builder uses method names that are identical to the corresponding attribute names, without a „with“-prefix. So deal with it and name the methods of the builder like that.
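
For the hypothetical HouseL1 and HouseL2 classes from above (assuming they are beans or have builders that MapStruct can use), such an interface could look like this; attributes with matching names are mapped automatically, and only deviations would need a @Mapping annotation:

import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

@Mapper
public interface HouseMapper {

    HouseMapper INSTANCE = Mappers.getMapper(HouseMapper.class);

    // the implementations of these methods are generated at compile time
    HouseL1 toL1(HouseL2 l2);

    HouseL2 toL2(HouseL1 l1);
}

Then HouseMapper.INSTANCE.toL1(l2) replaces the hand-written copying.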

And yes, we should get rid of setters where we do not really need them. And we should not write constructors with 20 parameters, because the parameters will get mixed up, in a language like Java that does not yet have named parameters. And we do not want to couple layers, so constructors that use the sister class from another layer are not a good idea either, if we have more than two layers or so. So there we go with a builder…

At the end of the day, we can write good software with any reasonably good language and framework. But it is worth investigating how to do certain things. And it is really worth asking why we are doing this, at times when it is possible to make such choices.


JSON instead of Java Serialization: The solution?

We are starting to recognize that Serialization is not such a good idea.

It is cool and can really work on a wide range of objects, even including complex and cyclic reference graphs. And it was essential for some older Java frameworks like EJB and RMI, which allowed remote access to Java objects and classes.

But it is no longer the future: Oracle will soon deprecate it and later remove it. And it will happen this time, even though they really keep stuff around for a long time due to compatibility requirements.

Just to recap: it opens up security issues, it introduces hidden behavior and makes it harder to reason about code, it creates tight coupling between remote components and it can result in bugs that only occur at runtime and cannot be discovered at compile time. In short, it is not resilient.

So we need something else. Obvious candidates are XML, YAML and JSON. XML is of course an option and is powerful enough to do many things, but it is often a bit too clumsy and too much boilerplate, so we try to move away from it. YAML and JSON kind of do the same thing, but it seems that JSON is winning the race, and we all need to know JSON while many of us tend to skip YAML.

So why not use JSON? It is easy, it has good libraries and we can even find databases that work with JSON.

What JSON can express very well are scalars, lists and maps and combinations of these. This is quite exactly what we have in Perl, JavaScript or Clojure as basic building blocks. These languages support object oriented programming, but for simple stuff we go with these basic building blocks. And objects can be modelled as (hash-)maps, with the attribute names as keys. Actually JSON is valid JavaScript code.

We do have to change our thinking when moving from Java Serialization to JSON. JSON does not store any serializable object but just data. Maybe that is enough and that is what we actually want. It totally works in heterogeneous environments, where we are using different programming languages or different implementations.

There are good libraries. I have tried two, Jackson and GSON, which both work well, recently mostly Jackson. It is important to think in terms of Clojure, JavaScript, Perl or something like that, without objects. So we lose type information, which can be considered good or bad, but if we can arrange ourselves with it, we avoid the tight coupling. JavaBeans are expressed exactly the same way as a HashMap with the attribute names as keys. We can provide the top-level class when deserializing, but at the child levels the library will not be able to figure that out if it relies on runtime information.

Example Code

Here it has been tried out. Find the full example code on GitHub.

A class that contains all kinds of stuff. Not prepared for really putting in nulls, but it is just experimental code…


package net.itsky.jackson;

import java.util.List;
import java.util.Map;
import java.util.Set;

public class TestObject {
    private Long l;
    private String s;
    private Boolean b;
    private Set set;
    private List list;
    private Map map;

    public TestObject(Long l, String s, Boolean b, Set set, List list, Map map) {
        this.l = l;
        this.s = s;
        this.b = b;
        this.set = set;
        this.list = list;
        this.map = map;
    }

    public TestObject() {
        // only for framework purposes
    }

    public Long getL() {
        return l;
    }

    public String getS() {
        return s;
    }

    public Boolean getB() {
        return b;
    }

    public Set getSet() {
        return set;
    }

    public List getList() {
        return list;
    }

    public Map getMap() {
        return map;
    }

    @Override
    public String toString() {
        return getClass().getSimpleName() + "("
                + "l=" + l + " (" + l.getClass() + ") "
                + " s=\"" + s + "\" (" + s.getClass() + ") "
                + " b=" + b + " (" + b.getClass() + ") "
                + " set=" + set + " (" + set.getClass() + ") "
                + " list=" + list + " (" + list.getClass() + ") "
                + " map=" + map + " (" + map.getClass() + "))";
    }
}

And this class is used for running everything. To play around more, it should probably be moved to tests…

package net.itsky.jackson;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectWriter;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;

import java.io.StringReader;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class App {

    public static void main(String[] args) {
        try {
            Set s1 = ImmutableSet.of(1, 2, 3);
            Set s2 = ImmutableSet.of(1, 2, 3);
            Map m1 = ImmutableMap.of("A", "abc", "B", 3L, "C", s1);
            List l1 = ImmutableList.of("i", "e", "a", "o", "u");
            TestObject t1 = new TestObject(30303L, "uv", true, s2, l1, m1);
            Map m2 = ImmutableMap.of("r", "r101", "s", 202, "t", t1);
            List l2 = ImmutableList.of("ä", "ö", "ü", "å", "ø");
            Set s3 = ImmutableSet.of("x", "y", "z");
            TestObject t2 = new TestObject(40404L, "ijk", false, s3, l2, m2);
            ObjectMapper mapper = new ObjectMapper();
            ObjectWriter writer = mapper.writerWithDefaultPrettyPrinter();
            System.out.println("t2=" + t2);
            String json = writer.writeValueAsString(t2);
            System.out.println("json=" + json);
            StringReader stringReader = new StringReader(json);
            TestObject t3 = mapper.readValue(stringReader, TestObject.class);
            System.out.println("t3=" + t3);
        } catch (Exception ex) {
            RuntimeException rex;
            if (ex instanceof RuntimeException) {
                rex = (RuntimeException) ex;
            } else {
                rex = new RuntimeException(ex);
            }
            throw rex;
        }
    }
}

And here is the output:

t2=TestObject(l=40404 (class java.lang.Long)  s="ijk" (class java.lang.String)
  b=false (class java.lang.Boolean)  
set=[x, y, z] (class com.google.common.collect.RegularImmutableSet)
  list=[ä, ö, ü, å, ø] (class com.google.common.collect.RegularImmutableList)
  map={r=r101, s=202, 
t=TestObject(l=30303 (class java.lang.Long)
  s="uv" (class java.lang.String)  b=true (class java.lang.Boolean)
  set=[1, 2, 3] (class com.google.common.collect.RegularImmutableSet)
  list=[i, e, a, o, u] (class com.google.common.collect.RegularImmutableList)
  map={A=abc, B=3, C=[1, 2, 3]} (class com.google.common.collect.RegularImmutableMap))}
 (class com.google.common.collect.RegularImmutableMap))
json={
  "l" : 40404,
  "s" : "ijk",
  "b" : false,
  "set" : [ "x", "y", "z" ],
  "list" : [ "ä", "ö", "ü", "å", "ø" ],
  "map" : {
    "r" : "r101",
    "s" : 202,
    "t" : {
      "l" : 30303,
      "s" : "uv",
      "b" : true,
      "set" : [ 1, 2, 3 ],
      "list" : [ "i", "e", "a", "o", "u" ],
      "map" : {
        "A" : "abc",
        "B" : 3,
        "C" : [ 1, 2, 3 ]
      }
    }
  }
}
t3=TestObject(l=40404 (class java.lang.Long)  s="ijk" (class java.lang.String)  
b=false (class java.lang.Boolean)  
set=[x, y, z] (class java.util.HashSet)  
list=[ä, ö, ü, å, ø] (class java.util.ArrayList)  
map={r=r101, s=202, 
t={l=30303, s=uv, b=true, 
set=[1, 2, 3], 
list=[i, e, a, o, u], 
map={A=abc, B=3, C=[1, 2, 3]}}}
  (class java.util.LinkedHashMap))

Process finished with exit code 0

So the immediate object and its immediate attributes were deserialized properly to what we provided. But everything inside went to maps, lists and scalars.

The intermediate JSON does not carry the type information at all, so this is the best that can be done. Often that is exactly what we want. If not, we need to find something else or see if we can tweak JSON to carry type information.
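
Jackson, for example, can embed type information via its @JsonTypeInfo annotation; a hypothetical sketch, not used in the example above:

import com.fasterxml.jackson.annotation.JsonTypeInfo;

public class TypedHolder {
    // Jackson adds an "@class" property for this field to the JSON and
    // uses it when deserializing, so the concrete runtime type is
    // reconstructed; note that this brings back some of the tight
    // coupling and security concerns of Java Serialization
    @JsonTypeInfo(use = JsonTypeInfo.Id.CLASS)
    public Object payload;
}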

It will be interesting to explore other serialization protocols…



Perl and Scala: what can they learn from each other?

Ironically, Scala first drew my interest because I discovered, about ten years ago, that there was no really good understanding of how to do a good multithreading concept for Perl 6. I thought exploring how they do it in Scala, which was already known to be good at that at the time, would give a more general understanding of the issue. At that time Perl 6 (now named „Raku“) was intended to rather go without multithreading capabilities than do them badly. In the end I got dragged into Scala and found it by itself more interesting than the original issue. And the Perl 6 community eventually found good answers for providing multithreading capabilities anyway.

So while there are technical concepts in both of these languages that are interesting and could possibly be applied to the other one in some way, there is also an interesting parallel.

Both Scala and Perl have been „cool languages“ that were really strong in one area or even in a broader range of application areas. Both of them found a competitor that was kind of an „inferior clone“ of them. PHP in its early versions was very similar to Perl, but „simplified“ and kind of a subset of what Perl provided. At that time Perl had a real boom, because the first web applications came up and the only reasonable way to go was CGI, and of course it was done with Perl. There were some early alternatives like Cold Fusion and ASP, but they never really became mainstream, at least not outside of their respective communities. Now PHP eventually took over most of Perl’s CGI territory and has become a major building block of our current WWW. Wikipedia and this blog run on PHP. Perl eventually also lost its leading position as a system administration scripting language to Ruby and even more to Python and some others, but it is still there and has strong string parsing capabilities and a very useful ecosystem of libraries called CPAN.

Now Scala has found in Kotlin a similar competitor. Besides being somewhat simpler, Kotlin also shines with good tooling support. It comes from the same organization as IntelliJ IDEA, which is the usual IDE for most JVM languages, for people who rely neither on Emacs nor on vi. So Kotlin support in IntelliJ is always going to be a high priority. And Kotlin is officially supported by Google as a programming language for Android apps. It seems to work well, allows for more modern development than the supported Java versions and has conceptually a lot of similarity with Swift, which is the most modern programming language supported by Apple for iOS apps. There have been heroic and admirable approaches to allow Android app development using other JVM languages, especially Scala. But they all suffer from the same set of problems. In order to avoid installing too much language-specific code in the app, dynamic language features that would require a compile capability, as commonly used in Groovy or Clojure, have to be avoided. And the excessive use of the language’s libraries has to be avoided, because they are not on the phone already; a copy of them has to be shipped with each app. So the storage usage is much higher than for Kotlin and Java apps. And then we see attempts to reduce the size of the libraries by only including what is needed. That is necessary, but it looks too fragile to really trust it. So, for mobile apps, it is Kotlin. Period. And then Kotlin is already there, so why not use it on the server as well? Yes, I do believe Scala is better, but that is not what everyone thinks, and it needs to be much better to justify an additional language where app development for Android is already happening.

Now both Perl and Scala had some problems. To some extent they even share exactly the same problem: the possibility to write really „cool“ code that is very smart, very short and cannot be read by anybody else without very much time and very much knowledge. This can be done in any language, but Perl is the number one for this, and I would put Scala as number two and C++ and C as numbers three and four, among the languages that I know. It is a good idea to use some coding standards that allow for clean Scala or Perl code. But please remain reasonable and do not let bureaucrats take charge of the coding standards and create a monster that drains all creativity. Allow using powerful features, but use them in a decent and readable way.

Now in both cases there was an effort to write a new version of the language that was meant to be slightly incompatible, to clean up some of the weaknesses and to bring some improvements. In the case of Perl this was Perl 6. It was developed for around 20 years and came out a few years ago. Eventually it turned out to be too different, so it was renamed to Raku. For Scala, a new language called „Dotty“ was developed. It was decided to make this the next major version, Scala 3. Even though it is much closer to Scala 2 than Raku is to Perl 5, it is still incompatible and requires an effort to rewrite code. It can already be seen that large Perl 5 projects are hardly moving to Raku, so Perl 5 is there to stay and Raku is just a second language within the same community. This will probably not happen like that with Scala, and the core language team will probably at some point concentrate on Scala 3. But large organizations that have heavily invested in Scala cannot easily migrate, simply because it needs a lot of time and money. So we will probably also see some long-term coexistence of Scala 2 and Scala 3. Maybe Scala 2 will be forked by major adopters. Or it will be supported by Lightbend for money.


Devoxx UA and Devoxx BE 2019

In 2019 I visited Devoxx UA in Kiev and Devoxx BE in Antwerp.
Traveling was actually a little story by itself, so for now we can just assume that I magically was at the locations of Devoxx UA and Devoxx BE.

In Kiev I attended the following talks:

On Wednesday I attended the following talks in Antwerp:

On Thursday I attended the following talks in Antwerp:

On Friday I attended the following talks in Antwerp:

That’s it…
As always, a lot of these topics deserve an article in this blog. And a lot of video recordings from the conference are worth viewing.



Checked Exceptions in Java

In Java it is possible to declare a method with a „throws“ clause. For certain exceptions, those not extending „RuntimeException“ or „Error“, this is actually required.

What looked like a good idea 25 years ago has proven to be a dead end. I do not know of any other major programming language that opts for declaring exceptions in this way. Slightly newer frameworks derive all their exceptions from RuntimeException, thus avoiding the need to declare them. Even in relatively early Java there was a weird way of working with exceptions in EJB, when it was required to write an interface and an implementation for the EJB. But it was strongly discouraged to let the implementation implement the interface, because it threw different exceptions. It was not the only weird thing about early EJB, of course. But without checked exceptions it would at least have been possible to let the implementation implement its interface.

We are now able to use Java 13, and lambdas have been around since Java 8. With the introduction of lambdas the declared exceptions became especially painful, and for this reason even Oracle has created twins deriving from RuntimeException for some essential exceptions, especially UncheckedIOException for IOException.
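
A small sketch of the pain and the workaround (class and method names made up):

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class ReadFiles {

    // Files.readString throws the checked IOException, which none of the
    // java.util.function interfaces declare, so it cannot be used in the
    // lambda directly; catching and rethrowing UncheckedIOException is
    // the usual workaround
    public static List<String> readAll(List<Path> paths) {
        return paths.stream()
                .map(p -> {
                    try {
                        return Files.readString(p);
                    } catch (IOException ex) {
                        throw new UncheckedIOException(ex);
                    }
                })
                .collect(Collectors.toList());
    }
}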

We should face it: the throws clause has turned out to be a mistake and we should avoid this mistake by just using exceptions that do not have to be declared, at least in our APIs. It is not the only mistake, see Criticism of Java. Some of my other favorites are the lack of operator overloading for numeric types, the weird concept of Serializable, the lack of natively immutable collections and the lack of a convenient way to write some collections as code. But these issues are being worked on and we will eventually see some progress.



Exceptions to implement Program Logic

Sometimes it is convenient to use exceptions for implementing the regular program logic.

Assume we want to find some data and then process them. When no data is found, the processing can be skipped, because there is nothing to do. So we could program something like this:


public Data readData(Param param) {
    Data data = db.read(param);
    if (data.isEmpty()) {
        throw new NotFoundException("nothing found");
    }
    return data;
}

public ProcessedData doWork(Param param) {
    try {
        Data input = readData(param);
        ....
        ....
        ....
        ProcessedData result = ....
        return result;
    } catch (NotFoundException nfex) {
        return ProcessedData.empty();
    }
}

And some other exceptions could also be handled in a similar way.

Of course some people say that this is not good and an abuse of exceptions. But sometimes it is tempting.

So is this bad? And if so, why? Let’s find out.

This is some kind of weird obfuscation of the control flow, because throwing and catching of exceptions can be far apart and it can become quite unclear from where in the stack which exceptions can be thrown. So there are good reasons to recommend using exceptions only for what they are meant for by their name. The goto has never made it into Java, and we are discouraged from using it in many other languages, like C. But languages like Java, C, Perl, Ruby and some others provide a quite rich control flow relying neither on goto nor on exceptions, by using „return“ anywhere in a function, method or subroutine, leaving loops with „break“ or „last“ or going to the next iteration with „next“ or „continue“. Perl and Java even allow specifying which of several nested loops to leave with break or last. These mechanisms are very powerful and there is no urgent need to add exceptions or even gotos just to support the control flow.
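
For example, Java’s labeled break leaves several nested loops at once; a small made-up example:

public class Search {

    // returns the position of the first occurrence of target, or null;
    // the labeled break leaves both nested loops at once, with neither
    // goto nor exceptions
    public static int[] find(int[][] matrix, int target) {
        int[] result = null;
        outer:
        for (int i = 0; i < matrix.length; i++) {
            for (int j = 0; j < matrix[i].length; j++) {
                if (matrix[i][j] == target) {
                    result = new int[] { i, j };
                    break outer;
                }
            }
        }
        return result;
    }
}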

Once moving to newer languages like Scala, much of this is gone or at least strongly discouraged in a purely functional programming style. This makes programming Scala harder, but it comes with benefits that might be worth the extra effort.

But in Java these functional purists have not become very strong yet, so using „break“, „continue“, „return“ etc. is still ok and quite powerful.

In Java there is another very major problem with exceptions. Many, if not most, Java programs run in a framework or container like Spring, EJB/JEE or JBoss Fuse, for example. Now a piece of software becomes a software component that can interact with other components through the framework. And exceptions are noticed by the framework. In many cases they have the effect that an ongoing transaction is marked as „rollback only“. So the whole processing continues normally, and when all the code from the components is finally done, the framework performs a rollback instead of a commit.
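
A sketch of how this bites, reusing the hypothetical classes from the example above and assuming Spring with a separate ReadService bean whose readData method is also @Transactional, so that both methods join the same transaction:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class WorkService {

    private final ReadService readService;

    public WorkService(ReadService readService) {
        this.readService = readService;
    }

    @Transactional
    public ProcessedData doWork(Param param) {
        try {
            Data input = readService.readData(param);
            return process(input);
        } catch (NotFoundException nfex) {
            // we handle the exception here, but Spring already marked the
            // shared transaction rollback-only when the RuntimeException
            // crossed the @Transactional boundary of ReadService; the commit
            // at the end of doWork then fails with UnexpectedRollbackException
            return ProcessedData.empty();
        }
    }

    private ProcessedData process(Data input) {
        // ... the actual business logic ...
        return ProcessedData.empty();
    }
}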

As long as exceptions are only used for handling errors or unusual situations, in which cases the rollback is probably the way to go anyway, everything is fine. But if we, for example, look something up and base the further processing on the outcome, then a NotFoundException will result in very counterintuitive behavior.

So the original rule of not abusing exceptions is actually not such a bad idea.


Guava-Collections in Java-APIs

When we write APIs that have parameters or return values, it is a good idea to consider relying on immutable objects only. This also applies when collections are involved directly or indirectly as content of the classes that occur as return values or parameters. Changing what is given through the API in either direction can create weird side effects. It even causes different behavior depending on whether the API works locally or via the network, because changes to the parameters are usually not brought back to the caller via some hidden back channel, unless we run locally. I use Java as an example, but it is quite a universal concept and applies to many languages. If we talk about Ruby, for example, the freeze method might be our friend, but it goes only one level deep and we actually want to deep-freeze… Another story, maybe…

Now we can think of using Java’s own Collections.unmodifiableList and the like. But that is not really ideal. First of all, these collections can still be modified by just working on the inner of the two collections:

List<String> list = new ArrayList<>();
list.add("a");
list.add("b");
list.add("c");
List<String> ulist = Collections.unmodifiableList(list);
list.add("d");
ulist.get(3)
-> "d"

Now the list in the example above may already come from a source that we do not really control or know, so it may be modified there, because it is just „in use“. Unless we actually copy the collection, put the copy into an unmodifiableXXX and retain only the unmodifiable variant, this is not really a good guarantee against accidental change and weird effects. Let us not get into the issue that this is just controlled at runtime and not at compile time. There are languages, and even libraries for Java, that would allow requiring an immutable List at compile time. While this is natural in languages like Scala, you would have to leave the usual space of interfaces in Java, because it is also really a good idea to keep the „standard“ interfaces in the APIs. But at least use immutable implementations.

When we copy and wrap, we still get the weird indirection of working through the methods of two classes and carrying around two instances. Normally that is not an issue, but it can be. I find it more elegant to use Guava in these cases and just copy into a natively immutable collection in one step.
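
With Guava, the copy into a natively immutable collection is one call; a small sketch:

import com.google.common.collect.ImmutableList;

import java.util.ArrayList;
import java.util.List;

public class GuavaCopyExample {

    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("a");
        list.add("b");
        list.add("c");
        // defensive copy and immutability in one step
        ImmutableList<String> ilist = ImmutableList.copyOf(list);
        list.add("d");
        System.out.println(ilist.size()); // still 3
        // ilist.add("e") would throw UnsupportedOperationException
    }
}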

For new projects or projects undergoing a major refactoring, I strongly recommend this approach. But it is of course only a good idea if we still keep our APIs consistent. Doing one of 100 APIs right is not worth an additional inconsistency. It is a matter of taste whether to use standard Java interfaces implemented by Guava or Guava’s own interfaces and classes in the API. But it should be consistent.

