10 days ago

The client-side in Vaadin 10 is completely different from the one in Vaadin 8, and the layouting manager from Vaadin 8 is completely gone. Vaadin 10 layouting is therefore something completely different from Vaadin 8 layouting, and naturally it is also completely different from Android layouting. Here we will focus on layouting configured entirely from the server side, not from a CSS file.

For Android Developers

  1. There is no RelativeLayout.
  2. There is no AbsoluteLayout yet.
  3. There are VerticalLayout and HorizontalLayout to replace LinearLayout.
  4. To replace ListView, create a Vaadin Grid with a single column containing components as its contents. The contents will be lazy-loaded and the components constructed lazily (see the sketch after this list). The catch is that Vaadin Grid requires all rows to have exactly the same height (or at least it used to).
  5. There is no GridView. You can employ FlexLayout, which allows for multi-line horizontal layouting, but there is no lazy loading and the components will be instantiated eagerly.
  6. There is no table layout supported out-of-the-box by Vaadin 10. You can try to employ CSS Grid, but it's quite new and only supported by the latest browsers.
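
Here is a rough sketch of the ListView replacement from point 4, using plain Vaadin 10 API (a Grid with a ComponentRenderer column and a callback-based DataProvider); the Person class and the personService calls are made-up placeholders:

val grid = Grid<Person>().apply {
    // one column whose cells are arbitrary components - the ListView-style "row view"
    addColumn(ComponentRenderer<Div, Person> { person ->
        Div(Span(person.name), Button("Call"))
    })
    // rows are fetched lazily, page by page, as the user scrolls
    setDataProvider(DataProvider.fromCallbacks(
            { query -> personService.fetch(query.offset, query.limit).stream() },
            { personService.count() }
    ))
}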

For Vaadin 8 Developers

  1. There is no AbsoluteLayout yet.
  2. There is no GridLayout. You can try to employ CSS Grid, but it's quite new and only supported by the latest browsers.
  3. There is a replacement for VerticalLayout and HorizontalLayout but it behaves differently. Read on for more info.
  4. There is FormLayout and it is responsive. See the vaadin-form-layout element documentation for more details; use FormLayout class server-side.

Let's learn VerticalLayout and HorizontalLayout

The very important rule #1:

  1. There are no slots allocated for child components. Because of that, calling setSizeFull() or setWidth("100%")/setHeight("100%") on a child will not make it fill its slot - instead it will make the component as wide or as tall as the parent layout itself, so it will most probably overflow out of the parent layout. Therefore, never call setSizeFull() and never set the width/height to 100% unless you absolutely know what you're doing. When you set a component to expand, it will be enlarged automatically.

The new HorizontalLayout and VerticalLayout are classes designed to mimic the old Vaadin 8 layouts as much as possible. Underneath, however, they use a completely different algorithm, so you can't expect 100% compatibility. The new algorithm is called flexbox and you can learn about its capabilities in the Complete Guide to Flexbox. Don't pay too much attention to the terminology - both HorizontalLayout and VerticalLayout try to use the "older" Vaadin 8 terminology as much as possible, making it easier to understand the functionality.

We will also not use vanilla Vaadin 10; instead we will use the Karibu-DSL library which enhances both HorizontalLayout and VerticalLayout in a way that they are more compatible with Vaadin 8. The simplest way to experiment with the layouts is to clone the Karibu-DSL HelloWorld Application for Vaadin 10 project. Just follow the steps there to get the project up-and-running in no time.


We'll modify the MainView class as follows:

@BodySize(width = "100vw", height = "100vh")
@Viewport("width=device-width, minimum-scale=1.0, initial-scale=1.0, user-scalable=yes")
class MainView : VerticalLayout() {
    init {
        content { align(stretch, top) }

        horizontalLayout {
            content { align(left, baseline) }

            textField("Enter your name:")
            checkBox("Subscribe to spam")
            comboBox<String> {
                setItems("Once per day", "Twice per day")
            }
            button("Create Account")
        }
    }
}

The focus here is the horizontalLayout's content { align(left, baseline) }. It tells the layout to align the children to the left, and vertically on the baseline:

The baseline setting keeps the components in line and aligned properly on a single line. If we were to change that setting to top, all components would move upwards (except the TextField which is the largest one and is held in-place by its label):

Note how the checkbox is no longer aligned properly with the rest of the components - instead it is glued to the top.

Let's demonstrate the 100% mistake. Let's set the textField's width to 100% as follows:

        textField("Enter your name:") {
            width = "100%"

The result is rather weird: the text field pushes the other components to the right and shrinks them. I was expecting the first element to become as wide as the parent and push all of the other components out, overflowing the layout; I was wrong and something else happened :)

Let's now remove the 100% width and continue. There are two more vertical alignments: center:

And bottom:

This almost resembles the baseline setting, but the combobox looks misaligned. Therefore, if you're aligning fields, most probably you'll want to use baseline.

There is one more setting which is very important if you compose other layouts in HorizontalLayout - the stretch setting.

Now for the horizontal alignments: there are the following options:

  • left, which corresponds to flexbox's start
  • center
  • right, which corresponds to flexbox's end
  • around, between and evenly, which distribute the leftover space around the individual components.

To see the effect of those settings please see the justify-content flexbox setting.

Of course these settings only have an effect when there is leftover space. If the HorizontalLayout is set to wrap its contents, or if at least one component expands, the leftover space is gone and these settings are ignored.
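
For example, to spread the leftover space between the children (flexbox's space-between), we can use the between keyword listed above; a tiny sketch in the same Karibu-DSL style as the examples in this post:

        horizontalLayout {
            content { align(between, baseline) }

            button("Save")
            button("Cancel")
        }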


Now let's expand the TextField as follows:

        textField("Enter your name:") {
            isExpand = true

The isExpand property only works inside a VerticalLayout or HorizontalLayout and is an alias for setting flexGrow to 1.0. This expands the TextField so that it eats all of the available leftover space:

Of course you can give other components different flexGrow settings; a component with flexGrow = 2.0 should take roughly twice the leftover space of a component expanded with flexGrow = 1.0 (this also depends on the inherent size of the component).
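
A minimal sketch of such a 2:1 ratio, assuming the flexGrow property is settable on children via Karibu-DSL as the text above suggests:

        horizontalLayout {
            textField("First name:") {
                flexGrow = 2.0   // takes roughly two thirds of the leftover space
            }
            textField("Last name:") {
                flexGrow = 1.0   // takes roughly one third
            }
        }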

When there is no leftover space (for example because the HorizontalLayout wraps its content), the flexGrow/isExpand setting is ignored. In this example, however, the parent VerticalLayout is set to stretch its children to the available width; the HorizontalLayout is therefore made to fill the parent, which creates the leftover space the flexGrow setting needs in order to work.

Note how the components were automatically enlarged, to accommodate the space they've grown into, without any need to set their width to 100%. Setting the width to 100% would break the flexbox algorithm completely and would introduce weird artifacts.

Since Vaadin 10 does not use a layouting engine of its own but relies on the CSS engine, it is actually the browser that is responsible for laying out the children. This means that you can use the browser's built-in developer tools to play with the components - you can set the flexbox properties in any way you see fit. The only disadvantage is that the flexbox terminology differs from that of HorizontalLayout.

The browser is a very powerful IDE which can help you debug CSS- and layout-related issues. Take your time and read slowly through the following tutorials to get acquainted with the browser
developer tools:


When the components are packed so tightly that there is no room even for their preferred sizes, the flexShrink rules take effect. That is however beyond the scope of this tutorial; you can read more about it in The Guide To Flexbox or Mozilla's flex-shrink documentation.

Overriding the default alignment for children

You can override the vertical alignment per child by using the verticalAlignSelf property:

        comboBox<String> {
            setItems("Once per day", "Twice per day")
            verticalAlignSelf = FlexComponent.Alignment.START
        }

You however can't change the horizontal alignment per child - that is always governed by the layout itself.


The VerticalLayout behaves exactly the same as HorizontalLayout; it just swaps the axes and lays out its components along the y axis.

Root Layout

You can use both VerticalLayout and HorizontalLayout as root layouts. Typically you set them to span the entire browser window by calling setSizeFull() - that's the only place where you should use the setSizeFull() call :-).
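
A minimal root-layout sketch in the same Karibu-DSL style as above; the nested content is just a placeholder:

class RootView : VerticalLayout() {
    init {
        setSizeFull()   // span the whole browser window - appropriate only on the root layout
        horizontalLayout {
            // header components go here ...
        }
        // ... the rest of the UI ...
    }
}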

Of course these layouts are quite rudimentary and may not have enough expressive power to define the general UI of your app. As you grow accustomed to these layouts, you can then switch to FlexLayout which offers you all possibilities of the flexbox. See the mobile-first header/footer example in the Complete Guide to Flexbox article.

13 days ago

There are lots of "Android SDK sucks" and "Android development is a pain" rants out there indicating that development on Android is a horrible experience. Instead of repeating the same thing over and over again, I'll try to sum up why I will definitely prefer a Vaadin-based PWA for my next project over the Android SDK:

  • Vaadin has no Fragments - no crazy lifecycle of create/start/resume/whatever. The app simply always runs. In Vaadin, components are used for everything; nesting components is perfectly fine as opposed to nesting fragments, which tend to produce weird random crashes (the famous moveToState() method).
  • Since in Vaadin there is no crazy lifecycle, you are not required to code defensively. You do not have to be always prepared to be passivated by the Android system; you do not need to shatter your algorithm into multiple methods, always having to be prepared to save state into some Bundle when Android decides to passivate your app. The Google guys are apparently obsessed with passivation, making your code fragmented, defensive and really hard to maintain.
  • Vaadin components are lightweight - they don't use any native resources, they just produce html for the browser to draw. There is no complex lifecycle - the components simply attach and detach to the screen (represented by the UI class) as the user uses the app. Since the components do not use any native resources, they are simply garbage-collected when not needed anymore.
  • All UI components are Serializable; in a rare case when the session passivation is needed, they are automatically saved along with the http session when there are no requests ongoing.
  • Components are the unit of reuse - you compose components into more powerful components, into forms, or even into views. The component may be as small as a Button, or as big as a VerticalLayout with a complete form in it.
  • You use Java code to create and nest components; even better you can use Kotlin code to build your UIs in a DSL fashion. You structure the code as you see fit; no need to have 123213 layout XMLs for 45 screen sizes in one large folder.
  • You use a single CSS file to style your app - you don't have to analyze an Android Theme shattered into 3214132 XML files.
  • You don't need an emulator nor a device - the browser can render the page as if on a mobile phone; you can switch between various resolutions in an instant.
  • No DEX compilation. The compilation is fast.
  • No fragmentation: you develop for Chrome and Firefox. You can ignore the Internet Explorer crap.
  • No need to fetch stuff in a background thread. You actually have two UI threads: the Vaadin UI thread lives server-side, while the browser has its own UI thread; those two threads are completely separated. It is quite typical for a Vaadin app to block in the UI code until the data is fetched, since blocking server-side does not block the UI in the browser (see the sketch after this list).
  • PWAs do not have to be installed. Your users can simply browse your site; if they like the app, they can use their browser to easily create a shortcut to your app on their home screen.
  • You will support both Android and iPhone with one code base.
  • Avoid the trouble of publishing your app on the app stores: the app does not need to be reviewed by anybody; you will not need to pay $100 yearly to Apple.
  • No data syncing necessary since the data will reside on the server.
  • Android's way of requesting runtime permissions is horrible: it will call back not your Runnable, but your Activity/Fragment. That prevents you from splitting your code as you see fit; instead you have to keep all of your logic in an Activity/Fragment, shattered across Fragment-based method callbacks. And since runtime permissions will be required for all Android apps on Google Play at the end of 2018, instead of porting your app to runtime permissions you can move to another platform.
  • You can find more components at https://vaadin.com/directory. Since Vaadin 10 uses Web Components, you can find even more components at https://www.webcomponents.org/ which can then be turned into Vaadin 10 components simply by implementing the server-side code.
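
To illustrate the "two UI threads" point above, here is a small sketch of a Vaadin click listener that simply blocks while fetching data; button, grid, orderService and fetchOrders() are made-up placeholders:

button.addClickListener {
    // blocks the server-side request thread only; the browser UI stays responsive meanwhile
    val orders = orderService.fetchOrders()
    grid.setItems(orders)
}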

Of course there is no silver bullet. This approach has the following disadvantages:

  • No access to native APIs - only the browser-provided APIs are available. While you can access GPS, you can't access for example the SD card (with local user photos), can't make calls etc.
  • No offline mode, unless you develop your app fully or partially in JavaScript, as a part of the service worker.
  • Vaadin app lives server-side and uses a browser. Because of that, you won't be able to achieve the same performance as with the native app. It's probably not a good fit for, say, an immersive 3D game since the browser is obviously slower than the native code. However, this approach is perfectly fine for a chat client (say, Slack), or a banking app or the like.
  • You will need an SSL certificate since PWAs only work over https.
  • You will need to pay for cloud hosting, the SSL certificate and the DNS domain yourself. The easiest way to set all that up is with Heroku; you can also pay for a virtualized Linux server, get the SSL certificate for free from Let's Encrypt and set up Tomcat to host your app.
  • You are responsible for your app's security; when your app is hacked, all of your user data may be compromised.
  • All of your users will use the same VM; 10000+ users can kill your server unless you load-balance. Luckily it's quite easy with most good cloud providers.

Switching to literally any other platform puts you back in charge of your code structure. It's as if Google had set out to create a horrible developer hell - eventually you'll develop a Stockholm Syndrome with the Android SDK. Just leave that horrible crap behind.

16 days ago

In the previous blog post we've used a self-signed certificate with the Docker Tomcat. However, in order to have a properly protected web site, we need to use a proper set of certificates. We'll use the Let's Encrypt authority to obtain the certificates at no cost.

Preparing the server and DNS

First, create a virtual server running Ubuntu 17.10, then make sure you can SSH into that box or you can at least launch a console via the cloud vendor web page. Write down the server IP.

Second, register a DNS domain, for example at GoDaddy. I'll register your-page.eu but you're free to register whatever domain you see fit. You will then need to edit the DNS records for that domain and make sure that the A record points to your server's IP.

Obtaining SSL certificates from Let's Encrypt

Now let's obtain the certificates from Let's Encrypt. The easiest way is to use Certbot which comes pre-packaged in Ubuntu 17.10. Run the following in your box terminal:

sudo apt install certbot
certbot certonly

Certbot is going to ask you a couple of questions. I tend to prefer the standalone mode (Certbot takes over port 80 and does everything on its own; this collides with Tomcat listening on 80, but since we need to stop Tomcat to renew the certificates anyway, this is perfectly fine). Then just make sure to pass in both your-page.eu and www.your-page.eu so that both domains are SSL-protected.

When Certbot finishes, you'll be able to find the certificate files at /etc/letsencrypt/live/your-page.eu. We will now register the certificates into Tomcat.

Running dockerized Tomcat with the certificates

Let's deploy your app in a way that everything can be started using the docker-compose up -d command. The easiest way is to create a directory named /root/app/ and place the following docker-compose.yml file there:

version: '2'
services:
  web:
    # the service name is arbitrary
    image: tomcat:9.0.5-jre8
    ports:
      - "80:8080"
      - "443:8443"
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
      - ./server.xml:/usr/local/tomcat/conf/server.xml

Note: we need to map the entire /etc/letsencrypt into the Docker container; just mapping the /etc/letsencrypt/live/your-page.eu folder alone is not enough since those pem files are symlinks which would stop working in Docker and Tomcat would fail with FileNotFoundException.

Feel free to use the server.xml from the self-signed openssl article, but change the appropriate connector part to:

   <Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
              maxThreads="150" SSLEnabled="true" >
       <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
       <Certificate certificateKeyFile="/etc/letsencrypt/live/your-page.eu/privkey.pem"
                    type="RSA" />

Store the server.xml into /root/app/.

Now go into /root/app and try to run the Tomcat server:

cd /root/app
docker-compose up

The pems are not password-protected so it should work.

Note: if Tomcat stops at Deploying web application directory and seems to do nothing, it may have run out of entropy. You can verify this by running cat /proc/sys/kernel/random/entropy_avail - if it prints anything less than 100-200, just run apt install haveged to help build up the entropy pool. Tomcat may take 2 minutes to start the first time, but once the entropy is high, further starts will be much quicker. You can read more at Tomcat Hangs.

Now you can visit https://your-page.eu and you should see the Tomcat welcome page protected by https. Good job!

You can also verify at https://www.digicert.com/help/ that your SSL certificate is working properly.

Auto-renewing certificates

Let's Encrypt certificates only last for 90 days. Luckily the certificates can be renewed once they are older than 60 days, which gives us some time to do the renewal. The renewal can be automated using Certbot. Certbot can run hook scripts before and after the renewal happens, so that:

  • we can stop Tomcat so that Certbot can take over port 80,
  • we can start Tomcat again to reload the certificates once the new ones are in place.

Let's create two scripts. The first one, /root/start_server:

#!/bin/bash
set -e -o pipefail

cd /root/app
docker-compose up -d

And the second one, /root/stop_server:

#!/bin/bash
set -e -o pipefail

cd /root/app
docker-compose down

Luckily docker-compose down blocks until all containers are stopped, and it unbinds from port 80 prior to terminating, which makes room for Certbot's own server.

To run the renewal process automatically, run

crontab -e

and add the following line:

30 2 * * 1 certbot renew --pre-hook "/root/stop_server" --post-hook "/root/start_server" >> /var/log/le-renew.log

That will make Certbot check the certificates every Monday at 2:30 in the morning. Certbot will do nothing if the certificates do not need to be renewed (they're younger than 60 days). If they do need to be renewed (they are older than 60 days), it will stop the Tomcat server, update the certificates and then start the Tomcat server again.

That's it.

16 days ago

It is possible to use pem-style certificates with the Tomcat Docker image, without any need to first store them in a Java keystore. This is excellent: not only is it easier to generate a self-signed certificate with the openssl command, the same approach can also be used with certificates produced by Let's Encrypt.

Let's first see how to use self-signed keys with the Tomcat 9 Docker image. First we generate the self-signed certificate:

$ openssl req -x509 -newkey rsa:4096 -keyout localhost-rsa-key.pem -out localhost-rsa-cert.pem -days 36500

Specify "changeit" as a password (or any other password of your chosing); the Common Name/FQDN is your domain, say, testing.com.

Place the generated localhost-rsa-*.pem files into an ssl/ folder. Now create the following docker-compose.yml file:

version: '2'
services:
  web:
    # the service name is arbitrary
    image: tomcat:9.0.5-jre8
    ports:
      - "8080:8080"
      - "8443:8443"
    volumes:
      - ./ssl:/usr/local/tomcat/ssl
      - ./server.xml:/usr/local/tomcat/conf/server.xml

Place the following server.xml file next to the docker-compose.yml file:

<?xml version="1.0" encoding="UTF-8"?>
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at


  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  See the License for the specific language governing permissions and
  limitations under the License.
<!-- Note:  A "Server" is not itself a "Container", so you may not
     define subcomponents such as "Valves" at this level.
     Documentation at /docs/config/server.html
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <!-- Security listener. Documentation at /docs/config/listeners.html
  <Listener className="org.apache.catalina.security.SecurityListener" />
  <!--APR library loader. Documentation at /docs/apr.html -->
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <!-- Prevent memory leaks due to use of particular java/javax APIs-->
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <!-- Global JNDI resources
       Documentation at /docs/jndi-resources-howto.html
    <!-- Editable user database that can also be used by
         UserDatabaseRealm to authenticate users
    <Resource name="UserDatabase" auth="Container"
              description="User database that can be updated and saved"
              pathname="conf/tomcat-users.xml" />

  <!-- A "Service" is a collection of one or more "Connectors" that share
       a single "Container" Note:  A "Service" is not itself a "Container",
       so you may not define subcomponents such as "Valves" at this level.
       Documentation at /docs/config/service.html
  <Service name="Catalina">

    <!--The connectors can use a shared executor, you can define one or more named thread pools-->
    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
        maxThreads="150" minSpareThreads="4"/>

    <!-- A "Connector" represents an endpoint by which requests are received
         and responses are returned. Documentation at :
         Java HTTP Connector: /docs/config/http.html
         Java AJP  Connector: /docs/config/ajp.html
         APR (HTTP/AJP) Connector: /docs/apr.html
         Define a non-SSL/TLS HTTP/1.1 Connector on port 8080
    <Connector port="8080" protocol="HTTP/1.1"
               redirectPort="8443" />
    <!-- A "Connector" using the shared thread pool-->
    <Connector executor="tomcatThreadPool"
               port="8080" protocol="HTTP/1.1"
               redirectPort="8443" />
    <!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443
         This connector uses the NIO implementation. The default
         SSLImplementation will depend on the presence of the APR/native
         library and the useOpenSSL attribute of the
         Either JSSE or OpenSSL style configuration may be used regardless of
         the SSLImplementation selected. JSSE style configuration is used below.
    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="150" SSLEnabled="true">
            <Certificate certificateKeystoreFile="conf/localhost-rsa.jks"
                         type="RSA" />
    <!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443 with HTTP/2
         This connector uses the APR/native implementation which always uses
         OpenSSL for TLS.
         Either JSSE or OpenSSL style configuration may be used. OpenSSL style
         configuration is used below.
    <Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
               maxThreads="150" SSLEnabled="true" >
        <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
            <Certificate certificateKeyFile="ssl/localhost-rsa-key.pem"
                         type="RSA" />

    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

    <!-- An Engine represents the entry point (within Catalina) that processes
         every request.  The Engine implementation for Tomcat stand alone
         analyzes the HTTP headers included with the request, and passes them
         on to the appropriate Host (virtual host).
         Documentation at /docs/config/engine.html -->

    <!-- You should set jvmRoute to support load-balancing via AJP ie :
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
    <Engine name="Catalina" defaultHost="localhost">

      <!--For clustering, please take a look at documentation at:
          /docs/cluster-howto.html  (simple how to)
          /docs/config/cluster.html (reference documentation) -->
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

      <!-- Use the LockOutRealm to prevent attempts to guess user passwords
           via a brute-force attack -->
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <!-- This Realm uses the UserDatabase configured in the global JNDI
             resources under the key "UserDatabase".  Any edits
             that are performed against this UserDatabase are immediately
             available for use by the Realm.  -->
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"

      <Host name="localhost"  appBase="webapps"
            unpackWARs="true" autoDeploy="true">

        <!-- SingleSignOn valve, share authentication between web applications
             Documentation at: /docs/config/valve.html -->
        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />

        <!-- Access log processes all example.
             Documentation at: /docs/config/valve.html
             Note: The pattern used is equivalent to using pattern="common" -->
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />


The directory structure should look like this:

├── docker-compose.yml
├── server.xml
└── ssl
    ├── localhost-rsa-cert.pem
    └── localhost-rsa-key.pem

That's it - just run docker-compose up in that folder and Tomcat should boot up. Now you can visit https://localhost:8443 and the browser should warn you about the self-signed certificate - just store an exception for the site and you should see the Tomcat welcome page served over https.

about 1 month ago

I have to admit upfront that my brain is somehow incompatible with JPA. Every time I try to use JPA, I somehow wind up fighting weird issues, be it LazyInitializationException with Hibernate, or weird sequence generator preallocation issues with EclipseLink. I don't know what to use merge() for; I lack the means to reattach a bean and overwrite its values from the database. I don't want to bore you to death with all of the nitty-gritty details; all the reasons are summed up in Issue #3 Remove JPA. I think the number one reason why JPA is incompatible with me is that

JPA is built around the idea that the Java classes, not the database, are the source of truth.

My way of thinking is the other way around: I think that the database (and not the Java classes) is the source of truth. This way of thinking is however incompatible with the very core of the JPA technology, which makes it impossible for me to use JPA.

The Source Of Truth

To allow Java beans to be the source of truth, JPA defines certain techniques and requires the JPA implementation (such as Hibernate) to sync Java beans with the database behind the scenes. That is unfortunately a non-trivial task which cannot be done automatically in certain cases, and that's why the JPA abstraction suffers from the following issues:

  • Hibernate must track changes done to the bean in order to sync them to the underlying database. That is done by intercepting all calls to setters, which can only be done by runtime class enhancement (for every entity Hibernate creates a class which extends that entity, overrides all setters and injects some Hibernate core class which is notified of changes). Such a class can hardly be called a POJO, since passing it around will start throwing LazyInitializationException like hell. That is black magic, pure and evil.
  • It intentionally lacks a method to reattach a bean and overwrite its contents with the database contents - that would go against the idea of the Java class being the source of truth.
  • By abstracting joins into Collections, it causes programmers to unintentionally write code which suffers from 1+N loading issues.

Since I'm very bullheaded, instead of trying to attune my mind to JPA I explored a different approach: let's step back and make the database the source of truth again.

Database As The Source Of Truth

Having the database as the source of truth means that the data in your bean may be obsolete the moment it leaves the database (since somebody else might already be modifying that row). Optimistic or pessimistic locking is the way around that, and to get the most recent data you will need to re-query the database.

The bean can now ditch all of its sync capabilities and become just a snapshot: a sample of the state of the database row at some particular time. Changes to the bean properties will no longer be propagated automatically to the database, but need to be done explicitly by the developer. The developer is once again in charge.

There - I have solved the whole detached/managed/lazy initialization exception nonsense for you :-)

The beans are once again POJOs with getters and setters, simply populated from the database by reading the data from the JDBC ResultSet and calling bean's setters.

That can be achieved by simple reflection. Moreover, no runtime enhancement is needed since the bean never tracks changes; to save changes we explicitly issue an INSERT or UPDATE statement. A very important thought: since there's no runtime enhancement, the bean is really a POJO and can actually be made serializable and passed around the app freely; you don't need DTOs anymore.

The Vaadin-on-Kotlin Persistency Layer: vok-db

There is a very handy library called Sql2o which maps JDBC ResultSet to Java POJOs. This idea forms the very core of vok-db. To explain the data retrieving capabilities of the vok-db library, let's begin by implementing a simple function which finds a bean by its ID.

vok-db: Data Retrieval From Scratch

The findById() function will simply issue a select SQL statement which selects a single row, then passes it to Sql2o to populate the bean with the contents of that row:

fun <T : Any> Connection.findById(clazz: Class<T>, id: Any): T? =
        createQuery("select * from ${clazz.databaseTableName} where id = :id")
                .addParameter("id", id)
                .executeAndFetchFirst(clazz)

The function is rather simple: it is actually nothing more than a simple Kotlin reimplementation of the very first "Execute, fetch and map" example on the Sql2o home page. The Sql2o API will do nothing more than create a JDBC PreparedStatement, run it, create instances of clazz using the zero-arg constructor, populate the beans with data and return the first one. Using this approach it is easy to implement more finders, such as findAll(clazz: Class<T>) and findBySQL(clazz: Class<T>, where: String). However, let's focus on findById() for now.

To obtain Sql2o's Connection (which is a wrapper of the JDBC connection), we simply borrow one from the HikariCP connection pool and use the db{} method to start+commit the transaction for us. This allows us to retrieve beans from the database easily, simply by calling the following statement from anywhere in your code, be it a background thread or a Vaadin button click listener:

val person: Person = db { connection.findById(Person::class.java, 25L) }

This is nice, but we can do better. In the dependency injection world, for every bean there is a Repository or DAO class which defines such finder methods and fetches instances of that particular bean. However, we have no DI and therefore do not need a separate class injected somewhere just for the purpose of accessing the database. We are free to attach those finders right where they belong: as factory methods of the Person class:

class Person {
  companion object {
    fun findById(id: Long): Person? = db { connection.findById(Person::class.java, id) }
  }
}

This allows us to write the finder code even more concisely:

val person = Person.findById(25L)

Yet, adding those pesky findById() (and all other finder methods such as findAll()) manually to every bean is quite a tedious process. Is there perhaps some easier, more automated way to add those methods to every bean? Yes there is, by means of mixins and companion objects.

As a first attempt, we will define a Dao interface as follows:

interface Dao<T> {
  fun findById(clazz: Class<T>, id: Any): T? = db { connection.findById(clazz, id) }
}

Remember that the companion object is just a regular object, and as such it can implement interfaces?

class Person {
  companion object : Dao<Person>
}

This allows us to call the finder method as follows:

val person = Person.findById(Person::class.java, 25L)

Not bad, but the Person::class.java parameter is somewhat redundant. Is there a way to force Kotlin to fill it in for us? Yes, by means of extension methods and reified type parameters:

interface Dao<T> {}
inline fun <ID: Any, reified T : Entity<ID>> Dao<T>.findById(id: ID): T? = db { con.findById(T::class.java, id) }

Now the finder code is perfect:

val person = Person.findById(25L)

vok-db follows this idea and provides the Dao interface for you, along with several utility methods for your convenience. By means of extension methods, you can define your own global finder methods, or you can simply attach finder methods to your beans as you see fit.

You can even define an extension property on Dao which produces Vaadin's DataProvider, fetching all instances of that particular bean. Luckily, you don't have to do that since vok-db already provides one for you.
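
Just to illustrate, here is a sketch of how such a provider could be built by hand, assuming a findAll() finder in the spirit of the finders above and Vaadin's in-memory ListDataProvider (the provider vok-db actually ships is lazy and SQL-backed):

fun <T : Any> Connection.findAll(clazz: Class<T>): List<T> =
        createQuery("select * from ${clazz.databaseTableName}").executeAndFetch(clazz)

inline fun <reified T : Any> Dao<T>.dataProvider(): ListDataProvider<T> =
        ListDataProvider(db { connection.findAll(T::class.java) })

// usage: grid.setDataProvider(Person.dataProvider())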

Next up, data modification.

vok-db: Data Modification

Unfortunately Sql2o doesn't support persisting data, so we will have to do that ourselves. Luckily, it's very simple: all we need to do is to generate a proper insert or update SQL statement:

interface Entity<ID: Any> : Serializable {
    var id: ID?

    fun save() {
        db {
            if (id == null) {
                // not yet in the database, insert
                val fields = meta.persistedFieldDbNames - meta.idDbname
                con.createQuery("insert into ${meta.databaseTableName} (${fields.joinToString()}) values (${fields.map { ":$it" }.joinToString()})")
                        .bind(this@Entity)
                        .executeUpdate()
                id = con.key as ID
            } else {
                // already in the database, update the row
                val fields = meta.persistedFieldDbNames - meta.idDbname
                con.createQuery("update ${meta.databaseTableName} set ${fields.map { "$it = :$it" }.joinToString()} where ${meta.idDbname} = :${meta.idDbname}")
                        .bind(this@Entity)
                        .executeUpdate()
            }
        }
    }
}

Simply by having your beans implement the Entity interface, you will gain the ability to save and/or create database rows:

data class Person(override var id: Long? = null, var name: String): Entity<Long> {
  companion object : Dao<Person>
}

val person = Person.findById(25L)!!
person.name = "Duke Leto Atreides"
person.save()

Pure simplicity.

about 2 months ago

I'm sure that Thomas Kriebernegg, the CEO of App Radar, will be happy to persuade you that you can make a living by selling Android apps. He needs to - his way of making a living is to sell services to Android devs, so he'd better make sure there are lots of Android devs; discouraging devs from becoming Android devs would be financial suicide. But why not try out your luck? After all, Flappy Bird was making $50,000 a day in ads; Clash of Clans makes millions of dollars. Let me tell you a fucking secret: you are not going to be the author of Flappy Bird or Clash of Clans, and let me tell you why.

First, creating Flappy Bird requires no technical skills; my grandmother could do it (if she knew what a smartphone is, and if she were able to copy-paste from StackOverflow). It's the idea that counts; and there are but a few good ideas. Maybe a hundred ideas per year? How many Android developers (or developer wannabes) are there, millions? You will have better odds betting in a lottery. Far easier than fighting with the retarded Android SDK, too. If you have a brilliant idea, someone else has already had it and there's already an app for that. The probability of you being the first one is very low. I'd say that for every Flappy Bird dev there are 10,000 devs that earn a measly $10 per day tops. Oops.

Second, you are not an asshole who would cherish the thought of having a sustainable business model based on selling virtual axes $5 apiece to addicted kids, making them shell out $500 a month and destroying their family income. A.K.A. free to play.

Let's assume that you want to help people. You can try to create a decent, helpful app and sell it for, say, $2. It will not make you rich but you hope it makes you some decent money, right? Now that may be true for Apple, since those users are willing to shell out big bucks for Steve Jobs's fart; but it's certainly not true for Android, since that's the other extreme - a shitload of misers who have just spent a whopping $100 on a fucking phone, so everything else on that phone had better be free, right?

Now that's just mean. There are lots of very nice users out there and I have met lots of them. They are however easily overshadowed by that 1% of guys who think that spending $2 on your app gives them the right to command you and swarm you with feature requests, expecting that you will fix them for free, of course, since "I've already paid for the app and it's not doing what I want". I've seen a French guy complaining how a highly specialized catalogue worth thousands of hours of work costs a whopping $20. That's like the price of a fucking bagel and a fucking latte right below the fucking Eiffel tower, oh so very expensive, mon dieu!

Now let's amuse ourselves by pretending that you will manage to get five hundred new paying users per month (that's a one-off payment, not a recurring one; and you won't). That's $1000 a month, right (it's not, but for the sake of a good laugh let's pretend it is)? Wrong. Google will take 30% off that, and your country/EU/whatever will take an additional 20% VAT off that, thank you very much. You will receive ~$550 a month tops. And then the tax office steps into the game; if you intend to play an honest citizen then you need to say goodbye to another $200 for taxes, unemployment insurance, health insurance and pension funds. You will be left with $350, which you can wipe your ass with. If you are a student in Somalia that may be enough; if you have a mortgage and two screaming kids, believe me, $350 is laughable.

You will never have a stream of thousands of new paying users; hundreds, tops. By trying to sell apps on Android, you will most probably end up being underpaid. But! You will also meet lots of really nice people who will let you know how you actually helped them with that little app of yours. That feeling is highly rewarding on its own. You will learn how to talk to customers (because those guys are actually your customers), and you will try to make them pay more and fail (because nobody wants to pay for software). You will learn a lot, and after you do, you will have to abandon that app of yours which you've nursed like your own child, and move someplace else where you can put your experience to use and earn decent money instead of peanuts from Google Play.

Possible options to earn some money

  • You can charge for your app upfront but nobody will buy it since it's new, untested and cannot be evaluated upfront.
  • You can add in-app payments but nobody will purchase those unless they're forced to.
  • The income from in-app payments will be laughable, yet there will be fucking retards who will call you a greedy bastard.

You can beg for money on Patreon but that's a) undignified, b) ineffective, since nobody will send you money just like that unless they must, or unless it appeals to them. You will only receive money here from people whose lives you really made simpler with the app, and who are just so nice that they want to help you.

After a year of pledging, your Patreon support will be somewhere around $100, and it will stay there. And stopping the development of your app (because it doesn't pay off) just feels like you've betrayed those nice chaps supporting you on Patreon.

You can try to charge for implementing feature requests (bountysource.com), but to this day I haven't actually seen anybody pay for a feature request. People will just open those requests and then hope you will get bored/excited enough to fix that bug for free. Even if they are willing to pay, they will pay a measly $20 for something that takes 4 hours to fix, test and release, while the real cost is somewhere around $400.

Trying to get rich by selling useful apps on Android won't work and you will be abused by people who believe they have the right to make you work for free, so that they can have a $40 dinner. You will do that in your free time since you will have to have a fucking job to actually earn some money, unless you are unemployed or a student. You will have zero free time.

You will also meet countless really nice people whom you will make genuinely happy (if your app is useful). That is actually a reward on its own, and a really good feeling. But it will not outweigh the total loss of free time for a couple of bucks.

The best way to play this game is not to play it. If you have a great idea, create a game and flood all social networks with it, then sell virtual crap and/or ads. That's about the only way to get rich on Google Play.

How to make money then?

Place the centre of your business elsewhere:

  • Dropbox is making profit by selling space on their cloud drive; the Android app is free to enable you to reach that space. That's why they offer the Dropbox API for free - it's enabling devs to create Dropbox-based apps which will then lure in more customers.
  • Uber is making profit by taking 10% off the charge; the app is just an enabler to have more customers.
  • Facebook is making profit ... well I have no idea how, but it's certainly not by selling the Facebook app.
  • Banks - they'll charge you $7 monthly and will offer you a banking app for free.

Make a profit elsewhere, then create a free Android app as an enabler, to lure customers towards your services.

The alternative is to feed off your fellow Android developers.

You are the product

When you develop for Android, you are the product.

Google tries to lure in as many Android developers as possible, since they will populate the Google Play store with apps (of dubious quality). However, the quality doesn't matter - all that matters is the count. Google can now say "Look how many apps we have! There's an app for everything". Having a shitload of apps on Google Play is critical - if you don't have them, your phones are worthless, as Microsoft learned the hard way.

Then there are tools that can be sold to Android developers: Crashlytics (which supplements the useless error reporting in Google Play), Firebase, ...

Then there are competing app stores which feed by charging 30% off the app's price. More apps mean more sales, and that means more money.

Then there are the nice guys at App Radar who will help you make your app more visible. The dev - that's you - is their customer; more customers means more money.

That's why nobody will tell you upfront how badly the Android API sucks, how bad the fragmentation is, or that you won't make a living. That would repel potential Android devs, and that would mean fewer apps on Google Play, fewer customers for Firebase etc. Instead they will talk about fucking Flappy Bird and how that particular lottery ticket made its author rich.

You will not create another Flappy Bird. You will fall into the other category.

The Google Play content

Taking the above into account, Google Play apps generally fall into the following categories:

  1. Free apps to lure the user into the center of the business which lies elsewhere
  2. Free apps made by students who are learning how to create Android apps and how to market them. These apps are often of dubious quality and are abandoned by their creators at some point.
  3. Games which draw in the money by using an abominable practice similar to gambling
  4. Useful apps which are either:
    • abandoned, left to rot on the Play store,
    • downloaded in such numbers that it is profitable to develop them further (rare),
    • made in their spare time by an enthusiast who will abandon the app eventually since it doesn't pay off; rare and going extinct.

That means that Google Play is full of games and rotten crap, with rare brilliant apps getting rarer. And that is the direct consequence of Android users not willing to pay for apps.


If you want to make money, go with 1) or 3). If you want to learn, go with 2). You may try to go with 4) on Google Play, but the probability of you building a sustainable business on that is somewhere around 0.01%. You may try to go with 4) on the Apple App Store, which yields a much higher probability of actually creating a sustainable business.

2 months ago

TeamCity is quite a nice continuous integration server from JetBrains. The setup is not that easy though, since you typically need to install the TeamCity server itself, plus at least one agent which will perform the builds.

Luckily, with Docker, you can set up TeamCity very easily on any PC, without the fuss of installing the server and the agent. The following docker-compose.yml configuration file will direct Docker to download the appropriate images for both the TC server and the TC agent and run them. Create a folder named teamcity and drop the docker-compose.yml file in there:

version: '2'
services:
  teamcity-server:
    # the name must match the host used in SERVER_URL below
    image: jetbrains/teamcity-server:2017.2.1
    ports:
      - "8111:8111"
    environment:
      - TEAMCITY_SERVER_MEM_OPTS=-Xmx2g -XX:MaxPermSize=270m -XX:ReservedCodeCacheSize=350m
    volumes:
      - ./server/datadir:/data/teamcity_server/datadir
      - ./server/logs:/opt/teamcity/logs
  teamcity-agent:
    image: jetbrains/teamcity-agent
    environment:
      - SERVER_URL=http://teamcity-server:8111
    volumes:
      - ./agent/conf:/data/teamcity_agent/conf

Then, create the following folders inside the teamcity folder, which will hold the server's and the agent's persistent data (the paths match the volumes above): server/datadir, server/logs and agent/conf.

Now just start the containers with:

docker-compose up

Now you can point your browser at http://localhost:8111 and configure TC to use the embedded database; then just head to the Agents page and authorize the teamcity-agent so that the TC server trusts it and runs builds on it.

Building Android APKs with TeamCity

Building an Android app requires the Android SDK to be preinstalled on the agent itself. That can be done quite easily - we will extend the teamcity-agent Docker image and add the SDK on top of it. In the teamcity folder above, create a folder named agent-android-docker and put the following Dockerfile there:

FROM jetbrains/teamcity-agent

# the SDK is copied to /sdk below, so point ANDROID_HOME there
ENV ANDROID_HOME /sdk
ENV DEBIAN_FRONTEND noninteractive

COPY sdk /sdk

RUN apt-get -qq update && \
    apt-get install -qqy --no-install-recommends \
      curl \
      html2text \
      libc6-i386 \
      lib32stdc++6 \
      lib32gcc1 \
      lib32ncurses5 \
      lib32z1 \
      unzip \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN mkdir -p $ANDROID_HOME/licenses/ \
  && echo "8933bad161af4178b1185d1a37fbf41ea5269c55" > $ANDROID_HOME/licenses/android-sdk-license \
  && echo "84831b9409646a918e30573bab4c9c91346d8abd" > $ANDROID_HOME/licenses/android-sdk-preview-license

ADD packages.txt /sdk
RUN mkdir -p /root/.android && \
  touch /root/.android/repositories.cfg && \
  ${ANDROID_HOME}/tools/bin/sdkmanager --update && \
  (while [ 1 ]; do sleep 5; echo y; done) | ${ANDROID_HOME}/tools/bin/sdkmanager --package_file=/sdk/packages.txt

RUN chmod -R a+rwx ${ANDROID_HOME}

You will also need a packages.txt file alongside the Dockerfile itself, with the following contents:


Then, download the Android SDK Tools for Linux (not the whole Studio, just the SDK Tools; the download links are at the bottom of the page). As of the writing of this article, the direct link to the SDK Tools is https://dl.google.com/android/repository/sdk-tools-linux-3859397.zip . Unpack the zip file into agent-android-docker/sdk, so that the sdk folder contains the tools folder.

To use this new agent image, we will simply direct Docker Compose to build it - just modify the docker-compose.yml file as follows:

version: '2'
services:
  teamcity-server:
    image: jetbrains/teamcity-server:2017.2.1
    ports:
      - "8111:8111"
    environment:
      - TEAMCITY_SERVER_MEM_OPTS=-Xmx2g -XX:MaxPermSize=270m -XX:ReservedCodeCacheSize=350m
    volumes:
      - ./server/datadir:/data/teamcity_server/datadir
      - ./server/logs:/opt/teamcity/logs
  teamcity-agent:
    build: agent-android-docker
    environment:
      - SERVER_URL=http://teamcity-server:8111
    volumes:
      - ./agent/conf:/data/teamcity_agent/conf

Done - just run docker-compose up.

4 months ago

When Kotlin 1.1 brought in Coroutines, at first I wasn't impressed. I thought coroutines were just slightly more lightweight threads: a lighter version of the OS threads we already have, perhaps easier on memory and a bit lighter on the CPU. Of course, this alone can make a ground-breaking difference: for example, Docker is just a light version of a VM, yet what a difference it makes. It's just that I failed to see the consequences of this ground-breaking difference; also, performance-wise, in Vaadin apps there seems to be little need to switch away from threads (read Are Threads Obsolete With Vaadin Apps? for more info).

So, is there something else to Coroutines with respect to Vaadin? The answer is yes.

What are Coroutines?

A coroutine is a block of code that looks just like a regular block of code. The main difference is that you can pause (suspend) that block by calling a special function; later on you can direct the code to resume, at which point the suspending function simply returns and the calling code continues executing. This is achieved by compiler magic and thus no support from the JVM/OS is necessary. The Informal Kotlin Coroutines introduction offers a great in-depth explanation. Even better is Roman Elizarov's Coroutine Video - the clearest explanation of Coroutines that I've seen (here is part 2: Deep Dive into Coroutines on JVM).

There are the following problems with Coroutines:

  • the compiler magic (a nasty state machine is produced underneath);
  • the block looks just like regular synchronous code, but suddenly, out of nowhere, a completely different thread may continue the execution (after the coroutine is resumed). That's not something we can easily attune our mental model to.
  • They're not easy to grok.

Just disadvantages, what the hell? I can do pausing with threads already, simply by calling wait(), or by using locks or semaphores; so what's the big fuss? The important difference is that at the suspension point the coroutine detaches from the thread - it saves its call stack and the state of its variables and bails out, allowing the thread to do something else, or even terminate. When the coroutine is resumed later on, it can pick any thread to restore the stack+state in and continue execution. Aha! Now this is something we can't do with threads.

The Blocking Dialogs

The coroutines suspend at suspension functions, and we can resume them as we see fit, say, from a button click handler. Consider the following Vaadin click handler:

deleteButton.addClickListener { e ->
  if (confirmDialog("Are you sure?")) {
    // ... delete the entity ...
  }
}

This can't be done with the regular approach since we can't have blocking dialogs in Vaadin. The problem is as follows: assume that the confirmDialog() function blocks the Vaadin UI thread to wait for the user to click the "Yes" or "No" button. However, in order for the dialog to actually be drawn in the browser, the Vaadin UI thread must finish and produce a response for the browser. But the Vaadin UI thread can't finish since it's blocked in the confirmDialog function.

However, consider the following suspend function:

suspend fun confirmDialog(message: String = "Are you sure?"): Boolean {
    return suspendCancellableCoroutine { cont: CancellableContinuation<Boolean> ->
        val dlg = ConfirmDialog(message, { response -> cont.resume(response) })
        dlg.show()
    }
}

When the dialog's Yes or No button is clicked, the block is called with a response of true (or false, respectively), which resumes the continuation, and with it, the calling block.

This allows us to write the following code in the button's click handler:

deleteButton.addClickListener { e ->
  launch(vaadin()) {
    if (confirmDialog("Are you sure?")) {
      // ... delete the entity ...
    }
  }
}

The launch method slates the coroutine to be run later (via the UI.access() call). The click handler does not wait for the coroutine to run but terminates immediately, hence it does not block. Since the listener is now finished and there is no other UI code to run, Vaadin will now run the UI.access()-scheduled things, including our coroutine (the coroutine is the block passed to the launch method).
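
To give an idea of what such a vaadin() context could look like, here is a rough sketch of a coroutine dispatcher that funnels every resumption through UI.access(); the real integration layer in the demo project also takes care of exception handling (mentioned below), which is omitted here:

fun vaadin(ui: UI = UI.getCurrent()): CoroutineContext = object : CoroutineDispatcher() {
    // every time the coroutine needs to run (start or resume), do so under the Vaadin UI lock
    override fun dispatch(context: CoroutineContext, block: Runnable) {
        ui.access { block.run() }
    }
}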

The coroutine will create the confirm dialog and then suspend (detaching from the UI thread). That allows the UI thread to finish and the dialog to be drawn in the browser. Later on, when the "Yes" button is clicked, the coroutine is resumed in the UI thread. It will close the dialog (the code is left out for brevity), delete the entity and finish, ending both the coroutine and the UI request, which allows the dialog close to be propagated to the browser.

And voilà, with the help of compiler magic, we now have blocking dialogs in Vaadin. Of course those dialogs are not really blocking - that still cannot be done. What the compiler actually does is break the coroutine code into two parts:

  • the first part creates the dialog and exits;
  • the second one runs on button click, closes the dialog and deletes the entity.

The compiler will convert our nice block into a nasty state automaton class. However, the illusion in the source code is perfect.

Further Down The Rabbit Hole

The above coroutine is rather simple - there are no cycles or try/catch blocks, only a simple if statement. I wonder whether the illusion will hold with more complex code. Let's explore further.

We will create a simple movie tickets ordering page. The workflow is as follows:

  1. we will first check whether there are tickets available; if yes then
  2. we'll ask the user for confirmation on purchase;
  3. if the user confirms, we make the purchase.

We will access a very simple dummy REST service for the query and for purchase; the app will also provide server-side implementations for these dummy services, for simplicity reasons. Since those REST calls can take a while, we will display a progress dialog for the duration of those requests.

Let's pause briefly and visualize the code using the old callback approach. We could safely block in the UI thread for the duration of those REST calls, but then we couldn't display the progress dialog: it's the same situation as with the blocking confirmation dialog above. Hence we need to run the REST request in the background, end the UI request code, and resume when the REST request is done. Lots of REST clients allow async callbacks - for example Retrofit2 (which is usually my framework of choice). Unfortunately Retrofit2's dark secret is that it blocks a thread until the data is retrieved. With NIO2 we can do better, hence we'll use Async Http Client. In pseudo-code, the algorithm would look like this:

get("http://localhost:8080/rest/tickets/available").andThen({ r ->
  showConfirmDialog({ response ->
    if (response) {
      post("http://localhost:8080/rest/tickets/purchase").andThen({ r ->
        notifyUser("The tickets has been purchased");
    } else {
      notifyUser("Purchase canceled");

Behold, the Pyramid of Doom. And that's just the happy flow - now we need to add error handling on every call, and also make the whole thing cancelable - if the user cancels and navigates to another View, we need to clean up those dialogs. Brrr.

Luckily we have coroutines, and hence we can rewrite purchaseTicket() like this:

private fun purchaseTicket(): Job = launch(vaadin()) {
    // query the server for the number of available tickets. Wrap the long-running REST call in a nice progress dialog.
    val availableTickets: Int = withProgressDialog("Checking Available Tickets, Please Wait") {
        RestClient.getAvailableTickets()   // assumed name of the demo's REST client call
    }

    if (availableTickets <= 0) {
        Notification.show("No tickets available")
    } else {

        // there seem to be tickets available. Ask the user for confirmation.
        if (confirmDialog("There are $availableTickets available tickets. Would you like to purchase one?")) {

            // let's go ahead and purchase a ticket. Wrap the long-running REST call in a nice progress dialog.
            withProgressDialog("Purchasing") { RestClient.buyTicket() }

            // show an info box that the purchase has been completed
            confirmationInfoBox("The ticket has been purchased, thank you!")
        } else {

            // demonstrates the proper exception handling.
            throw RuntimeException("Unimplemented ;)")
        }
    }
}

It looks sequential, easy to read and easy to understand, yet it has the full power of error handling and cancelation handling:

  • Error handling - we simply throw an exception as we would in sequential code, and the Kotlin-produced state machine will make sure it is propagated properly through try{}catch blocks and out of the coroutine. There, our Vaadin integration layer will catch the exception and call the standard Vaadin error handler.
  • Cancelation - the confirmDialog() function simply tells the continuation to close the dialog when the coroutine completes (whether successfully, exceptionally, or by being canceled). The withProgressDialog() is even simpler - it uses try{}finally to hide the dialog; the coroutine will run the finally block even when canceled (see the sketch right after this list).
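
To illustrate the try{}finally part, here is a minimal sketch of such a wrapper, assuming a hypothetical ProgressDialog component; the real withProgressDialog() lives in the demo project:

// Minimal sketch: ProgressDialog is a hypothetical modal progress component.
suspend fun <T> withProgressDialog(message: String, block: suspend () -> T): T {
    val dialog = ProgressDialog(message)
    dialog.show()
    try {
        return block()
    } finally {
        // runs on normal completion, on exception and on cancelation alike,
        // so the dialog can never be left dangling
        dialog.close()
    }
}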

Coroutines allowed us to do amazing things here: we were able to use the sequential programming approach, which produces readable, debuggable code that is at the same time highly asynchronous and highly performant. I personally love that I am not forced to use Rx/Promise's thenCompose, thenAccept and all of that functional programming crap - we can simply use the built-in language constructs such as if and try{}catch again.

The whole example project is on GitHub: Vaadin Coroutines Demo - visit the page for instructions on how to run the project yourself and open it in your IDE. It's very simple.

You can try out the cancelation for yourself: normally it would be hooked to the View's detach(), which is called when navigating away, but it's easier to play around when you have a button for that (the "Cancel Purchase" button in the example app).
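
A minimal sketch of the detach-based hookup, assuming the purchaseTicket() function above (in current kotlinx.coroutines the Job type lives in the kotlinx.coroutines package):

import kotlinx.coroutines.Job

private var purchaseJob: Job? = null

// call this from the "Buy" button click listener
fun onBuyClicked() {
    purchaseJob = purchaseTicket()
}

// hook this into the View's detach listener so that navigating away cancels the purchase flow
fun onDetach() {
    purchaseJob?.cancel()
}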

Further Ideas

Your application may need to fetch its data from a relational database instead of making REST calls. JDBC drivers only allow a blocking access mode - which is fine, since it's fine to block in the JVM with Vaadin. Still, there are fully async database drivers (for example, drivers for MySQL and PostgreSQL on GitHub), so feel free to go ahead and use them with continuations.

Unfortunately, we can't use the async approach with Vaadin Grid's DataProvider. This is because the DataProvider API is not asynchronous: the nature of the API requires us to compute the data and return it within the request itself. However, that's completely fine, since it's OK to block in the JVM.
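
For example, a blocking DataProvider built via Vaadin's DataProvider.fromCallbacks() is perfectly idiomatic; the sketch below assumes a hypothetical blocking PersonService standing in for your DAO or REST client:

import com.vaadin.flow.data.provider.DataProvider

data class Person(val id: Long, val name: String)

// hypothetical blocking backend
object PersonService {
    fun fetch(offset: Int, limit: Int): List<Person> = TODO("blocking fetch")
    fun count(): Int = TODO("blocking count")
}

// the callbacks simply compute the data synchronously within the request - fine on the JVM
val peopleProvider = DataProvider.fromCallbacks<Person, Void>(
    { query -> PersonService.fetch(query.offset, query.limit).stream() },
    { query -> PersonService.count() }
)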

Of course you can implement even longer-running processes. Perhaps it might be possible to save the state of a suspended coroutine to, say, a database. That would allow you to develop and run long-running business processes in your app in a really simple way, with the full power of the Kotlin programming language. The possibilities are interesting.

4 months ago

Let me start by saying that the threading model (or, rather, the single-thread model) provides the simplest programming paradigm there is - imperative, synchronous, sequential programming. It is easy to see what the code is doing, it is easy to debug, and it is easy to reason about (unless you throw in dynamic proxies, injected interfaces with runtime implementation lookup strategies and other "goodies" of DI) - that's why having such a paradigm is generally desirable. Of course, that only holds while you have one thread. Adding threads with no defined communication restrictions overthrows this simplicity and creates one of the most complex programming paradigms there is.

The async hype around Rx/Promises/Actors/whatever is running high right now. The node.js folks preach the new, event-based model as superior to the threading model. That marketing makes sense for them, since they only have one thread (or, rather, one event queue) and thus can't do otherwise. However, compared to the simplicity of the single-thread model, any code using these async techniques will be an order of magnitude harder to understand and debug:

  • callback-based programming introduces Callback Hell (the Pyramid of Doom); implementing error handling on top of that turns the pyramid into an unmaintainable mess.
  • Rx/Promises introduce combinators such as thenCompose, thenAccept and others. While native to functional programming, they are quite foreign to the imperative style and introduce a completely new programming vocabulary instead of reusing language constructs such as if, try{}catch etc. That vocabulary also tends to differ from library to library, and the API-to-code ratio is quite high, which creates a lot of verbosity (Java programmers could tell stories ;)
  • Misusing Actors to build a complex app tends to create a mesh of messages which is impossible to debug or reason about; any bug is hard to spot, trace back to its origin, and fix.

True, these techniques do offer a programming paradigm which is easier to use than "here, have multiple threads and do whatever you like" - that's a ticket to hell (unless you stick to a 1-master-n-worker-threads paradigm). But they still produce code far more complex than the single-thread model does.

Simplicity Is Discarded Too Easily

With Vaadin, you only have one thread per session (that is, per user). Since sessions are generally independent and do not tend to communicate, with Vaadin you effectively have one thread in which you do everything synchronously - update the UI, fetch stuff from REST or the database synchronously, update the UI some more and return. That is the simplest programming paradigm you can get. Moving to Rx/Promises/Actors for whatever reason will make your program more complex by an order of magnitude.

Unfortunately, simplicity is typically discarded too easily, since discarding it only incurs indirect costs: development is slower because the code is far more complex and harder to modify and extend, and there are more bugs, so maintenance is slower too. When simplicity is discarded, the result is typically a big ball of mud that nobody dares to refactor.

You don't want to go there. Make it work correctly; only then make it work fast. Don't include Rx in your project solely as an excuse to learn something new.

Debunking Arguments Against Threads

Typically, the argument against threads is performance: threads are too heavyweight, and the JVM can only create some 3000 native threads anyway before something horrible happens, hence we must pursue alternatives. But that is simply not true, at least for a typical Vaadin app.

A vanilla OpenJDK on my Ubuntu Linux can create 12000 threads just fine before failing to create an additional thread with an OutOfMemoryError. But that's only because of a systemd hard limit on the number of user processes, which is around 12288; lifting that limit allows the JVM to create 33000 threads (in 4 seconds, so quite fast) before failing. To go even further we would need to increase the available heap size, decrease the stack size, fiddle with Linux ulimits and so on. It's doable, and it is worth the preserved simplicity. And we don't even need that many threads after all.
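
If you want to verify the numbers on your own machine, a tiny experiment along these lines will do; the results depend heavily on your ulimits, systemd limits, heap and -Xss settings:

// keep spawning parked daemon threads until the JVM refuses to create more
fun main() {
    var count = 0
    try {
        while (true) {
            Thread {
                Thread.sleep(Long.MAX_VALUE)   // park the thread forever
            }.apply {
                isDaemon = true
                start()
            }
            count++
        }
    } catch (e: OutOfMemoryError) {
        println("Created $count threads before failing with: $e")
    }
}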

According to the Vaadin scalability study, Vaadin supports up to 10000 concurrent users on one machine. That's 10000 native threads, which the JVM and the OS can handle perfectly well. Typically your site will get nowhere near that limit. You're not Google or Facebook, and thus you don't need the complex techniques those guys have to employ. Start small and simple; only when the site grows, the business catches up, the idea is validated and your budget increases should you start fiddling with performance.

Regarding native thread performance: Ruby had green threads but moved to native threads; Python has had native threads from the start. To be honest, both still have a GIL, so they can use only one CPU core at a time, but the point is that those languages considered OS threads lightweight enough to adopt them.

As time passed and the JVM got more optimized:

  • Intra-process context switching has become much cheaper
  • locks on the JVM are now CAS-based and thus very cheap
  • the JVM can ramp up 30000 threads in 3-5 seconds on my laptop without any issues
  • 10000 threads with the default stack size of 512k will consume 5GB of memory; that can be handled by the OS virtual memory subsystem - if I don't use Spring/JavaEE/other huge stack generators, the stacks are typically nearly empty and can be left unallocated or swapped out
  • the majority of threads typically sleep or are parked, and thus are not eligible for preemptive context switching and generate no observable CPU overhead
  • the overhead is mostly the memory upkeep for the stacks and some thread metadata.

Therefore, using the threaded imperative model with Vaadin is completely fine and will create no obvious bottlenecks.


Quoting Donald Knuth:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

I suggest you start simple, and only employ more complex techniques when it's absolutely clear that simple won't suffice anymore. Also, if you're not a functional-paradigm person, don't hop on the Rx/Promises bandwagon - use Kotlin's coroutines, since they produce code that's vastly simpler.

4 months ago

Nicolas Frankel put some of my feelings into an excellent article, Is Object-Oriented Programming compatible with an enterprise context?. During the development of Aedict Online (still a Java-based JavaEE/JPA app back then; now it's Vaadin-on-Kotlin-based), whenever I needed to fetch rows via JPA I kept looking for the necessary finders on the JPA entity itself. Wouldn't it be great if, upon user login, we could find the user and verify their password using the following code?

fun login(username: String, password: String): User? {
  val user = User.findByUsername(username) ?: return null
  if (!user.verifyPassword(password)) return null
  return user
}

Unfortunately, I was not able to implement the User.findByUsername static method properly in JavaEE. I was forced by JavaEE to move such finders to an enterprise bean, because that was the only way to have transactions and an EntityManager. All my attempts to create such finders as static methods on the JPA entity failed:

  • The DI engine makes it impossible to look up beans from a static method.
  • Creating a singleton bean with everything injected, and providing those things from a static context, is a pattern discouraged in the DI world.

Eventually I realized that it was DI itself that directly forbade me to group the code in a logical way: finders (which are in fact factory methods!) together with the entity itself. I believe that keeping related functionality together is one of the core concepts of pragmatic OOP, and I felt that DI was violating it. As time passed I got so frustrated that I could no longer see the merits of the DI technology itself. After realizing this, I hesitantly rejected the DI approach and tried to prototype a different one.

I was learning Kotlin back then and had just learned about global functions taking blocks. What if such a function could handle the transaction for me, outside of the EJB context? Such heresy, working with a database just like that! The idea is described here, but the gist is that it's dead easy to write a function which runs a given block in a transaction, creates an EntityManager and provides it to the block: the db method.
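
A minimal sketch of such a db function, assuming plain JPA and a persistence unit named "sample"; the real implementation described in the linked article adds connection pooling and nicer error handling:

import javax.persistence.EntityManager
import javax.persistence.Persistence

object DB {
    val emf = Persistence.createEntityManagerFactory("sample")
}

class DbContext(val em: EntityManager)

// runs the block in a transaction and exposes `em` to it as a receiver
fun <T> db(block: DbContext.() -> T): T {
    val em = DB.emf.createEntityManager()
    try {
        em.transaction.begin()
        val result = DbContext(em).block()
        em.transaction.commit()
        return result
    } catch (e: Exception) {
        if (em.transaction.isActive) em.transaction.rollback()
        throw e
    } finally {
        em.close()
    }
}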

With that arsenal at hand, I was able to simply write this code:

class User {
  companion object {
    fun findByUsername(username: String): User? = db {
      // JPQL has no "select *"; use a typed query and return null when there is no such user
      em.createQuery("select u from User u where u.username = :username", User::class.java)
        .setParameter("username", username)
        .resultList.firstOrNull()
    }
  }
}

It was that easy. I added HikariCP connection pooling on top of that and I felt ... betrayed. The Dependency Injection propaganda crumbled and I saw DI standing naked: a monstrosity of dynamic proxies and enormous stack traces, bringing no value while obfuscating things, making everything very complex and forcing me to split code in a way that I perceived as unnatural (and anti-OOP). By moving away from DI I was expecting a great loss of functionality; I expected that I would have to tie things together painfully. Everybody uses DI, so it must be right, right?

Instead, the DI-less code worked just like that - there was no need to inject a lookup service into my JPA entity as Nicolas had to do, and I did not have to resort to aspect weaving or whatever clever technique my tired brain refused to comprehend. Things were simple again. I've embraced the essence of simplicity. Nice poem. Back to earth, Apollo.

I started to use static finders and have loved them ever since. Later on I created a simple Dao which allowed me to define the User class simply as follows:

class User : Entity<Long> {
  companion object : Dao<User>
}

And the Dao brings in useful static methods such as findAll(), count() etc., while the Entity brings in instance methods such as save() and delete(), which allowed me to write code such as this:

val p = Person(name = "Albedo", age = 130)
expect(p) { Person.findById(p.id!!) }
expect(1) { Person.count() }
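
To give an idea of how the companion object trick can work, here is a minimal sketch of such finders written as extension functions on Dao, building on the db {} helper sketched above; it is JPA-based for illustration only, whereas the real library ended up Sql2o-based (see below):

// the companion object of each entity implements this marker interface
interface Dao<T : Any>

// "static" finders become reified extension functions on the companion object
inline fun <reified T : Any> Dao<T>.findAll(): List<T> = db {
    em.createQuery("select e from ${T::class.simpleName} e", T::class.java).resultList
}

inline fun <reified T : Any> Dao<T>.findById(id: Any): T? = db {
    em.find(T::class.java, id)
}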

Ultimately I ditched the JPA crap and embraced Sql2o with CRUD additions. You can find this approach used in the Beverage Buddy Vaadin 10 Sample App, along with other goodies, such as:

grid.dataSource = Review.dataSource

By throwing DI away, I no longer had to run a container with the Spring Maven plugin or fiddle with Arquillian, since I could just run the code as it was, from simple JUnit tests: Server-side Web testing.

But that's story for another day.