9 months ago

It is possible to use PEM-style certificates with the Tomcat Docker image, without any need to first store them in a Java keystore. This is excellent: not only is it easier to generate a self-signed certificate with the openssl command, the same approach also works with certificates produced by Let's Encrypt.

Let's first see how to use self-signed keys with the Tomcat 9 Docker image. First we generate the self-signed certificate:

$ openssl req -x509 -newkey rsa:4096 -keyout localhost-rsa-key.pem -out localhost-rsa-cert.pem -days 36500

Specify "changeit" as a password (or any other password of your chosing); the Common Name/FQDN is your domain, say, testing.com.

Place the generated localhost-rsa-*.pem files into an ssl/ folder. Now create the following docker-compose.yml file:

version: '2'
services:
  web:
    image: tomcat:9.0.5-jre8
    ports:
      - "8080:8080"
      - "8443:8443"
    volumes:
      - ./ssl:/usr/local/tomcat/ssl
      - ./server.xml:/usr/local/tomcat/conf/server.xml

Place the following server.xml file next to the docker-compose.yml file:

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!-- Note:  A "Server" is not itself a "Container", so you may not
     define subcomponents such as "Valves" at this level.
     Documentation at /docs/config/server.html
 -->
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <!-- Security listener. Documentation at /docs/config/listeners.html
  <Listener className="org.apache.catalina.security.SecurityListener" />
  -->
  <!--APR library loader. Documentation at /docs/apr.html -->
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <!-- Prevent memory leaks due to use of particular java/javax APIs-->
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <!-- Global JNDI resources
       Documentation at /docs/jndi-resources-howto.html
  -->
  <GlobalNamingResources>
    <!-- Editable user database that can also be used by
         UserDatabaseRealm to authenticate users
    -->
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <!-- A "Service" is a collection of one or more "Connectors" that share
       a single "Container" Note:  A "Service" is not itself a "Container",
       so you may not define subcomponents such as "Valves" at this level.
       Documentation at /docs/config/service.html
   -->
  <Service name="Catalina">

    <!--The connectors can use a shared executor, you can define one or more named thread pools-->
    <!--
    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
        maxThreads="150" minSpareThreads="4"/>
    -->


    <!-- A "Connector" represents an endpoint by which requests are received
         and responses are returned. Documentation at :
         Java HTTP Connector: /docs/config/http.html
         Java AJP  Connector: /docs/config/ajp.html
         APR (HTTP/AJP) Connector: /docs/apr.html
         Define a non-SSL/TLS HTTP/1.1 Connector on port 8080
    -->
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <!-- A "Connector" using the shared thread pool-->
    <!--
    <Connector executor="tomcatThreadPool"
               port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    -->
    <!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443
         This connector uses the NIO implementation. The default
         SSLImplementation will depend on the presence of the APR/native
         library and the useOpenSSL attribute of the
         AprLifecycleListener.
         Either JSSE or OpenSSL style configuration may be used regardless of
         the SSLImplementation selected. JSSE style configuration is used below.
    -->
    <!--
    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="150" SSLEnabled="true">
        <SSLHostConfig>
            <Certificate certificateKeystoreFile="conf/localhost-rsa.jks"
                         type="RSA" />
        </SSLHostConfig>
    </Connector>
    -->
    <!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443 with HTTP/2
         This connector uses the APR/native implementation which always uses
         OpenSSL for TLS.
         Either JSSE or OpenSSL style configuration may be used. OpenSSL style
         configuration is used below.
    -->
    <Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
               maxThreads="150" SSLEnabled="true" >
        <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
        <SSLHostConfig>
            <Certificate certificateKeyFile="ssl/localhost-rsa-key.pem"
                         certificateFile="ssl/localhost-rsa-cert.pem"
certificateKeyPassword="changeit"
                         type="RSA" />
        </SSLHostConfig>
    </Connector>

    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />


    <!-- An Engine represents the entry point (within Catalina) that processes
         every request.  The Engine implementation for Tomcat stand alone
         analyzes the HTTP headers included with the request, and passes them
         on to the appropriate Host (virtual host).
         Documentation at /docs/config/engine.html -->

    <!-- You should set jvmRoute to support load-balancing via AJP ie :
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
    -->
    <Engine name="Catalina" defaultHost="localhost">

      <!--For clustering, please take a look at documentation at:
          /docs/cluster-howto.html  (simple how to)
          /docs/config/cluster.html (reference documentation) -->
      <!--
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
      -->

      <!-- Use the LockOutRealm to prevent attempts to guess user passwords
           via a brute-force attack -->
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <!-- This Realm uses the UserDatabase configured in the global JNDI
             resources under the key "UserDatabase".  Any edits
             that are performed against this UserDatabase are immediately
             available for use by the Realm.  -->
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>

      <Host name="localhost"  appBase="webapps"
            unpackWARs="true" autoDeploy="true">

        <!-- SingleSignOn valve, share authentication between web applications
             Documentation at: /docs/config/valve.html -->
        <!--
        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
        -->

        <!-- Access log processes all example.
             Documentation at: /docs/config/valve.html
             Note: The pattern used is equivalent to using pattern="common" -->
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />

      </Host>
    </Engine>
  </Service>
</Server>

The directory structure should look like this:

.
├── docker-compose.yml
├── server.xml
└── ssl
    ├── localhost-rsa-cert.pem
    └── localhost-rsa-key.pem

That's it - just run docker-compose up in that folder and Tomcat should boot up. Now you can visit https://localhost:8443; the browser should warn you about the self-signed certificate - just store an exception for the site and you should see the Tomcat welcome page served over https.

 
10 months ago

I have to admit upfront that my brain is somehow incompatible with JPA. Every time I try to use JPA, I somehow wind up fighting weird issues, be it LazyInitializationException with Hibernate, or weird sequence generator preallocation issues with EclipseLink. I don't know what to use merge() for; I lack the means to reattach a bean and overwrite its values from the database. I don't want to bore you to death with all of the nitty-gritty details; all the reasons are summed up in Issue #3 Remove JPA. I think that the number one reason why JPA is incompatible with me is that

JPA is built around the idea that the Java classes, not the database, are the source of truth.

My way of thinking is the other way around: I think that the database (and not the Java classes) is the source of truth. This way of thinking is however incompatible with the very core of the JPA technology, which makes it impossible for me to use JPA.

The Source Of Truth

To allow Java beans to be the source of truth, JPA defines certain techniques and requires the JPA provider (e.g. Hibernate) to sync Java beans with the database behind the scenes. That is unfortunately a non-trivial task which cannot be done automatically in certain cases, and that's why the JPA abstraction suffers from the following issues:

  • Hibernate must track changes done to the bean, in order to sync them to the underlying database. That is done by intercepting all calls to setters, which can only be done by runtime class enhancement (for every entity Hibernate creates a class which extends that entity, overrides all setters and injects a Hibernate core class which is notified of changes). Such a class can hardly be called a POJO, since passing it around will start throwing LazyInitializationException like hell. That is black magic, pure and evil.
  • It intentionally lacks a method to reattach a bean and overwrite its contents with the database contents - that would go against the idea of having the Java class as the source of truth.
  • By abstracting joins into Collections, it causes programmers to unintentionally write code which suffers from the N+1 loading issue.

Since I'm very bullheaded, instead of trying to attune my mind towards JPA I explored a different approach: let's step back and make the database the source of truth again.

Database As The Source Of Truth

Having the database as the source of truth means that the data in your bean may be obsolete the moment it leaves the database (since somebody else might already be modifying that row). Optimistic/pessimistic locking is the way around that; to have the most recent data you will need to re-query the database.

The bean can now ditch all of its sync capabilities and become just a snapshot: a sample of the state of the database row at some particular time. Changes to the bean properties will no longer be propagated automatically to the database, but need to be done explicitly by the developer. The developer is once again in charge.

There - I have solved the whole detached/managed/lazy initialization exception nonsense for you :-)

The beans are once again POJOs with getters and setters, simply populated from the database by reading the data from the JDBC ResultSet and calling the bean's setters.

That can be achieved by simple reflection. Moreover, no runtime enhancement is needed since the bean never tracks changes; to save changes we need to issue an INSERT or UPDATE statement explicitly. A very important point: since there is no runtime enhancement, the bean really is a POJO and can actually be made serializable and passed around the app freely; you don't need DTOs anymore.
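
To illustrate the idea, here is a minimal sketch of such reflection-based population. It is purely illustrative - Sql2o, introduced below, does this for real and handles type conversion, naming strategies and much more:

import java.sql.ResultSet

// Populate a POJO from the current ResultSet row by matching column labels to field names.
fun <T : Any> ResultSet.toBean(clazz: Class<T>): T {
    val bean = clazz.getDeclaredConstructor().newInstance()
    for (i in 1..metaData.columnCount) {
        val column = metaData.getColumnLabel(i)
        // find a field with the same name as the column, ignoring case
        val field = clazz.declaredFields.firstOrNull { it.name.equals(column, ignoreCase = true) } ?: continue
        field.isAccessible = true
        field.set(bean, getObject(i))
    }
    return bean
}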

The Vaadin-on-Kotlin Persistency Layer: vok-db

There is a very handy library called Sql2o which maps JDBC ResultSets to Java POJOs. This idea forms the very core of vok-db. To explain the data retrieval capabilities of the vok-db library, let's begin by implementing a simple function which finds a bean by its ID.

vok-db: Data Retrieval From Scratch

The findById() function will simply issue a select SQL statement which selects a single row, then passes it to Sql2o to populate the bean with the contents of that row:

fun <T : Any> Connection.findById(clazz: Class<T>, id: Any): T? =
        createQuery("select * from ${clazz.databaseTableName} where id = :id")
                .addParameter("id", id)
                .executeAndFetchFirst(clazz)

The function is rather simple: it is actually nothing more than a Kotlin reimplementation of the very first "Execute, fetch and map" example on the Sql2o home page. The Sql2o API does nothing more than create a JDBC PreparedStatement, run it, create instances of clazz using the zero-arg constructor, populate the beans with data and return the first one. Using this approach it is easy to implement more finders, such as findAll(clazz: Class<T>) and findBySQL(clazz: Class<T>, where: String); see the sketch below. However, let's focus on findById() for now.
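
Those additional finders could look roughly as follows (a sketch only; the actual vok-db signatures may differ):

// fetch all rows of the table backing the given class
fun <T : Any> Connection.findAll(clazz: Class<T>): List<T> =
        createQuery("select * from ${clazz.databaseTableName}")
                .executeAndFetch(clazz)

// fetch all rows matching an arbitrary WHERE clause
fun <T : Any> Connection.findBySQL(clazz: Class<T>, where: String): List<T> =
        createQuery("select * from ${clazz.databaseTableName} where $where")
                .executeAndFetch(clazz)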

To obtain Sql2o's Connection (which is a wrapper of JDBC's Connection), we simply poll the HikariCP connection pool and use the db{} method to start and commit the transaction for us. This allows us to retrieve beans from the database easily, simply by calling the following statement from anywhere in your code, be it a background thread or a Vaadin button click listener:

val person: Person = db { connection.findById(Person::class.java, 25L) }
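
A rough sketch of what such a db{} function could look like, assuming a HikariCP DataSource wrapped in Sql2o (the DbContext name is mine; the real vok-db implementation adds proper error handling, nested-transaction support etc.):

import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource
import org.sql2o.Connection
import org.sql2o.Sql2o

val sql2o = Sql2o(HikariDataSource(HikariConfig().apply { jdbcUrl = "jdbc:h2:mem:test" }))

// the receiver of the db{} block; exposes the Sql2o connection
class DbContext(val connection: Connection) {
    val con: Connection get() = connection   // short alias used in some snippets below
}

fun <R> db(block: DbContext.() -> R): R {
    val connection = sql2o.beginTransaction()
    try {
        val result = DbContext(connection).block()
        connection.commit()    // commits and closes the underlying JDBC connection
        return result
    } catch (t: Throwable) {
        connection.rollback()  // rolls back and closes
        throw t
    }
}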

This is nice, but we can do better. In the dependency injection world, for every bean there is a Repository or DAO class which defines such finder methods and fetches instances of that particular bean. However, we have no DI and therefore we do not need a separate class injected somewhere, just for the purpose of accessing the database. We are free to attach those finders right where they belong: as factory methods of the Person class:

class Person {
  companion object {
    fun findById(id: Long): Person? = db { connection.findById(Person::class.java, id) }
  }
}

This allows us to write the finder code even more concisely:

val person = Person.findById(25L)

Yet, adding those pesky findById() (and all other finder methods such as findAll()) manually to every bean is quite a tedious process. Is there perhaps some easier, more automated way to add those methods to every bean? Yes there is, by means of mixins and companion objects.

As a first attempt, we will define a Dao interface as follows:

interface Dao<T> {
  fun findById(clazz: Class<T>, id: Any): T? = db { connection.findById(clazz, id) }
}

Remember that the companion object is just a regular object, and as such it can implement interfaces?

class Person {
  companion object : Dao<Person>
}

This allows us to call the finder method as follows:

val person = Person.findById(Person::class.java, 25L)

Not bad, but the Person::class.java is somewhat redundant. Is there a way to force Kotlin to fill it in for us? Yes, by means of extension methods and reified type parameters:

interface Dao<T> {}
inline fun <ID: Any, reified T : Entity<ID>> Dao<T>.findById(id: ID): T? = db { con.findById(T::class.java, id) }

Now the finder code is perfect:

val person = Person.findById(25L)

vok-db follows this idea and provides the Dao interface for you, along with several utility methods for your convenience. By means of extension methods, you can define your own global finder methods, or you can simply attach finder methods to your beans as you see fit; see the sketch below.
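
For example, a custom finder attached to the Person companion object could look like this (a hypothetical sketch; findByName is not part of vok-db, and it reuses the db{} helper and the Dao interface from above):

// a project-specific finder defined as an extension on Dao<Person>
fun Dao<Person>.findByName(name: String): List<Person> = db {
    connection.createQuery("select * from Person where name = :name")
            .addParameter("name", name)
            .executeAndFetch(Person::class.java)
}

// usage - reads just like a static finder:
val dukes = Person.findByName("Duke Leto Atreides")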

You can even define an extension property on Dao which produces Vaadin's DataProvider, fetching all instances of that particular bean. Luckily, you don't have to do that since vok-db already provides one for you.

Next up, data modification.

vok-db: Data Modification

Unfortunately Sql2o doesn't support persisting data, so we will have to do that ourselves. Luckily, it's very simple: all we need to do is to generate a proper insert or update SQL statement:

interface Entity<ID: Any> : Serializable {
    ...
    fun save() {
        db {
            if (id == null) {
                // not yet in the database, insert
                val fields = meta.persistedFieldDbNames - meta.idDbname
                con.createQuery("insert into ${meta.databaseTableName} (${fields.joinToString()}) values (${fields.map { ":$it" }.joinToString()})")
                        .bind(this@Entity)
                        .executeUpdate()
                id = con.key as ID
            } else {
                val fields = meta.persistedFieldDbNames - meta.idDbname
                con.createQuery("update ${meta.databaseTableName} set ${fields.map { "$it = :$it" }.joinToString()} where ${meta.idDbname} = :${meta.idDbname}")
                        .bind(this@Entity)
                        .executeUpdate()
            }
        }
    }
    ...
}
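
The meta object used above is not shown here; roughly, it just uses reflection to compute the table name and the list of persisted columns. A very simplified sketch (the EntityMeta name and shape are mine; the real vok-db metadata handles annotations, naming conventions and ID detection):

class EntityMeta(val entityClass: Class<*>) {
    // table name: here simply the class name; real code honors table-name annotations
    val databaseTableName: String get() = entityClass.simpleName
    // name of the ID column
    val idDbname: String get() = "id"
    // names of all persisted columns, derived from the declared fields
    val persistedFieldDbNames: Set<String> get() = entityClass.declaredFields.map { it.name }.toSet()
}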

Simply by having your beans implement the Entity interface, you will gain the ability to save and/or create database rows:

data class Person(override var id: Long? = null, var name: String): Entity<Long> {
  companion object : Dao<Person>
}

val person = Person.findById(25L)!!
person.name = "Duke Leto Atreides"
person.save()

Pure simplicity.

 
10 months ago

I'm sure that Thomas Kriebernegg, the CEO of App Radar, will be happy to persuade you that you can make a living by selling Android apps. He needs to - his way of making a living is to sell services to Android devs, so he'd better make sure there are lots of Android devs; discouraging devs from becoming Android devs would be financial suicide. But why not try your luck? After all, Flappy Bird was making $50,000 a day in ads; Clash of Clans makes millions of dollars. Let me tell you a fucking secret: you are not going to be the author of Flappy Bird or Clash of Clans, and let me tell you why.

First, creating Flappy Bird requires no technical skills; my grandmother could do that (if she knew what a smartphone is, and if she were able to copy-paste from StackOverflow). It's the idea that counts; and there are but a few good ideas. Maybe a hundred ideas per year? And how many Android developers (or developer wannabes) are there, millions? You will have better odds betting in a lottery. Far easier than fighting with the retarded Android SDK, too. If you have a brilliant idea, someone else already had it and there's already an app for that. The probability of you being the first one is very low. I'd say that for every Flappy Bird dev there are 10,000 devs that earn a measly $10 per day tops. Oops.

Second, you are not an asshole who would cherish the thought of having a sustainable business model based on selling virtual axes at $5 apiece to addicted kids, making them shell out $500 a month and destroying their family income. A.K.A. free-to-play.

Let's assume that you want to help people. You can try to create a decent, helpful app and sell it for, say, $2. It will not make you rich, but you hope it will make you some decent money, right? Now that may be true for Apple, since those users are willing to shell out big bucks for Steve Jobs's fart; but it's certainly not true for Android, since that's the other extreme - a shitload of cheapskates who have just spent a whopping $100 for a fucking phone, so everything else on that phone had better be free, right?

Now that's just mean. There are lots of very nice users out there and I have met lots of them. They are however easily overshadowed by that 1% of guys who think that spending $2 on your app gives them the right to command you and swarm you with feature requests, expecting that you will fix them for free of course, since I've already paid for the app and it's not doing what I want. I've seen a French guy complaining how a highly specialized catalogue worth thousands of hours of work costs a whopping $20. That's like the price of a fucking bagel and a fucking latte right below the fucking Eiffel tower, oh so very expensive, mon dieu!

Now let's amuse ourselves by pretending that you will manage to get five hundred new paying users (that's not a recurring payment, that's a one-off payment) per month (you won't). That's $1000 a month, right (it's not, but for the sake of a good laugh let's pretend it is)? Wrong. Google will take 30% off that, and your country/EU/whatever will take an additional 20% VAT off that, thank you very much. You will receive ~$550 a month tops. And then the tax office steps into the game; if you intend to play an honest citizen then you need to say goodbye to another $200 for taxes, unemployment insurance, health insurance and pension funds. You will be left with $350 which you can wipe your ass with. If you are a student in Somalia that may be enough; if you have a mortgage and two screaming kids, believe me, $350 is laughable.

You will never have a stream of thousands of new paying users; hundreds tops. By trying to sell apps on Android, you will most probably end up being underpaid. But! You will also meet lots of really nice people who will let you know how you actually helped them with that little app of yours. That feeling is highly rewarding on its own. You will learn how to talk to customers (because those guys are actually your customers) and you will try to make them pay more and fail (because nobody wants to pay for software). You will learn a lot, and after you do, you will have to abandon that app of yours which you've nursed like your own child, and move someplace else where you can put your experience to use and earn decent money instead of peanuts from Google Play.

Possible options to earn some money

  • You can charge for your app upfront, but nobody will buy it since it's new, untested and cannot be evaluated upfront.
  • You can add in-app payments, but nobody will purchase those unless they're forced to.
  • The income from in-app payments will be laughable, yet there will be fucking retards who will call you a greedy bastard.

You can beg for money on Patreon but that's a) undignified, b) ineffective, since nobody will send you money just like that unless they must, or it appeals to them. You will only receive money here from people whose lives you really made simpler with the app, and who are just nice enough to help you.

After a year of pledging, your Patreon support will be somewhere around $100, and it will stay there. And stopping the development of your app (because it doesn't pay off) feels like betraying those nice chaps supporting you on Patreon.

You can try to charge for implementing feature requests (bountysource.com), but to this day I haven't actually seen anybody pay for a feature request. People will just open those requests and then hope you will get bored/excited enough to fix that bug for free. Even if they are willing to pay, they will pay a measly $20 for something that takes 4 hours to fix, test and release, and whose real cost is somewhere around $400.

Trying to get rich by selling useful apps on Android won't work, and you will be abused by people who believe they have the right to make you work for free so that they can have a $40 dinner. You will do that in your free time, since you will have to have a fucking job to actually earn some money, unless you are unemployed or a student. You will have zero free time.

You will also meet countless really nice people whom you will make genuinely happy (if your app is useful). That is actually a reward on its own, and a really good feeling. But it will not outweigh the total loss of free time for a couple of bucks.

The best way to play this game is not to play it. If you have a great idea, create a game and flood all social networks with it, then sell virtual crap and/or ads. That's about the only way to get rich on Google Play.

How to make money then?

Place the centre of your business elsewhere:

  • Dropbox is making profit by selling space on their cloud drive; the Android app is free to enable you to reach that space. That's why they offer the Dropbox API for free - it's enabling devs to create Dropbox-based apps which will then lure in more customers.
  • Uber is making profit by taking 10% off the charge; the app is just an enabler to have more customers.
  • Facebook is making profit ... well I have no idea how, but it's certainly not by selling the Facebook app.
  • Banks - they'll charge you $7 monthly and will offer you a banking app for free.

Make a profit elsewhere, then create a free Android app as an enablement, to lure customers towards your services.

The alternative is to feed off your fellow Android developers.

You are the product

When you develop for Android, you are the product.

Google tries to lure in as many Android developers as possible, since they will populate the Google Play store with apps (of dubious quality). However, the quality doesn't matter - all that matters is the count. Google can now say "Look how many apps we have! There's an app for everything". Having a shitload of apps on Google Play is critical - if you don't have them, your phones are worthless, as Microsoft learned the hard way.

Then there are tools that can be sold to Android developers: Crashlytics (which supplements the useless error reporting in Google Play), Firebase, ...

Then there are competing app stores which feed themselves by charging 30% off the app's price. More apps mean more sales, and that means more money.

Then there are the nice guys at App Radar who will help you make your app more visible. The dev - you - is their customer; more customers mean more money.

That's why nobody will tell you upfront how badly the Android API sucks, or how bad the fragmentation is, or that you won't make a living. That would repel potential Android devs, and that would mean fewer apps on Google Play, fewer customers for Firebase etc. Instead they will talk about fucking Flappy Bird and how that particular lottery ticket made its author rich.

You will not create another Flappy Bird. You will fall into the other category.

The Google Play content

Taking the above into account, Google Play apps generally fall into the following categories:

  1. Free apps to lure the user into the center of the business, which lies elsewhere
  2. Free apps made by students who are learning how to create Android apps and how to market them. These apps often have dubious quality and are abandoned by their creators at some point.
  3. Games which draw in the money by using an abominable practice similar to gambling
  4. Useful apps which are either:
    • abandoned, left to rot on the Play store,
    • downloaded in such numbers that it is profitable to develop them further (rare),
    • made in their spare time by an enthusiast who will abandon the app eventually since it doesn't pay off; rare and getting extinct.

That means that Google Play is full of games and rotten crap, with rare brilliant apps getting rarer. And that is the direct consequence of Android users not willing to pay for apps.

Summary

If you want to make money, go with 1) or 3). If you want to learn, go with 2). You may try to go with 4) on Google Play, but the probability of making a sustainable business out of that is somewhere around 0.01%. You may try to go with 4) on the Apple App Store, which yields a much higher probability of actually creating a sustainable business.

 
11 months ago

TeamCity is quite a nice continuous integration server from JetBrains. The setup is not that easy though, since you typically need to install the TeamCity server itself, and at least one agent which will perform the builds.

Luckily, with Docker, you can set up TeamCity very easily on any PC, without the fuss of installing the server and the agent. The following docker-compose.yml configuration file will direct Docker to download the appropriate images for both the TC server and the TC agent and run them. Create a folder named teamcity and drop the docker-compose.yml file in there:

version: '2'
services:
  teamcity-server:
    image: jetbrains/teamcity-server:2017.2.1
    environment:
     - TEAMCITY_SERVER_MEM_OPTS=-Xmx2g -XX:MaxPermSize=270m -XX:ReservedCodeCacheSize=350m
    ports:
     - "127.0.0.1:8111:8111"
    volumes:
     - ./server/datadir:/data/teamcity_server/datadir
     - ./server/logs:/opt/teamcity/logs
  teamcity-agent:
    image: jetbrains/teamcity-agent
    environment:
     - SERVER_URL=http://teamcity-server:8111
    volumes:
     - ./agent/conf:/data/teamcity_agent/conf

Then, create the following folders inside the teamcity folder; they will store the server's and the agent's persistent data:

server/datadir
server/logs
agent/conf

Now just start the containers with:

docker-compose up

Now you can point your browser to http://localhost:8111 and configure TC to use the embedded database; then just head to Agents and authorize the teamcity-agent so that the TC server trusts it and runs builds there.

Building Android APKs with TeamCity

Building an Android app requires the Android SDK to be preinstalled in the agent itself. That can be done quite easily - we will extend the teamcity-agent Docker image and add the SDK on top of it. In the teamcity folder above, create a folder named agent-android-docker and put the following Dockerfile there:

FROM jetbrains/teamcity-agent

ENV ANDROID_HOME "/sdk"
ENV SDK_HOME "/opt"

ENV PATH "$PATH:${ANDROID_HOME}/tools"
ENV DEBIAN_FRONTEND noninteractive

COPY sdk /sdk

RUN apt-get -qq update && \
    apt-get install -qqy --no-install-recommends \
      curl \
      html2text \
      libc6-i386 \
      lib32stdc++6 \
      lib32gcc1 \
      lib32ncurses5 \
      lib32z1 \
      unzip \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN mkdir -p $ANDROID_HOME/licenses/ \
  && echo "8933bad161af4178b1185d1a37fbf41ea5269c55" > $ANDROID_HOME/licenses/android-sdk-license \
  && echo "84831b9409646a918e30573bab4c9c91346d8abd" > $ANDROID_HOME/licenses/android-sdk-preview-license

ADD packages.txt /sdk
RUN mkdir -p /root/.android && \
  touch /root/.android/repositories.cfg && \
  ${ANDROID_HOME}/tools/bin/sdkmanager --update && \
  (while [ 1 ]; do sleep 5; echo y; done) | ${ANDROID_HOME}/tools/bin/sdkmanager --package_file=/sdk/packages.txt

RUN chmod -R a+rwx ${ANDROID_HOME}

You will also need a packages.txt file alongside the Dockerfile itself, with the following contents:

add-ons;addon-google_apis-google-24
build-tools;25.0.3
extras;android;m2repository
extras;google;google_play_services
extras;google;m2repository
extras;m2repository;com;android;support;constraint;constraint-layout;1.0.2
platform-tools
platforms;android-25

Then, download the Android SDK Tools for Linux (not the whole Studio, just the SDK Tools; the download links are at the bottom of the page). As of the writing of this article, the direct link to the SDK Tools is https://dl.google.com/android/repository/sdk-tools-linux-3859397.zip . Unpack the zip file to agent-android-docker/sdk, so that the sdk folder contains the tools folder.

To use this new agent image, we will simply direct Docker Compose to build it - just modify the docker-compose.yml file as follows:

version: '2'
services:
  teamcity-server:
    image: jetbrains/teamcity-server:2017.2.1
    environment:
     - TEAMCITY_SERVER_MEM_OPTS=-Xmx2g -XX:MaxPermSize=270m -XX:ReservedCodeCacheSize=350m
    ports:
     - "127.0.0.1:8111:8111"
    volumes:
     - ./server/datadir:/data/teamcity_server/datadir
     - ./server/logs:/opt/teamcity/logs
  teamcity-agent:
    build: agent-android-docker
    environment:
     - SERVER_URL=http://teamcity-server:8111
    volumes:
     - ./agent/conf:/data/teamcity_agent/conf

Done - just run docker-compose up.

 
about 1 year ago

When Kotlin 1.1 brought in coroutines, at first I wasn't impressed. I thought coroutines to be just slightly more lightweight threads: a lighter version of the OS threads we already have, perhaps easier on memory and a bit lighter on the CPU. Of course, this alone can actually make a ground-breaking difference: for example, Docker is just a light version of a VM, but what a difference it makes. It's just that I failed to see the consequences of this ground-breaking difference; also, performance-wise in Vaadin apps, there seems to be little need to switch away from threads (read Are Threads Obsolete With Vaadin Apps? for more info).

So, is there something else to Coroutines with respect to Vaadin? The answer is yes.

What are Coroutines?

A coroutine is a block of code that looks just like a regular block of code. The main difference is that you can pause (suspend) that block by calling a special function; later on you can direct the code to resume execution where it left off. This is achieved by compiler magic and thus no support from the JVM/OS is necessary. The informal Kotlin coroutines introduction offers a great in-depth explanation. Even better is Roman Elizarov's coroutine video - the clearest explanation of coroutines that I've seen (here is part 2: Deep Dive into Coroutines on JVM).

There are the following problems with Coroutines:

  • the compiler magic (a nasty state machine is produced underneath);
  • the block looks just like regular synchronous code, but suddenly, out of nowhere, a completely different thread may continue the execution (after the coroutine is resumed). That's not something we can easily attune our mental model to.
  • They're not easy to grok.

Just disadvantages, what the hell? I can do pausing with threads already, simply by calling wait(), or by using locks, semaphores etc.; so what's the big fuss? The important difference is that at the suspension point the coroutine detaches from the thread - it saves its call stack and the state of its variables and bails out, allowing the thread to do something else, or even terminate. When the coroutine is resumed later on, it can pick any thread to restore the stack+state in, and continue execution. Aha! Now this is something we can't do with threads.

The Blocking Dialogs

Coroutines suspend at suspending functions, and we can resume them as we see fit, say, from a button click handler. Consider the following Vaadin click handler:

deleteButton.addClickListener { e ->
  if (confirmDialog("Are you sure?")) {
    entity.delete()
  }
}

This can't be done with the regular approach, since we can't have blocking dialogs with Vaadin. The problem is as follows: assume that the confirmDialog() function blocks the Vaadin UI thread to wait for the user to click the "Yes" or "No" button. However, in order for the dialog to actually be drawn in the browser, the Vaadin UI thread must finish and produce a response for the browser. But the Vaadin UI thread can't finish, since it's blocked in the confirmDialog() function.

However, consider the following suspend function:

suspend fun confirmDialog(message: String = "Are you sure?"): Boolean {
    return suspendCancellableCoroutine { cont: CancellableContinuation<Boolean> ->
        val dlg = ConfirmDialog(message, { response -> cont.resume(response) })
        dlg.show()
    }
}
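
For completeness, a minimal sketch of what such a ConfirmDialog component could look like (Vaadin 8 API; the dialog in the actual demo project is more polished):

import com.vaadin.ui.*

class ConfirmDialog(message: String, private val response: (Boolean) -> Unit) : Window("Confirm") {
    init {
        isModal = true
        content = VerticalLayout(
            Label(message),
            HorizontalLayout(
                Button("Yes", Button.ClickListener { close(); response(true) }),
                Button("No", Button.ClickListener { close(); response(false) })
            )
        )
    }

    // attaches the dialog to the current UI
    fun show() = UI.getCurrent().addWindow(this)
}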

When the dialog's Yes or No button is clicked, the block is called with response of true (or false, respectively), which resumes the continuation, and with it, the calling block.

This allows us to write the following code in the button's click handler:

deleteButton.addClickListener { e ->
  launch(vaadin()) {
    if (confirmDialog("Are you sure?")) {
      entity.delete()
    }
  }
}

The launch method slates the coroutine to be run later (via the UI.access() call). The click handler does not wait for the coroutine to run but terminates immediately, hence it does not block. Since the listener is now finished and there is no other UI code to run, Vaadin will now run the UI.access()-scheduled tasks, including our coroutine (the coroutine is the block after the launch method).

The coroutine will create the confirm dialog and then suspend (and detach from the UI thread). That allows the UI thread to finish and draw the dialog in the browser. Later on, when the "Yes" button is clicked, the coroutine is resumed in the UI thread. It will close the dialog (the code is left out for brevity), delete the entity and finish both the coroutine and the UI thread, allowing the dialog close to be propagated to the browser.
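
The vaadin() coroutine context used above is not shown here; roughly, it is a coroutine dispatcher which runs every resumed continuation via UI.access(). A minimal sketch, assuming the kotlinx-coroutines API of that era (the VaadinDispatcher name is mine and the real implementation in the demo project differs):

import com.vaadin.ui.UI
import kotlinx.coroutines.experimental.CoroutineDispatcher
import kotlin.coroutines.experimental.CoroutineContext

class VaadinDispatcher(private val ui: UI) : CoroutineDispatcher() {
    // every continuation is run with the UI lock held, just like regular Vaadin UI code
    override fun dispatch(context: CoroutineContext, block: Runnable) {
        ui.access { block.run() }
    }
}

fun vaadin(ui: UI = UI.getCurrent()): CoroutineContext = VaadinDispatcher(ui)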

And alas, with the help of compiler magic, we now have blocking dialogs in Vaadin. Of course those dialogs are not really blocking - that still cannot be done. What the compiler actually does is break the coroutine code into two parts:

  • the first part creates the dialog and exits;
  • the second one runs on button click, closes the dialog and deletes the entity.

The compiler will convert our nice block into a nasty state automaton class. However, the illusion in the source code is perfect.

Further Down The Rabbit Hole

The above coroutine is rather simple - there are no cycles or try/catch blocks, only a simple if statement. I wonder whether the illusion will hold with more complex code. Let's explore further.

We will create a simple movie tickets ordering page. The workflow is as follows:

  1. we will first check whether there are tickets available; if yes then
  2. we'll ask the user for confirmation on purchase;
  3. if the user confirms, we make the purchase.

We will access a very simple dummy REST service for the query and for the purchase; the app will also provide server-side implementations of these dummy services, for simplicity's sake. Since those REST calls can take a while, we will display a progress dialog for the duration of those requests.

Let's pause briefly and visualize the code using the old, callback-based approach. We could safely block the UI thread for the duration of those REST calls, but then we couldn't display the progress dialog: it's the same situation as with the blocking confirmation dialog above. Hence we need to run the REST request in the background, end the UI request code, and resume when the REST request is done. Lots of REST clients allow async callbacks - for example Retrofit2 (which is usually my framework of choice). Unfortunately Retrofit2's dark secret is that it blocks a thread until the data is retrieved. With NIO2 we can do better, hence we'll use Async Http Client. In pseudo-code, the algorithm would look like this:

showProgressDialog();
get("http://localhost:8080/rest/tickets/available").andThen({ r ->
  hideProgressDialog();
  showConfirmDialog({ response ->
    if (response) {
      showProgressDialog();
      post("http://localhost:8080/rest/tickets/purchase").andThen({ r ->
        hideProgressDialog();
        notifyUser("The tickets has been purchased");
      });
    } else {
      notifyUser("Purchase canceled");
    }
  });
});

Behold, the Pyramid of Doom. And that's just the happy flow - now we need to add error handling to every call, and also make the whole thing cancelable: if the user cancels and navigates to another View, we need to clean up those dialogs. Brrr.

Luckily we have coroutines, and hence we can rewrite purchaseTicket() like this:

private fun purchaseTicket(): Job = launch(vaadin()) {
    // query the server for the number of available tickets. Wrap the long-running REST call in a nice progress dialog.
    val availableTickets: Int = withProgressDialog("Checking Available Tickets, Please Wait") {
        RestClient.getNumberOfAvailableTickets()
    }

    if (availableTickets <= 0) {
        Notification.show("No tickets available")
    } else {

        // there seem to be tickets available. Ask the user for confirmation.
        if (confirmDialog("There are $availableTickets available tickets. Would you like to purchase one?")) {

            // let's go ahead and purchase a ticket. Wrap the long-running REST call in a nice progress dialog.
            withProgressDialog("Purchasing") { RestClient.buyTicket() }

            // show an info box that the purchase has been completed
            confirmationInfoBox("The ticket has been purchased, thank you!")
        } else {

            // demonstrates the proper exception handling.
            throw RuntimeException("Unimplemented ;)")
        }
    }
}

It looks sequential, it is easy to read and understand, yet it has the full power of error handling and cancelation handling:

  • Error handling - we simply throw an exception as we would in sequential code, and the Kotlin-produced state machine will take care that it is propagated properly through try{}catch blocks and out of the coroutine. There, our Vaadin integration layer will catch the exception and call the standard Vaadin error handler.
  • Cancelation: the confirmDialog() function simply tells the continuation to close the dialog when the coroutine completes (whether successfully, exceptionally or via cancelation). The withProgressDialog() is even simpler - it simply uses try{}finally to hide the dialog; the coroutine will call the finally part even when canceled (see the sketch below).
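
That withProgressDialog() could be implemented roughly like this (a sketch; the ProgressDialog component and the exact shape of the real helper are assumed):

suspend fun <T> withProgressDialog(message: String, block: suspend () -> T): T {
    val dialog = ProgressDialog(message)   // a hypothetical modal progress window
    dialog.show()
    try {
        return block()
    } finally {
        // runs on success, on failure and on cancelation alike, so the dialog never leaks
        dialog.close()
    }
}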

Coroutines allowed us to do amazing things here: we were able to use the sequential programming approach, which produced readable, debuggable code that is at the same time highly asynchronous and highly performant. I personally love that I am not forced to use Rx/Promise's thenCompose, thenAccept and all of that functional programming crap - we can simply use the built-in language constructs like if and try{}catch again.

The whole example project is on GitHub: Vaadin Coroutines Demo - visit the page for instructions on how to run the project yourself and open it in your IDE. It's very simple.

You can try out the cancelation yourself: normally it would be hooked on the View's detach() which is called when navigating away, but it's easier to play around when you have a button for that (the "Cancel Purchase" button in the example app).

Further Ideas

Your application may need to fetch its data from a relational database instead of making REST calls. JDBC drivers only allow a blocking access mode - that's fine, since it's fine to block in the JVM with Vaadin. Yet there are fully async database drivers (for example, for MySQL and PostgreSQL there are drivers on GitHub), so feel free to go ahead and use them with continuations.

Unfortunately, we can't use the async approach with Vaadin Grid's DataProvider. This is because the DataProvider API is not asynchronous: the nature of the API requires us to compute the data and return it in the request itself. However, that's completely fine, since it's OK to block in the JVM.

Of course you can implement even longer-running processes. Perhaps it might be possible to save the state of a suspended coroutine to, say, a database. That would allow us to develop and run long-running business processes in your app in a really simple way, with the full power of the Kotlin programming language. The possibilities are interesting.

 
about 1 year ago

Let me start by saying that the threading model (or, rather, the single-thread model) provides the simplest programming paradigm - imperative, synchronous, sequential programming. It is easy to see what the code is doing, it is easy to debug, and it is easy to reason about (unless you throw in dynamic proxies, injected interfaces with a runtime implementation lookup strategy and other "goodies" of DI) - that's why having such a paradigm is generally desirable. Of course that holds only when you have one thread. Adding threads with no defined communication restrictions overthrows this simplicity and creates the most complex programming paradigm.

The async hype of using Rx/Promises/Actors/whatever is high right now. node.js guys preach the new, event-based model to be superior to the current threading model. This marketing makes sense since they only have one thread (or, rather, one event queue), thus they can't do otherwise. However, compared to the simplicity of the single-thread model, any code using async techniques will be an order of magnitude harder to understand and debug:

  • Callback-based programming introduces Callback Hell/the Pyramid of Doom; implementing error handling on top of that turns the pyramid into an unmaintainable mess.
  • Rx/Promises introduce combinators such as thenCompose, thenAccept and others. While native to functional programming, they are quite foreign to the imperative programming style and introduce a completely new programming vocabulary instead of reusing language constructs such as if, try{}catch etc. This vocabulary also tends to differ from library to library; the API-to-code ratio is quite high too and creates high verbosity (Java programmers could tell stories ;)
  • Misusing Actors to create a complex app tends to create a mesh of messages which is impossible to debug or reason about; any bug is hard to spot, trace back to its origin and fix.

True, these techniques do offer a programming paradigm which is easier to use than "here, have multiple threads and do whatever you like" - that's a ticket to hell (unless you use the 1-master-n-slave-threads paradigm). But they produce code far more complex than the single-thread model does.

Simplicity Is Discarded Too Easily

With Vaadin, you only have one thread per session (or user). Since sessions are generally independent and do not tend to communicate, with Vaadin you effectively have one thread in which you do things synchronously - update the UI, fetch stuff from REST/the database synchronously, update the UI some more and return. That is the simplest programming paradigm you can get. Moving to Rx/Promises/Actors for whatever reason will make your program more complex by an order of magnitude.

Unfortunately, simplicity is typically discarded too easily, since discarding it only creates indirect costs: development is slower since the code is way more complex and harder to modify and add features to, and there are more bugs so maintenance is slower. When simplicity is discarded, the result is typically a big ball of mud nobody dares to refactor.

You don't want to go there. Make it work correctly; only then make it work fast. Don't include Rx in your project solely as an excuse to learn something new.

Debunking Arguments Against Threads

Typically there is the performance argument against threads: threads are too heavyweight, and the JVM can only create 3000 native threads anyway before something horrible happens, hence we must pursue alternatives. But that is simply not true, at least for a typical Vaadin app.

Vanilla OpenJDK on my Ubuntu Linux can create 12000 threads just fine, before failing to create an additional thread with an OutOfMemoryError. But that's only because there is a systemd hard limit on the number of user processes, which is around 12288; lifting that limit allows the JVM to create 33000 threads (in 4 seconds, so quite fast) before failing. To go even further, we would need to increase the available heap size, decrease the stack size, fiddle with Linux ulimits etc. It's doable, and it is worth the preserved simplicity. And we don't even need that many threads after all.
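
If you want to measure the limit on your own machine, a quick-and-dirty experiment along these lines will do (numbers will obviously differ per OS, heap and ulimit settings):

fun main(args: Array<String>) {
    val threads = mutableListOf<Thread>()
    try {
        while (true) {
            // each thread just sleeps forever, simulating an idle session thread
            val t = Thread { Thread.sleep(Long.MAX_VALUE) }
            t.isDaemon = true
            t.start()
            threads += t
            if (threads.size % 1000 == 0) println("Started ${threads.size} threads")
        }
    } catch (e: OutOfMemoryError) {
        println("Gave up after ${threads.size} threads: $e")
    }
}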

According to the Vaadin scalability study, Vaadin supports up to 10000 concurrent users on one machine. That's 10000 native threads, which the JVM and the OS can perfectly handle. Typically your page will get nowhere near that limit. You're not Google nor Facebook, and thus you don't need to employ the complex techniques those guys need to employ. Start small, start simple; when the page grows, the business catches up, the idea is validated and your budget increases, only then should you fiddle with performance.

Regarding native thread performance: Ruby had green threads but moved to native threads; Python has had native threads from the start. Well, to be honest they still have the GIL so they can still use but one CPU core, but the point is that those guys considered OS threads lightweight enough to start using them.

As the time passed and JVM got more optimized:

  • Intra-process context switching became cheaper
  • locks on the JVM are now based on CAS and thus very cheap
  • the JVM can ramp up 30000 threads in 3-5 seconds on my laptop without any issues
  • 10000 threads with a default stack size of 512k will consume 5GB of memory; that can be handled by the OS virtual memory subsystem - if I don't use Spring/JavaEE/other huge stack generators then the stack is typically nearly empty and can be left unallocated/swapped out
  • the majority of threads typically sleep/are parked, and thus they are not eligible for preemptive multitasking/context switching and thus generate no observable CPU overhead
  • the overhead is mostly the memory upkeep for the stack and some thread metadata.

Therefore, using the threaded imperative model with Vaadin is completely fine and will create no obvious bottlenecks.

Conclusion

Quoting Donald Knuth:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

I suggest you start simple; only when it's absolutely clear that this won't suffice anymore should you employ more complex techniques. Also, if you're not a functional-paradigm guy, don't hop on the Rx/Promises bandwagon - use Kotlin's coroutines, since they produce code that's vastly simpler and superior.

 
about 1 year ago

Nicolas Frankel put some of my feelings into an excellent article, Is Object-Oriented Programming compatible with an enterprise context?. During the development of Aedict Online (still a Java-based JavaEE/JPA app back then; now it's Vaadin-on-Kotlin-based), when trying to find rows via JPA I often looked for the necessary functionality on the JPA entity itself. Wouldn't it be great if, upon a user's login, we could find the user and verify his password using the following code?

fun login(username: String, password: String): User? {
  val user = User.findByUsername(username) ?: return null
  if (!user.verifyPassword(password)) return null
  return user
}

Unfortunately, I was not able to implement the User.findByUsername static method properly in JavaEE. I was forced by JavaEE to move such finders to an enterprise bean, because that was the only way to have transactions and the EntityManager. All my attempts to create such finders as JPA entity static methods failed:

  • The DI engine makes it impossible to look up beans from a static method
  • Creating a singleton bean with all things injected, providing them from a static context is a pattern discouraged in the DI world.

Eventually I realized that it was DI itself that directly forbade me to group the code in a logical way: finders (which are in fact factory methods!) together with the entity itself. I believe that the need to keep common functionality together is one of the core concepts of pragmatic OOP, and I felt that DI was violating this. As time passed I got so frustrated that I could no longer see the merits of the DI technology itself. After realizing this I hesitantly rejected the DI approach and tried to prototype a different approach.

I was learning Kotlin back then and had just learned of global functions taking blocks. What if such a function could do the transaction handling for me, outside of the EJB context? Such heresy, working with a database just like that! The idea is described here, but the gist is that it's dead easy to write a function which runs a given block in a transaction, creates an EntityManager and provides it: the db method.
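
A minimal sketch of such a db function, assuming a plain EntityManagerFactory (the emf value, the DbContext name and the persistence unit name are all assumptions; the real implementation linked above handles nesting and error reporting more carefully):

import javax.persistence.EntityManager
import javax.persistence.Persistence

val emf = Persistence.createEntityManagerFactory("aedict")  // persistence unit name assumed

// the receiver of the db{} block; exposes the EntityManager
class DbContext(val em: EntityManager)

fun <R> db(block: DbContext.() -> R): R {
    val em = emf.createEntityManager()
    try {
        em.transaction.begin()
        val result = DbContext(em).block()
        em.transaction.commit()
        return result
    } catch (t: Throwable) {
        if (em.transaction.isActive) em.transaction.rollback()
        throw t
    } finally {
        em.close()
    }
}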

With that arsenal at hand, I was able to simply write this code:

class User {
  ..
  companion object {
    fun findByUsername(username: String): User? = db {
      // note: JPQL has no "select *"; using resultList so that a missing user yields null
      em.createQuery("select u from User u where u.username = :username", User::class.java)
        .setParameter("username", username)
        .resultList.firstOrNull()
    }
  }
}

It was that easy. I added HikariCP connection pooling on top of that and I felt... betrayed. The Dependency Injection propaganda crumbled and I saw DI standing naked: a monstrosity of dynamic proxies and enormous stack traces, bringing no value but obfuscating things, making them very complex and forcing me to split code in a way that I perceived as unnatural (and anti-OOP). By moving away from DI I was expecting a great loss of functionality; I expected that I would have to tie things together painfully. Everybody uses DI so it must be right, right?

Instead, the DI-less code worked just like that - there was no need to inject a lookup service into my JPA entity as Nicolas had to do, and I did not have to resort to Aspect Weaving or whatever clever technique my tired brain refused to comprehend. Things were simple again. I've embraced the essence of simplicity. Nice poem. Back to earth, Apollo.

I started to use static finders and have loved them ever since. Later on I created a simple Dao which allowed me to define the User class simply as follows:

class User : Entity<Long> {
  companion object : Dao<User>
}

And the Dao would bring in useful static methods such as findAll(), count() etc., and the Entity would bring in instance methods such as save() and delete(), which allowed me to write code such as this:

val p = Person(name = "Albedo", age = 130)
p.save()
expect(p) { Person.findById(p.id!!) }
expect(1) { Person.count() }

Ultimately I ditched the JPA crap and embraced Sql2o with CRUD additions. You can find this approach being used in the Beverage Buddy Vaadin 10 Sample App, along with other goodies, such as:

grid.dataSource = Review.dataSource
category.delete()

By throwing DI away, I no longer had to run the container with the Spring Maven plugin or fiddle with Arquillian, since I could just run the code as it was, from simple JUnit tests: Server-side Web testing.

But that's story for another day.

 
about 1 year ago

It is indeed possible to write Vaadin 10 Flow apps in Kotlin. Here are a couple of resources to get you started:

Those videos will also help you to download and start with the very simple skeleton application.

  • The skeleton application which contains everything necessary to build the app and launch it, and it only introduces a very simple UI: a button and a text field. You can find all the necessary information here: karibu10-helloworld-application
  • A more polished application which shows additional components and also demonstrates the ability to connect to a Polymer Template: the Beverage Buddy application. It has no database access support and only serves in-memory dummy data. It is a part of the Karibu DSL framework; instructions to get it and launch it: Beverage Buddy.
  • The same Beverage Buddy application as above, but uses an embedded full-blown H2 SQL database and is based on the Vaadin-on-Kotlin framework: Beverage Buddy.

Please feel free to post any questions at the Vaadin Forums.

 
about 1 year ago

Traditionally, the testing of a web portal is done from the browser, by a tool typically using Selenium under the hood. This type of testing is closest to the user experience - it tests the web page itself. Unfortunately, it has several major drawbacks:

  • It is slow as hell. A typical Selenium test case takes at least 5-10 seconds to complete, depending on the complexity of the test.
  • It is unreliable - the integration with Firefox breaks on every Firefox update; sometimes Selenium itself fails or times out randomly.
  • It either requires proper IDs, or you have to use CSS/XPath selectors, which are very flaky, tend to break on web portal upgrades and tend to grow to absurd lengths, making them a maintainability nightmare.

Luckily with Vaadin, being a component-oriented framework, we can do better. How about we stop testing the browser, the client-server transport layer and the server-side http parsing code, and start testing the app logic directly? Vaadin - being a server-oriented framework - holds all of the component information and hierarchy server-side after all.

By bypassing the browser and running tests "server-side", with all of the business logic directly accessible on classpath from JUnit tests, we gain:

  • Speed. Server-side tests are typically 100x faster than Selenium and run in ~60 milliseconds.
  • Reliable. We don't need arbitrary sleeps since we're server-side and we can hook into data fetching.
  • Run headless since there's no browser.
  • Can be run after every commit since they're fast.
  • You don't even need to start the web server itself since we're bypassing the http parsing altogether!

Yeah, we can't test CSS with those, but we can now test all the nitty-gritty form validation paths which were just too painful and slow to test with Selenium; that leaves Selenium just for testing the happy paths. This follows the concept of the Test Pyramid:

  • Most tests are quick-running simple JUnit tests which test individual classes.
  • In smaller numbers there are server-side tests which test the business logic, basic integration and the "dark corners" of the UI rarely triggered by the user (e.g. validation, crappy user input etc.). These tests can achieve surprisingly large coverage of the app.
  • That leaves Selenium to test just a few happy flows and the overall system integration.

How to do that

Now if you use Vaadin, there is an alternative to this. Typically what we want to test is our business logic, which resides server-side. Typically the server-side logic is triggered by a user action: the user clicks a button in a browser, which runs the client-side code to relay an RPC to the server side, where Vaadin will run all attached click listeners, which will then execute our logic.

How about we remove the user, the client-side and the RPC from the equation? Certainly we can "click" the button (read: run the click listeners) server-side, using just the Vaadin Button API. Certainly we can run a JUnit test which calls button.click(), given that the environment is sufficiently prepared.

As it turns out, it is rather easy to prepare the environment. All that's needed is:

  • We need to create a VaadinSession and set it as current, so that the UI code can rely on a session being present.
  • We need to create the UI and set it as current,
  • And we need to obtain the Vaadin UI lock, purely because the Vaadin Framework server-side code contains countless asserts that the lock is held.

Once that's done we can simply create Vaadin components and UIs directly from the JUnit tests as follows:

@Before
fun mockVaadin() {
    MockVaadin.setup()
}

@Test
fun testButtonCaption() {
    val button = Button("Click Me")
    expect("Click Me") { button.caption }
}

(the MockVaadin.setup() call just creates a VaadinServletService, a VaadinSession and a Vaadin UI, which provides enough of an environment for Vaadin components to work happily).

Well this test is not really useful. We need to test our app instead of the Vaadin Button, right? For a really simple app with no navigator we need to:

  • Instantiate our UI since that creates our app's UI,
  • Find components and interact with them - e.g. press the "Login" button etc.

Luckily, a lot of this is provided by the Karibu-Testing library. You can see the library in action in the Karibu Helloworld Application. When you build the app using ./gradlew, all tests (including the UI ones) will run automatically. The UI test is just a simple class:

class MyUITest {
    @Before
    fun mockVaadin() {
        MockVaadin.setup({ MyUI() })
    }

    @Test
    fun simpleUITest() {
        val layout = UI.getCurrent().content as VerticalLayout
        _get<TextField> { caption = "Type your name here:" }.value = "Baron Vladimir Harkonnen"
        _get<Button> { caption = "Click Me" }._click()
        expect(3) { layout.componentCount }
        expect("Thanks Baron Vladimir Harkonnen, it works!") { (layout.last() as Label).value }
        expect("Thanks Baron Vladimir Harkonnen, it works!") { _get<Label>().value }
    }
}

The mockVaadin() function sets up the test environment and instantiates our UI; now we're ready to test!

The simpleUITest() is the main meat. The test method runs with the Vaadin UI lock acquired, and thus we can access the Vaadin components directly. The _get method is provided by Karibu-Testing and serves for component lookup: the _get<TextField> { caption = "Type your name here:" } invocation finds the single text field with the given caption in the current UI; if there are none, or multiple matches, the function fails.

The _click() method on a Button is really useful since it asserts that the button is actually visible, enabled and not read-only and thus can be interacted with; if all of this is satisfied, the button is actually clicked, which invokes the click listeners and so on.

All that's left is to assert the state of the application - we can assert that a Label has been added and that it has the proper contents.
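
As a side note on the _click() contract described above: a quick way to see it in action is to try clicking a button that cannot be interacted with. The following is just a sketch meant to live in the same test class as above (assertFails comes from kotlin.test; the exact failure raised by Karibu-Testing is up to the library):

@Test
fun clickingDisabledButtonFails() {
    val button = Button("Save").apply { isEnabled = false }
    // _click() should refuse to "click" a button the user could not interact with
    assertFails { button._click() }
}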

Note that no web server is launched, not even an embedded Jetty - it's unnecessary. This makes the test run very fast and really simple to debug - just try to place a breakpoint into the "Click Me" button click handler and try debugging the test from your IDE.

A more complex example

With more complex apps which sport the Navigator pattern, we additionally need to:

  • Instantiate our UI since that creates our app's general outlook,
  • Navigate to the view under testing since that populates the page with components,
  • Auto-discover all @AutoView views so that the navigator will work properly.
  • Mock our app initialization (since we will have no server and those ContextListeners won't be run automatically)

Since we don't use Spring nor JavaEE components, we don't have to painfully start/stop those fat environments. And since we don't need the web server either, we are still able to write just basic JUnit tests as follows:

class CrudViewTest {
    companion object {
        @JvmStatic @BeforeClass
        fun bootstrapApp() {
            autoDiscoverViews("com.github")
            Bootstrap().contextInitialized(null)
        }

        @JvmStatic @AfterClass
        fun teardownApp() {
            Bootstrap().contextDestroyed(null)
        }
    }

    @Before
    fun mockVaadin() {
        MockVaadin.setup({ -> MyUI() })
    }

    @Before @After
    fun cleanupDb() {
        Person.deleteAll()
    }

    @Test
    fun testEdit() {
        // pre-fill the database with a single person
        Person(name = "Leto Atreides", age = 45, dateOfBirth = LocalDate.of(1980, 5, 1), maritalStatus = MaritalStatus.Single, alive = false).save()
        
        // open the Person CRUD page.
        CrudView.navigateTo()

        // now click the "Edit" button in the Vaadin Grid, on the first person (0th index).
        val grid = _get<Grid<*>>()
        grid._clickRenderer(0, "edit")

        // the CreateEditPerson dialog should now pop up. Change the name and Save.
        _get<TextField>(caption = "Name:").value = "Duke Leto Atreides"
        _get<Button>(caption = "Save")._click()

        // assert the updated person
        expect(listOf("Duke Leto Atreides")) { Person.findAll().map { it.name } }
    }
}

This tests a fairly complex functionality of the Vaadin-on-Kotlin example CRUD app, namely the ability to edit a person. Let's break it down into individual pieces:

  • the bootstrapApp() will use the Karibu-Testing built-in autoDiscoverViews() method to find and register all @AutoViews; then it will simply call the WAR bootstrap which will initialize Vaadin-on-Kotlin and run the migrations.
  • The teardownApp() will call the WAR teardown which will close the database connection pool and tear down Vaadin-on-Kotlin.
  • The mockVaadin() will mock Vaadin environment exactly as in the earlier simple example. It will also instantiate our MyUI which sets up the Navigator and creates the page UI.
  • Since the database is initialized and ready, we can use it freely; thus we can for example remove all persons in the cleanupDb() method. The Person itself is just a Sql2o entity; by the means of Dao<Person> it gains the extension methods such as deleteAll() and other DAO methods. To learn more about this neat-and-simple database access, just watch the Vaadin-on-Kotlin Part 2 Video.
  • The bulk of the test is located in the testEdit() method which shows the person list, then clicks the "Edit" button on the first person, changes the name in the opened dialog and hits the Save button. Then we can assert the state of the database that the person has indeed been updated.

You can see the test itself here: CrudTestView.kt; you can run the app for yourself simply by typing this into your terminal:

git clone https://github.com/mvysny/vaadin-on-kotlin
cd vaadin-on-kotlin
./gradlew vok-example-crud-sql2o:appRun

The Karibu Testing library, which implements this approach, is available. You can find more information at the Vaadin on Kotlin Example Project; for more information on Vaadin on Kotlin please visit the official Vaadin On Kotlin page. There is also a short introductory video on YouTube.

It takes 2 seconds on my machine to bootstrap the JVM, load all classes, initialize everything, run both of those tests and then clean up. Any further tests will typically take 60-120 milliseconds to complete since everything is already loaded and initialized. This is very fast compared to launching a Spring application, not to mention launching a JavaEE application or launching a browser... Also, the whole setup is very easy, especially when compared to setting up a testbed to launch a JavaEE server; just a tiny bit of magic as compared to tons of magic when using Spring.

 
about 1 year ago

So you have built your first awesome VoK app - congratulations! You will probably want to launch it outside of your IDE, or even deploy it somewhere on the interwebz. Just pick any virtual server provider with your favourite Linux distribution.

There are the following options to launch your app:

From Gradle, using the Gretty plugin

With a proper build.gradle configuration, you can simply use the Gretty Gradle plugin to launch WARs. Gretty comes pre-configured in the sample quickstarter VoK HelloWorld App, so you can grab the Gradle configuration from there:

git clone https://github.com/mvysny/vok-helloworld-app
cd vok-helloworld-app
./gradlew web:appRun

You can configure Gretty to launch your app either in Jetty or in Tomcat; please read more here: http://akhikhl.github.io/gretty-doc/Gretty-configuration.html . To launch it in the cloud, just ssh to your virtual server, install OpenJDK 8, git clone your app to the server and run ./gradlew appRun.
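
If you prefer Gradle's Kotlin DSL over the Groovy build.gradle shipped with the sample, a roughly equivalent Gretty setup could look like the sketch below (the plugin version and container name are assumptions; check the Gretty documentation for current values):

// build.gradle.kts - a rough sketch, not copied from the sample project
plugins {
    war
    id("org.gretty") version "3.0.4"   // assumption: use whatever Gretty version is current
}

gretty {
    contextPath = "/"                  // serve the app at the root context
    servletContainer = "tomcat9"       // or e.g. "jetty9.4"
}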

From Docker, using the Tomcat image

Really easy - just head to the folder where your my-fat-app-1.0-SNAPSHOT.war file is, and run the following line:

docker run --rm -ti -v "`pwd`:/usr/local/tomcat/webapps" -p 8080:8080 tomcat:9.0.5-jre8

This command will start the Tomcat Docker image and will mount your local folder with your WAR app into Tomcat's webapps/ folder. Your app will then run at http://localhost:8080/my-fat-app-1.0-SNAPSHOT . To run it at http://localhost:8080 just rename your WAR to ROOT.war. To launch it on your virtual server, just copy the WAR onto your server, install Docker and run the above-mentioned command.

You can also create a Tomcat-based Docker image with your app easily. Just drop the following Dockerfile next to your war file:

FROM tomcat:9.0.5-jre8
RUN rm -rf /usr/local/tomcat/webapps/*
COPY *.war /usr/local/tomcat/webapps/ROOT.war
VOLUME /usr/local/tomcat/logs

Then, just run this in the directory with the Dockerfile:

docker build -t my-app .
docker run --rm -ti -p 8080:8080 my-app

Or even better, just use docker-compose.yml:

version: '2'
services:
  web:
    build: .
    ports:
     - "8080:8080"

Then all you need to run is just

docker-compose up --build --force-recreate

(--build will always rebuild the image so that you can play around with the Dockerfile; --force-recreate will delete old containers and create new ones so that Tomcat won't try to restore sessions from the previous run).

Note: the Tomcat Docker image now comes with the unlimited JCE policy: println("Cipher strength: ${Cipher.getMaxAllowedKeyLength("AES")}") prints Cipher strength: 2147483647
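
If you want to double-check that in your own image, the check is just the statement from the note above wrapped in a main function:

import javax.crypto.Cipher

// Prints Integer.MAX_VALUE (2147483647) when the unlimited JCE policy is active.
fun main() {
    println("Cipher strength: ${Cipher.getMaxAllowedKeyLength("AES")}")
}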

Fat jar

It is possible to package Jetty or Tomcat into the WAR itself, add a launcher class and then simply run it via java -jar your-app.war, thus making an executable WAR. You can read more about this option here: https://stackoverflow.com/questions/13333867/embed-tomcat-with-app-in-one-fat-jar . Unfortunately I have no experience in this regard.

Beware though: fat jars may not work for apps that package multiple files into the same location: for example there will be only one MANIFEST.MF. You can read about the pitfalls here: http://product.hubspot.com/blog/the-fault-in-our-jars-why-we-stopped-building-fat-jars
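
A quick way to see the duplicate-resource problem is to count how many MANIFEST.MF entries are visible on the classpath; every dependency jar ships one, but a naively merged fat jar keeps only a single copy. A small sketch:

// Run this on the normal classpath and again from inside a fat jar, then compare.
fun main() {
    val manifests = Thread.currentThread().contextClassLoader
        .getResources("META-INF/MANIFEST.MF").toList()
    println("Found ${manifests.size} MANIFEST.MF resources:")
    manifests.forEach { println("  $it") }
}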

It is possible to embed Jetty - just check out this launcher class: https://github.com/mvysny/vaadin-on-kotlin/blob/master/vok-example-crud-jpa/src/test/kotlin/com/github/vok/example/crud/Server.kt
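
For reference, a minimal embedded-Jetty launcher in Kotlin looks roughly like the sketch below; the WAR path and port are assumptions, and the linked Server.kt does something along these lines for the vaadin-on-kotlin example app:

import org.eclipse.jetty.server.Server
import org.eclipse.jetty.webapp.WebAppContext

// Requires jetty-server and jetty-webapp on the classpath; adjust the WAR path and port.
fun main() {
    val context = WebAppContext().apply {
        war = "build/libs/my-fat-app-1.0-SNAPSHOT.war"  // assumption: path to your WAR
        contextPath = "/"
    }
    val server = Server(8080)
    server.handler = context
    server.start()
    server.join()
}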