7 days ago

When Kotlin 1.1 brought in Coroutines, at first I wasn't impressed. I thought coroutines were just slightly more lightweight threads: a lighter version of the OS threads we already have, perhaps easier on memory and a bit lighter on the CPU. Of course, this alone can make a ground-breaking difference: for example, Docker is just a light version of a VM, and yet what a difference it makes. It's just that I failed to see the consequences of this ground-breaking difference; also, performance-wise in Vaadin apps, there seems to be little need to switch away from threads (read Are Threads Obsolete With Vaadin Apps? for more info).

So, is there something else to Coroutines with respect to Vaadin? The answer is yes.

What are Coroutines?

A coroutine is a block of code that looks just like a regular block of code. The main difference is that you can pause (suspend) that block by calling a special function; later on you can direct the code to continue execution: the suspending function simply returns and the calling code resumes. This is achieved by compiler magic, so no support from the JVM or the OS is necessary. The informal Kotlin Coroutines introduction offers a great in-depth explanation. Even better is Roman Elizarov's coroutine video - the clearest explanation of Coroutines that I've seen (here is part 2: Deep Dive into Coroutines on JVM).

There are the following problems with Coroutines:

  • the compiler magic (a nasty state machine is produced underneath);
  • the block looks like regular synchronous code, yet once the coroutine is resumed, a completely different thread may suddenly continue the execution - not something we can easily attune our mental model to;
  • they're not easy to grok.

Just disadvantages, what the hell? I can do pausing with threads already, simply by calling wait(), locks, semaphores etc.; so what's the big fuss? The important difference is that at the suspension point the coroutine detaches from its thread - it saves its call stack and the state of its variables and bails out, allowing the thread to do something else, or even terminate. When the coroutine is resumed later on, it can pick any thread in which to restore that stack and state, and continue execution. Aha! Now that is something we can't do with threads.
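This detach-and-resume mechanics can be demonstrated with nothing but the Kotlin standard library. A sketch, with illustration-only names (askUser(), pending): the coroutine parks its continuation in a variable, the caller carries on, and a later resume() call continues the block from exactly where it suspended:

```kotlin
import kotlin.coroutines.*

// where the suspended coroutine parks its continuation (saved stack + state)
var pending: Continuation<String>? = null

// a suspending function: parks the coroutine until somebody calls resume()
suspend fun askUser(): String = suspendCoroutine { cont -> pending = cont }

fun demoSuspension(): String {
    val log = StringBuilder()
    // start the coroutine; it runs until the first suspension point, then detaches
    suspend { log.append("answer=" + askUser()) }
        .startCoroutine(Continuation(EmptyCoroutineContext) { })
    log.append("[suspended]")
    // resume later on -- from any thread we like; here, the very same one
    pending!!.resume("yes")
    return log.toString()
}
```

Calling demoSuspension() returns "[suspended]answer=yes": the code after the suspension point only ran once resume() was called.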

The Blocking Dialogs

The coroutines suspend at suspension functions, and we can resume them as we see fit, say, from a button click handler. Consider the following Vaadin click handler:

deleteButton.addClickListener { e ->
  if (confirmDialog("Are you sure?")) {
    entity.delete()
  }
}

This can't be done with the regular approach since we can't have blocking dialogs with Vaadin. The problem is as follows: assume that the confirmDialog() function blocks the Vaadin UI thread while waiting for the user to click the "Yes" or "No" button. In order for the dialog to actually be drawn in the browser, the Vaadin UI thread must finish and produce a response for the browser. But the Vaadin UI thread can't finish, since it's blocked in the confirmDialog() function.

However, consider the following suspend function:

suspend fun confirmDialog(message: String = "Are you sure?"): Boolean {
    return suspendCancellableCoroutine { cont: CancellableContinuation<Boolean> ->
        val dlg = ConfirmDialog(message, { response -> cont.resume(response) })
        dlg.show()
    }
}

When the dialog's Yes or No button is clicked, the block is called with a response of true (or false, respectively), which resumes the continuation and, with it, the calling block.

This allows us to write the following code in the button's click handler:

deleteButton.addClickListener { e ->
  launch(vaadin()) {
    if (confirmDialog("Are you sure?")) {
      entity.delete()
    }
  }
}

The launch method slates the coroutine to be run later (via the UI.access() call). The click handler does not wait for the coroutine to run but terminates immediately, hence it does not block. Since the listener has now finished and there is no other UI code to run, Vaadin will run the UI.access()-scheduled tasks, including our coroutine (the coroutine being the block passed to the launch method).
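The scheduling can be mimicked with a plain ContinuationInterceptor from the standard library: instead of running the coroutine right away, every resumption is put into a queue, the way UI.access() defers work to be run by the UI thread. QueueDispatcher below is a hypothetical, simplified stand-in for the real vaadin() context, not its actual implementation:

```kotlin
import kotlin.coroutines.*

// enqueues every resumption instead of running it, like UI.access() would
class QueueDispatcher : ContinuationInterceptor {
    override val key: CoroutineContext.Key<*> = ContinuationInterceptor
    private val queue = ArrayDeque<() -> Unit>()
    override fun <T> interceptContinuation(continuation: Continuation<T>): Continuation<T> =
        Continuation(continuation.context) { result ->
            queue.add { continuation.resumeWith(result) }
        }
    // the "UI thread" later drains its queue of scheduled work
    fun runPending() { while (queue.isNotEmpty()) queue.removeFirst().invoke() }
}

fun demoDispatch(): List<String> {
    val log = mutableListOf<String>()
    val dispatcher = QueueDispatcher()
    log.add("click handler start")
    // like launch: the coroutine is only scheduled here, not yet run
    suspend { log.add("coroutine ran") }.startCoroutine(Continuation(dispatcher) { })
    log.add("click handler end")
    dispatcher.runPending()
    return log
}
```

demoDispatch() logs "click handler start", "click handler end", and only then "coroutine ran" - the click handler terminated before the scheduled coroutine was executed.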

The coroutine will create the confirm dialog and then suspend (detaching from the UI thread). That allows the UI thread to finish and draw the dialog in the browser. Later on, when the "Yes" button is clicked, the coroutine is resumed in the UI thread. It closes the dialog (the code is left out for brevity), deletes the entity, and finishes both the coroutine and the UI thread, allowing the dialog close to be propagated to the browser.

And behold, with the help of compiler magic, we now have blocking dialogs in Vaadin. Of course those dialogs are not really blocking - that still cannot be done. What the compiler actually does is break the coroutine code into two parts:

  • the first part creates the dialog and exits;
  • the second one runs on button click, closes the dialog and deletes the entity.

The compiler will convert our nice block into a nasty state automaton class. However, the illusion in the source code is perfect.
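To illustrate, here is roughly what such a state automaton could look like if written by hand. This is a hypothetical sketch with made-up names, not the compiler's actual output:

```kotlin
// a label field tracks which part runs next; fields hold the saved state
class ConfirmThenDelete(
    private val showDialog: (onResponse: (Boolean) -> Unit) -> Unit,
    private val deleteEntity: () -> Unit
) {
    private var label = 0

    fun resume(response: Boolean = false) {
        when (label) {
            0 -> {                       // part 1: create the dialog, then exit
                label = 1
                showDialog { r -> resume(r) }
            }
            1 -> {                       // part 2: runs on button click
                if (response) deleteEntity()
            }
        }
    }
}
```

The first resume() call runs part 1 and exits; the dialog's button callback re-enters resume() to run part 2 - the same two-part split described above.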

Further Down The Rabbit Hole

The above coroutine is rather simple - there are no cycles, try/catch blocks, and only a simple if statement. I wonder if the illusion will hold with a more complex code. Let's explore further.

We will create a simple movie tickets ordering page. The workflow is as follows:

  1. we will first check whether there are tickets available; if yes then
  2. we'll ask the user for confirmation on purchase;
  3. if the user confirms, we make the purchase.

We will access a very simple dummy REST service for the query and for purchase; the app will also provide server-side implementations for these dummy services, for simplicity reasons. Since those REST calls can take a while, we will display a progress dialog for the duration of those requests.

Let's pause briefly and visualize the code using the old, callback-based approach. We could safely block the UI thread for the duration of those REST calls, but then we couldn't display the progress dialog: it's the same situation as with the blocking confirmation dialog above. Hence we need to run the REST request in the background, end the UI request code, and resume when the REST request is done. Lots of REST clients allow async callbacks - for example Retrofit2 (usually my framework of choice). Unfortunately, Retrofit2's dark secret is that it blocks a thread until the data is retrieved. With NIO2 we can do better, hence we'll use Async Http Client. In pseudo-code, the algorithm would look like this:

showProgressDialog();
get("http://localhost:8080/rest/tickets/available").andThen({ r ->
  hideProgressDialog();
  showConfirmDialog({ response ->
    if (response) {
      showProgressDialog();
      post("http://localhost:8080/rest/tickets/purchase").andThen({ r ->
        hideProgressDialog();
        notifyUser("The tickets have been purchased");
      });
    } else {
      notifyUser("Purchase canceled");
    }
  });
});

Behold, the Pyramid of Doom. And that's just the happy flow - now we need to add error handling on every call, and also make the whole thing cancelable - if the user cancels and navigates to another View, we need to clean up those dialogs. Brrr.

Luckily we have Coroutines, and hence we can rewrite purchaseTicket() like this:

private fun purchaseTicket(): Job = launch(vaadin()) {
    // query the server for the number of available tickets. Wrap the long-running REST call in a nice progress dialog.
    val availableTickets: Int = withProgressDialog("Checking Available Tickets, Please Wait") {
        RestClient.getNumberOfAvailableTickets()
    }

    if (availableTickets <= 0) {
        Notification.show("No tickets available")
    } else {

        // there seem to be tickets available. Ask the user for confirmation.
        if (confirmDialog("There are $availableTickets available tickets. Would you like to purchase one?")) {

            // let's go ahead and purchase a ticket. Wrap the long-running REST call in a nice progress dialog.
            withProgressDialog("Purchasing") { RestClient.buyTicket() }

            // show an info box that the purchase has been completed
            confirmationInfoBox("The ticket has been purchased, thank you!")
        } else {

            // demonstrates the proper exception handling.
            throw RuntimeException("Unimplemented ;)")
        }
    }
}

It looks sequential, easy to read, easy to understand, yet it has the full power of error handling and cancelation:

  • Error handling: we simply throw an exception as we would in sequential code, and the Kotlin-produced state machine takes care that it is propagated properly through try{}catch blocks and out of the coroutine. There, our Vaadin integration layer catches the exception and calls the standard Vaadin error handler.
  • Cancelation: the confirmDialog() function simply tells the continuation to close the dialog when the coroutine completes (whether successfully, exceptionally, or via cancelation). The withProgressDialog() is even simpler - it just uses try{}finally to hide the dialog; the coroutine runs the finally part even when canceled.
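The try{}finally pattern behind withProgressDialog() can be sketched in isolation. This is a non-suspending simplification; show/hide are stand-in lambdas for the real dialog calls:

```kotlin
// the dialog is hidden no matter how the block ends: normally, with an
// exception, or -- in the real suspending version -- via cancelation
fun <T> withProgressDialog(show: () -> Unit, hide: () -> Unit, block: () -> T): T {
    show()
    try {
        return block()
    } finally {
        hide()
    }
}
```

Whether the block returns a value or throws, hide() runs exactly once - which is all the cleanup guarantee the progress dialog needs.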

Coroutines allowed us to do amazing things here: we were able to use the sequential programming approach, which produced readable, debuggable code that is at the same time highly asynchronous and highly performant. I personally love that I am not forced to use Rx/Promises' thenCompose, thenAccept and all of that functional programming crap - we can simply use the built-in language constructs such as if and try{}catch again.

The whole example project is on GitHub: Vaadin Coroutines Demo - visit the page for instructions on how to run the project yourself and open it in your IDE. It's very simple.

You can try out the cancelation for yourself: normally it would be hooked on the View's detach(), which is called when navigating away, but it's easier to play around when you have a button for it (the "Cancel Purchase" button in the example app).

Further Ideas

Your application may need to fetch its data from a relational database instead of making REST calls. JDBC drivers only allow a blocking access mode - that's fine, since it's fine to block in the JVM with Vaadin. Yet there are fully async database drivers (for example, there are drivers for MySQL and PostgreSQL on GitHub), so feel free to go ahead and use them with continuations.

Unfortunately, we can't use the async approach with the DataProvider of Vaadin's Grid. This is because the DataProvider API is not asynchronous: the nature of the API requires us to compute the data and return it in the request itself. However, that's completely fine, since it's OK to block in the JVM.

Of course, you can implement even longer-running processes. Perhaps it might be possible to save the state of a suspended coroutine to, say, a database. That would allow you to develop and run long-running business processes in your app in a really simple way, with the full power of the Kotlin programming language. The possibilities are interesting.

 
7 days ago

Let me start by saying that the threading model (or, rather, the single-thread model) provides the simplest programming paradigm: imperative, synchronous, sequential programming. It is easy to see what the code is doing, easy to debug, and easy to reason about (unless you throw in dynamic proxies, injected interfaces with runtime implementation lookup strategies and other "goodies" of DI) - that's why having such a paradigm is generally desirable. Of course, that holds only when you have one thread. Adding threads with no defined communication restrictions overthrows this simplicity and creates the most complex programming paradigm.

The async hype around Rx/Promises/Actors/whatever is high right now. The node.js guys preach the new, event-based model as superior to the current threading model. This marketing makes sense, since they only have one thread (or, rather, one event queue), thus they can't do otherwise. However, compared to the simplicity of the single-thread model, any code using async techniques will be an order of magnitude harder to understand and debug:

  • Callback-based programming introduces Callback Hell, the Pyramid of Doom; implementing error handling on top of that turns the pyramid into an unmaintainable mess.
  • Rx/Promises introduce combinators such as thenCompose, thenAccept and others. While native to functional programming, they are quite foreign to the imperative programming style and introduce a completely new programming vocabulary instead of reusing language constructs such as if, try{}catch etc. This vocabulary also tends to differ from library to library; the API-to-code ratio is quite high, too, and creates high verbosity (Java programmers could tell stories ;)
  • Misusing Actors to create a complex app tends to create a mesh of messages which is impossible to debug or reason about; any bug is hard to spot, trace to its origin, and fix.

True, these techniques do offer a programming paradigm which is easier to use than "here, have multiple threads and do whatever you like" - that's a ticket to hell (unless you use the 1-master-n-slaves threading paradigm). But they produce code far more complex than the single-thread model does.
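For a concrete taste of that combinator vocabulary, here is a tiny chain built with the JDK's own CompletableFuture; the fetch function is a dummy in-memory stand-in for a real async call:

```kotlin
import java.util.concurrent.CompletableFuture

// dummy async source standing in for a real remote call
fun fetchAvailableTickets(): CompletableFuture<Int> =
    CompletableFuture.supplyAsync { 3 }

// every step needs its own combinator instead of plain sequential statements
fun describeTickets(): CompletableFuture<String> =
    fetchAvailableTickets()
        .thenCompose { n -> CompletableFuture.supplyAsync { "tickets=$n" } }  // flat-map a further async step
        .thenApply { it.uppercase() }                                         // map the final result
```

Two trivial steps already require two different combinators; with branching and error handling thrown in, the vocabulary grows quickly, which is exactly the verbosity complained about above.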

Simplicity Is Discarded Too Easily

With Vaadin, you only have one thread per session (that is, per user). Since sessions are generally independent and do not tend to communicate, with Vaadin you effectively have one thread in which you do things synchronously: update the UI, fetch stuff from REST/the database synchronously, update the UI some more and return. That is the simplest programming paradigm you can get. Moving to Rx/Promises/Actors for whatever reason will make your program more complex by an order of magnitude.

Unfortunately, simplicity is typically discarded too easily, since discarding it only creates indirect costs: development is slower since the code is way more complex and harder to modify and add features to; there are more bugs, so maintenance is slower. When simplicity is discarded, the result is typically a big ball of mud nobody dares to refactor.

You don't want to go there. Make it work correctly; only then make it work fast. Don't include Rx in your project solely as an excuse to learn something new.

Debunking Arguments Against Threads

Typically, the performance argument is raised against threads: threads are too heavyweight, and the JVM can only create 3000 native threads anyway before something horrible happens, hence we must pursue alternatives. But that is simply not true, at least for a typical Vaadin app.

A vanilla OpenJDK on my Ubuntu Linux can create 12000 threads just fine before failing to create an additional thread with an OutOfMemoryError. But that's only because there is a systemd hard limit on the number of user processes, which is around 12288; lifting that limit allows the JVM to create 33000 threads (in 4 seconds, so quite fast) before failing. To go even further, we would need to increase the available heap size, decrease the stack size, fiddle with Linux ulimits etc. It's doable, and it is worth the preserved simplicity. And we don't even need that many threads after all.
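A scaled-down version of that experiment can be run anywhere (200 threads instead of thousands, so it finishes instantly; the larger figures above are from the author's machine and its OS limits):

```kotlin
import java.util.concurrent.CountDownLatch

// spawn n parked daemon threads, verify they're all alive, then release them
fun spawnParkedThreads(n: Int): Int {
    val latch = CountDownLatch(1)
    val threads = (1..n).map {
        Thread { latch.await() }.apply { isDaemon = true; start() }
    }
    val alive = threads.count { it.isAlive }
    latch.countDown()
    return alive
}
```

Bump n up yourself to see where your own OS and JVM settings draw the line.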

According to the Vaadin scalability study, Vaadin supports up to 10000 concurrent users on one machine. That's 10000 native threads, which the JVM and the OS can handle perfectly well. Typically your page will come nowhere near that limit. You're not Google nor Facebook, and thus you don't need to employ the complex techniques those guys need to employ. Start small, start simple; when the page grows, the business catches up, the idea is validated and your budget increases - only then should you fiddle with performance.

Regarding native thread performance: Ruby had green threads but moved to native threads; Python has had native threads from the start. To be honest, they still have a GIL, so they can use but one CPU core - but the point is that those guys considered OS threads lightweight enough to start using them.

As time passed and the JVM got more optimized:

  • intra-process context switching became cheaper;
  • locks on the JVM are now based on CAS and are thus very cheap;
  • the JVM can ramp up 30000 threads in 3-5 seconds on my laptop without any issues;
  • 10000 threads with the default stack size of 512k will consume 5GB of memory; that can be handled by the OS virtual memory subsystem - if I don't use Spring/JavaEE/other huge stack generators, the stack is typically nearly empty and can be left unallocated or swapped out;
  • the majority of threads typically sleep or are parked, and thus they are not eligible for preemptive multitasking/context switching and generate no observable CPU overhead;
  • the overhead is mostly the memory upkeep for the stack plus some thread metadata.
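The 5GB figure is just arithmetic over the default stack size:

```kotlin
// threads x stack size, expressed in GiB
fun threadStackFootprintGiB(threadCount: Int, stackKiB: Long): Double =
    threadCount * stackKiB * 1024 / (1024.0 * 1024 * 1024)

// 10000 threads x 512KiB stacks comes to roughly 4.9GiB -- the "5GB" above
```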

Therefore, using the threaded imperative model with Vaadin is completely fine and will create no obvious bottlenecks.

Conclusion

Quoting Donald Knuth:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

I suggest you start simple; only when it's absolutely clear that simple won't suffice should you employ more complex techniques. Also, if you're not a functional-paradigm guy, don't hop on the Rx/Promises bandwagon - use Kotlin's Coroutines instead, since they produce code that's vastly simpler and superior.

 
8 days ago

Nicolas Frankel put some of my feelings into an excellent article, Is Object-Oriented Programming compatible with an enterprise context?. During the development of Aedict Online (still a Java-based JavaEE/JPA app back then; now it's based on Vaadin-on-Kotlin), when trying to find rows via JPA I often looked for the necessary functionality on the JPA entity itself. Wouldn't it be great if, upon the user's log-in, we could find the user and verify his password using the following code?

fun login(username: String, password: String): User? {
  val user = User.findByUsername(username) ?: return null
  if (!user.verifyPassword(password)) return null
  return user
}

Unfortunately, I was not able to implement the User.findByUsername static method properly in JavaEE. I was forced by JavaEE to move such finders to an enterprise bean, because that was the only way to have transactions and the EntityManager. All my attempts to create such finders as JPA entity static methods failed:

  • The DI engine makes it impossible to look up beans from a static method;
  • creating a singleton bean with everything injected, and providing it from a static context, is a pattern discouraged in the DI world.

Eventually I realized that it was DI itself that directly forbade me to group the code in a logical way - finders (which are in fact factory methods!) together with the entity itself. I believe that the need to keep common functionality together is one of the core concepts of pragmatic OOP, and I felt that DI was violating it. As time passed I got so frustrated that I could no longer see the merits of the DI technology itself. After realizing this, I hesitantly rejected the DI approach and tried to prototype a different one.

I was learning Kotlin back then and had just learned of global functions taking blocks. What if those things could do the transaction handling for me, outside of the EJB context? Such heresy, working with a database just like that! The idea is described here, but the gist is that it's dead easy to write a function which runs a given block in a transaction, creates an EntityManager and provides it: the db method.
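The shape of such a db function can be sketched with the transaction machinery abstracted behind an interface. Tx and FakeTx are illustration-only names; the real db opens a JDBC connection, begins a transaction and provides an EntityManager:

```kotlin
interface Tx {
    fun commit()
    fun rollback()
}

// runs the block in a "transaction": commit on success, rollback on failure
fun <T> db(begin: () -> Tx, block: () -> T): T {
    val tx = begin()
    try {
        val result = block()
        tx.commit()
        return result
    } catch (e: Exception) {
        tx.rollback()
        throw e
    }
}

// a fake transaction so the sketch can be exercised without a database
class FakeTx : Tx {
    var committed = false
    var rolledBack = false
    override fun commit() { committed = true }
    override fun rollback() { rolledBack = true }
}
```

That is the whole trick: a plain function with a block parameter, no container, no proxies.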

With that arsenal at hand, I was able to simply write this code:

class User {
  ..
  companion object {
    fun findByUsername(username: String): User? = db {
      em.createQuery("select u from User u where u.username = :username")
        .setParameter("username", username)
        .resultList.firstOrNull() as User?
    }
  }
}

It was that easy. I added HikariCP connection pooling on top of that and I felt... betrayed. The Dependency Injection propaganda crumbled and I saw DI standing naked: a monstrosity of dynamic proxies and enormous stack traces, bringing no value but obfuscating things, making everything very complex and forcing me to split code in a way that I perceived as unnatural (and anti-OOP). By moving away from DI I was expecting a great loss of functionality; I expected that I would have to tie things together painfully. Everybody uses DI, so it must be the right thing to do, right?

Instead, the DI-less code worked just like that - there was no need to inject a lookup service into my JPA entity as Nicolas had to do; I did not have to resort to Aspect Weaving or whatever clever technique my tired brain refused to comprehend. Things were simple again. I've embraced the essence of simplicity. Nice poem. Back to earth, Apollo.

I started to use static finders and have loved them ever since. Later on I created a simple Dao which allowed me to define the User class simply as follows:

class User : Entity<Long> {
  companion object : Dao<User>
}

And the Dao would bring in useful "static" methods such as findAll(), count() etc., and the Entity would bring in instance methods such as save() and delete(), which allowed me to write code such as this:

val p = Person(name = "Albedo", age = 130)
p.save()
expect(p) { Person.findById(p.id!!) }
expect(1) { Person.count() }
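How the companion-object trick can work is easy to sketch with an in-memory store standing in for the database. This Dao is a simplified illustration, not VoK's actual class (for one, save() lives on the Dao here rather than on the entity):

```kotlin
open class Dao<T : Any> {
    private val store = mutableListOf<T>()
    fun save(entity: T) { store.add(entity) }
    fun findAll(): List<T> = store.toList()
    fun count(): Int = store.size
    fun deleteAll() { store.clear() }
}

data class Person(val name: String, val age: Int) {
    // the companion inherits Dao, so Person.count() etc. read like static methods
    companion object : Dao<Person>()
}
```

Since the companion object is a real object, it can inherit methods - which is exactly what makes Person.findAll() and Person.count() possible without any DI or code generation.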

Ultimately I ditched the JPA crap and embraced Sql2o with CRUD additions. You can find this approach in use in the Beverage Buddy Vaadin 10 Sample App, along with other goodies, such as:

grid.dataSource = Review.dataSource
category.delete()

By throwing DI away, I no longer had to run the container with the Spring Maven plugin or fiddle with Arquillian, since I could just run the code as it was, from simple JUnit tests: Server-side Web testing.

But that's a story for another day.

 
17 days ago

It is indeed possible to write Vaadin 10 Flow apps in Kotlin. Here are a couple of resources to get you started:


  • The skeleton application, which contains everything necessary to build and launch the app, and introduces only a very simple UI: a button and a text field. You can find all the necessary information here: karibu10-helloworld-application.
  • A more polished application which shows additional components and also demonstrates the ability to connect to a Polymer Template: the Beverage Buddy application. It has no database access and only serves in-memory dummy data. It is a part of the Karibu DSL framework; instructions to get it and launch it: Beverage Buddy.
  • The same Beverage Buddy application as above, but using an embedded full-blown H2 SQL database and based on the Vaadin-on-Kotlin framework: Beverage Buddy.

Please feel free to post any questions at the Vaadin Forums.

 
21 days ago

Traditionally, the testing of a web portal is done from the browser, by a tool which typically uses Selenium under the hood. This type of testing is closest to the user experience - it tests the web page itself. Unfortunately, it has several major drawbacks:

  • It is slow as hell: a typical Selenium test case takes at least 5-10 seconds to complete, depending on its complexity.
  • It is unreliable: the integration with Firefox breaks on every Firefox update, and sometimes Selenium itself fails or times out randomly.
  • It either requires proper component IDs, or you have to use CSS/XPath selectors, which are very flaky, tend to break on web portal upgrades, and tend to grow to absurd lengths, making them a maintainability nightmare.

Luckily, with Vaadin being a component-oriented framework, we can do better. How about we stop testing the browser, the client-server transport layer and the server-side HTTP parsing code, and start testing the app logic directly? Vaadin - being a server-oriented framework - holds all of the component information and hierarchy server-side, after all.

By bypassing the browser and running tests "server-side", with all of the business logic directly accessible on classpath from JUnit tests, we gain:

  • Speed: server-side tests are typically 100x faster than Selenium and run in ~60 milliseconds.
  • Reliability: we don't need arbitrary sleeps, since we're server-side and can hook into the data fetching.
  • They run headless, since there's no browser.
  • They can run after every commit, since they're fast.
  • You don't even need to start the web server, since we're bypassing the HTTP parsing altogether!

Yeah, we can't test CSS this way, but we can now test all the nitty-gritty form validation paths which were just too painful and slow to test with Selenium; that leaves Selenium just for testing the happy paths. This follows the concept of the Test Pyramid:

  • Most tests are quickly-running simple JUnit tests which test individual classes.
  • In lesser numbers, there are server-side tests which test the business logic, basic integration, and the "dark corners" of the UI rarely triggered by the user (e.g. validation, crappy user input etc). These tests achieve surprisingly large coverage of the app.
  • That leaves Selenium to test just a few happy flows and the overall system integration.

How to do that

Well, it's easy. We just need to mock Vaadin's UI, Session and Service; once that's done, we can simply create Vaadin components directly from JUnit tests as follows:

@Before
fun mockVaadin() {
    MockVaadin.setup()
}

@Test
fun testButtonCaption() {
    val button = Button("Click Me")
    expect("Click Me") { button.caption }
}

(the MockVaadin.setup() call mocks VaadinServletService and VaadinSession and creates a Vaadin UI, which provides enough of an environment for Vaadin components to work happily).

Well, this test is not really useful: we need to test our app instead of the Vaadin Button, right? For a really simple app with no navigator, we need to:

  • Instantiate our UI since that creates our app's UI,
  • Find components and interact with them - e.g. press the "Login" button etc.

Luckily, a lot of this is provided by the Karibu-Testing library. You can see the library in action in the Karibu Helloworld Application. When you build the app using ./gradlew, all tests (including the UI ones) run automatically. The UI test is just a simple class:

class MyUITest {
    @Before
    fun mockVaadin() {
        MockVaadin.setup({ MyUI() })
    }

    @Test
    fun simpleUITest() {
        val layout = UI.getCurrent().content as VerticalLayout
        _get<TextField>(caption = "Type your name here:").value = "Baron Vladimir Harkonnen"
        _get<Button>(caption = "Click Me")._click()
        expect(3) { layout.componentCount }
        expect("Thanks Baron Vladimir Harkonnen, it works!") { (layout.last() as Label).value }
        expect("Thanks Baron Vladimir Harkonnen, it works!") { _get<Label>().value }
    }
}

The mockVaadin() function sets up the test environment and instantiates our UI; now we're ready to test!

The simpleUITest() is the main meat. The test method runs with the Vaadin UI lock acquired, and thus we can access the Vaadin components directly. The _get method is provided by Karibu-Testing and serves for component lookup: the _get<TextField>(caption = "Type your name here:") invocation finds the one text field with the given caption in the current UI; if there are none, or multiple, the function fails.

The _click() method on a Button is really useful since it asserts that the button is actually visible, enabled and not read-only, and thus can be interacted with; only if all of this is satisfied is the button actually clicked, which invokes the click listeners etc.
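The checks _click() performs can be sketched with a tiny stand-in button. FakeButton is illustration-only; the real Karibu-Testing _click() works on actual Vaadin components:

```kotlin
class FakeButton(var isVisible: Boolean = true, var isEnabled: Boolean = true) {
    private val listeners = mutableListOf<() -> Unit>()
    fun addClickListener(listener: () -> Unit) { listeners.add(listener) }
    fun _click() {
        // refuse to click a button the user couldn't click either
        check(isVisible) { "The button is not visible" }
        check(isEnabled) { "The button is not enabled" }
        listeners.forEach { it() }
    }
}
```

The point of such guards is that a test which "clicks" an invisible or disabled button fails loudly instead of silently passing.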

All that's left is to assert the state of the application: we can assert that a Label has been added and that it has the proper contents.

Note that no web server is launched, not even an embedded Jetty - it's unnecessary. This makes the test run very fast and makes it really simple to debug: just place a breakpoint into the "Click Me" button's click handler and debug the test from your IDE.

A more complex example

With more complex apps which sport the navigator pattern, we additionally need to:

  • Instantiate our UI, since that creates our app's general outlook;
  • navigate to the view under test, since that populates the page with components;
  • auto-discover all @AutoView views, so that the navigator will work properly;
  • mock our app initialization (since there will be no server, those ContextListeners won't run automatically).

Since we don't use Spring nor JavaEE components, we don't have to painfully start/stop those fat environments. And since we don't need the web server either, we can still write just a basic JUnit test as follows:

class CrudViewTest {
    companion object {
        @JvmStatic @BeforeClass
        fun bootstrapApp() {
            autoDiscoverViews("com.github")
            Bootstrap().contextInitialized(null)
        }

        @JvmStatic @AfterClass
        fun teardownApp() {
            Bootstrap().contextDestroyed(null)
        }
    }

    @Before
    fun mockVaadin() {
        MockVaadin.setup({ -> MyUI() })
    }

    @Before @After
    fun cleanupDb() {
        Person.deleteAll()
    }

    @Test
    fun testEdit() {
        // pre-fill the database with a single person
        Person(name = "Leto Atreides", age = 45, dateOfBirth = LocalDate.of(1980, 5, 1), maritalStatus = MaritalStatus.Single, alive = false).save()
        
        // open the Person CRUD page.
        CrudView.navigateTo()

        // now click the "Edit" button in the Vaadin Grid, on the first person (0th index).
        val grid = _get<Grid<*>>()
        grid._clickRenderer(0, "edit")

        // the CreateEditPerson dialog should now pop up. Change the name and Save.
        _get<TextField>(caption = "Name:").value = "Duke Leto Atreides"
        _get<Button>(caption = "Save")._click()

        // assert the updated person
        expect(listOf("Duke Leto Atreides")) { Person.findAll().map { it.name } }
    }
}

This tests a fairly complex functionality of the Vaadin-on-Kotlin example CRUD app, namely the ability to edit a person. Let's break it down into individual pieces:

  • The bootstrapApp() uses the Karibu-Testing built-in autoDiscoverViews() method to find and register all @AutoViews; then it simply calls the WAR bootstrap, which initializes Vaadin-on-Kotlin and runs the migrations.
  • The teardownApp() calls the WAR teardown, which closes the database connection pool and tears down Vaadin-on-Kotlin.
  • The mockVaadin() mocks the Vaadin environment exactly as in the earlier, simple example. It also instantiates our MyUI, which sets up the Navigator and creates the page UI.
  • Since the database is initialized and ready, we can use it freely; for example, we can remove all persons in the cleanupDb() method. The Person itself is just a Sql2o entity; by the means of Dao<Person> it gains extension methods such as deleteAll() and other DAO methods. To learn more about this neat-and-simple database access, just watch the Vaadin-on-Kotlin Part 2 Video.
  • The bulk of the test is located in the testEdit() method, which shows the person list, clicks the "Edit" button on the first person, changes the name in the opened dialog and hits the Save button. We then assert against the database that the person has indeed been updated.

You can see the test itself here: CrudViewTest.kt; you can run the app yourself simply by typing this into your terminal:

git clone https://github.com/mvysny/vaadin-on-kotlin
cd vaadin-on-kotlin
./gradlew vok-example-crud-sql2o:appRun

You can find more information at Vaadin on Kotlin Example Project; for more information on Vaadin on Kotlin please visit the official Vaadin On Kotlin page.

It takes 2 seconds on my machine to bootstrap the JVM, load all classes, initialize everything, run both tests and then clean up. Any further test typically takes 60-120 milliseconds to complete, since everything is already loaded and initialized. This is very fast compared to launching a Spring application, not to mention launching a JavaEE application or launching a browser... Also, the whole setup is very easy, especially when compared to setting up a testbed to launch a JavaEE server; just a tiny bit of magic as compared to tons of magic when using Spring.

 
about 1 month ago

So you have built your first awesome VoK app - congratulations! You will probably want to launch it outside of your IDE, or even deploy it somewhere on the interwebz. Just pick any virtual server provider with your favourite Linux distribution.

There are the following options to launch your app:

From Gradle, using the Gretty plugin

With the proper configuration in your build.gradle file, you can simply use the Gretty Gradle plugin to launch WARs. Gretty comes pre-configured in the sample quickstarter VoK HelloWorld App, so you can grab the Gradle configuration from there:

git clone https://github.com/mvysny/vok-helloworld-app
cd vok-helloworld-app
./gradlew web:appRun

You can configure Gretty to launch your app either in Jetty or Tomcat; please read more here: http://akhikhl.github.io/gretty-doc/Gretty-configuration.html . To launch it in the cloud, just ssh to your virtual server, install OpenJDK 8, git clone your app to the server and run ./gradlew appRun.

From Docker, using the Tomcat image

Really easy - just head to the folder where your my-fat-app-1.0-SNAPSHOT.war file is, and run the following line:

docker run --rm -ti -v "`pwd`:/usr/local/tomcat/webapps" -p 8080:8080 tomcat:9.0.1-jre8

This command will start the Tomcat docker image and will mount your local folder with your war app into Tomcat's webapps/ folder. Your app will then run at http://localhost:8080/my-fat-app-1.0-SNAPSHOT . To run at http://localhost:8080 just rename your war to ROOT.war. To launch in your virtual server, just copy the WAR into your server, install docker and run the abovementioned command.

You can also create a Tomcat-based Docker image with your app easily. Just drop the following Dockerfile next to your war file:

FROM tomcat:9.0.1-jre8
RUN rm -rf /usr/local/tomcat/webapps/*
COPY *.war /usr/local/tomcat/webapps/ROOT.war
VOLUME /usr/local/tomcat/logs

Then, just run this in the directory with the Dockerfile:

docker build -t my-app .
docker run --rm -ti -p 8080:8080 my-app

Or even better, just use docker-compose.yml:

version: '2'
services:
  web:
    build: .
    ports:
     - "8080:8080"

Then all you need to run is just

docker-compose up --build --force-recreate

(--build will always rebuild the image so that you can play around with the Dockerfile; --force-recreate will delete old containers and create new ones so that Tomcat won't try to restore sessions from previous run).

Fat jar

It is possible to package Jetty or Tomcat into the WAR itself, add a launcher class and then simply run it via java -jar your-app.war, thus making an executable WAR. You can read more about this option here: https://stackoverflow.com/questions/13333867/embed-tomcat-with-app-in-one-fat-jar . Unfortunately I have no experience in this regard.

Beware though: fat jars may not work for apps whose dependencies place multiple files at the same path: for example, there will be only one MANIFEST.MF after the merge. You can read about the pitfalls here: http://product.hubspot.com/blog/the-fault-in-our-jars-why-we-stopped-building-fat-jars

It is possible to embed Jetty - just check out this launcher class: https://github.com/mvysny/vaadin-on-kotlin/blob/master/vok-example-crud-jpa/src/test/kotlin/com/github/vok/example/crud/Server.kt

 
3 months ago

Or what-if-a-fucking-meteor-falls-down-and-swaps-1s-and-0s

The worst atrocities have been committed in the name of being future-proof.

Hey, let's refactor this previously readable code using all the design patterns we can find, because, you know, what if somebody wants to add things in the future?

Ah, the dreaded "what if": it's easy to propose, hard to fight against, and thus the team usually gives up after several what-ifs and just nods their heads. In the better case, those what-ifs are kept at the bottom of the backlog; the manager is satisfied and the code is not polluted. In the worst case they are actually implemented, twisting the code base into a spaghetti of undecipherable patterns.

The best future-proof strategy I've seen so far is to keep the code simple and understandable. If the code is simple, then any requirement is easy to integrate. You must thus strive not to be future-proof; otherwise you immediately trade away code readability for some dubious future which may never come. Remember: complexity kills, and it's easy to introduce complexity but hard to get rid of it.

I have just recently read parts of the Clean Code book by Uncle Bob (he uses that nickname himself, so I will too). Ironically subtitled A Handbook of Agile Software Craftsmanship, the book is infected with the what-if fallacy and proposes to trade code simplicity for an uncertain future. For example, the Sql class on page 147:

public class Sql {
  public Sql(String table, Column[] column);
  public String create();
  public String insert(Object[] fields);
  public String selectAll();
  public String findByKey(String keyColumn, String keyValue);
  public String select(Column column, String pattern);
  public String select(Criteria criteria);
  public String preparedInsert();
  ...
}

What's wrong with this class? Absolutely nothing! It actually looks like an EntityManager. It is simple to use, auto-completion works and offers a reasonable set of utility methods. Yet Uncle Bob claims that it violates Single Responsibility Principle and proposes the following 'solution':

abstract public class Sql {
  public Sql(String table, Column[] column);
  abstract public String generate();
}
public class CreateSql extends Sql {
  public CreateSql(String table, Column[] columns)
  @Override public String generate()
}
public class SelectSql extends Sql {
  public SelectSql(String table, Column[] columns)
  @Override public String generate()
}
public class InsertSql extends Sql {
  public InsertSql(String table, Column[] columns)
  @Override public String generate()
}
public class SelectWithCriteriaSql extends Sql {
  public SelectWithCriteriaSql(String table, Column[] columns, Criteria criteria)
  @Override public String generate()
}
public class FindByKeySql extends Sql {
  public FindByKeySql(String table, Column[] columns, String keyColumn, String keyValue)
  @Override public String generate()
}

Uncle Bob claims that

The code in each class becomes excruciatingly simple.

Wow, you have just fucked up the client API completely by scattering it into a bunch of undiscoverable classes, adding useless inheritance complexity, and you call that an improvement. Sure, go ahead and refactor your innard private classes as you see fit, I don't care as long as you give me the single Sql class to work with! Even more hilarious,

Equally important, when it's time to add the update statements, none of the existing classes need change!

Oh, awesome! The set of SQL statements hasn't changed in 30 years, but let's be future-proof here because what if five completely new statements were suddenly invented?

This points out one of the tragedies of the Java world quite nicely. We have a simple problem, so let's invent a fucking framework around it which will "solve" an entire class of problems, complicating the world around it in the process. The True Academic Way (tm) - I Don't Care About Usability As Long As I Have My Thesis.

How DI Was Invented - The Ironic Version

The problem with Java 1.3: it was really dumb and tedious to do DB transactions:

void accommodate(Guest guest) {
  startDbTransaction();
  try {
    sql.insert(guest);
    commitDbTransaction();
  } catch(Exception ex) {
    rollbackDbTransaction();
    throw new RuntimeException(ex);
  }
}

Then some ingenious guy discovered that proxy interfaces could do that for us. But just having a single DB.wrap(interface) call was deemed too simple, too particular, not abstract enough and thus improper for a true academic soul; hence a "proper" solution (read: an enterprise-fucking-level complicated solution) was invented by some smartass: the XML+AOP Java solution.

But there is no way to simply instantiate an interface: you need to instantiate a class implementing that interface, then remember to pass it to DB.wrap, etc. This is obviously considered error-prone (because what if somebody forgets to call DB.wrap, right?) and thus it's obviously better to declare the construction chain in XML (because that's not error-prone at all). And thus we wound up combining services (aka programming a pipeline of functionality) in XML.

And thus Dependency Injection was born, giving birth to Spring and JavaEE. All that simply because this Kotlin code:

db {
  doStuff()
}

was too much pain to write in a chatty Java 1.3, and because the thinking of Java guys is perverted with complexity.

Luckily, Oracle is dropping JavaEE, so that's one monstrosity down the drain; also Node.js without this DI nonsense (but with its own class of issues) is gaining momentum. Will we live to see a world without Spring and Java, but with Kotlin and a simple approach one day?

 
5 months ago

Once a project leaves the rapid development phase and enters the calm waters of maintenance mode, the programming paradigm changes. Fancy hyped frameworks favored by project creators no longer play an important role. What matters most now is code readability and understandability. The code is already written - now it is going to be read, understood and fixed. After all, that's usually what happens to code - it is written once but read by many. This is true for open-source as well as closed-source: people in the team come and go, and new team members need to read the code to familiarize themselves with it.

So far I have come up with the following main principle of project maintenance:

  • You must be able to understand what the program does, without having to run it

When fixing bugs originating in production, you often don't have the possibility to reproduce the bug in your testing environment - production usually contains a different set of users, different data, the workflow to reproduce the bug may not be clear, etc. The only things you actually have are: a log file, a stacktrace, a vague description from the user on what she did and how she interacted with the app, and perhaps a screenshot. Usually this alone is not enough to reproduce the bug, so you start examining the code, looking for possible code execution flows that might have been taken by the CPU to trigger the bug.

In other words, we maintainers need to invest the mental power to read the code surrounding the stacktrace, in hopes of discovering the proper code execution flow which may lead to the bug. All that so that we can reproduce/trigger the bug ourselves, in our happy contained debug environment. Thus:

  • The execution flow of the code must be obvious: you can't understand the algorithm if you can't follow its instructions. Too many misnamed utility methods, or too many asynchronous calls with callbacks (especially if they continue the algorithm in a completely different class/actor/observable), seriously hinder readability.

To find the code execution flow that led to the bug, you usually need two things:

  • A stacktrace, with possibly good error message, to point you at the end of the code execution flow, and
  • The code, obviously.

The better the stacktrace, the stacktrace's message and the code, the faster we can discover the code execution flow. In order to discover the code execution flow faster, the code should have the following properties:

  • Code Readability - it's obvious at first sight what the code does. By no means am I implying that the code is the documentation; however, to fix a bug, one first needs to read the code and understand it. Add badly named variables and too many clever filters to your List-filtering code, and you'll need comments to explain that particular spaghetti.
  • Code Locality - the code doing similar functionality should be co-located: in one class, in one file, in one package. That's how we human beings like things that relate - nicely wrapped in one box.
  • Ability to Navigate - to understand the algorithm, one must understand the functions it calls. You must be able to Ctrl+click on anything in the code and have the IDE navigate to the code implementing that thing, be it a function, class, interface, operator, property, anything.

I have drafted a sketch list of things that tend to hinder these properties.

Overzealous protection against copy-paste

Affects: Code Readability

So now you've extracted all repetitive code into utility methods. Congratulations! Now your function calls a function, which calls a function, which calls a function... If those functions are not well-defined or are misnamed, code readability will suffer.

Often there are multiple methods which share only a few lines of code. One usually extracts the moving parts into an abstract class (generally a bad idea, see below) or introduces some builder, with the moving parts injected as blocks or Runnables (ugly, but better than using inheritance).

Remember: every rule has an exception and sometimes it's better to copy-paste. Especially in the UI world: either turn part of the UI into a reusable component, or copy-paste mercilessly.

OOP Inheritance

Affects: Code Readability

The main principle of OOP is still awesome: cram related code into a single class, which improves code locality. Hiding stuff behind private produces sane autocompletion results and makes your component really usable. Trouble starts when you start using functionality from other classes: you generally have to choose whether to call methods of another class either by reference (composition) or by inheritance.

Composition promotes components: reusable objects with a well-defined API that do not leak their internals and just do stuff. This is the good approach.
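To illustrate, here is a tiny hedged sketch of composition; the Car and Engine classes are made up for this example, not taken from any real codebase:

```kotlin
// Engine is a self-contained component; its internals stay hidden.
class Engine {
    fun start(): String = "engine started"
}

// Car reuses Engine by holding a reference (composition), exposing
// only the API it wants callers to see - nobody needs to subclass Engine.
class Car(private val engine: Engine = Engine()) {
    fun drive(): String = engine.start() + ", driving"
}
```

The caller simply writes `Car().drive()`; the Engine is an implementation detail that can be swapped without touching Car's clients.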

Inheritance promotes adding functionality by extending classes. Peachy, but it's not easy to do it properly. Quoting Josh Bloch: Design and document for inheritance or else prohibit it:

  1. Your protected methods are now part of the API as well
  2. A constructor must not invoke overridable methods. This is because the overridden method will be invoked from the superclass constructor, before the subclass constructor has run; a method may thus operate on an object before that object has been properly initialized.
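Point 2 can be demonstrated with a short Kotlin sketch (the classes are made up for this example): the superclass constructor calls an overridable method before the subclass's fields exist.

```kotlin
open class Base {
    val constructedWith: String
    init {
        // Dangerous: invokes the subclass override during construction.
        constructedWith = render()
    }
    open fun render(): String = "base"
}

class Child : Base() {
    private val name: String = "child"
    // When Base's init block runs, `name` has not been assigned yet.
    override fun render(): String = "name=$name"
}
```

Constructing a `Child` records `"name=null"` in `constructedWith`, even though `name` is declared non-null - exactly the "object used before it is initialized" trap Bloch warns about.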

Ever tried to read a class which extends a deep hierarchy, with methods magically called out of nowhere? You're effectively combining your class with all of the code from all of its superclasses, and you need to understand those as well. This hurts code locality (since the superclasses are located completely elsewhere) and also code navigability (since, thanks to polymorphism, it may only be clear at runtime what a method call will actually do).

I once had a colleague who blindly followed the newest hip rule in OOP, quoting: "every class must have an interface". There were lots of interfaces in that project. Ctrl+Click would bring you to an interface. Luckily, nowadays with IntelliJ you just press Navigate - Implementations, but it hurt the code navigability quite badly back then.

Dynamically-Typed Languages

Affects: Ability to Navigate

The very foundations of dynamic languages (say, JavaScript, Ruby, Python) are based on duck typing - if a class has a quack() method then it can quack. The problem with duck typing starts to show once you Ctrl+click on the quack() function call and realize that the IDE has no way of knowing:

  • which class you are trying to call (since you often do not specify the type of the class)
  • whether the callee class actually has the quack() method or not (since the method_missing() catch-all may be used)

Thus, on every Ctrl+click, the IDE usually presents all classes that define a quack() method (unless it can guess the callee's class type); and this list may not even include all possible target code points, because of method_missing().
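For contrast, a small hedged Kotlin sketch (the Duck/Mallard names are illustrative) showing why static typing keeps this navigation trivial:

```kotlin
// The receiver's type is declared, so the IDE can resolve quack()
// to exactly one interface and list its (finitely many) implementors.
interface Duck {
    fun quack(): String
}

class Mallard : Duck {
    override fun quack(): String = "quack!"
}

// Ctrl+click on quack() here navigates straight to Duck.quack().
fun makeNoise(duck: Duck): String = duck.quack()
```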

Incorrect Code Grouping

Affects: Code Locality

Sometimes people tend to put all JPA entities into one package, all Controllers into the controller package, all Views into the view package. This causes the code to be grouped by type instead of by functionality. This is the same fallacy as grouping racing cars by color (not the best analogy, but it's the best I have ;).

The fix is actually quite easy: all Views, Controllers and JPA entities dealing with the customer should instead be placed into the customer package (unless of course there is a glaring reason to do otherwise, for example when the JPA entity resides in a completely different layer, invoked remotely via REST).

Overuse of Annotations (Annotatiomania)

Affects: Ability to Navigate, Code Readability, Code Locality

An example:

@Transaction
@Method("GET")
@PathElement("time")
@PathElement("date")
@Autowired
@Secure("ROLE_ADMIN")
public void manage(@Qualifier("time") int time) {
  ...
}

Ctrl-clicking on @Transaction will take you to the Transaction annotation definition, which contains no code. To actually see the code, one needs to search all libraries for any code touching that annotation (and this is actually impossible with JavaEE). And then one usually encounters an uber-complex function dealing with all of the annotation's parameters (say, code dealing with the Required/RequiresNew/Mandatory/NotSupported/Supports/Never transaction attributes, XA transaction handling, bean/container-managed transactions) which is virtually impossible to grok in a sane time span.

The idea behind annotations is noble: to standardize a particular functionality and make it configurable via annotations. Using the convention-over-configuration principle, one can often simplify the annotated class so that the intent is immediately clear (say, @RequiresRole("ROLE_ADMIN")). However, often that is not enough, and in a fervent attempt to avoid writing code, an annotation-based DSL language is born. Usually with morbid results, for example @Secure({@And(@RequiresRole(@Or("ROLE_ADMIN", "ROLE_MANAGEMENT")), @RequiresLoginPlace("ACME_HQ")}). The problem is that annotations do not actually execute anything; the actual implementation handling all aspects of the configuration is thus someplace else, not easily reachable, which hurts the ability to navigate. This can be remedied, for example, by specifying an access verifier class in the annotation; however, the code that actually calls the verifier class is still hardly reachable.

However, sometimes this may be acceptable. Take Retrofit: the library is simple enough that the functionality is clear; also, one rarely needs to actually dig into its core and check how exactly that GET is executed. And since Retrofit forms a simple self-contained island which does just one thing (as opposed to Spring with all its plugins, which tend to permeate your entire code base), the complexity is contained within an acceptable black box.

The interceptor annotations such as @Transactional can be replaced with custom functions with blocks, for example:

db { em.persist(person) }

The db is just a function which starts and commits (or rolls back) the transaction. It has some 15-20 lines, is easy to understand, and you can just Ctrl+Click on the db function name to navigate directly to the source code. The idea behind the db {} function is described in more depth in Writing Vaadin Apps In Kotlin part 3. And you can just call this function from anywhere, as opposed to having an injector everywhere (by making all classes managed beans) and injecting a @Transactional class which finally can call the database.

Should the db{} function stop serving your needs (it has no XA support, or for some other reason), just fork and write your own, or use XA vendors. It's actually not that hard: the dirty little secret of XA is that it is still not 100% error-proof; a naive approach may work quite well here.
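For illustration, here is a minimal hedged sketch of what such a db{} function can look like. The Tx class below is a stand-in for a real JDBC connection/transaction; all names are illustrative, not taken from Vaadin-on-Kotlin:

```kotlin
// Stand-in for a real transaction handle (a JDBC Connection in practice).
class Tx {
    var committed = false
        private set
    var rolledBack = false
        private set
    fun commit() { committed = true }
    fun rollback() { rolledBack = true }
}

// Runs the block inside a transaction: commit on success, rollback on failure.
// This is the entire "interceptor" - ~15 lines you can Ctrl+click into.
fun <T> db(block: Tx.() -> T): T {
    val tx = Tx()   // in real code: obtain a connection and begin a transaction
    try {
        val result = tx.block()
        tx.commit()
        return result
    } catch (e: Exception) {
        tx.rollback()
        throw e
    }
}
```

Calling it is then just `db { em.persist(person) }` - no proxies, no annotations, and the implementation is one navigation hop away.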

JPA

Affects: Ability to Navigate

JPA is a tough beast. Its sheer amount of annotations makes you want to run and hide. There is absolutely no way one can understand what's going on behind the scenes with the simple Ctrl+click approach. However, once one follows a good tutorial and learns how to navigate around pitfalls such as detached beans, it does its job and is again an acceptable black box.

Yet, I grew weary of the constant surge of exceptions and N+1 select problems, and now I just select exactly what I need with plain SQL selects, using sql2o plus a bit of code to add simple CRUD capabilities in the Vaadin On Kotlin framework.

Loose Coupling

Affects: Ability to Navigate

Loose Coupling of large modules makes sense - it's not easy to understand the modular structure of the app if all of the modules leak internals like hell. However, it is easy to go all loose-coupling-crazy, stuff everything behind interfaces and thus turn Ctrl+click into a search for implementors of that particular function (and God forbid should there be more than one!). The granularity here really matters: loosely couple at the module level, tightly couple at the package level.

Arguing by loose coupling is often fallacious. Say, "using Rx makes your code loosely coupled". This actually sounds like a good thing, until you realize that you can no longer see the code flow! To rebut this fallacy, just ask:

  1. Do we need loose coupling in this particular place?
  2. Does it outweigh the cost of not seeing the code flow anymore? (Usually it doesn't.)
  3. Are you just fallaciously promoting your favourite framework?

Remember: Loose coupling directly reduces the ability to navigate and the possibility to read the code flow.

Interceptors / Aspects

Affects: Ability to Navigate

Interceptors/aspects are immensely popular with Dependency Injection frameworks. However, when overused, they can become a maintenance nightmare. Usually interceptors are employed by annotating a class/function, which hurts the ability to navigate but is still doable - by searching for all code referencing that particular annotation, one can eventually learn the code that "executes" the annotation (even though this gets harder as the number of annotated components increases). However, interceptors can also be configured (in Spring XML or a Guice Module) so that they apply to certain class types (and subclasses) only, and that makes them virtually undiscoverable for any coder who has just joined the project. This directly boosts the coder's frustration and the desire to leave the project, which is not what you want.

One of the solutions may be to use functions with blocks instead, as shown with the db {} example.

Java Stream API

Affects: Code Readability

A simple filtering with Java Stream API turns into this:

list.stream().filter(it -> it.age > 0).collect(Collectors.toList());

The stream() and collect(Collectors.toList()) are just noise; however, in Java this noise takes up almost half of the entire expression, which decreases readability. Compare that with the following Kotlin expression:

list.filter { it.age > 0 }   // the result is already a list.

Going more stream-crazy allows you to write unreadable code like this:

Stream<Pair> allFactorials = Stream.iterate(
  new Pair(BigInteger.ONE, BigInteger.ONE),
  x -> new Pair(
    x.num.add(BigInteger.ONE),
    x.value.multiply(x.num.add(BigInteger.ONE))));
return allFactorials.filter(
  (x) -> x.num.equals(num)).findAny().get().value;

Please read more at This Is Not Functional Programming. Streams are to be used cautiously: Java's nature is not a functional one, and using Java with a functional approach is going to produce weird code with lots of syntactic sugar, which will make little sense to traditional Java programmers.

To avoid the noise introduced by Java you can consider switching to Kotlin.
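For contrast, here is a hedged sketch of the same factorial computation from the stream example above, written as a plain Kotlin loop - boring, but the execution flow is obvious and debuggable line by line:

```kotlin
import java.math.BigInteger

// Computes num! by straightforward iteration - no Pair, no Stream.iterate,
// no filter/findAny; what you read is what the CPU executes.
fun factorial(num: Int): BigInteger {
    var result = BigInteger.ONE
    for (i in 1..num) {
        result = result.multiply(BigInteger.valueOf(i.toLong()))
    }
    return result
}
```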

Java Optional API

Affects: Code Readability

Optional is Java's way to deal with nulls. Unfortunately, Optional introduces methods which tend to be used instead of the built-in if statements. Consider this code:

Optional<String> optionalString = Optional.ofNullable(optionalTest.getNullString());
optionalString.flatMap(s -> something)
              .map(s -> { System.out.println(s); return s; })
              .orElseThrow(() -> new RuntimeException("something"));

Using the traditional approach of ifs and null-checks would have worked better here.
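For illustration, a hedged Kotlin sketch of that plain null-check approach; the OptionalTest class and getNullString() mirror the hypothetical names from the snippet above:

```kotlin
// Hypothetical class mirroring the Java example; may return null in general.
class OptionalTest {
    fun getNullString(): String? = "hello"
}

// A plain null-check instead of the Optional chain: read it top to bottom.
fun printOrFail(test: OptionalTest): String {
    val value = test.getNullString()
        ?: throw RuntimeException("something")
    println(value)
    return value
}
```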

Optional is also capable of breaking the Bean API. Consider the following code:

public void setFoo(String foo) {}
public Optional<String> getFoo() {}

This is actually not a property, since the types of the setter and the getter differ. This also breaks Kotlin's support for turning Java bean getters/setters into properties. The code could be fixed by changing the setter to setFoo(Optional<String> foo), but that makes it really annoying to call the setFoo method.

Instead, it is better to use JetBrains' @NotNull annotation and traditional language statements. The resulting code tends to be way more readable than any Optional black magic.

Spring, JavaEE, Dependency Injection

Affects: Ability to Navigate, Code Readability, Code Locality

Spring was founded as an alternative to J2EE, which in the old days contained a lot of XML. At that time Spring was lean; nowadays it has tons of modules, with varying quality of documentation. It seems that Spring permeates everything and uses all of the above methods:

  • In a lot of projects, the wiring granularity is too high. Instead of wiring modules, it often wires classes, often with synthetic interfaces in front of them. This is basically Loose Coupling at way too granular a level, which makes navigating such code harder.
  • It uses interceptors/annotations heavily - Spring Data and Spring Security, for example. Spring Security has become so huge that coders avoid it on purpose.
  • It uses clever tricks to employ interceptors. To apply interceptors to a class, Spring uses the CGLIB library to create a dynamic class which extends the original one, overrides all methods and invokes the interceptor chain. This has the unfortunate side effect of creating lots of $$$PROXY fields with null values which clutter the state of the original class.

JavaEE suffers from the very same problems, and adds the following issues on top:

  • It is impossible to see the actual implementation of, say, @Transactional, since it's implemented somewhere in the application server, which may not even be OSS. And even if it were possible, the code is highly complex and hard to read.
  • If JavaEE can't honor an annotation, it will simply ignore it. For example, calling an @Asynchronous method on the same bean instance will silently result in a synchronous call. This directly violates the main principle, since the program silently does something other than what the code says. To actually call the method asynchronously, the class needs to inject itself and call the method on that injected reference.

The Dependency Injection paradigm is highly invasive. Take the need to inject services into Vaadin (or Swing, JavaFX) components: since DI frameworks usually tend to hide the Injector from the programmer, the programmer usually makes every Vaadin component a managed bean in order to have the services it depends on injected. This makes it harder to use java-debug class-reloading techniques.

I've seen people using Spring only because of Spring Boot, since it is a pain in the ass to launch a war project in Eclipse. Let me rephrase: one's code base hurts just because the tooling of his choice sucks.

Overuse of async: Reactive, EventBus, AKKA

Affects: Code Locality, Ability to Navigate

Reactive, Rx, React, or whatever the asynchronous programming style is called today. At larger granularity, and with the proper task, it makes total sense to employ AKKA Actors: when they are big enough to form a grokkable box of synchronous functionality in which you can navigate. Also, under large loads, having too many blocked threads will cause their stacks to eat the memory, so async is always the way to go, right? It depends.

It is true that it is extremely handy to have only a single thread in node.js, or a single thread in an AKKA actor consuming messages, as opposed to the traditional multi-threaded approach. With a single thread, you don't need to worry about guarding your shared state with locks. And with advanced techniques like promises and/or coroutines, neither code locality nor the ability to navigate suffers.

Shit tends to hit the fan when you have a hammer labelled Rx and every problem is a nail. Then, usually, an Observable gets slammed onto every button listener, splitting the previously readable synchronous listener method into chunks of three-line asynchronous objects. There is no way to navigate there - this is the too-granular Dependency Injection all over again, since the code which configures those small asynchronous blocks (creates pipelines of them) is usually completely elsewhere than the blocks themselves. This horribly affects code locality.

It gets even worse when you have lots of actors with no promises/coroutines to handle request/response types of messages. If all you have is just an onMessage() method, then it's usually very hard or impossible to figure out who sent the message and thus who the caller is, so the ability to navigate suffers. Worse still when the actor needs to send and receive messages in a certain order: then you need to mark down in the actor state which messages you have already received and which you still need. You are effectively creating a defensive state machine. State machines are hard to maintain, since every piece of added state multiplies the number of possible code execution flows. This price is rarely worth whatever dubious benefits the async approach may have.

Debugging suffers with async as well - it is no longer possible to pause in the debugger and check the stacktrace to see who called you, since the caller is usually a background message-processing thread. To actually find the caller, you need to track down the code which created the message, as mentioned above.

The motto of EventBus - fire [the event] and forget - can be read as I have no idea what is going on when I fire this event, since it's nearly impossible at design time to discover all the event receivers that get notified. This hurts the ability to navigate.

Java 8 Default Methods on Interfaces

Affects: Code Locality

Everything is fine when the default methods are used purely as utility methods (or in place of extension methods, since Java lacks those). All hell breaks loose when the interfaces are used as traits, adding completely new functionality and demanding that state be stored, as in the following example:

interface Bark {
    public abstract int getBarkedCount();
    public abstract int increaseBarkedCount();
    default public String bark() {
        increaseBarkedCount();
        return "barked " + getBarkedCount() + " times";
    }
}

Of course this example is simple, silly and self-explanatory. Yet, imagine having 4-5 such interfaces implemented on an object, each bringing its tiny little piece of functionality into the object, but with every piece of functionality stored in a different file. Code locality suffers: it can no longer be seen what the object is responsible for, what its actual state is and what the state mutation paths are. The object becomes a God object with lots of responsibilities.

Missing Comments

Affects: Code Readability

Quoting Uncle Bob's Clean Code book, chapter 4:

Comments are, at best, a necessary evil. If our programming languages were expressive enough [...], we would not need comments very much. The proper use of comments is to compensate for our failure to express ourselves in the code.

That's totally wrong, stupid, and very dangerous:

  1. People who take that literally will end up with no comments in their code, and they will be proud of it. I can't even begin to describe how stupid that is.
  2. Uncle Bob actually contradicts himself: first he says that all comments are evil, and then he lists examples of good comments: legal comments (!!! yeah, truly the most important type of comment; Uncle Bob, certainly you do care about code clarity, right?), informative comments, explanation of intent...
  3. The "expressive language" myth is easy to debunk: there is no such language. Scala tried and failed to produce even readable code, let alone code that expresses intent. Computers don't execute intents, they execute code, and they don't care about intents. Computer code is not about what we want the computer to do; it is about what the computer will do, and those are two different things. That is why we need comments to explain our intent.
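A quick sketch of what an intent comment buys you (the RetryPolicy class and its numbers are made up for illustration):

```java
// Hypothetical example: the code alone says only what the computer will do;
// the comment records *why* - which no "expressive language" can say for us.
public class RetryPolicy {
    static int retryDelayMillis(int attempt) {
        // Intent: back off exponentially between retries, but cap the delay
        // at 30 s so a long outage doesn't leave the client silent for minutes.
        return Math.min(30_000, 1_000 << Math.min(attempt, 14));
    }
}
```

Delete the comment and the formula still compiles and runs identically, but the next maintainer has no idea whether the 30-second cap is a requirement or an accident.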

In short, having no comments out of fear that the comments may become obsolete is just plain stupid.

Scala

Affects: Code Readability

There are languages which say what they do in English, and then there's Scala:

 
7 months ago

You have your superdevmode development environment up and running. All is peachy and the world is unicorns, until you figure out that some pesky bug is only reproducible on a cell phone. Bam. What now?

No worries! The totally awesome thing about PC Chrome is how easy it is to access the phone's Chrome browser log, modify DOM, everything. All this from the comfort of your PC. Just follow this tutorial and prepare for a major wow effect: https://developers.google.com/web/tools/chrome-devtools/remote-debugging/ . Simply by clicking that Inspect button you will be able to use the full power of Chrome Dev Tools on your PC, of a page displayed and running in Android's Chrome.

Usually both the PC and the Android device are connected to the same wifi, and thus Android can see the PC. I'm going to assume here that your PC's IP is 192.168.100.17. Naturally you will type http://192.168.100.17:8080?superdevmode into Android's Chrome, and naturally it won't work.

The usual approach is to add tons of configuration options as stated below. TL;DR: skip them and use the port forwarding solution below which is way easier and less error-prone.

The old and traditional solution which you'll want to avoid unless port forwarding is not an option for you for some reason

We need to configure stuff first.

  1. First we need to open up the codeserver so that it accepts connections from all over the world, not just from localhost. Open your mvn vaadin:run-codeserver launch configuration in intellij, and set Command line to vaadin:run-codeserver -Dgwt.bindAddress=0.0.0.0
  2. To force the Android Chrome browser to connect to the proper codeserver one needs to set it in the URL. Just make the Android Chrome Browser to open http://192.168.100.17:8080?superdevmode=192.168.100.17
  3. To avoid this pesky message "Ignoring non-whitelisted Dev Mode URL:" in your Android's Chrome (some security precaution introduced in GWT 2.6) you need to bootstrap from a slightly modified widgetset.gwt.xml. Just follow http://stackoverflow.com/questions/18330001/super-dev-mode-in-gwt and add the following into your widgetset.gwt.xml and recompile, so that you will bootstrap with proper settings:
    <set-configuration-property name="devModeUrlWhitelistRegexp" value=".*" />
    

Now what to do if you don't have your own widgetset, but are instead just using the precompiled one from vaadin-client-compiled.jar? Obviously you will bootstrap off the default, unmodified DefaultWidgetSet.gwt.xml from vaadin-client-compiled.jar. You could run mvn clean install of the Framework with a modified client/src/main/resources/com/vaadin/DefaultWidgetSet.gwt.xml, but we can be smarter and use port forwarding.

Port Forwarding

Instead of twiddling with countless configuration options, we will pretend that your phone is the machine that is actually running the server. That way we can just type http://localhost:8080?superdevmode into Android's browser and bypass those pesky security precautions. This can be done by listening on TCP ports 8080 and 9876 on your phone and, whenever someone connects, forwarding all data to the PC. This technique is called port forwarding.

Make sure you have Android SDK installed. You can download that for free here: https://developer.android.com/studio/index.html - you can either download the full-blown Android Studio which comes with the SDK itself, or just scroll down to the end of the page and install the sdk-tools, the choice is yours. We will only need the adb binary anyway.

Now open up your console and type in the following:

$ adb reverse tcp:9876 tcp:9876
$ adb reverse tcp:8080 tcp:8080
$ adb reverse --list
(reverse) tcp:9876 tcp:9876
(reverse) tcp:8080 tcp:8080

If adb complains that error: no devices/emulators found just close your PC's Chrome, so that it will close its connections to Android and will thus allow adb to connect.

Now fire up Chrome remote debugging, then just type http://localhost:8080?superdevmode into your Android Chrome and you should be good to go.

FAQ

Q: The widgetset compiles but my changes aren't applied

A: Simply use the port forwarding solution. Else make sure there is no "Ignoring non-whitelisted Dev Mode URL:" message in the browser's console. If there is, you will need to bootstrap off a modified widgetset.gwt.xml.

Q: The widgetset recompilation fails

A: Simply use the port forwarding solution. Else make sure the codeserver listens on all interfaces (netstat -tnlp|grep 9876 will give you :::9876 instead of 127.0.0.1:9876). Also make sure you're using ?superdevmode=192.168.100.17 in the Android Chrome's URL.

Q: adb devices won't show my phone and/or adb complains with error: no devices/emulators found

A: Make sure that your PC's Chrome is not connected to Android. Close all Chrome Remote windows; close all the "Remote Devices" tabs in PC Chrome. Or simply close Chrome while we configure port forwarding.

 
7 months ago

When trying to fix some browser-side issue in Vaadin built-in components, it is often useful to modify those component sources while your app is running. This way, you can modify e.g. VButton.java, refresh your browser with F5 and immediately see the changes done in Vaadin Button. This blogpost describes how to configure Intellij to do just that.

We will need two things:

  • A standard Java Vaadin WAR project which employs the component we are going to modify, e.g. com.vaadin.ui.Button. You can use your own project; if you don't have one, for the purpose of this article you can easily generate one at https://vaadin.com/maven
  • The Vaadin Framework sources checked out from git. I'll guide you later on.

First, start by running your Vaadin-based WAR application from your IDE as you usually do. I assume here that you use only the default widgetset, with no added widgets; I need to write another blogpost on how to debug your custom widgetset.

Now that your app is up and running and accessible via the browser at, say, http://localhost:8080, we are going to employ GWT superdevmode. Generally, the idea is that once a special Vaadin switch is activated (?superdevmode added to your URL), the Vaadin Framework configures the browser to serve the widgetset javascript not from your project's WAR, but from a so-called GWT codeserver, which we will launch in a minute. The codeserver does two things:

  • when you press F5 in your Chrome, the codeserver will perform a hot javascript recompilation of Vaadin client-side widgets modified by you, and it will feed the compiled javascript to your Chrome.
  • the codeserver will also provide so-called sourcemaps to the Chrome browser, so that you will see the actual original Java sources in Chrome and will be able to debug them from Chrome.

Let us start by grabbing Vaadin sources:

$ git clone git@github.com:vaadin/framework.git

Make sure that you checkout appropriate branch (e.g. 8.0 if you use Vaadin 8.0.x, 7.7 if you use Vaadin 7.7.x etc), then compile:

$ mvn clean install -DskipTests

Then just open the main framework/pom.xml in Intellij. There will be compilation errors regarding import elemental.json.JsonObject missing, you can safely ignore those.

Launching the codeserver properly is tricky and produces lots of errors when not done right. To put it bluntly, codeserver is a bitch, prepare for lots of cursing and wasted time, you've been warned. The easiest option is to use maven vaadin plugin. Just open the Maven tool window in the Intellij in which you have the framework sources opened, go into vaadin-client-compiled / Plugins / vaadin / vaadin:run-codeserver, right-click and select Create "vaadin-client-compiled..." launch configuration. The most important part is the Resolve Workspace artifacts:

  • When it is unchecked, Maven will use vaadin-client.jar from ~/.m2/repository, meaning that you will need to run mvn clean install inside the client project every time you modify Vaadin GWT sources. This is definitely suboptimal and we can do better.
  • When it is checked, Maven will prefer client sources over jars from ~/.m2/repository. However, activating this mode and Debugging this launch configuration will sometimes fail to compile the DefaultWidgetSet with some ridiculous error message. If this happens, just read below for a workaround.

Workaround for vaadin:run-codeserver not starting: So, what now? Simple - just attach the sources as resources :-) Uncheck the Resolve Workspace artifacts, save the launch configuration. Then, open the client-compiled/pom.xml file and add the following snippet right below the <build> element:

    <resources>
        <resource>
            <directory>../client/src/main/java</directory>
        </resource>
    </resources>

Now it will be possible to launch the vaadin:run-codeserver task, and with it, the codeserver. The codeserver should eventually print something like this to the console:

[INFO] --- vaadin-maven-plugin:8.1-SNAPSHOT:run-codeserver (default-cli) @ vaadin-client-compiled ---
[INFO] Turning off precompile in incremental mode.
[INFO] Super Dev Mode starting up
[INFO]    workDir: /tmp/gwt-codeserver-1604961084994240944.tmp
[INFO]    [WARN] Deactivated PrecompressLinker
[ERROR] SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
[ERROR] SLF4J: Defaulting to no-operation (NOP) logger implementation
[ERROR] SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[INFO]    Loading Java files in com.vaadin.DefaultWidgetSet.
[INFO]    Module setup completed in 14374 ms
[INFO] 
[INFO] The code server is ready at http://127.0.0.1:9876/

Now that the codeserver is running, let us use it. In your Chrome, modify the URL to http://localhost:8080?superdevmode (or http://localhost:8080?superdevmode#!someview if you are using Navigator). The browser should say that it is recompiling the widgetset, and after a while, your app should appear as usual. To prove that the app is indeed using the codeserver, we will modify the VButton.java a bit.

Open the VButton.java in Intellij where the codeserver is running, and head to the constructor. At line 118, at the end of the constructor, just add the following line:

addStyleName("woot-my-style");

Save and recompile with CTRL+F9. There may be compilation errors about the missing import elemental.json.JsonObject; you can safely ignore those - the VButton.java will be hot-redeployed to the codeserver anyway.

Now, head to the browser, press F5, then Ctrl+Shift+C and click any Vaadin button. The appropriate div should now have the woot-my-style class. If not, kill the codeserver and make sure you are using the <resources> workaround above, with Resolve Workspace artifacts unchecked.

This concludes the hot-compilation part. Now for the debugging part. Press F12 in Chrome and head to the Sources tab. Now patiently unpack com.vaadin.DefaultWidgetSet / 127.0.0.1:9876 / sourcemaps/com.vaadin.DefaultWidgetSet / com / vaadin (at this point I start to wonder whether coping with all this is really worthwhile, now that we have Kotlin2JS) / client / ui / VButton.java. You will see Java sources in a browser. Cool. Now place a breakpoint at line 118 where the addStyleName() call resides, and press F5. Chrome should now stop at that line. You can use the leftmost debugger buttons to fiddle with the execution flow.

FAQ

Q: The codeserver fails to start with

[INFO] [ERROR] cannot start web server
[INFO] java.net.BindException: Address already in use

A: CodeServer loves to stay running even when killed by Intellij. Just do netstat -tnlp|grep 9876 and then kill the offending java CodeServer process.

Q: The codeserver does not pick up the changes I made

A: Make sure you are launching the codeserver in Debug mode. Then, make sure you saved the java file and hit CTRL+F9. Also make sure you are using the <resources> workaround above, with Resolve Workspace artifacts unchecked.

Q: The codeserver fails to compile widgetset because of some ridiculous compilation error

A: Make sure you are using the <resources> workaround above, with Resolve Workspace artifacts unchecked.