Why You Should Avoid TIBCO BusinessWorks 6

I used to be a TIBCO fan. Like, honestly.

I have spent my whole professional career working on integration projects, and I have strong experience with ESB products such as TIBCO BusinessWorks, Software AG webMethods, Apache Camel, etc.

Before working on TIBCO BusinessWorks 6, my clear ESB preference was BusinessWorks. BusinessWorks 5, I mean.
Indeed, TIBCO BusinessWorks 5 used to be a great integration tool: very stable, very well designed, easy to use, etc. I was clearly in love with this version and I have to thank the TIBCO engineers for it.

But this was before. Before BusinessWorks 6.

Even though I consider that the ESB as such will become more and more outdated, I am simply going to express my feelings about the product BusinessWorks 6, not about the theoretical concepts themselves. I am also not going to reference any other articles, comments or reviews you can find on the internet. This article is based solely on my own one-year BW6 experience.

The problems

The first concept I loved in BW5 was the fact that each component was autonomous. Instead of deploying your application on a server (just like the Integration Server in webMethods), you simply instantiated your component in an autonomous way. Every dependency was embedded inside the BW5 component. Sounds a lot like the concepts of microservices and fat JARs, right? In fact, TIBCO was pretty far ahead on such concepts.

With BW6, the core engine has been redesigned and is now based on OSGi. The main benefit is that you can now deploy several BW components on a single JVM. The system is flexible enough to implement whatever granularity you wish (one single fat JVM, one JVM per component, one JVM per logical group of components, etc.). This capability is nice but, from my perspective, it was not essential. As I said, with the rise of microservices, engineers tend to favor autonomy more and more.

I do not really know whether it is related to the OSGi integration, but throwing away the BW5 engine was a huge mistake. As I said, BW5 was really stable. With BW6, I have faced so many problems with the runtime engine that I feel anxious each time I need to run an application on my local environment or on a remote domain.
Deploying an application, for example, is not idempotent. You can perform the same set of actions (e.g. deploy an application A, deploy an application B, start application A and start application B) and get different results each time.

As Einstein said: “Insanity is doing the same thing over and over again and expecting different results”. I tell you that you may become insane with BW6.

The other main problem is the new Eclipse-based IDE: BW Studio. It is really unstable, and I do not see any real improvement across the versions TIBCO has released so far.

The workspace management, for instance, is a nightmare. All of a sudden a workspace can crash for whatever reason and become unusable. So you will have to create a dozen workspaces per month and waste your precious time. Another problem is that a set of projects can work in one workspace and not work in another. We suspect Studio keeps a local cache of the project state somehow, somewhere. In theory, an Eclipse-based IDE was a very nice feature, but believe me, I do hate this IDE and the number of bugs occurring randomly (a problem occurred: widget is disposed, a problem occurred: NullPointerException, an internal error has occurred: null argument, and so on).

By the way, have you ever seen an IDE offering a “Repair project” option? This is a new feature of BW6. Because Studio is not stable, believe me, you will have to use this function several times per day, praying each time for an actual fix. As an example: your workspace is fine, but after you restart your computer, the projects are suddenly full of errors. This is the moment when you start praying. And the worst part is that even this function is not idempotent! Repairing a project once may not lead to the same result as repairing it twice. Very frustrating…

Meanwhile, renaming a BW asset can also lead to random bugs. Do not expect to feel confident if you simply have to rename a service, an operation or whatever. In some cases the action will work; in other cases it will corrupt your project.

The shared modules are another part of the whole BW6 problem. They are more or less based on the same principles as BW5 design-time libraries (DTLs).
The first problem with shared modules is that if your schemas are a bit too complex (I will come back to what complex means below), you will not be able to achieve the same level of reusability as in BW5. In my company, because of these known limitations, we decided not to embed our schemas into shared modules but to simply copy-paste them into each BW application. The second problem is at design time. Shared modules are not compiled, and so are not managed as binaries by BW Studio. They are managed like any other project, with a source you can modify on the fly. A recent BW release tackled this, but with many limitations, and you can still modify some parts of a shared module.

Do you remember how to debug a single process in BW5? This was very simple, right? You were able to simply execute a single process with a given input and check its behavior quite quickly. Forget it with BW6.

Because of the OSGi engine, I assume, you will have to start your whole application (so wait several dozen seconds) before executing a simple test on a single process. And even this part is far from reliable. In my Studio, I am not able to do it every time. Sometimes when I am in debug mode, the pipeline containing the runtime variables is empty for whatever reason. If you come from BW5, you will most likely hate the way you debug an application in BW6 Studio. Just like me.

As I said, we also have to deal with complex schemas. You could obviously argue that the notion of complexity does not really exist for a schema; a schema is either compliant or not compliant with a standard. But if you are using abstract elements, multiple imports, etc., believe me, they are complex for Studio. We faced so many problems dealing with those schemas. I know that things were not perfect in BW5, but in BW6 this is a complete mess. Studio has many limitations and bugs. I feel like we spent more time finding Studio workarounds than actually developing core functions.

A new feature of BW6 is that each time you need to create a subprocess (for the non-TIBCO experts, a reusable piece of logic), it has to be based on a WSDL. Yes, a WSDL. Just ask a webMethods developer how long it takes to create a reusable flow service and design its contract. Most likely a few seconds. My mistake, I thought BusinessWorks was supposed to be a development accelerator.
This was improved in a recent release (direct subprocesses), but you still have to create an XSD. And it is no longer possible to directly create elements in your process input, as it was in BW5.

Do not expect great CI support. There is a plugin made by TIBCO developers for Maven integration, but it has so many limitations that you will most likely have to either fork it or create another one from scratch (I have seen both choices).

Runtime performance is another major problem. In a nutshell, BW6 performance is a disaster.

I ran simple performance tests to compare BW6 and Vert.x on exposing a simple REST service with a simple transformation. With Vert.x, the average latency is less than one millisecond. With BW6, the average latency is 10 times longer. With 100 concurrent users, it is 27 times longer.

I tried to collaborate and to share my tests with TIBCO on the TIBCO ideas portal: https://ideas.tibco.com/ideas/AMBW-I-6.

Meanwhile, with the rise of reactive implementations, TIBCO should in my humble opinion tackle this. I also tried to share my thoughts about this topic on the TIBCO ideas portal: https://ideas.tibco.com/ideas/AMBW-I-5.

I keep writing “I tried to share” because I have never received any feedback from TIBCO since I published my first idea, about 10 months ago. This is also very disappointing.

Conclusion

I could have mentioned many other problems and limitations (memory consumption, BW5 migration, TEA stability, palettes stability, SOAP limitations, Swagger limitations etc.) but I wanted this article to be concise, not a list of every problem I faced.

Clearly, though, BusinessWorks 6 is a disappointment for me, especially given how great BW5 was and my expectations for this new version.

This may somehow seem like a paradox, but I do still believe in TIBCO as a company. They have built great solutions in the past and I simply cannot believe they would not be able to achieve the same level of cleverness with BW6. But clearly, this will take time.

And until that time, in my opinion you should avoid TIBCO BusinessWorks 6.

If you do not share my point of view, feel free to open the discussion 😉

Tested BW versions: 6.2, 6.3 and 6.4.x (x < 2)

Introducing TIBreview: An Automated Quality Code Review Tool For TIBCO BusinessWorks 6

After having introduced reactiveWM, a framework for implementing reactive designs on webMethods, I would now like to introduce another development of mine, this time based on TIBCO: TIBreview.

This project is an automated quality code review tool for TIBCO BusinessWorks 6 projects. In a nutshell, TIBreview is based on an extensible and configurable engine running automated rules written in XPath or Java. At the end of a check, it can generate a report in CSV or PMD format (directly integrable into Jenkins or Sonar).

So far, TIBreview is capable of analyzing two types of BusinessWorks assets: processes and resources. For the first release, about 50 automated rules have been implemented:

  • Checking forbidden activities: Checkpoint, Copy File, etc.
  • Global TIBCO conventions: missing process description, default namespace used, process size, number of activities per process, missing Catch All activity, etc.
  • Transitions: missing label on a conditional transition, missing Empty activity after a transition parallelization, conditional transitions without a corresponding “no matching condition” transition
  • Critical sections: making sure a blocking activity is not called within a critical section scope
  • Infinite process call cycles: for example Process A -> Process B -> Process A
  • Resources: hardcoded values
  • Etc.

Because each context is unique, it is also possible to quickly enable/disable rules and to easily implement new ones. So if you are interested in doing so, because you have developed your own specific BusinessWorks framework for instance, you should give it a try. TIBreview can be used during development (manually triggered by a developer, for example) or as part of a continuous integration process.

For more information, check out the Github page: https://github.com/teivah/TIBreview

Last but not least, if you are interested in working on the project, please let me know 😉

From Application Integration To Microservices Architecture: A Pragmatic Approach

Today we are going to cover a real-life example showing the different methods to expose business capabilities, from application integration to SOA to microservices architecture.

In this example, let’s dig into an insurance company wanting to expose the business capability of quotation calculation for an insurance policy. Of course, to calculate the best quotation price for one customer, we need to know the customer’s existing contracts (perhaps to return a lower price for loyal customers).

Method 1: Application integration

method1_enterprise_integration

Let’s first think in terms of application. We can imagine that the insurance company was managing customers with a CRM application and then they started to develop a new application to manage quotations based on complex algorithms.

Because the list of existing contracts for a customer is managed by the CRM application, the quotation application must first retrieve them.

The first approach is to replicate the data managed by the CRM into a local database, hosted by the quotation application.

The replication can be made either:

  • With batches (using an ETL for instance).
  • Or rather in an event-driven mode, by triggering contract operations as soon as something changes in the CRM (using a point-to-point integration or an EAI).

Pros:

  • The quotation application is not tightly coupled with the CRM application. At runtime, the quotation application is autonomous.

Cons:

  • The replica DB is not the company’s master data source for customers. It may lead to some data quality issues.

Method 2a: SOA with an ESB

method2_soa_with_esb

Now let’s shift paradigms and consider the service as the central enterprise building block, where reuse is everything.

We still have two applications, a CRM and a quotation one.
The CRM application exposes a getContract service based on a client ID and the quotation application exposes the calculation service based on a list of contracts.

Based on traditional SOA layers, the most intuitive approach would be to expose a business service on top of these two existing applicative services.

The new service would be owned by the ESB (as a SOAP or a REST web service) and would orchestrate the sub service calls:

  • Invoke the getContract service.
  • Invoke the quotationCalculation service based on the contract list retrieved with the previous call.
  • Send back a synchronous reply to the consumer.

Pros:

  • No more data replication.

Cons:

  • The new business service is tightly coupled at runtime with both applications.

Method 2b: SOA without an ESB

method3_soa_without_esb

Because SOA does not necessarily mean ESB (even though most SOA implementations are ESB-based), we can think about exposing the business service directly on top of the quotation application.

Instead of being owned and hosted by a middleware layer, the service is exposed directly by the quotation application. It will first call the getContract service, still exposed on top of the CRM application, and then calculate the best quotation price.

Pros:

  • No data replication.

Cons:

  • The quotation application is tightly coupled with the CRM application.

Method 3a: Synchronous microservices

method4_microservices_sync

One last time, let’s shift paradigms and now consider microservices as the information system building block. Whereas SOA promotes the share-as-much-as-possible principle, microservices architecture promotes the opposite: the share-as-little-as-possible principle.

A microservice has several characteristics, among which:

  • Autonomy
  • Isolation

These characteristics are the main drivers of microservices architecture and lead to scalable and resilient platforms insofar as:

  • Each service can be scaled independently.
  • Failure can be isolated and gracefully managed using circuit-breaker pattern.

Based on these two characteristics, the quotation microservice must embed the contract information in a local database using event sourcing and CQRS patterns.
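To make this more concrete, here is a minimal, hypothetical sketch of such a local read model in Java (the class and method names are illustrative and not tied to any specific framework; the event consumption from a broker or event store is left out):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical CQRS read model: the quotation microservice keeps a local,
// queryable copy of contracts, built from the contract events it consumes.
public class ContractProjection {

	// Local store: customer ID -> contracts known so far
	private final Map<String, List<String>> contractsByCustomer = new ConcurrentHashMap<>();

	// Called for each "contract created" event consumed from the event log/bus
	public void onContractCreated(String customerId, String contractId) {
		contractsByCustomer
				.computeIfAbsent(customerId, id -> new CopyOnWriteArrayList<>())
				.add(contractId);
	}

	// Used by the quotation calculation: no runtime call to the CRM is needed
	public List<String> contractsOf(String customerId) {
		return contractsByCustomer.getOrDefault(customerId, Collections.emptyList());
	}
}

The point is simply that the quotation calculation queries its own local data instead of calling the CRM at runtime.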

We can undoubtedly notice that this method is similar to the first one. Of course the granularity is completely different (from monolithic application to lightweight microservice), but insofar as IT is generally cyclical, it is not very surprising to notice similarities.

In this example, the quotation microservice exposes a synchronous boundary to calculate the customer quotation (based on REST or Thrift for instance).

This example fits perfectly, but if we had considered a more complex service (with another function already handled by another microservice, for instance), the following choice would have had to be made:

  • Either embed this external function into our quotation microservice. But if we consider that micro means doing one thing and doing it well, we would inevitably lose service cohesion.
  • Or couple the quotation microservice with an external microservice, but then it could no longer be considered autonomous.

Pros:

  • The quotation service is autonomous at runtime and can be scaled independently from the rest of the platform.

Cons:

  • Even using modern approaches such as event sourcing and CQRS, we are still talking about data replication.
  • In complex cases, a trade-off must be found between service cohesion and service autonomy.

Method 3b: Asynchronous microservices

method5_microservices_async

Let’s now consider two microservices, the former exposing the getContract capability and the latter exposing the quotationCalculation capability.

With this approach, we consider asynchronous interactions between the two microservices and with the consumers. The overall coupling is obviously strongly reduced.

One of the main characteristics of microservices architecture is to promote choreography (without any central component) over orchestration (with a central component).

The global routing logic might be performed:

  • Based on publish-subscribe. Each microservice publishes its response to an event bus (in our case the quotation microservice subscribes to contract events and the consumer subscribes to quotation events).
  • Based on a logic either predefined by the consumer or distributed across the microservices. The interactions between microservices can be done using standards such as Reactive Streams.

Such architectures are also called reactive microservices architecture.

Pros:

  • If two microservices must communicate with each other, compared to 3a we can still achieve low coupling and high service cohesion at the same time.

Cons:

  • Global asynchronicity can lead to more complex architectures (in terms of development, monitoring etc…).
  • Not all service consumers are asynchronous-ready.

Conclusion

As usual in IT, there is no one-size-fits-all approach. Microservices architecture is obviously the next “big thing” in enterprise integration, but it is worth understanding the pros and cons of each architecture type and also remembering our legacy.


Vert.x: Understanding Core Concepts

Today, a deep dive into a framework for building reactive applications on the JVM: Vert.x.
In this post we are going to cover the main concepts of Vert.x and provide examples for developers wanting to see some concrete implementations.

The source code of each example is available at https://github.com/teivah/vertx-tutorial

Overview

Vert.x is a project started in 2011 by Tim Fox. It is still an official Eclipse Foundation project, but it is led by Red Hat, which sees Vert.x as a strategic project.

Vert.x is a framework based on the JVM allowing developers to implement autonomous components. These components communicate together in a weakly-coupled way by sending messages over an Event Bus.

Vert.x offers a polyglot approach because developers are free to implement their components among a list of languages: Java, JavaScript, Groovy, Ruby, and Ceylon.

Likewise, Vert.x offers functionalities for implementing reactive applications based on a non-blocking and callback approach.

Vert.x is often described as the Node.js for the JVM, but we will see the main differences between the two frameworks.

Reactor pattern and event loop

As implemented and popularized in Node.js, Vert.x is based upon the Reactor pattern.

The Reactor pattern is a solution:

  • To handle concurrent events (a connection, a request or a message).
  • Based upon a synchronous and blocking demultiplexer (also known as event loop) listening for incoming events.
  • Dispatching events to appropriate handlers if an operation can be executed in an asynchronous fashion.

A handler is generally implemented to manage operations such as I/O, database connection, HTTP request etc…

This pattern is often used to offer a solution to the C10k problem. This problem might be described very simply: how can a server handle ten thousand connections at the same time?

In such conditions, the classical approach by creating one dedicated thread per request is no longer possible due to physical and/or OS limitations (way too much thread context switching for instance).
In comparison, the Reactor pattern offers a model with a single-threaded event loop.

Yes. One thread to manage 10k connections. If you are not familiar with such concepts, this is pretty disruptive, isn’t it?
One thread to manage incoming events and dispatch them to asynchronous handlers.

The Vert.x approach is slightly different though. Let’s imagine now that you want to scale up your machine by adding more CPU cores. If you keep one single thread, you will not benefit from these new resources.

With Node.js, though, you can usually bypass this limitation by running one process per core.

In comparison, Vert.x allows you to configure and define the number of instantiated event loops. This is clearly a lovely and powerful option. A best practice to get the most out of your application’s performance is to set up as many event loop instances as there are CPU cores available.
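As an illustration, here is a minimal sketch reusing the DeploymentOptions API detailed later in this post (the verticle class name is the one used in the deployment examples below):

// Deploy as many event loop instances as there are CPU cores available
int instances = Runtime.getRuntime().availableProcessors();
vertx.deployVerticle("org.enterpriseintegration.Standard",
		new DeploymentOptions().setInstances(instances));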

It is worth saying that as a developer, you have to be very careful with the way you manage the event loop.
With the classical approach, if one thread is blocked, it is of course not a good thing, but it will not impact many other threads.
With the reactor approach, a blocked thread would simply result in a disaster because you would impact all ongoing and upcoming events.

In addition, Vert.x offers a wide set of non-blocking APIs. You can use Vert.x for non-blocking web operations (based upon Netty), filesystem operations, data access (for instance JDBC, MongoDB and PostgreSQL), etc.
There is also a standard way, described below, to manage a blocking operation directly from an event loop.

Verticle model

As mentioned in the overview, Vert.x proposes a model based upon autonomous and weakly-coupled components.
This type of component is called a verticle in Vert.x and can be written in any of the languages we have seen. Each verticle has its own classloader.

It is not mandatory to develop Vert.x applications based upon verticles, but it is strongly recommended to adopt this model.

A verticle is basically an object implementing two methods: start() and stop().
There are three different types of verticles:

  • Standard: Verticles based upon the event loop model. As mentioned, Vert.x allows configuring the number of event loop instances.
  • Worker: Verticles based upon the classical model. A pool of threads is defined, and each incoming request will consume one available thread.
  • Multi-threaded worker: An extension of the basic worker model. A multi-threaded worker is managed by one single instance but can be executed concurrently by several threads.

The examples below show how to configure and instantiate each verticle type.

The Verticle approach has some similarities with the Actor model but is not a full implementation of it.
I will not dig too much deeper into these concepts (it would be a nice idea for a future post, though, to present in detail the differences between Vert.x and an Actor-compliant framework such as Akka). If we just stick to the three main principles of an Actor given in the Reactive Messaging Patterns with the Actor Model book:

  • Send a finite number of messages to other actors: as we will see in the next part, verticles communicate with each other in an asynchronous fashion using messages.
  • Create a finite number of new actors: in my understanding, this is not the philosophy of Vert.x: each verticle is deployed by a central process. Having concrete dependencies between verticles to form a supervision system, where a parent supervises its children, is not the model proposed by Vert.x.
  • Designate the behavior to be used for the next message it receives: still according to my understanding, verticles are designed to be stateless components. They do not maintain state to modify, for instance, their behavior in preparation for future messages. It would still be possible to implement this manually, but there is no Vert.x standard for doing it.

Event Bus

The last core concept is the Event Bus. As you can imagine, the Event Bus is the support for exchanging messages between the different verticles in a weakly-coupled fashion.

Each message is sent to a logical address. For example: customer.create

The event bus supports the two standard pattern types: point-to-point and publish/subscribe. With the point-to-point pattern, you can also implement request-reply if the recipient must reply to the initial message.

One important thing to understand: the event bus implements only best-effort delivery.
It means there is no way to have guaranteed delivery by, for instance, storing messages on disk. So messages can genuinely be lost.
Likewise, there is no capability to implement durable subscribers. It means that if a consumer is not connected to the Event Bus when a message is published, it will never be able to retrieve it.

Within your Vert.x application, the Event Bus may be the standard for synchronous and asynchronous exchanges that do not require guaranteed delivery.
Otherwise, you should use a third-party messaging middleware.

If your Vert.x application runs on a single JVM node, there is no physical channel; the Event Bus simply remains an on-heap messaging endpoint.
If your application is clustered, an Event Bus endpoint is available through a TCP channel. But this logic remains invisible to the developer: there is no difference in the way you use the send/publish API. Vert.x manages that for you.

One last precision: so far, with the existing versions (up to 3.2.1), the Event Bus TCP communication is not encrypted. Based on the roadmap, it will be implemented from version 3.3 (~ June 2016).

Let’s code!

Enough theory for this post. Let’s see now some concrete examples:

  • In the first and second examples, we will see the basic concepts needed to implement a REST API.
  • In the third one we will describe how it is possible to compose asynchronous methods.
  • In the fourth example we will see how to publish, send and receive messages on the Event Bus.
  • The last example will show how to create and deploy verticles.

Create a basic REST service

Github example

In this example, we will create a simple method on a REST resource.

public class Sample extends AbstractVerticle {

	// Start method
	@Override
	public void start(Future<Void> startFuture) {
		// Create a router object allowing to route HTTP request
		final Router router = Router.router(vertx);

		// Create a new get method listening on /hello resource with a parameter
		router.route(HttpMethod.GET, "/hello/:name").handler(routingContext -> {
			// Retrieving request and response objects
			HttpServerRequest request = routingContext.request();
			HttpServerResponse response = routingContext.response();

			// Get the name parameter
			String name = request.getParam("name");

			// Manage output response
			response.putHeader("Content-Type", "text/plain");
			response.setChunked(true);
			response.write("Hello " + name);
			response.setStatusCode(200);
			response.end();
		});

		// Create an HTTP server listening on port 8080
		vertx.createHttpServer().requestHandler(router::accept).listen(8080, result -> {
			if (result.succeeded()) {
				// If the HTTP server is deployed, we consider the Verticle as
				// correctly deployed
				startFuture.complete();
			} else {
				// If not, we invoke the fail method
				startFuture.fail(result.cause());
			}
		});
	}

	// Stop method
	@Override
	public void stop(Future<Void> stopFuture) throws Exception {
		// Do something
		stopFuture.complete();
	}
}

We will see in the last example how to deploy a verticle. For now, you simply have to note that if you need to create your own verticle, you must extend the AbstractVerticle class.

In this class, to manage the deployment and un-deployment parts, you can override these four methods:

  • start()
  • start(Future<Void>)
  • stop()
  • stop(Future<Void>)

The start methods are called by Vert.x when an instance is deployed and the stop methods when an instance is undeployed.
With the methods taking a Future<Void> parameter, you have the capability to indicate asynchronously whether the instance has been successfully deployed.

final Router router = Router.router(vertx);

The router object is created from the vertx one, a protected field inherited from AbstractVerticle.

router.route(HttpMethod.GET, "/hello/:name").handler(routingContext -> {
	// Retrieving request and response objects
}

We use route to expose a new GET method on a /hello resource with a name parameter.

If this verticle is a standard one, the handler will be invoked on the event loop. It means the implementation must never, ever block. As we mentioned, each time an operation can be managed asynchronously, we need to dispatch it to a dedicated handler.
The routingContext object, received as the lambda parameter, contains the request context.

HttpServerRequest request = routingContext.request();
HttpServerResponse response = routingContext.response();

From the routingContext, we retrieve the incoming request and the response we are going to format.

String name = request.getParam("name");

We create a new String based on the input name parameter.

response.putHeader("Content-Type", "text/plain");
response.setChunked(true);
response.write("Hello " + name);
response.setStatusCode(200);

The response is then formatted (content, content-type etc…).

response.end();

At the end, when the response is formatted, we invoke the end() method to tell the HttpServerResponse to send the reply back.

vertx.createHttpServer().requestHandler(router::accept).listen(8080, result -> {
	if (result.succeeded()) {
		startFuture.complete();
	} else {
		startFuture.fail(result.cause());
	}
});

After having configured our first route, we now create an HTTP server listening on port 8080.
Based upon this configuration, we can then indicate to the startFuture whether the verticle has been correctly started.
If there is no particular problem (the port not already being in use, for instance), we call the complete() method. Otherwise, we invoke fail(Throwable).

And that’s it! You have just created your first REST service.

Manipulate Json data and execute blocking code

Github example

In this example, we will expose a POST method, but we will also see how to manipulate JSON using the Vert.x API and how to manage blocking code from a verticle.

public void start(Future<Void> startFuture) {
	final Router router = Router.router(vertx);

	// Create a BodyHandler to have the capability to retrieve the body
	router.route().handler(BodyHandler.create());

	router.route(HttpMethod.POST, "/file").handler(routingContext -> {
		HttpServerResponse response = routingContext.response();

		// Retrieve the body as a JsonObject
		JsonObject body = routingContext.getBodyAsJson();
		// Get the filename attribute value
		String filename = body.getString("filename");

		// Execute a blocking part
		vertx.executeBlocking(future -> {
			byte[] bytes = parseLargeFile(filename);
			// Complete the future with the generated objects in the
			// blocking method
			future.complete(bytes);
		} , res -> {
			if (res.succeeded()) {
				// If the blocking part succeeded
				byte[] bytes = (byte[]) res.result();

				response.putHeader("Content-Type", "application/octet-stream");
				response.setChunked(true);
				response.write(Buffer.buffer(bytes));
				response.setStatusCode(200);
				response.end();
			} else {
				// Otherwise we return an error
				response.setStatusCode(500);
				response.setStatusMessage("Internal error: " + res.cause().getMessage());
				response.end();
			}
		});
	});

	vertx.createHttpServer().requestHandler(router::accept).listen(8080, result -> {
		if (result.succeeded()) {
			startFuture.complete();
		} else {
			startFuture.fail(result.cause());
		}
	});
}

The first part is similar except that here we are exposing a POST method on a /file resource.

JsonObject body = routingContext.getBodyAsJson();
String filename = body.getString("filename");

The Vert.x core API provides mechanisms to manipulate JSON data.
In this case, we simply ask the routingContext to parse the body as a JsonObject.
We then create a filename String using the getString(String) method.
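For completeness, the same JsonObject API can also be used to build JSON output; a small illustrative snippet, independent from the example above:

// Build a JSON payload with the fluent JsonObject API
JsonObject reply = new JsonObject()
		.put("filename", filename)
		.put("status", "processed");
// encode() produces a compact String, encodePrettily() a more readable one
String payload = reply.encode();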

vertx.executeBlocking(future -> {
	//Blocking part
	future.complete();
} , res -> {
	if (res.succeeded()) {
		//Success
	} else {
		//Failure
	}
});

Same warning as before: because we are implementing an event loop, we must take care to never block it.
Nevertheless, let’s imagine that you must call a blocking method, for instance to read a file (even though, as we will see in the next example, Vert.x provides a non-blocking API for I/O operations) or anything else still based on a blocking API.

Vert.x allows doing it from your event loop implementation by simply encapsulating the blocking part within an asynchronous handler.

vertx.executeBlocking(future -> {
	byte[] bytes = parseLargeFile(filename);
	// Complete the future with the generated objects in the
	// blocking method
	future.complete(bytes);
}

In our example we encapsulate a blocking parseLargeFile() method within a future.

res -> {
	if (res.succeeded()) {
		byte[] bytes = (byte[]) res.result();
		response.putHeader("Content-Type", "application/octet-stream");
		response.setChunked(true);
		response.write(Buffer.buffer(bytes));
		response.setStatusCode(200);
		response.end();
	} else {
		response.setStatusCode(500);
		response.setStatusMessage("Internal error: " + res.cause().getMessage());
		response.end();
	}
});

The res object is in that case an AsyncResult containing the asynchronous result of the previous future.
Here we set up one handler for the case where the future succeeded and another handler otherwise.

Orchestrate asynchronous executions

Github example

In this example, we will show the different ways to orchestrate asynchronous executions.

Let’s imagine we have implemented these three methods:

// Read file method
private Future<String> readFile() {
	Future<String> future = Future.future();
	
	// Retrieve a FileSystem object from vertx instance and call the
	// non-blocking readFile method
	vertx.fileSystem().readFile("src/main/resources/example03/read.txt", handler -> {
		if (handler.succeeded()) {
			System.out.println("Read content: " + handler.result());
			future.complete("read success");
		} else {
			System.err.println("Error while reading from file: " + handler.cause().getMessage());
			future.fail(handler.cause());
		}
	});
	
	return future;
}

// Write file method
private Future<String> writeFile(String input) {
	Future<String> future = Future.future();
	
	String file = "src/main/resources/example03/write.txt";
	
	// Retrieve a FileSystem object from vertx instance and call the
	// non-blocking writeFile method
	vertx.fileSystem().writeFile(file, Buffer.buffer(input), handler -> {
		if (handler.succeeded()) {
			System.out.println("File written with " + input);
			future.complete(file);
		} else {
			System.err.println("Error while writing in file: " + handler.cause().getMessage());
			future.fail(handler.cause());
		}
	});
	
	return future;
}

// Copy file method
private Future<String> copyFile(String input) {
	Future<String> future = Future.future();
	
	// Retrieve a FileSystem object from vertx instance and call the
	// non-blocking copy method
	vertx.fileSystem().copy(input, "src/main/resources/example03/writecopy.txt", handler -> {
		if (handler.succeeded()) {
			System.out.println("Copy done of " + input);
			future.complete("Copy success");
		} else {
			System.err.println("Error while copying a file: " + handler.cause().getMessage());
			future.fail(handler.cause());
		}
	});
	
	return future;
}

Each method is now based upon a non-blocking API provided by Vert.x. We set up handlers for success and failure.

The next challenge will be to orchestrate them.

In a synchronous mode you could have done something like:
copyFile(writeFile(readFile()))

Of course with asynchronous developments, this is not possible anymore.
There are two ways to do it. Let’s see the first one:

// Orchestration based on setHandler
private void method1() {
	// Create first step
	Future<String> future1 = readFile();

	// Set handler on future1
	future1.setHandler(res1 -> {
		if (res1.succeeded()) {
			Future<String> future2 = writeFile(res1.result());

			// Set handler on future 2
			future2.setHandler(res2 -> {
				if (res2.succeeded()) {
					Future<String> future3 = copyFile(res2.result());

					// Set handler on future 3
					future3.setHandler(res3 -> {
						if (res3.succeeded()) {
							System.out.println(res3.result());
						} else {
							// Manage doSmthg3 errors
						}
					});
				} else {
					// Manage doSmthg2 errors
				}
			});
		} else {
			// Manage doSmthg1 errors
		}
	});
}

Pretty verbose right? Let’s dig into the code.

Future<String> future1 = readFile();

The first action is to retrieve the Future object sent back by the readFile() method.

future.setHandler(res -> {
	if (res.succeeded()) {
		// If the future succeeded
		Object o = res.result();
		// Do something...
	} else {
		// If the future failed
	}
});

The rest of the code is basically a composition of such patterns.
We set up one handler if the future succeeded and another one if it failed. The result of the previous execution is retrieved with the result() method.

Since 3.2.1, there is another way, less verbose, to orchestrate asynchronous functions:

// Orchestration based on compose
private void method2() {
	// Create first step
	Future<String> future1 = readFile();

	// Define future1 composition
	future1.compose(s1 -> {
		Future<String> future2 = writeFile(future1.result());

		// Define future2 composition
		future2.compose(s2 -> {
			Future<String> future3 = copyFile(future2.result());

			// Because the future3 is the last, we define here a handler
			future3.setHandler(handler -> {
				if (handler.succeeded()) {
					System.out.println(handler.result());
				} else {
					// Manage doSmthg3 errors
				}
			});
		} , Future.future().setHandler(handler -> {
			// Manage doSmthg2 errors
		}));
	} , Future.future().setHandler(handler -> {
		// Manage doSmthg1 errors
	}));
}

The recurring pattern is now the following:

future.compose(res -> {
	// If the future succeeded, res contains the result of the previous future
	// Do something
}, Future.future().setHandler(handler -> {
	// If the future failed
}));

The compose method takes as arguments a Handler, invoked if the execution succeeded, and another Future to which the failure (if any) will be propagated.
At first glance, this implementation may seem less natural but it is about 30% less expensive in terms of lines of code than the first implementation with setHandler().

Last but not least, it is worth noting that the CompositeFuture object offers static methods to implement handlers on the completion of a list of futures, for example. This can be useful when all your futures are independent (one future does not depend on the execution of another).
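As an illustration, and assuming CompositeFuture is available in your Vert.x version, waiting for two independent futures could look like the following sketch:

// Run two independent asynchronous operations and react once both have completed
Future<String> f1 = readFile();
Future<String> f2 = copyFile("src/main/resources/example03/read.txt");

CompositeFuture.all(f1, f2).setHandler(ar -> {
	if (ar.succeeded()) {
		// Both futures succeeded
		System.out.println("All operations completed");
	} else {
		// At least one future failed
		System.err.println("Failure: " + ar.cause().getMessage());
	}
});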

Use the Event Bus to send and receive messages

Github example

Let’s get on the Event Bus to see how to receive and send/publish messages.

public void start(Future<Void> startFuture) {
	//Retrieve the EventBus object from the vertx one
	EventBus eventBus = vertx.eventBus();

	//Create an EventBus consumer for JsonObject-typed messages
	MessageConsumer<JsonObject> createConsumer = eventBus.consumer("customer.create");
	
	//Handle new messages on customer.create endpoint
	createConsumer.handler(json -> {
		System.out.println("Received new customer: " + json.body());
		//Enrich the customer object
		JsonObject enrichedCustomer = enrichCustomer(json.body());
		
		//Publish (one-to-many) the enriched message on another endpoint
		eventBus.publish("customer.completion", enrichedCustomer);
	});
	
	startFuture.complete();
}

Let’s detail the key steps.

EventBus eventBus = vertx.eventBus();

The Event Bus instance is retrieved from the vertx object.

MessageConsumer<JsonObject> createConsumer = eventBus.consumer("customer.create");

We create a JsonObject-typed MessageConsumer with a simple call to the consumer() method, defining a logical name for the channel.

createConsumer.handler(json -> {
});

From the MessageConsumer object, we then define an explicit handler that will be executed each time we receive a new message on the customer.create channel.
The json lambda parameter is a Message whose body is directly typed as a JsonObject.

Let’s now imagine that after having enriched the JsonObject, we now want to republish it onto another channel. This can be simply done using:

eventBus.publish("customer.completion", enrichedCustomer);

In this example we called the publish() method (one-to-many) but if we wanted a point-to-point interaction, we would have called the send() method (one-to-one).
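If you need the request-reply interaction mentioned earlier, send() also accepts a reply handler. Here is a minimal sketch, independent from the example above (customer is assumed to be a JsonObject built by the caller):

// Consumer side: reply to the incoming message
createConsumer.handler(message -> {
	message.reply(new JsonObject().put("status", "created"));
});

// Sender side: send a point-to-point message and handle the asynchronous reply
eventBus.send("customer.create", customer, reply -> {
	if (reply.succeeded()) {
		System.out.println("Reply received: " + reply.result().body());
	} else {
		System.err.println("No reply: " + reply.cause().getMessage());
	}
});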

If the Event Bus consumer is defined within a standard verticle, the same event loop model applies. So, same warning: be careful to never block your implementation.

Deploy verticles

Github example

As we have seen there are three different types of verticles. Let’s first see how to deploy a standard one:

vertx.deployVerticle("org.enterpriseintegration.Standard",
	// Instantiate a DeploymentOptions by setting an explicit number
	// of instances and referencing a JSON configuration
	new DeploymentOptions().setInstances(instances).setConfig(config), res -> {
		if (res.succeeded()) {
			System.out.println("Standard verticle deployed");

			// Send messages on the event bus
			EventBus eventBus = vertx.eventBus();
			eventBus.publish("event", "event01");
			eventBus.send("event", "event02");
			eventBus.send("event", "event03");
		} else {
			System.out.println("Error while deploying a verticle: " + res.cause().getMessage());
		}
	}
);

The deployVerticle() method takes as a first argument either a String referencing the implementation class or a Verticle object.
The second argument is a DeploymentOptions object, used to set up options during the verticle deployment.

Here we can set up the number of instances (or event loops) for a verticle by calling the setInstances() method.
The setConfig() call shows how to pass information from the parent to the verticle instance. The config object is a JsonObject.
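Inside the deployed verticle, this configuration can then be read back, for instance in its start() method (a small sketch; the property name is just an example):

@Override
public void start() {
	// config() returns the JsonObject passed through DeploymentOptions.setConfig()
	String endpoint = config().getString("endpoint", "default-value");
	System.out.println("Configured endpoint: " + endpoint);
}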

One important thing: if we define an Event Bus consumer on the event channel with two verticle instances, the published message event01 will be received by both instances. This may not be the expected behavior, so you have either to restrict the number of instances to one or to use another verticle type.

The following example shows how to deploy a worker verticle:

vertx.deployVerticle("org.enterpriseintegration.Worker",
	// Instantiate a DeploymentOptions by setting an explicit number
	// of instances and enabling worker mode
	new DeploymentOptions().setInstances(instances).setWorker(true), res -> {
		if (res.succeeded()) {
			System.out.println("Worker verticle deployed");

			// Send messages on the event bus
			EventBus eventBus = vertx.eventBus();
			eventBus.publish("event", "event01");
			eventBus.send("event", "event02");
			eventBus.send("event", "event03");
		} else {
			System.out.println("Error while deploying a verticle: " + res.cause().getMessage());
		}
	}
);

The only change (besides not showing how to pass a configuration object here) is the call to the setWorker() method. This method takes a boolean argument simply indicating whether the verticle is a worker.

Usually, because the worker type follows the more classical model one event = one thread, we need to set up a higher number of instances than for standard verticles.

With the same deployment configuration (two instances), the worker verticle will still receive event01 twice; it will then handle event02 and event03. So, same warning as before: this may lead to duplicates.

The solution for this very scenario is to use a multi-threaded worker:

vertx.deployVerticle("org.enterpriseintegration.MTWorker",
	// Instantiate a DeploymentOptions enabling worker and multi-threaded modes,
	// without setting the number of instances
	new DeploymentOptions().setWorker(true).setMultiThreaded(true), res -> {
		if (res.succeeded()) {
			System.out.println("Multi-threaded Worker verticle deployed");

			// Send messages on the event bus
			EventBus eventBus = vertx.eventBus();
			eventBus.publish("event", "event01");
			eventBus.send("event", "event02");
			eventBus.send("event", "event03");

		} else {
			System.out.println("Error while deploying a verticle: " + res.cause().getMessage());
		}
	}
);

In that example, we called the setMultiThreaded method with true.
As you can see, here we do not configure the number of instances; for a multi-threaded worker verticle, this value is not used.

A multi-threaded worker is managed by only one single instance but can be executed concurrently by several threads. In comparison a default worker is managed by one or several instances but each instance is executed by only one thread.

In this example, event01 will be received only once (as will event02 and event03, of course).

What else?

In this first post we covered the core principles of Vert.x: Event loop model, Verticle and Event Bus.

As we have seen, Vert.x is a dedicated framework for implementing reactive applications. It is also worth adding that the verticle model fits really well with microservices concepts.

In addition to what we have seen, Vert.x also offers many other functionalities: an Rx API, a standard for reactive streams and fibers, tools for generating code or documentation, integration with cloud providers such as OpenShift, Docker integration, bridges with service discovery tools such as Consul, etc.

Vert.x is definitely a very rich framework and this post will only be the first of a series.


Setting Up A Couchbase Cluster In 10 Minutes With Docker And Docker Compose

Today, a tutorial on how to set up a Couchbase cluster on your local machine using Docker.

We often describe Docker as a lightweight solution for isolating your production processes. But sometimes we may forget to say how easy it is to use Docker for setting up your development environment.

Recently I developed a Couchbase adapter and I had to test failure scenarios among which a node failover and a disaster recovery scenario.
In our production environment, we installed 2 Couchbase clusters of 3 nodes each (3 nodes is the minimum for having the auto-failover mode enabled).

I wanted to test my adapter on my development environment, but I let you imagine how heavy it would be to manually install 6 nodes, especially considering that Couchbase uses many ports for administration and clustering. I would have had to configure each server node manually to prevent port collisions.

Because Docker provides environment isolation, let’s see how easy it is to create and configure the following Couchbase cluster with 3 nodes and one port mapped on your local machine exposing the admin console:

couchbase-cluster

To instantiate the three Docker nodes, run the following commands:

$docker run -d -v ~/couchbase/node1:/opt/couchbase/var couchbase:3.1.0
$docker run -d -v ~/couchbase/node2:/opt/couchbase/var couchbase:3.1.0
$docker run -d -v ~/couchbase/node3:/opt/couchbase/var -p 8091:8091 couchbase:3.1.0

Each command starts a new Docker container (the -v option creates a volume for persisting the Couchbase node data).

The last command maps the admin console port on your local machine.

The next step is to retrieve the internal container IP address of node3.
Execute the command below, replacing <node3_docker_id> with the node3 container id:

$docker inspect --format '{{ .NetworkSettings.IPAddress }}' <node3_docker_id>

Connect to your local admin console (http://localhost:8091), follow the setup steps and enter the previously retrieved IP in the hostname field.

The last step is to add node1 and node2 in the cluster. Retrieve their internal IP with:

$docker inspect --format '{{ .NetworkSettings.IPAddress }}' <node1_docker_id>
$docker inspect --format '{{ .NetworkSettings.IPAddress }}' <node2_docker_id>

In the admin console, go to the Server Nodes menu and click the Add Servers button. Add the node1 and node2 internal IP addresses.

That’s it! Your Couchbase cluster is configured. If you want to create another cluster, to test the Couchbase XDCR mechanism for example, you just have to repeat these operations (don’t forget though to map the admin console port to another local port, as shown below).
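For instance, a node of the second cluster could expose its admin console on local port 8092 (an illustrative command; the volume path is just an example):

$docker run -d -v ~/couchbase/cluster2-node1:/opt/couchbase/var -p 8092:8091 couchbase:3.1.0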

Now, what if you want to have the capability to instantiate this cluster on the fly, during an automated testing process for instance?
There is nothing easier! Let’s use Docker Compose, a simple but powerful tool to define and run multi-container applications.

Create a docker-compose.yml file with the following content:

version: '2'
services:
  node1:
    image: couchbase:3.1.0
    volumes:
      - "~/couchbase/node1:/opt/couchbase/var"
  node2:
    image: couchbase:3.1.0
    volumes:
      - "~/couchbase/node2:/opt/couchbase/var"
  node3:
    image: couchbase:3.1.0
    volumes:
      - "~/couchbase/node3:/opt/couchbase/var"
    ports:
      - "8091:8091"

In this file, we simply express the same Couchbase cluster configuration in the Docker Compose file format.

Start your Couchbase cluster with:

$docker-compose up

This command automatically starts each node and lets Couchbase set up the cluster based on the configuration you previously made in the admin console.

<3 Couchbase

<3 Docker