Monday 24 April 2017

Microservices Communication Pattern

Context

Synchronous HTTP is a bad choice, even an anti-pattern, for communication between microservices. Synchronous REST is acceptable for public APIs, but internal communication between microservices should be based on asynchronous message-passing. If we have to call other services in order to be able to serve a response to a request from a public client, the overall response time for the public client will be bad, and our service will not be as resilient as it could be, because it is coupled in time to the service it depends on.

If a service needs to trigger some action in another service, do that outside of the request/response cycle.
The preferred choice is to use asynchronous communication. In this pattern the calling service simply publishes its request (or data) and continues with other work. It does not block waiting for a response after sending a request, which improves scalability. Problems in another service will not break this service. If other services are temporarily broken the calling service might not be able to complete a process end to end, but the calling service itself is not broken.

Thus, using the asynchronous pattern, the services are more decoupled than with the synchronous pattern, which preserves the autonomy of each service.
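This fire-and-forget behaviour can be sketched in plain Java. The broker is simulated here with an in-memory BlockingQueue, an assumption made purely for illustration; in the real platform the producer would publish to RabbitMQ instead.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of fire-and-forget publishing: the producer hands the message to the
// "broker" and continues immediately, never blocking on the consumer's result.
public class FireAndForget {
    static volatile String processed;

    public static void main(String[] args) throws InterruptedException {
        // In-memory stand-in for the message broker (RabbitMQ in the platform).
        BlockingQueue<String> broker = new LinkedBlockingQueue<>();

        // Consumer runs independently of the producer.
        Thread consumer = new Thread(() -> {
            try {
                processed = broker.take();
                System.out.println("consumer processed: " + processed);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // The producer publishes and immediately moves on with other work.
        broker.offer("user.event.created");
        System.out.println("producer continued without waiting");

        consumer.join(5000);
    }
}
```

If the consumer thread were down, the producer's publish would still succeed; only the processing of that message would be delayed, which is exactly the temporal decoupling argued for above.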


Solution

Implement a microservice communication platform that is asynchronous, addresses scalability, is loosely coupled with the business logic, and fits cross-platform implementations of microservices. Let's see how we can address the issue with the help of a message broker (RabbitMQ).

Design



The high-level design of the platform contains the following components:

Platform Component

The platform component dynamically creates the exchanges and queues, routes messages from exchanges to queues, and adds and updates the routing keys for queues.

Message Channel:

This component takes a few arguments, such as broker host, port, username, and password, and establishes a connection with the RabbitMQ cluster. The message channel library (jar) should be part of the event source, which is itself a microservice.

Event Framework:

The Event Framework component abstracts the broker, queue, and routing key creation on the RabbitMQ cluster. It also supports an annotation library. Distributed events are annotated with @DistributedEvent; these events are published to the RabbitMQ cluster using an API exposed by the framework. The messages are published to a fanout exchange, which routes all incoming messages to a data exchange (topic/header exchange). The topic exchange then routes messages to different queues based on routing keys.

Exchange:

The exchange-to-exchange binding allows messages to be routed from one exchange to another. It works more or less the same way as exchange-to-queue binding; the only significant difference on the surface is that both exchanges (source and destination) have to be specified.

Two major advantages of using exchange-to-exchange bindings, based on experience and some research, are:

Exchange-to-exchange bindings are much more flexible in terms of the topology you can design; they promote decoupling and reduce binding churn.
Exchange-to-exchange bindings are said to be very lightweight and as a result help to increase performance.
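The fanout-to-topic chain described earlier can be simulated in plain Java. The class and queue names below are hypothetical; this models only the routing behaviour, not the RabbitMQ client API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified in-memory model of the fanout -> topic -> queue topology:
// the fanout exchange forwards every message to the topic exchange it is
// bound to (the exchange-to-exchange binding), and the topic exchange
// delivers to each queue bound with a matching routing key.
public class ExchangeChain {
    // Topic exchange bindings: routing key -> bound queue names.
    static Map<String, List<String>> topicBindings = new HashMap<>();
    // Queues: queue name -> delivered messages.
    static Map<String, List<String>> queues = new HashMap<>();

    static void bindQueue(String routingKey, String queue) {
        topicBindings.computeIfAbsent(routingKey, k -> new ArrayList<>()).add(queue);
        queues.putIfAbsent(queue, new ArrayList<>());
    }

    // Fanout exchange: forwards unconditionally to the bound topic exchange.
    static void publishToFanout(String routingKey, String message) {
        routeViaTopic(routingKey, message);
    }

    // Topic exchange: delivers to every queue whose binding key matches.
    static void routeViaTopic(String routingKey, String message) {
        for (String queue : topicBindings.getOrDefault(routingKey, List.of())) {
            queues.get(queue).add(message);
        }
    }

    public static void main(String[] args) {
        bindQueue("user.event.created", "user-service-queue");
        bindQueue("user.event.created", "audit-service-queue");
        publishToFanout("user.event.created", "{\"id\":1}");
        System.out.println(queues.get("user-service-queue"));
        System.out.println(queues.get("audit-service-queue"));
    }
}
```

Note how adding a new consuming service only requires a new binding on the topic exchange; the publisher side is untouched, which is the decoupling advantage listed above.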


Queue:

The platform uses a queue-per-service concept, with persistent queues. Every time a new microservice is created, it will have a consumer and a queue dedicated to it. The queue setup process makes sure that the routing keys are associated accordingly, so that the queue receives the intended messages. In a microservice cluster, one of the instances will be the active consumer, giving centralized processing of all incoming messages.

Payload:
The message payload will be a JSON string. Every event is converted to a JSON string, and the receiver, upon receiving the message, deserializes it into the appropriate object. This also facilitates cross-platform interoperability.
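A minimal sketch of that JSON round trip, with hand-rolled serialization for brevity; a real service would use a JSON library such as Jackson, and the field names here are assumptions for illustration.

```java
// Sketch of the JSON payload idea: serialize an event to a JSON string on
// the producer side, extract fields again on the consumer side.
public class PayloadExample {
    // Hand-built JSON, standing in for a proper serializer like Jackson.
    static String toJson(String eventType, long id) {
        return String.format("{\"eventType\":\"%s\",\"id\":%d}", eventType, id);
    }

    // Naive field extraction, just enough to show the round trip; a real
    // consumer would use a JSON parser here.
    static String extractEventType(String json) {
        int start = json.indexOf("\"eventType\":\"") + "\"eventType\":\"".length();
        return json.substring(start, json.indexOf('"', start));
    }

    public static void main(String[] args) {
        String payload = toJson("UserCreated", 42);
        System.out.println(payload);                  // the wire format
        System.out.println(extractEventType(payload)); // recovered on receipt
    }
}
```

Because the wire format is plain JSON, a consumer written in another language can deserialize the same payload, which is what gives the cross-platform interoperability mentioned above.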

Routing Keys:
Routing keys are an important aspect of getting messages to the destination queues. We need to maintain a standard naming convention for routing keys, such as domain.event.action (user.event.created, user.event.deleted). Each service will update its routing keys to receive the intended messages.
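AMQP-style topic matching against such routing keys can be sketched as follows. The matches helper is hypothetical, shown only to illustrate how the '*' and '#' wildcards of a topic exchange behave against the domain.event.action convention.

```java
public class RoutingKeyMatch {
    // AMQP topic semantics: '*' matches exactly one dot-separated word,
    // '#' matches zero or more words. Implemented here by translating the
    // binding pattern into a regular expression.
    static boolean matches(String pattern, String routingKey) {
        String regex = pattern
                .replace(".", "\\.")   // literal dots first
                .replace("*", "[^.]+") // one word
                .replace("#", ".*");   // any number of words
        return routingKey.matches(regex);
    }

    public static void main(String[] args) {
        System.out.println(matches("user.event.*", "user.event.created"));  // true
        System.out.println(matches("user.event.*", "order.event.created")); // false
        System.out.println(matches("user.#", "user.event.deleted"));        // true
    }
}
```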


Platform Client (Producer):


The producer includes the event channel, event framework, and event model. The event data (model) is transformed into JSON before being sent to the exchange with the appropriate routing key, as explained earlier.


Platform Client (Consumer):


On the consumer side we have a message handler framework that decides, based on the message type, which action to perform once a message has been received by the consumer.
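One way such a dispatch-by-type handler framework could look, sketched with hypothetical class and method names:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of the consumer-side handler framework: incoming messages are
// dispatched to the handler registered for their message type.
public class MessageHandlerRegistry {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();

    void register(String messageType, Consumer<String> handler) {
        handlers.put(messageType, handler);
    }

    void onMessage(String messageType, String payload) {
        Consumer<String> handler = handlers.get(messageType);
        if (handler != null) {
            handler.accept(payload);
        } else {
            System.out.println("no handler for " + messageType);
        }
    }

    public static void main(String[] args) {
        MessageHandlerRegistry registry = new MessageHandlerRegistry();
        registry.register("UserCreated",
                payload -> System.out.println("creating user from " + payload));
        registry.onMessage("UserCreated", "{\"id\":1}"); // dispatched
        registry.onMessage("UserDeleted", "{\"id\":1}"); // no handler registered
    }
}
```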




Implementation (POC)

Connection Factory
The connection factory abstracts the queue connection details. The host, port, etc. are all abstracted with default values. The defaults can be overridden by properties specified in a property file. This shields the client from configuration details; the client can concentrate on producing and consuming messages.

Create a file named channel-config.properties with the following contents

connection.host=<host>
connection.port=<port>
connection.username=<user_name>
connection.password=<password>
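A sketch of how the connection factory might load this file and fall back to defaults, using java.util.Properties. The default values shown are assumptions for illustration, not the framework's actual defaults.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Loads channel-config.properties from the classpath; any key missing from
// the file (or the whole file being absent) falls back to the defaults.
public class ChannelConfig {
    static Properties load() {
        Properties defaults = new Properties();
        defaults.setProperty("connection.host", "localhost"); // assumed default
        defaults.setProperty("connection.port", "5672");      // assumed default

        Properties config = new Properties(defaults);
        try (InputStream in = ChannelConfig.class
                .getResourceAsStream("/channel-config.properties")) {
            if (in != null) {
                config.load(in);
            }
        } catch (IOException e) {
            // keep the defaults if the file cannot be read
        }
        return config;
    }

    public static void main(String[] args) {
        Properties config = load();
        System.out.println(config.getProperty("connection.host"));
        System.out.println(config.getProperty("connection.port"));
    }
}
```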


Event

The distributed events generated at the source are annotated with @DistributedEvent and extend AbstractEvent.
An example event looks like this:

@DistributedEvent
public class TestEvent extends AbstractEvent {

    public TestEvent() {
    }
}

Publisher

The framework provides a uniform API to publish events. To publish any distributed event, we instantiate an EventPublisher and invoke the publishEvent method with the event object.

To publish an event, we just need the following code snippet:

EventPublisher<Event> publisher = new EventPublisher<>();
publisher.publishEvent(new TestEvent());

Consumer

Every consumer class has to be annotated with @DistributedConsumer, extend EventConsumer, and override the consumeMessage method. The @DistributedConsumer annotation should also include a name attribute, which is the routing key for the queue. On startup the application scans all the consumers and starts listening to their queues.

To start the consumers, we have to include the following code snippet:

EventConsumerStarter.loadContext();

A sample consumer looks like this:

@DistributedConsumer
public class TestConsumer extends EventConsumer {

    @Override
    public void consumeMessage(Object object) {
        // implementation
    }
}

Spring Framework Integration

The framework provides seamless integration with Spring events. The Spring event publishing API is exposed through ApplicationEventPublisher. The framework implements this interface, and if the event is annotated with @DistributedEvent it publishes the event to the distributed message broker. If an event received by an event listener carries the @DistributedEvent annotation, the framework subscribes to the corresponding queue to listen for those events.
If a Producer Java class is a source of events, we can use Spring events with the following code snippet:
@Component
public class Producer {

    @Autowired
    private EventPublisher eventPublisher;

    public void createTask() {
        eventPublisher.publishEvent(new TaskAssignedEvent());
    }
}
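The routing decision described above (annotation present means distributed, otherwise local) can be sketched in plain Java using reflection. The annotation and event classes are redeclared locally to keep the example self-contained; this is an illustration of the decision logic, not the framework's actual implementation.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Sketch of the publish-side routing choice: events carrying
// @DistributedEvent go to the message broker, all others stay local.
public class AnnotationRouting {
    @Retention(RetentionPolicy.RUNTIME)
    @interface DistributedEvent {}

    @DistributedEvent
    static class TaskAssignedEvent {}

    static class LocalOnlyEvent {}

    static String publish(Object event) {
        if (event.getClass().isAnnotationPresent(DistributedEvent.class)) {
            return "published to broker"; // would be sent to RabbitMQ
        }
        return "published locally";       // stays inside the JVM
    }

    public static void main(String[] args) {
        System.out.println(publish(new TaskAssignedEvent())); // published to broker
        System.out.println(publish(new LocalOnlyEvent()));    // published locally
    }
}
```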

References

1. Routing Topologies for Performance and Scalability with RabbitMQ [http://spring.io/blog/2011/04/01/routing-topologies-for-performance-and-scalability-with-rabbitmq/]
2. RabbitMQ – Best Practices For Designing Exchanges, Queues And Bindings? [https://derickbailey.com/2015/09/02/rabbitmq-best-practices-for-designing-exchanges-queues-and-bindings/]
3. RabbitMQ Tutorials [https://www.rabbitmq.com/getstarted.html]
4. Code examples [https://github.com/badalb/messaging-platform/tree/master/messaging-platform]