Breaking free from on-premises

Two or three years ago now (wow, time flies!), I was investigating how we could transition from our on-premises, message-based architecture to services running in Azure. This article briefs you on parts of our journey so far.

Background

On premises, we were running NServiceBus with its default MSMQ transport. We relied on the DTC (Distributed Transaction Coordinator) for transactional guarantees around message delivery, and we had grown comfortable with that support. As such, we were looking for a solution that would let us extend our existing mesh of on-premises services into the cloud, having the two worlds work together for an extended period of time, until we had finally moved every service.
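To set the scene, here is a minimal sketch of the kind of NServiceBus 4 endpoint we were running on premises. The bootstrapper shape is illustrative; the point is that the MSMQ transport enlists sends and receives in the ambient DTC transaction by default:

```csharp
using NServiceBus;

// A minimal, self-hosted NServiceBus 4 endpoint on the default MSMQ
// transport. Transactions are on by default, and sends/receives enlist
// in the ambient DTC transaction - the guarantee we had grown used to.
public class EndpointBootstrapper
{
    public static IBus Start()
    {
        return Configure.With()
            .DefaultBuilder()
            .UseTransport<Msmq>()   // the default on-premises transport
            .UnicastBus()
            .CreateBus()
            .Start();
    }
}
```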


Also, due to dependent software components, a given set of services was tied to an older version of NServiceBus (version 4), whereas in the cloud - and for our new services - we wanted to use the latest stable version we could get our hands on, which at the time was NServiceBus 6.

We tried various approaches, and one of these was to introduce lightweight components that could participate in a transaction and bridge our calls from on-premises to the cloud and back again. They were lightweight in the sense that they were templated projects without business logic, whose only purpose was to transport messages. Thus, we could rely on them being stable once we had gotten the configuration right - i.e. once we had verified that messages were written as expected. Since we could communicate from our premises to the cloud, we decided to place the bridge components on premises as well:


Okay, that's a lot of boxes and arrows! Let's break it down a bit by looking at our messaging scenarios:

Extending our first on-premises service to the cloud - a scenario

The team responsible for our first on-premises service (really, the underlying business capabilities, which are encapsulated by a number of messaging services, web apps and more) wants to add new functionality to their solution. This added functionality fits well in the cloud, so they create *Cloud Service 1*. In order to communicate with this service - and maintain the transactional guarantees they had before - they send their command (or event) to the *On-premise Service 1 to Cloud Service 1* (OPS1-To-CS1) message bridge, which they also maintain as part of their solution. The message gets written to the service's outgoing queue (courtesy of NServiceBus and MSMQ) and will be rolled back if subsequent messages or actions fail within the same transaction.
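Concretely, the send to the bridge happens inside the handler's DTC transaction, so if anything later in the handler fails, the outgoing message is rolled back along with everything else. A sketch in NServiceBus 4 terms - the message types and the bridge endpoint name are made up for illustration:

```csharp
using System;
using NServiceBus;

// Illustrative messages - not our actual contracts.
public class OrderPlaced : IEvent { public Guid OrderId { get; set; } }
public class DoCloudWork : ICommand { public Guid OrderId { get; set; } }

// Hypothetical handler in On-premise Service 1. The Bus.Send below only
// hits the wire if the whole handler - database work included - commits;
// otherwise the DTC rolls the outgoing message back too.
public class OrderPlacedHandler : IHandleMessages<OrderPlaced>
{
    public IBus Bus { get; set; } // property-injected by NServiceBus

    public void Handle(OrderPlaced message)
    {
        // ... update local state, write to the database, etc. ...

        // Send a command to the bridge's input queue. The endpoint
        // name "OPS1.To.CS1.Bridge" is illustrative.
        Bus.Send("OPS1.To.CS1.Bridge", new DoCloudWork { OrderId = message.OrderId });
    }
}
```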

All the message bridge does is take the message and forward it to the target service: it reads the message from the queue (again, courtesy of NServiceBus) and writes it to Azure Service Bus. We minimize the risk of message loss by giving the bridge exactly one responsibility, which also minimizes the risk of someone introducing code in the future that would interfere with message delivery. We also rely on Azure Service Bus' duplicate detection feature, ensuring that - should an error cause double delivery - our target service only ever sees one copy of the message. All in all, we accomplish the once-and-only-once delivery we were so accustomed to when running in our own datacenter.
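A sketch of both halves of that, using the Microsoft.ServiceBus SDK of the era: duplicate detection is enabled per queue, and the bridge stamps each forwarded message with a deterministic MessageId - reusing the incoming message's id - so a redelivery produces the same Service Bus message and is dropped by the broker. Queue names, the connection string and the detection window are placeholders:

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;
using NServiceBus;

public static class BridgeQueueSetup
{
    // Create the target queue with duplicate detection enabled. The
    // 10-minute window is an example value, not a recommendation.
    public static void EnsureQueue(string connectionString, string queueName)
    {
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
        if (!namespaceManager.QueueExists(queueName))
        {
            namespaceManager.CreateQueue(new QueueDescription(queueName)
            {
                RequiresDuplicateDetection = true,
                DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
            });
        }
    }
}

// The same illustrative command as in the earlier sketch.
public class DoCloudWork : ICommand { public Guid OrderId { get; set; } }

// Hypothetical bridge handler: reads the command from MSMQ (via
// NServiceBus 4) and forwards it to Azure Service Bus. Serialization
// details are glossed over - the real bridge must write a payload the
// receiving endpoint understands.
public class ForwardToCloudHandler : IHandleMessages<DoCloudWork>
{
    public IBus Bus { get; set; }

    static readonly QueueClient Client = QueueClient.CreateFromConnectionString(
        "<azure-service-bus-connection-string>", "cloudservice1.input");

    public void Handle(DoCloudWork message)
    {
        var brokered = new BrokeredMessage(message)
        {
            // Deterministic across retries: a redelivered MSMQ message
            // yields the same MessageId, so the broker deduplicates it.
            MessageId = Bus.CurrentMessageContext.Id
        };
        Client.Send(brokered);
    }
}
```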

Finally, Cloud Service 1 reads the message from Azure Service Bus, using the NServiceBus 6 library.
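A rough sketch of that receiving endpoint - the endpoint name, error queue and connection string are placeholders, and the handler shows the async API that arrived in version 6:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

class Program
{
    static async Task Main()
    {
        // Sketch of Cloud Service 1: an NServiceBus 6 endpoint using the
        // Azure Service Bus transport.
        var endpointConfiguration = new EndpointConfiguration("CloudService1");
        endpointConfiguration.SendFailedMessagesTo("error");

        var transport = endpointConfiguration.UseTransport<AzureServiceBusTransport>();
        transport.ConnectionString("<azure-service-bus-connection-string>");

        var endpoint = await Endpoint.Start(endpointConfiguration);
        Console.WriteLine("Cloud Service 1 running. Press any key to stop.");
        Console.ReadKey();
        await endpoint.Stop();
    }
}

// The same illustrative command as before.
public class DoCloudWork : ICommand { public Guid OrderId { get; set; } }

public class DoCloudWorkHandler : IHandleMessages<DoCloudWork>
{
    public Task Handle(DoCloudWork message, IMessageHandlerContext context)
    {
        // ... do the cloud-side work ...
        return Task.CompletedTask;
    }
}
```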
Want the nitty-gritty? Check out our published code.
