Event Sourcing with Svein Arne Ackenhausen

Yesterday, one of my teams and I participated in an Event Sourcing workshop with Svein Arne Ackenhausen, in which he shared his experiences of moving from traditional RDBMS state-store architectures to event-sourced ones.

This is my interpreted summary of the event. :-)

Event Sourcing fits well within a bounded context. Within this context, UIs can HTTP POST specific command DTOs to an endpoint which
- Constructs a Command
- Asks the Command to validate itself structurally (e.g. check that a number is in the range 1-10)
- Sends it to a Command Handler
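The endpoint steps above could be sketched like this. This is a minimal Python sketch, not the workshop's C# framework; the field names and the `validate` method are illustrative assumptions, while `ChangeEmailCommand` borrows its name from the example later in the post:

```python
from dataclasses import dataclass

# Hypothetical command DTO; names are illustrative, not from the workshop framework.
@dataclass(frozen=True)
class ChangeEmailCommand:
    user_id: str
    new_email: str

    def validate(self) -> list:
        """Structural validation only: shape and ranges, no business rules."""
        errors = []
        if not self.user_id:
            errors.append("user_id is required")
        if "@" not in self.new_email:
            errors.append("new_email must look like an e-mail address")
        return errors

def post_change_email(payload: dict, handler):
    """Endpoint sketch: construct the command, validate, forward to a handler."""
    command = ChangeEmailCommand(payload.get("userId", ""), payload.get("newEmail", ""))
    errors = command.validate()
    if errors:
        return 400, errors        # reject structurally invalid commands early
    return 202, handler(command)  # accepted; the Command Handler does the real work
```

Note that the endpoint rejects bad shapes before any business logic runs; business-rule validation belongs to the Command Handler and the Aggregate.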

The Command Handler will
- Invoke business rules to validate the Command (e.g. check if this user is allowed to invoke said command)
- Load an Aggregate from an Event Source Repository (which in turn will construct the aggregate and might read an Event Store and replay all events onto it)
- Sends the Command to the Aggregate which, again, validates the command against its state. If the Command is valid, the aggregate emits an event that describes the change (e.g. ChangeEmailCommand -> EmailChanged).
- The Event Source Repository captures the emitted event.
- The Command Handler stages the updated aggregates and finally flushes their events to the Event Store (through the Repository).
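The load/replay/flush cycle above can be condensed into a small sketch. Again Python rather than the workshop's C#, and the in-memory dict standing in for the Event Store is an assumption for illustration:

```python
class PersonAggregate:
    """Minimal aggregate: its state is rebuilt by replaying past events."""
    def __init__(self):
        self.email = None
        self.uncommitted = []  # events emitted but not yet flushed

    def apply(self, event):
        if event["type"] == "EmailChanged":
            self.email = event["email"]

    def change_email(self, new_email):
        if new_email == self.email:
            return  # command is a no-op: no state change, no event
        event = {"type": "EmailChanged", "email": new_email}
        self.apply(event)
        self.uncommitted.append(event)  # repository captures this later

class EventSourceRepository:
    def __init__(self, store):
        self.store = store  # event store stand-in: aggregate id -> list of events

    def load(self, aggregate_id):
        aggregate = PersonAggregate()
        for event in self.store.get(aggregate_id, []):  # replay full history
            aggregate.apply(event)
        return aggregate

    def flush(self, aggregate_id, aggregate):
        self.store.setdefault(aggregate_id, []).extend(aggregate.uncommitted)
        aggregate.uncommitted.clear()

def handle_change_email(repo, aggregate_id, new_email):
    """Command Handler: load, let the aggregate decide, flush its events."""
    aggregate = repo.load(aggregate_id)
    aggregate.change_email(new_email)
    repo.flush(aggregate_id, aggregate)
```

The aggregate never stores state directly; the only durable artifacts are the events, and current state is always a replay of them.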

Emphasizing that it's not the technology but the aggregate boundaries and relationships that are hard, Svein Arne Ackenhausen had put together the event sourcing framework we worked with just the night before.

The repository we worked with was https://github.com/acken/eventsouring-demo/tree/master/Demo.


Guidelines
- Avoid SomethingChanged events (PersonChanged, AddressChanged) and strive for events that carry an intention, e.g. PersonMoved.

We are used to focusing only on resources and trying to figure out what a user wanted based on which fields were changed (if "Street 1" and "Postal Code" were changed, then the person surely moved). This is in stark contrast with sending commands that carry intention - RelocatePerson, EnlistCustomer ...
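The contrast can be made concrete with a small Python sketch (the field names are illustrative). One side guesses intent from a field diff; the other side receives the intent directly as a command:

```python
# Diff-based guessing: intent is reverse-engineered from changed columns.
def guess_intent(changed_fields):
    if {"street", "postal_code"} <= set(changed_fields):
        return "PersonMoved?"  # a guess at best
    return "PersonChanged"     # intention lost

# Intention-carrying command: the caller states what actually happened.
def relocate_person(person_id, street, postal_code):
    return {"type": "PersonMoved", "person_id": person_id,
            "street": street, "postal_code": postal_code}
```

With the command-based form, the resulting event needs no interpretation downstream.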

Avoid snapshotting for as long as possible, since it adds complexity: an additional process that runs in the background and - more importantly - a constraint that Aggregates must have a certain shape. One of the big benefits of Event Sourcing is that you never have to migrate your application storage again! You store all events. If you make a mistake, or want to augment/change an event later on, you just construct a new event, maintaining handlers for both the old events and the new ones.
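"Maintaining handlers for the old events and the new events" might look like this sketch, assuming a hypothetical PersonMoved event whose shape changed between versions:

```python
def apply_person_moved_v1(state, event):
    # Old shape: a single free-text "address" field. Old events in the
    # store are never rewritten, so this handler stays forever.
    state["address"] = event["address"]
    return state

def apply_person_moved_v2(state, event):
    # New shape: structured fields. Only new events use it.
    state["address"] = f"{event['street']}, {event['postal_code']}"
    return state

HANDLERS = {
    "PersonMovedV1": apply_person_moved_v1,
    "PersonMovedV2": apply_person_moved_v2,
}

def replay(events):
    """Rebuild state from a mixed-version history, no storage migration needed."""
    state = {}
    for event in events:
        state = HANDLERS[event["type"]](state, event)
    return state
```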

CRUD is easier in version 1, but as soon as you hit version 2 (or even 1.1), Event Sourcing gets easier with its clear separation between Read and Write and its clear intentions. Validation gets easier in Event Sourcing, as do extensions: when a Person changes their e-mail, this and that should happen. In ES, the story is to just add a new event handler. You don't even have to look at your old code, let alone change it.
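"Just add a new event handler" could be sketched with a minimal in-process event bus (an assumption for illustration; the handlers and their names are hypothetical):

```python
# Minimal in-process event bus.
subscribers = {}

def subscribe(event_type, handler):
    subscribers.setdefault(event_type, []).append(handler)

def publish(event):
    for handler in subscribers.get(event["type"], []):
        handler(event)

# Existing behaviour, left untouched:
audit_log = []
subscribe("EmailChanged", lambda e: audit_log.append(e["email"]))

# New requirement ("this and that should happen"): a new handler is
# registered without looking at, let alone changing, the code above.
confirmations = []
subscribe("EmailChanged", lambda e: confirmations.append(e["email"]))
```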

If you need to deal with unique constraints, maintain a separate database table where you evaluate uniqueness. If you can insert your unique value into the table, you have fulfilled your uniqueness constraint and can execute your command.
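A minimal sketch of that table, using SQLite for illustration (the table and function names are assumptions; in production this would be your real database):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE reserved_emails (email TEXT PRIMARY KEY)")

def try_reserve_email(email: str) -> bool:
    """Claim the unique value before executing the command."""
    try:
        with db:  # commit on success, roll back on failure
            db.execute("INSERT INTO reserved_emails (email) VALUES (?)", (email,))
        return True
    except sqlite3.IntegrityError:
        return False  # value already taken: reject the command
```

The database's own primary-key enforcement does the race-free uniqueness check, so no read-then-write gap exists.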

To protect an event handler from crashing when it tries to process an event with an unknown shape, you can place a queue between the event source and the handler (poison message handling).
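A sketch of poison-message handling with an in-memory queue; the retry limit and the dead-letter list are assumed policy, not something prescribed in the workshop:

```python
from collections import deque

queue = deque()     # events waiting to be handled, with an attempt counter
dead_letters = []   # parked poison messages, set aside for inspection
MAX_ATTEMPTS = 3    # assumed retry policy

def enqueue(event):
    queue.append((event, 0))

def pump(handler):
    """Drain the queue; park events that keep failing instead of crashing."""
    while queue:
        event, attempts = queue.popleft()
        try:
            handler(event)
        except Exception:
            if attempts + 1 >= MAX_ATTEMPTS:
                dead_letters.append(event)        # give up: poison message
            else:
                queue.append((event, attempts + 1))  # retry later
```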

Idempotency is extremely important in an ES system, particularly when you introduce batching, since parts of the batch can fail.
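One common way to get idempotency is to track which event ids a handler has already processed, so a retried batch cannot apply the same event twice. A minimal sketch, with hypothetical names:

```python
processed = set()        # ids of events already applied by this handler
balance = {"total": 0}   # some derived state the handler maintains

def apply_deposit(event):
    """Idempotent handler: retrying the same event has no extra effect."""
    if event["event_id"] in processed:
        return  # already seen, e.g. re-delivered after a partial batch failure
    processed.add(event["event_id"])
    balance["total"] += event["amount"]
```

In a real system the processed-id set would live in durable storage, updated in the same transaction as the state change.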

Between bounded contexts, you should reduce your events to basically only contain an Id, letting the consuming bounded context ask for details through an HTTP GET, receiving a specialized read model.
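That thinning-out could look like this sketch; `fetch_person` stands in for the HTTP GET against the owning context's read model, and all names are illustrative:

```python
# Only the type and id cross the context boundary.
def to_integration_event(domain_event):
    return {"type": domain_event["type"], "person_id": domain_event["person_id"]}

# Consuming context: fetch details on demand. fetch_person is an injected
# stand-in for an HTTP GET returning a specialized read model.
def on_person_moved(notification, fetch_person):
    person = fetch_person(notification["person_id"])
    return f"shipping label for {person['name']}, {person['street']}"
```

Keeping the cross-context event thin means the owning context can evolve its internal event shapes freely without breaking consumers.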

Integrations are a lot easier in an Event Sourced system than in a classical State Store one, since in ES you naturally create a tailored read model per consumer. If you have a form with a bunch of data from different sources, you can, for example, store a tailor-made document, specific to this form, in a document database and retrieve all of the fields with a single get-by-id call. To create these read models, you employ event handlers within your context.
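A form-specific read model built by event handlers might be sketched like this, with a plain dict standing in for the document database (the event and field names are illustrative):

```python
# One document per person, shaped exactly for one form/screen.
form_documents = {}

def project(event):
    """Event handler that keeps the form's read model up to date."""
    doc = form_documents.setdefault(event["person_id"], {})
    if event["type"] == "EmailChanged":
        doc["email"] = event["email"]
    elif event["type"] == "PersonMoved":
        doc["street"], doc["postal_code"] = event["street"], event["postal_code"]

def get_by_id(person_id):
    """The single call the form needs: all its fields in one document."""
    return form_documents.get(person_id, {})
```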

The event store is an append-only construct, meaning that you must never destroy an event. If you need to correct or adapt your event shape/structure, you must create a new event that contains compensating data.
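A tiny sketch of correcting a mistake with a compensating event instead of editing history (the event types and fields are hypothetical):

```python
event_log = []

def append(event):
    event_log.append(event)  # append-only: never mutate or delete

# A mistaken amount was recorded...
append({"type": "AmountRecorded", "amount": 120})
# ...so we correct it with a compensating event, keeping the full history.
append({"type": "AmountCorrected", "old": 120, "new": 100})

def current_amount():
    """Replay yields the corrected value; the audit trail stays intact."""
    amount = None
    for event in event_log:
        if event["type"] == "AmountRecorded":
            amount = event["amount"]
        elif event["type"] == "AmountCorrected":
            amount = event["new"]
    return amount
```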
