Guild Event-Sourcing | "Event-Sourced CR" Sprint | 2017-09-18

The topic sprint about the “Event-Sourced Content Repository” in Kiel has started!

Here’s a brief summary of the first "meeting"¹

General Concepts

We came up with a basis for the concept at a workshop together with Mathias Verraes in December last year. That was quite a while ago, so we had to refresh our memories a little:

Editing Session

One notion we came up with is the Editing Session. It is bound to a specific user working on one workspace (remark: maybe even in one context, i.e. including dimensions!?).

The Editing Session starts as soon as the user starts to edit (remark: we need to find a good way to enforce this, maybe by adding an additional interaction step to the UX).
It ends as soon as the changes are published or revoked (remark: that means an Editing Session can be very short, e.g. with auto-publishing enabled, or last for days or weeks).
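
To make the lifecycle a bit more tangible, here is a minimal sketch of how such a session could look. All names (EditingSession, start(), publish(), revoke()) are pure assumptions, nothing of this is decided yet:

final class EditingSession
{
    private $userId;
    private $workspaceName;
    private $active = true;

    private function __construct(string $userId, string $workspaceName)
    {
        // bound to one user and one workspace (maybe plus dimensions, see remark above)
        $this->userId = $userId;
        $this->workspaceName = $workspaceName;
    }

    // started as soon as the user starts to edit
    public static function start(string $userId, string $workspaceName): self
    {
        return new self($userId, $workspaceName);
    }

    // ... and ended as soon as the changes are published ...
    public function publish(): void
    {
        $this->active = false;
    }

    // ... or revoked
    public function revoke(): void
    {
        $this->active = false;
    }

    public function isActive(): bool
    {
        return $this->active;
    }
}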

Read Model

One advantage of CQRS/Event-Sourcing is that we can have multiple dedicated Read Models, suited for their use case.
But for the core API we’ll probably end up with a main Read Model built upon some proper graph implementation (see Bernhard’s prototype).

To be defined: Are the changes of an Editing Session part of that Read Model, or do they live in a separate layer? (remark: an extreme case could be to replay Editing Session events in memory on user login and maintain that Read Model in the browser state)
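
Just to illustrate the direction (this is not Bernhard’s actual prototype), such a dedicated Read Model could be as small as a projection keeping an in-memory parent/child graph. The event and method names are made up:

final class GraphReadModel
{
    // nodeIdentifier => ['parent' => parent identifier or null, 'children' => child identifiers]
    private $nodes = [];

    // would be called by a projector for every (hypothetical) NodeWasCreated event
    public function whenNodeWasCreated(string $nodeIdentifier, ?string $parentIdentifier = null): void
    {
        $this->nodes[$nodeIdentifier] = ['parent' => $parentIdentifier, 'children' => []];
        if ($parentIdentifier !== null) {
            $this->nodes[$parentIdentifier]['children'][] = $nodeIdentifier;
        }
    }

    public function childNodes(string $nodeIdentifier): array
    {
        return isset($this->nodes[$nodeIdentifier]) ? $this->nodes[$nodeIdentifier]['children'] : [];
    }
}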

Conflicts / Rebase

The current CR implementation uses Optimistic Concurrency in that the last write “wins”. This can lead to nasty side effects and even to an inconsistent state.
With the append-only nature of an Event Store we can no longer paper over that problem.
Fortunately there’s a good model for what we’re trying to achieve: Git.

The current idea is that during an Editing Session, events are published to a separate (remark: possibly temporary) stream (think branch in Git).

Before the changes can be published, any intermediate changes published to the underlying workspace(s) have to be incorporated into that Editing Session stream (think rebase in Git).

Hard constraint: Trying to publish an outdated Editing Session must fail.
(remark: the rebase can happen in the "background" from time to time, e.g. upon user login).

In case of a conflict (i.e. the same node has been changed in both branches), user interaction might be needed (remark: in a first implementation we could ignore this and fall back to Optimistic Concurrency).
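
The hard constraint from above essentially boils down to an expected-version check when appending to the base stream. A rough sketch, where the EventStore interface and all other names are assumptions:

interface EventStore
{
    public function currentVersion(string $streamName): int;

    // appends events and fails if the stream has moved past $expectedVersion
    public function append(string $streamName, array $events, int $expectedVersion): void;
}

function publishEditingSession(EventStore $store, string $baseStream, array $sessionEvents, int $baseVersionAtSessionStart): void
{
    // if the base workspace stream got new events since the session started,
    // the session is outdated and has to be rebased first
    if ($store->currentVersion($baseStream) !== $baseVersionAtSessionStart) {
        throw new \RuntimeException('Editing Session is outdated, rebase first');
    }

    // passing the expected version makes this safe against races, too
    $store->append($baseStream, $sessionEvents, $baseVersionAtSessionStart);
}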

Migration path

There are many ways we could approach this beast.
We’ll probably have to adjust course along the way, but for now we decided to go the following route:

  1. Create a new branch in the neos-development-collection
  2. Keep the current PHP API of the Neos.ContentRepository package and replace implementation piece by piece
  3. Adapt the Neos importer so that it converts the current XML format to NodeWasImported events, wrapped in some kind of Editing or Importing Session (see the sketch after this list)
  4. Provide a (GraphQL) API for HTTP
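
For step 3, the NodeWasImported event could, hypothetically (only the event name is from our discussion, the fields are guesses), be a plain value object carrying everything the XML knows about a single node:

final class NodeWasImported
{
    public $nodeIdentifier;
    public $parentPath;
    public $nodeTypeName;
    public $properties;

    // the importer would emit one of these per node element of the XML export
    public function __construct(string $nodeIdentifier, string $parentPath, string $nodeTypeName, array $properties)
    {
        $this->nodeIdentifier = $nodeIdentifier;
        $this->parentPath = $parentPath;
        $this->nodeTypeName = $nodeTypeName;
        $this->properties = $properties;
    }
}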

To be found out: Do we start with a projector that generates the current database structure, or can we immediately “bend” the PHP API (including Flow Query, …) to use the graph-based structure?

Also to be found out: The “Active Record” way the NodeInterface behaves today might cause problems.
We will deprecate at least the mutating methods in favor of proper Commands, but it will be a challenge (if not impossible) to provide a (PHP) API that behaves as if it were synchronous while adding support for asynchronicity from the start.

For example, keeping support for something like this won’t be easy to achieve:

// fetch a node through the (synchronous) context API ...
$node = $this->context->getNode('/some/path');
// ... mutate it in place, "Active Record" style ...
$node->createNode('Child', $someNodeType);
// ... and expect the new child to show up in the very next read:
$childNodes = $node->getChildNodes();
// ...

One approach might be to work with some kind of Promise that blocks the code until some projection has processed a given command… But maybe it’s more feasible to break compatibility here in order to make the asynchronous nature explicit…
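
A naive sketch of what such a blocking Promise could do under the hood: poll the projection until it has caught up with the event caused by the command. The Projection interface and everything else here is an assumption:

interface Projection
{
    // sequence number of the last event this projection has applied
    public function lastAppliedSequenceNumber(): int;
}

function awaitProjection(Projection $projection, int $sequenceNumber, float $timeout = 5.0): void
{
    $deadline = microtime(true) + $timeout;
    while ($projection->lastAppliedSequenceNumber() < $sequenceNumber) {
        if (microtime(true) > $deadline) {
            throw new \RuntimeException('Projection did not catch up in time');
        }
        usleep(10000); // block for 10ms and check again, naive but it illustrates the idea
    }
}

That would keep the old synchronous-looking API alive, at the price of hidden latency on every read after a write.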


¹ unfortunately I couldn’t join the Sprint face-to-face, but my beloved teammates were so kind as to invite me via a Slack call

One topic I forgot to mention explicitly yesterday: We also quickly talked about whether it’s OK to delete events.

I don’t remember exactly what the outcome of this discussion was (if there was a distinct one), but IMO we should not be dogmatic about it.
Conceptually, however, you should never delete events, as that spoils the Single Source of Truth.
Deleting a whole Stream is a different thing.

I heard of that as a possible solution for the “data privacy problem”: you store events with sensitive data in some extra stream that points to some user record, and when the user is removed from the system, that whole stream can be deleted without breaking other parts.
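
In (assumed) code this idea is little more than a stream naming convention plus a stream deletion when the user is removed:

// hypothetical store, the point being that it deletes whole streams, never single events
interface PrivacyAwareEventStore
{
    public function deleteStream(string $streamName): void;
}

function sensitiveStreamNameFor(string $userIdentifier): string
{
    return 'user-' . $userIdentifier;
}

function whenUserWasRemoved(PrivacyAwareEventStore $store, string $userIdentifier): void
{
    // other streams may still reference the user identifier, but the
    // sensitive payload disappears together with this one stream
    $store->deleteStream(sensitiveStreamNameFor($userIdentifier));
}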

That’s why I think we should store events from the Editing Sessions in a temporary stream (it is persisted as well, but might be removed if no longer needed) and “copy” those to the persistent streams upon publish.
I put “copy” in quotes because that step could optimize the Editing Session events (i.e. merge two similar events, skip undone events, …).
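
Such an optimizing “copy” could look roughly like this. The event shape (plain arrays with type/nodeId keys) is made up for illustration:

function optimizeEditingSessionEvents(array $events): array
{
    $optimized = [];
    foreach ($events as $event) {
        $lastIndex = count($optimized) - 1;

        // merge two similar events: consecutive property changes on the same node
        if ($lastIndex >= 0
            && $event['type'] === 'NodePropertiesWereSet'
            && $optimized[$lastIndex]['type'] === 'NodePropertiesWereSet'
            && $optimized[$lastIndex]['nodeId'] === $event['nodeId']) {
            $optimized[$lastIndex]['properties'] = array_merge($optimized[$lastIndex]['properties'], $event['properties']);
            continue;
        }

        // skip undone events: a removal directly cancelling out a creation
        if ($lastIndex >= 0
            && $event['type'] === 'NodeWasRemoved'
            && $optimized[$lastIndex]['type'] === 'NodeWasCreated'
            && $optimized[$lastIndex]['nodeId'] === $event['nodeId']) {
            array_pop($optimized);
            continue;
        }

        $optimized[] = $event;
    }
    return $optimized;
}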

To be clarified: What exactly happens when we “rebase” onto changes from the base workspace?
One option would be:

  1. Check whether there are conflicts between the Editing Session events and those from the base WS (this one will be tricky)
  2. If there are conflicts that can’t be resolved automatically (to be defined), notify the user
  3. Otherwise, recreate the local Read Model for the Editing Session from the new position in the base WS and replay the Editing Session events onto it. The Editing Session then points to the end of the base WS stream again

Upon publishing: transform the events, publish them to the base WS stream with Pessimistic Concurrency, and possibly delete the Editing Session stream.
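
Putting the rebase steps and the publish together in one sketch. Everything here is assumed: the conflict check is reduced to a naive “same node touched in both streams” heuristic, and the Pessimistic Concurrency on publish is approximated with an expected-version append:

interface WorkspaceEventStore
{
    public function currentVersion(string $streamName): int;

    // all events of a stream after the given version
    public function eventsSince(string $streamName, int $version): array;

    public function append(string $streamName, array $events, int $expectedVersion): void;
}

// step 1: the tricky part, here only a placeholder heuristic
function conflictsBetween(array $sessionEvents, array $baseEvents): bool
{
    $touchedInBase = array_map(function (array $event) {
        return $event['nodeId'];
    }, $baseEvents);
    foreach ($sessionEvents as $event) {
        if (in_array($event['nodeId'], $touchedInBase, true)) {
            return true;
        }
    }
    return false;
}

function rebase(WorkspaceEventStore $store, string $sessionStream, string $baseStream, int $sessionBaseVersion): int
{
    $newBaseEvents = $store->eventsSince($baseStream, $sessionBaseVersion);
    $sessionEvents = $store->eventsSince($sessionStream, 0);

    if (conflictsBetween($sessionEvents, $newBaseEvents)) {
        // step 2: can't be resolved automatically, notify the user
        throw new \RuntimeException('Conflicts detected, user interaction needed');
    }

    // step 3: recreating the local Read Model from the new base position and
    // replaying the session events onto it is omitted here; the session now
    // points to the end of the base WS stream again
    return $store->currentVersion($baseStream);
}

function publishSession(WorkspaceEventStore $store, string $sessionStream, string $baseStream, int $sessionBaseVersion): void
{
    // the events could be optimized here, as sketched further up
    $events = $store->eventsSince($sessionStream, 0);

    // the expected version enforces the hard constraint from the first post
    $store->append($baseStream, $events, $sessionBaseVersion);

    // ... and the (temporary) Editing Session stream may be deleted afterwards
}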