RFC: Using the JSON API specification for Neos services

This RFC is a work in progress, but I’d like to gather feedback before diving deeper into this topic.

As everyone knows, we currently have a rather mixed-up way of providing and consuming data in the Neos backend. There are multiple (incompatible) ways of representing the data the Neos UI needs, and of how the server side accepts data from the Neos UI for changes.

Since (real) REST is very hard to achieve with the way Flow currently works and with how the UI needs its data, we should rather aim for a proven format that is extensible and easy to consume.

As there are many different approaches to doing exactly that, we never found a consensus, and thus we ended up with many different “best” practices.

I suggest settling on a JSON format as the main API format while allowing certain other representations: all API endpoints must support this format, while we allow other formats as applicable.

Using http://jsonapi.org/ as the specification and standard gives us a good, proven model for providing an API with many features, and saves us from defining all the small details that are needed at some point (error messages, inclusion of nested resources, query parameters, etc.).


  "links": {
    "self": "http://example.com/posts",
    "next": "http://example.com/posts?page[offset]=2",
    "last": "http://example.com/posts?page[offset]=10"
  "data": [{
    "type": "posts",
    "id": "1",
    "attributes": {
      "title": "JSON API paints my bikeshed!"
    "relationships": {
      "author": {
        "links": {
          "self": "http://example.com/posts/1/relationships/author",
          "related": "http://example.com/posts/1/author"
        "data": { "type": "people", "id": "9" }
      "comments": {
        "links": {
          "self": "http://example.com/posts/1/relationships/comments",
          "related": "http://example.com/posts/1/comments"
        "data": [
          { "type": "comments", "id": "5" },
          { "type": "comments", "id": "12" }
    "links": {
      "self": "http://example.com/posts/1"
  "included": [{
    "type": "people",
    "id": "9",
    "attributes": {
      "first-name": "Dan",
      "last-name": "Gebhardt",
      "twitter": "dgeb"
    "links": {
      "self": "http://example.com/people/9"
  }, {
    "type": "comments",
    "id": "5",
    "attributes": {
      "body": "First!"
    "links": {
      "self": "http://example.com/comments/5"
  }, {
    "type": "comments",
    "id": "12",
    "attributes": {
      "body": "I like XML better"
    "links": {
      "self": "http://example.com/comments/12"

The exact way we can define the current endpoints of Neos using this format still needs to be figured out. Especially the number of different endpoints and HTTP verbs needed for the node API needs further investigation.

Nevertheless, we should find consensus on:
1. a preferred format (JSON)
2. a standard/specification for that format that defines most of the requirements we have for a consistent API


I like!

We should check out different standards and how widespread they are.

Probably more exist.

There’s as many standards as ways to deal with this :wink:

Actually, we should write down a list of requirements the standard should solve. JSON API is (to my knowledge) one of the most complete, because it does not only specify the representation but also many aspects of the behavior, while standards like HAL (which certainly influenced JSON API) mainly describe the representation side without guiding us on how to use it in an actual application.

I know there are many, many different views on that topic. I hope we can find a consensus where we just get it done and don’t end up discussing all the nitty-gritty details every time we discover a missing feature. So my highest priority would be finding a somewhat complete standard that tackles most of our (future?) problems while still being open for extension. Another important thing is finding a good balance between a pragmatic approach and the REST hypermedia ideal (which is rather academic after all).


See here https://github.com/TryGhost/Ghost/issues/2362 for pretty much the same discussion and a list of viable alternatives (well, they already had an API close to JSON API which we don’t have right now).

One of the other comprehensive standards (AFAIK) would be OData (http://www.odata.org/), but I find it a little bit enterprisey, and it could be hard to implement completely in Flow / Neos.

Hey guys,

I like your idea of deciding on a standardised JSON API. I’m using Ember.js for several of my applications, and ember-data is going to use the JSON API standard in ember-data 2.0.
I currently use https://github.com/Flow2Lab/Flow2Lab.EmberAdapter for my current REST API (you might get some inspiration from it).

I need to look deeper into the topic myself, but I’d like to participate in this.


Hey Christopher,

just looked at the spec once more and generally really like it :slight_smile: Personally, I’d have had JSON-LD in mind as an API language; but just quickly comparing the two, JSON API covers more ground and is easier to implement as a consumer.

So great initiative :smile:

Bookmark: presentation about JSON-LD + Hydra:

While I think that JSON-LD looks interesting, it’s way too generic to be useful guidance for us (IMHO): it’s a way of representing data (for both producer and consumer), whereas JSON API covers almost the whole space of how to expose our services, including the representation, bulk updates, error messages, etc. So it’s far more pragmatic while still incorporating ideas like linking and self-description of data. Looking at the (complex!) JSON-LD spec, I don’t see any of these practical issues being addressed.
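For example, even the error case is specified; an illustrative JSON API error document (adapted from the examples in the spec) looks like this:

```json
{
  "errors": [{
    "status": "422",
    "source": { "pointer": "/data/attributes/title" },
    "title": "Invalid Attribute",
    "detail": "Title must contain at least three characters."
  }]
}
```

That’s exactly the kind of small detail (consistent error messages across all endpoints) we’d otherwise end up inventing ourselves.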

That said, there’s nothing against providing/supporting JSON-LD as a secondary format in our API.


Hey Christopher,

full ack on what you just wrote :slight_smile: So +1 from my side for JSON API!

All the best,

I’d like to opt for a 3rd format, HAL… Let’s just focus on the best pick and make sure we code it in such a way that the other formats can also be supported by 3rd-party packages if they want to use the CR API.

Internally for Neos I’d pick what fits best and stick to one format; reading this thread, I suppose JSON API would be it.


Hey Rens,

sounds like a good plan for me :smile:

All the best,

Just as a side note: I will be implementing OData for a client project but I still think it’s not the best idea for us.

After some playing around with JSON API in a Go application, I say +1 for it. It avoids a lot of discussion about how to do this or that, and the flattened structure is really efficient for complex relations.


Check out: https://facebook.github.io/react/blog/2015/05/01/graphql-introduction.html
That might be interesting for us due to its flexibility.


+1 for JSON API because it’s Ember.js’s first choice (Yehuda Katz is one of the five primary editors).
If we go with ember-data (after the ember-cli migration), there will be no better solution, since JSON API is the new standard adapter.


Just for the sake of completeness, there’s also Falcor https://netflix.github.io/falcor/starter/what-is-falcor.html

But yes, ++ from me on JSON-API.

So, reading through the answers, JSON API is the winner here.

How is this going to proceed? Anyone up for creating an issue in JIRA so we can plan this in? And work out the details in that issue and/or sub-tasks to it?

It’s very interesting, but also rather complex. I played around with Relay and GraphQL a bit, but it needs a lot of code even for simple things (just look at how mutations are supposed to be done). It’s certainly a cool thing, but it needs a completely different server side (everything is channeled through a single endpoint in an RPC-style fashion) than JSON API based RESTful services.

So I think we won’t have the resources to integrate GraphQL in our server- and client-side architectures.

Although I’d much prefer GraphQL over REST, I agree that it might be too big a task. Pretty cool that Drupal has it. Did you use a library like https://github.com/webonyx/graphql-php to implement it?


Hey there.

Here are just a few thoughts about implementing “kind of jsonapi.org”.

To me there are a couple of different things on the agenda.
Feels like a first prototype of this could be implemented as a package without any changes to the core.

Dispatcher request looping over both the requested data and all included relations

Let’s talk about it as a root request ending up in a controller that performs sub-requests.
I’m not completely sure whether using a controller for this is the best idea or whether getting the HTTP Component involved is better. But for the sake of not getting too complex in the first shot, I’ll talk of the root request as ending up in a controller.

The idea

The RootRequest gets resolved to a JsonApiController, where an empty array is created.
The actual “thing to do” of the RootRequest is transformed into a WorkerRequest.
Every WorkerRequest gets executed separately, “while (count($workerRequests))”.
Whenever a WorkerRequest holds relations to other objects, a new WorkerRequest is created and pushed onto the $workerRequests array.
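A minimal sketch of that loop, framework-free; WorkerRequest and the $resolve callback are hypothetical stand-ins, not existing Flow API:

```php
<?php
// Hypothetical sketch of the loop described above. A WorkerRequest
// describes one resource to fetch; resolving it may discover related
// resources, which are pushed onto the queue as further WorkerRequests.

final class WorkerRequest
{
    public function __construct(
        public string $type,
        public string $id
    ) {}
}

/**
 * $resolve maps a WorkerRequest to [resource array, related WorkerRequests].
 * In Flow this could be a sub-request dispatch instead of a plain callback.
 */
function runRootRequest(WorkerRequest $root, callable $resolve): array
{
    $result = ['data' => null, 'included' => []];
    $workerRequests = [$root];
    $seen = [];

    while (count($workerRequests)) {
        $request = array_shift($workerRequests);
        $key = $request->type . ':' . $request->id;
        if (isset($seen[$key])) {
            continue; // each resource is resolved (and later cacheable) only once
        }
        $seen[$key] = true;

        [$resource, $relations] = $resolve($request);
        if ($request === $root) {
            $result['data'] = $resource;
        } else {
            $result['included'][] = $resource;
        }
        foreach ($relations as $relation) {
            $workerRequests[] = $relation;
        }
    }

    return $result;
}
```

The deduplication via $seen is also what the jsonapi.org “full linkage” rules want: every included resource appears exactly once, no matter how many relationships point at it.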

The benefit

As soon as all relations are handled as sub-requests, every sub-request can be cached individually.
Even the actual implementation of how those sub-requests are handled can be adjusted per project, depending on the available software environment.
The very basic idea is just using Flow sub-requests, but this could be enhanced to e.g. doing real HTTP requests targeting a Varnish proxy, or whatever.
The point is: all relations are handled not nested but iteratively and independently, providing a distinct spot where caching can go later.

DTOs wrap domain objects that provide configuration by PHP code

Not targeting jsonapi.org but meant to be used with the $resource mechanism of AngularJS, I built such a thing for Extbase and ported it to Flow some time later.
This surely needs some improvement in general and some adjustments to match the jsonapi.org requirements, but I’d like to think of it as an idea that has proven to be not too bad.


Think about something like this:

All property names meant to be available externally are named separately and provided by a getter on the Dto.
This can be enhanced with privilege mechanisms, context checks and so forth.
I’d leave it as a plain getter method in the AbstractDto, so it can easily be overridden per distinct Dto class. This leaves room for complexity when it’s necessary, but requires no complex configuration.

They are input and output converters at the same time. Just as they provide all information that should be exposed to the public via getter methods, they can cover all the code needed to transform external data into internal data in setter methods.
The easiest way, of course, is to just pass getters and setters right through to the payload object. And again: room for complex mapping if necessary, but no mapping at all required by default.

The object can be used to alias internal attribute names to external attribute names, simply by providing getter and setter methods accordingly. They can even be used to traverse nested objects: think of “person.identifier”, which could be a getter method calling “return $this->getPayload()->getAccount()->getIdentifier()”.
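A sketch of what such a Dto could look like; Account, Person and PersonDto are made-up example names, and AbstractDto is just a minimal stand-in here:

```php
<?php
// Hypothetical Dto sketch combining the ideas above: the Dto names its
// exposed properties, and its getters/setters alias and traverse into
// the wrapped payload object.

class Account
{
    public function __construct(private string $identifier) {}
    public function getIdentifier(): string { return $this->identifier; }
    public function setIdentifier(string $identifier): void { $this->identifier = $identifier; }
}

class Person
{
    public function __construct(private Account $account) {}
    public function getAccount(): Account { return $this->account; }
}

abstract class AbstractDto
{
    public function __construct(protected object $payload) {}
    public function getPayload(): object { return $this->payload; }
    abstract public function getPropertyNamesToBeApiExposed(): array;
}

class PersonDto extends AbstractDto
{
    public function getPropertyNamesToBeApiExposed(): array
    {
        return ['identifier'];
    }

    // Output: the external "identifier" traverses person -> account
    public function getIdentifier(): string
    {
        return $this->getPayload()->getAccount()->getIdentifier();
    }

    // Input: the matching setter writes back through the payload
    public function setIdentifier(string $identifier): void
    {
        $this->getPayload()->getAccount()->setIdentifier($identifier);
    }
}
```

No mapping configuration anywhere: the exposed names, the aliasing and the input conversion are all plain PHP methods that can grow complexity only where a project needs it.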

In contrast to my implementation available on Github, those objects should not be used to expose nested structures.
Lots of the magic in my GitHub package is related to providing a deeply nested array that can be passed to json_encode. This, of course, becomes obsolete with jsonapi.org and should be replaced by “relationships”.

How to map a DTO class

I suggest introducing DtoConverters.
Just like TypeConverters, they should have a priority and be asked via “canConvert” whether they can handle a given model object.
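A possible shape for that contract; the names (DtoConverterInterface, resolveConverter) are made up for illustration and modeled loosely on how Flow’s TypeConverters are resolved:

```php
<?php
// Hypothetical DtoConverter contract: each converter announces a
// priority and is asked whether it can handle a given model object.

interface DtoConverterInterface
{
    public function getPriority(): int;

    public function canConvert(object $model): bool;

    /** Wraps the model into a Dto, e.g. "new SomeDto($model)". */
    public function convert(object $model): object;
}

/** Picks the highest-priority converter that accepts the given model. */
function resolveConverter(array $converters, object $model): ?DtoConverterInterface
{
    $candidates = array_values(array_filter(
        $converters,
        fn (DtoConverterInterface $c) => $c->canConvert($model)
    ));
    usort($candidates, fn ($a, $b) => $b->getPriority() <=> $a->getPriority());
    return $candidates[0] ?? null;
}
```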

Mapping Model objects to Dto objects

That’s the easy part: in general, one can simply create a new Dto($payload). Of course that’s the obligation of the DtoConverter, but it’s no big deal.

Build URIs

The more complex task of the DtoConverters is to provide enough information to create absolute resource URIs pointing to the API endpoint of an individual domain object or Dto.
Those are meant to be used both for creating the exposed public URI for “links.self” and for properly filling the HTTP sub-request that acts as the WorkerRequest for related objects.

Creating a view

A JsonView dedicated to Dtos can easily iterate through all properties named by Dto.propertyNamesToBeApiExposed.
Every scalar value goes into an associative array.
Every Dto-typed value goes back onto the list of WorkerRequests.
Every other object throws an exception.
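Those three rules could look roughly like this; everything here is a hypothetical sketch, with AbstractDto as a minimal stand-in for the Dto base class discussed above:

```php
<?php
// Sketch of the view rules above: scalars go into the array, Dto-typed
// values are queued as further WorkerRequests, anything else fails.

abstract class AbstractDto {}

/**
 * Renders the exposed scalar properties of a Dto into an associative
 * array and queues every Dto-typed property for later rendering.
 */
function renderDtoAttributes(object $dto, array $exposedNames, array &$workerRequests): array
{
    $attributes = [];
    foreach ($exposedNames as $name) {
        $value = $dto->{'get' . ucfirst($name)}();
        if (is_scalar($value) || $value === null) {
            $attributes[$name] = $value;
        } elseif ($value instanceof AbstractDto) {
            $workerRequests[] = $value; // rendered later, ends up in "included"
        } else {
            throw new \RuntimeException(sprintf('Cannot expose property "%s"', $name));
        }
    }
    return $attributes;
}

// Example Dtos for illustration
class CommentDto extends AbstractDto
{
    public function getBody(): string { return 'First!'; }
}

class PostDto extends AbstractDto
{
    public function getTitle(): string { return 'JSON API paints my bikeshed!'; }
    public function getComment(): CommentDto { return new CommentDto(); }
}
```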

Sparse Fieldset

There’s that nice feature of a Sparse Fieldset in the API documentation.
I’d like to handle that as post-processing of a complete result. Instead of passing this information to the view and letting the view limit the “properties to be API-exposed”, I’d rather let the view create a complete response and have the RootRequest apply those restrictions, in favor of advanced caching on the one hand and a single point of implementation on the other.
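For reference, the spec requests sparse fieldsets via a query parameter like `GET /posts?fields[posts]=title`; the post-processing step would then strip the cached, complete resources down to something like:

```json
{
  "data": [{
    "type": "posts",
    "id": "1",
    "attributes": {
      "title": "JSON API paints my bikeshed!"
    }
  }]
}
```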

Open glitches

The jsonapi.org spec requires the “MM relation data” to be exposed.

This means there need to be “relationship” routes that expose the inverse side of the association as well as some information about the owning side it belongs to.
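Per the spec, such a relationship route would answer e.g. `GET /posts/1/relationships/comments` with a document of resource identifier objects rather than full resources:

```json
{
  "links": {
    "self": "http://example.com/posts/1/relationships/comments",
    "related": "http://example.com/posts/1/comments"
  },
  "data": [
    { "type": "comments", "id": "5" },
    { "type": "comments", "id": "12" }
  ]
}
```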

I don’t really know how to tackle this. Maybe we could treat that “MUST” as a “MAY” and postpone this to an improved version.


… well, and that’s without even talking about the API response yet. If some of you have doubts or ideas, I’d really love to hear back. I might start creating a little prototype as soon as I have some minutes to spare :).