Excluding tests from code quality checks (Code Climate)

We currently exclude the Tests folder in the packages of the Neos development collection from Code Climate because of issues reported for duplicated code, long methods and other code quality problems.

To put it in the most dramatic way I can think of: this could be the beginning of the end of test driven development for Flow / Neos if we go further down this road.

Since the creation of Flow, and especially of Neos, the way we write tests and the role they play have changed continuously (but that is another story). So we ended up with quite a few mixed ideas and implementations of how to do unit (and functional) testing.

Tests are one place where we have piled up a large amount of technical debt. This becomes visible with many new features or changes, where tests break in a lot of places for various reasons. A few of them are:

  • Lots of duplication when setting up tests
  • The complexity of test methods makes it hard to understand the intent of a test
  • Excessive use of mock objects makes tests pass even if the implementation changes, or exposes too much behavior that should be encapsulated
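The last point can be illustrated with a small, hypothetical example (the class names are made up, not taken from the Neos code base). A hand-written recording stub stands in for a PHPUnit mock here so the snippet is self-contained:

```php
<?php
// Hypothetical example of a test that is coupled to implementation
// details instead of behavior. All names are invented for illustration.

interface TaxRateProvider
{
    public function rateFor($country);
}

class PriceCalculator
{
    protected $rateProvider;

    public function __construct(TaxRateProvider $rateProvider)
    {
        $this->rateProvider = $rateProvider;
    }

    public function gross($net, $country)
    {
        // A later refactoring might cache this lookup or call it twice;
        // the observable behavior (the returned price) would stay the same.
        return $net * (1 + $this->rateProvider->rateFor($country));
    }
}

// Hand-rolled stand-in for a PHPUnit mock that records its calls.
class RecordingRateProvider implements TaxRateProvider
{
    public $calls = 0;

    public function rateFor($country)
    {
        $this->calls++;
        return 0.19;
    }
}

$rateProvider = new RecordingRateProvider();
$calculator = new PriceCalculator($rateProvider);
$gross = $calculator->gross(100.0, 'DE');

// Behavior-focused check: survives refactorings of the internals.
echo (abs($gross - 119.0) < 0.001 ? 'price ok' : 'price wrong') . "\n";

// Implementation-focused check (what ->expects($this->once()) encodes):
// it breaks as soon as the calculator caches or repeats the lookup,
// even though the computed price is unchanged.
echo ($rateProvider->calls === 1 ? 'one lookup' : 'lookup count changed') . "\n";
```

The second check is the problematic kind: it pins down *how* the class works rather than *what* it does, so an internal refactoring breaks the test without any change in behavior.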

I’d say that if we care about tests, we need to treat them as code as well, and that means applying the same quality standards.

Quotes from the internet

I collected some quotes here that elaborate on my POV:

You should definitely take the same if not better care of your unit tests than your production code in terms of quality and readability. The unit tests are often the first thing you look at when trying to grasp what some piece of code does, and the reader should quickly understand what’s at stake when looking at the test. Unit tests also tend to change a lot and will break a lot if their design is not robust, which kind of nullifies the benefits of having tests.

guillaume31, Programmers StackExchange

It is absolutely worth spending time writing good-quality code for unit tests:

  • They will require maintenance like any other code.
  • Unit tests are one of the best sources of documentation for your system, and arguably the most reliable form. They should really show:
      • Intent: “what is the expected behaviour?”
      • Usage: “how am I supposed to use this API?”
  • They will require debugging like any other code.

vaughandroid, Programmers StackExchange

There are several reasons why unit tests should be held to a comparable standard as other code:

  • Each unit test also serves as documentation for the testee. When the tests are comprehensive and cover as many edge cases and error conditions as possible, they can often replace the comment-documentation of a class or function. They can also serve as a starting point for people new to the code base.
  • Unit tests can be buggy, too. And errors are more visible when the code is well written.
  • If, for some reason, you later need to split up a module, you’ll probably need to split up its unit tests, too. This is easier if the tests are written such that they have easily discernible dependencies.

Benjamin Kloster, Programmers StackExchange

Finally

We should take the quality indications of our test code seriously and work on solutions to make our test code better. If we don’t care about the test code, it will “rot” away, and at some point we will simply stop writing new tests because it is annoying or takes too much time. And even if new tests are written, they fall outside the quality checks that apply to other code and will not improve the overall quality.


Full ack, thanks for taking the time to write this up.
Just some thoughts on this:

Some classes are very hard to test because they violate the “Single Responsibility Principle”, and while it would be best to refactor/rework those, it would probably mean loads of work…
Instead of skipping (complex) tests for those cases, I could imagine some magic in our UnitTest base class to ease the mocking of dependencies, so you could do something like:

```php
$this->someClass = $this->getMockWithDependencies(SomeClass::class);
$this->someClass->_get('someDependency')->expects($this->once())->method('foo');
$this->someClass->someMethod();
```

Again, the preferred solution would be less dependencies in the first place, but maybe this could be a way to make testing those easier at least…
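Such a helper does not exist yet, but the idea could be sketched roughly like this in plain PHP with reflection. All names here are made up; a real implementation in the UnitTest base class would create PHPUnit mocks, while a hand-written recording stub stands in for one below:

```php
<?php
// Rough sketch of the imagined getMockWithDependencies() helper -- this
// is NOT an existing Flow API. Dependencies are injected via reflection.

interface Logger
{
    public function log($message);
}

class SomeClass
{
    protected $logger;

    public function __construct(Logger $logger)
    {
        $this->logger = $logger;
    }

    public function someMethod()
    {
        $this->logger->log('someMethod was called');
        return true;
    }
}

function getMockWithDependencies($className, array $dependencies)
{
    $reflection = new ReflectionClass($className);
    // Bypass the constructor so the real collaborators are not needed ...
    $instance = $reflection->newInstanceWithoutConstructor();
    // ... and inject the prepared stubs/mocks by property name.
    foreach ($dependencies as $propertyName => $dependency) {
        $property = $reflection->getProperty($propertyName);
        $property->setAccessible(true);
        $property->setValue($instance, $dependency);
    }
    return $instance;
}

// Stand-in for a PHPUnit mock: records every call it receives.
class RecordingLogger implements Logger
{
    public $messages = array();

    public function log($message)
    {
        $this->messages[] = $message;
    }
}

$logger = new RecordingLogger();
$someClass = getMockWithDependencies('SomeClass', array('logger' => $logger));
$someClass->someMethod();

echo count($logger->messages) . "\n"; // prints "1"
```

The test then only names the dependencies it actually cares about; everything else stays untouched, which keeps the setup code short even for classes with many collaborators.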

Your answer is one of my biggest arguments against including the tests:

Right, and on top of that we would need to refactor ALL of our tests if we included them in the analysis. If we can’t manage to do it for the actual production code, how are we realistically going to fix it for the tests?
For me the metrics are a little push to make things nicer. But we won’t have the time to fix it all, especially the test code. So staring at bad numbers for years definitely neither cheers me up nor improves my motivation, while seeing improvements in the metrics during refactoring would. (Yes, the metrics would also improve with the tests included, but at a much smaller scale, because the tests would carry a lot of weight.)
Let’s concentrate on getting the code cleaner and any new/adjusted tests nicer (I am pretty sure we know how, even without analysis) and then include the tests at some later point.

I agree with Christian here. It’s not that I don’t care about quality tests, but the fact is that tests have never been required to be of high quality (they were often not even code reviewed), and thus the current situation is not so good. In fact, as long as there were passing tests for new code, it was considered a plus.

I completely agree we should improve this and put focus on it, but I don’t think Code Climate is the right way to do so. AFAIK it will mostly find superficial issues, but it won’t really fix the core problem. If we want to improve the quality of the tests, we need to raise the required quality level in reviews.

Once we have improved the tests, we can start running them through Code Climate. Its rating is publicly visible, and we don’t need it to be dragged down by tests when the actual production code has such high quality. It’s easy to run statistics similar to what CC collects with tools running locally.


Okay, I completely agree with that. Let’s improve that!
