Lessons learnt from James Grenning’s TDD For Embedded courses

During October and November 2022, I had the enormous privilege of attending the trainings on Test-Driven Development applied to embedded software that James W. Grenning currently offers through his learning platform. This post is an in-depth review of them.

Grenning is a signatory of the Agile manifesto and author of the book 'Test-Driven Development for Embedded C'. I will never tire of saying that this book had a radical impact on both my professional and personal life. If you haven't read it yet, I strongly encourage you to do so as soon as possible.

Specifically, I am referring to the following two trainings:

Live Course

It lasts 3 consecutive days, at a rate of 5h/day live with James and the rest of the participants, plus an additional 2h/day of individual prior preparation. The course runs every few months, with editions that fit European schedules (in Spain the sessions start at 2:00 p.m.) as well as North American ones.

Self-Paced Course

It is a new course modality, in which James offers the opportunity to follow practically the same steps as in the remotely delivered Live Course, but at your own pace.

Structure and course contents

Both courses have a fairly similar structure. In both modalities, the online Cyber-dojo tool is used to carry out the practical exercises. The platform automatically records the status (red or green) after each test run, as well as the changes made since the previous run. This information allows the instructor to quickly spot when someone is getting stuck and to understand the underlying reason.

James has his own Cyber-dojo server. On it, in order to go from Red to Green, not only do we have to pass the failing test, but we must do so with as little production code as possible. Code coverage is checked automatically, and any value below 100% means that we have entered more production code than necessary to make the existing tests pass.


Writing more code than necessary is one of the most common mistakes when applying TDD, especially when we are starting out. This automatic check is intended to alert us when we are not complying with the first and third of the TDD laws proposed by Robert C. Martin, which are referenced several times throughout the course:

    1) We are not allowed to write any production code unless it is to make a failing unit test pass.
    2) We are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
    3) We are not allowed to write any more production code than is sufficient to pass the one failing unit test.

These three rules constitute the implementation of Test-First under the umbrella of TDD. Their main objective is to build a battery of semantically stable, exhaustive specifications, which allows us to fight, from the beginning and throughout the successive stages of refactoring, our number one public enemy: The Liar.
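To make the rules tangible, here is a minimal sketch of a single Red-Green step using CppUTest, the framework used in the course. The Counter module and its API are hypothetical, invented purely for illustration:

```cpp
#include "CppUTest/TestHarness.h"

extern "C" {
    #include "Counter.h" /* hypothetical module under test */
}

TEST_GROUP(Counter)
{
};

/* Rule 2: write no more of the test than is sufficient to fail
   (a missing Counter_GetValue() is a compilation failure, and
   compilation failures are failures). */
TEST(Counter, StartsAtZero)
{
    LONGS_EQUAL(0, Counter_GetValue());
}

/* Rule 3: the minimal production code in Counter.c would be
       int Counter_GetValue(void) { return 0; }
   and the next failing test is what forces the real behavior out. */
```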

If you want fully tested code, do not write untested code.

- James Grenning

Code coverage is a necessary (although not sufficient) condition to ensure the exhaustiveness of our test suite. For greater certainty, it would be necessary to apply Mutation Testing. However, its execution, in both C and C++, may be too slow to include as an automatic check within the TDD micro-cycle. Still, since code coverage is a necessary condition and quick to check automatically, it is a brilliant idea to include it in our TDD feedback loop. That way attendees notice immediately when they are writing more code than necessary and can correct their acquired habits.

All the interactivity happens inside the Wingman Training Center, a virtual environment based on gather.town. James has arranged different spaces in it: a stage, where whatever happens on it is seen and heard by all attendees; the Tiki Bar, a kind of meeting room where a given group can jointly discuss any doubts that arise; and the different work rooms from which we work in pairs on the practical exercises.

Wingman Training Center

Day 1

The example to develop during this first day is a CircularBuffer. It is an excellent example for illustrating and experimenting with TDD's Red-Green-Refactor micro-cycle, as well as the concepts encompassed by the ZOMBIES acronym (Zero, One, Many, Boundary behaviors, Interface definition, Exercise exceptional behavior, Simple scenarios and solutions), all without the added complexity of using collaborators.
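As a flavor of how the session unfolds, here is a sketch of what the first ZOMBIES-ordered tests for a CircularBuffer might look like in CppUTest. The API names are my own guesses, not necessarily the course's exact code:

```cpp
#include "CppUTest/TestHarness.h"

extern "C" {
    #include "CircularBuffer.h" /* hypothetical API names */
}

TEST_GROUP(CircularBuffer)
{
    CircularBuffer buffer;

    void setup()    { buffer = CircularBuffer_Create(10); }
    void teardown() { CircularBuffer_Destroy(buffer); }
};

/* Z - Zero: the behavior of a freshly created, empty buffer */
TEST(CircularBuffer, IsEmptyAfterCreation)
{
    CHECK_TRUE(CircularBuffer_IsEmpty(buffer));
}

/* O - One: a single element goes in and comes back out */
TEST(CircularBuffer, PutThenGetReturnsTheSameValue)
{
    CircularBuffer_Put(buffer, 42);
    LONGS_EQUAL(42, CircularBuffer_Get(buffer));
}

/* M - Many, B - Boundaries: fill the buffer up to its capacity */
TEST(CircularBuffer, IsFullAfterCapacityPuts)
{
    for (int i = 0; i < 10; i++)
        CircularBuffer_Put(buffer, i);
    CHECK_TRUE(CircularBuffer_IsFull(buffer));
}
```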

 

After our first practical experience with TDD, James explains how to apply it effectively in the embedded environment through Dual-Target TDD, implemented in the following 5 stages (a minimal sketch of the host/target seam that enables Stage 1 follows the list):

- Stage 1: Host TDD micro-cycle
- Stages 2-3: Execute unit tests on evaluation hardware or simulator
- Stage 4: Execute unit tests on final hardware
- Stage 5: Execute acceptance tests on final hardware
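The key to Stage 1 is that production code never touches registers directly, but goes through a small seam that can be swapped at link time. The following sketch is my own minimal illustration of the idea; the register address and names are invented:

```cpp
/* Led.h - the production interface is identical on host and target. */
void Led_TurnOn(void);

/* IO.h - the seam: production code talks to hardware only through this. */
void IO_Write(unsigned int address, unsigned int value);

/* Led.c - production code; the register address is invented. */
enum { LED_REG = 0x4000A000, LED_ON_BIT = 1 };
void Led_TurnOn(void)
{
    IO_Write(LED_REG, LED_ON_BIT);
}

/* The target build links an IO.c that writes the real register:
       void IO_Write(unsigned int a, unsigned int v)
       { *(volatile unsigned int*)a = v; }
   The host test build links a test double that records the call instead,
   so the very same Led.c runs unmodified in Stages 1 through 4. */
```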

Subsequently, we are exposed for the first time to evolutionary and incremental development, through a modification of the initial requirements that we had applied to the CircularBuffer. It is at this moment that we first realize that, although the first benefit we appreciate is the reduction in the number of bugs, the main benefit provided by TDD is actually being able to change our code freely and safely.

Changing design is one of the main benefits of TDD

- James Grenning

The session ends with a debrief, in which the participants share the doubts and impressions they have had when applying TDD for the first time. James takes this opportunity to explain what the most widespread criticisms are and to give his opinion on them. It is one of the sections of the course that I found most interesting. I will comment on those that I consider most important.

LACK OF EXHAUSTIVENESS OF THE TESTS

One of the criticisms of TDD is that, since it is a development technique rather than a testing one, it is more concerned with increasing functionality, normally through happy-paths, than with covering all possible corner-cases and sad-paths.

In fact, some well-known TDD practitioners openly advocate this approach, delegating the treatment of sad-paths to some kind of Test-Later approach and, therefore, to a much slower feedback loop.

There is actually no reason for this to be so. One of the benefits of TDD is that it enables us to focus fully on one functionality at a time, regardless of which code path, happy or sad, we are working on. Furthermore, no development technique based on Test-Later is able to unambiguously trace the cause-effect relationship of our code (here is a study on the effects of Test-Later in a safety-critical environment).

I fully agree with James that following the 3 rules of TDD guided by ZOMBIES is the best way we currently know to cover all possible scenarios, including sad-paths, iteratively and incrementally.

SLOWER DEVELOPMENT SPEED

The most common criticism, and possibly the most dissuasive, is the one that states that more time is needed to implement a certain functionality following TDD than without TDD. Here it is important that we clarify what 'without TDD' really means. Are we referring to Test-Never or Test-Later?

1) Test-Never

It means delivering production code without having developed any kind of automated testing. In the best case, some manual tests will have been done during and/or after having developed the production code (I remember those prints in DEBUG with no nostalgia at all).

If it doesn't have to work, I can get it done a lot faster

- Kent Beck

Delivering something does not mean that it works or that it is finished. Any development speed based on delivering with a poor or incomplete Definition of Done is simply a lie. This alone should be enough to discard this approach in professional software development. However, it is far from vestigial in our sector, so let's break it down a little more.

If we follow this philosophy then, in order to avoid regressions, at each new iteration or change we will need both to manually test the new code and to re-test the code that we had already 'tested' and considered correct in the previous iteration. Here is how it goes:

- Manual testing time grows exponentially, increasing delivery time.
- We spend more and more of that time in the debugger.
- Parts of the code appear that are inaccessible or very difficult to exercise.
- We start taking shortcuts and we end up opening the door to The Liar.

We fall into the spiral in which we spend more time fixing bugs than creating new functionality.

It is important to clarify that the above reasoning only considers external quality. The time associated with the opportunity cost has not been taken into account; that is, the cost of each of the missed opportunities to iteratively rethink and improve the solution's internal quality (lowering its accidental complexity and improving its design and architecture) thanks to the short feedback loop provided by TDD.

In short, this option would only be valid if we were able to develop the required solution in a single iteration, without introducing bugs, and as long as the code did not have to be changed in the future. We should all recognize that these conditions are never met.

"You've been down there, Neo.
You already know that road.
You know exactly where it ends.

And I know that's not where you want to be"

- Trinity

Those of us who have experienced first-hand the paralysis of having to continually put out fires know perfectly well how and where that path ends.

2) Test-Later

In this case there are automated tests, so the cost of retesting no longer grows exponentially. However, this approach does not offer continuous feedback on the solution and the decisions we are adopting. This lack of feedback leads to problems whose consequences are similar to those we found with Test-Never, such as:

- Tests are not as exhaustive as with TDD: hard-to-reach parts of the production code show up, caused by the accidental complexity we inadvertently added before writing the tests.

- Tests tend to be coupled to implementation details, making subsequent refactoring impossible: the absence of the Refactor stage means the tests remain focused on the Make-It-Work phase, without the decoupling from implementation details that should happen during the Make-It-Right phase.

James has run interesting trials with the participants of his course. He divided them into two groups. Both would develop, for the first time, a module whose public interface was provided to them. The Definition of Done was reached when certain functional acceptance tests (focused on external quality) that James had defined were met.

A first group would develop the module with TDD. A second group with Test-Later. It turned out that both groups took the same average time to complete the required functionality.


Could this mean that developing with TDD is as fast as doing it with Test-Later (at least for a first version of a product)?


The answer is yes, although the parity holds only while developers are not yet familiar with TDD. As we gain command of the practice, this time is reduced, and TDD ends up being faster than Test-Later right from the very first iteration.


This is fully aligned with my experience and with that of those I know who practice the technique with proper rigor. I used to reason that TDD is faster because we don't waste as much time debugging. That is still true, but even if we were able to remove the debugging bottleneck entirely, Test-Later would still be slower for someone with enough flight hours in TDD, right from the first iteration, before the effects of accumulated accidental complexity have even become apparent.

Day 2

The examples in this session revolve around the development of a LightScheduler (which also appears in the book). During this session, test doubles are introduced, mainly Spies and Stubs, as a mechanism to control the dependencies with which our Subject Under Test (SUT) will collaborate in the test environment.

These test doubles allow us not only to control the indirect inputs and capture the behavior at the SUT's outputs for every case we contemplate, but also to do so with repeatability and speed, within a cycle of a few seconds.
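To illustrate the idea, here is a minimal sketch of a hand-rolled Spy for a light controller, in the spirit of the LightScheduler exercise. The names are my own approximation, not necessarily the course's exact code:

```cpp
/* LightControllerSpy - stands in for the real light-controller hardware
   and simply records the last interaction for the tests to inspect. */
enum { LIGHT_ID_UNKNOWN = -1, LIGHT_STATE_UNKNOWN = -1,
       LIGHT_OFF = 0, LIGHT_ON = 1 };

static int lastId = LIGHT_ID_UNKNOWN;
static int lastState = LIGHT_STATE_UNKNOWN;

/* The SUT calls the regular LightController interface... */
void LightController_On(int id)  { lastId = id; lastState = LIGHT_ON; }
void LightController_Off(int id) { lastId = id; lastState = LIGHT_OFF; }

/* ...and the tests read the captured indirect output through the spy. */
int LightControllerSpy_GetLastId(void)    { return lastId; }
int LightControllerSpy_GetLastState(void) { return lastState; }
```

A test then exercises the scheduler and asserts on the spy, for example: LONGS_EQUAL(LIGHT_ON, LightControllerSpy_GetLastState());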

The approach to solving the problem guided by ZOMBIES is followed again, while we comply with the 3 TDD rules previously described.


Finally, the issue of test maintenance and refactoring is addressed. Emphasis is placed on the fact that each test or specification must be a small micro-universe, independent from the rest. In these test micro-universes we must apply the same principles we apply to production code to make it independent of implementation details (SOLID, design principles...), with the difference that in this case DAMP (Descriptive And Meaningful Phrases) must prevail over DRY.
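A quick sketch of what DAMP over DRY can mean in practice, reusing the hypothetical names from the spy above; each test states its whole scenario in its own body instead of hiding it in shared fixtures:

```cpp
TEST(LightScheduler, TurnsOnLightAtScheduledTimeOnMonday)
{
    /* DAMP: the small, readable setup is repeated per test on purpose,
       so each specification can be understood on its own. */
    LightScheduler_ScheduleTurnOn(3, MONDAY, 1200); /* illustrative API */
    FakeTimeService_SetDay(MONDAY);
    FakeTimeService_SetMinute(1200);

    LightScheduler_WakeUp();

    LONGS_EQUAL(3, LightControllerSpy_GetLastId());
    LONGS_EQUAL(LIGHT_ON, LightControllerSpy_GetLastState());
}
```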

In the BONUS exercises we can relive the experience of evolving our solution with the safety net that we have woven. It reinforces the real goal of TDD that we noticed on Day 1: enabling us to change our minds and evolve the solution incrementally as we learn more details, not only about the map, but also about the territory of the solution itself.

Day 3

In this last session of the course we get closer to the silicon, to the hardware, through the implementation of a Flash memory driver/controller.


In this case we make use of a new and seductive test double, the Mock, specifically through the CppUMock framework. Our Mock will stand in for the hardware, replacing the SUT's interactions, the reads and writes, with the driver's special registers.


Mocks are easy to use once you understand how they work. This makes them especially appealing. However, they have a dark side, as James warned us at least four times. They know a lot (sometimes too much) about how our SUT interacts with its dependencies. This can be very beneficial when the specifications are written in stone, even if that stone is made of silicon, as is the case with the specifications included in IC datasheets, where we have to interact with the device in a specific order and sequence. In all other situations, we should be very careful with Mocks!
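To give a feel for what this looks like, here is a minimal CppUMock sketch. The MockIO function, the driver API and the register addresses are assumptions for illustration, not the course's exact code:

```cpp
#include "CppUTest/TestHarness.h"
#include "CppUTestExt/MockSupport.h"

extern "C" {
    #include "Flash.h" /* hypothetical driver under test */

    /* MockIO - the mock stands in for the real register accessor. */
    void IO_Write(unsigned int addr, unsigned int data)
    {
        mock().actualCall("IO_Write")
              .withParameter("addr", addr)
              .withParameter("data", data);
    }
}

TEST_GROUP(FlashDriver)
{
    void teardown()
    {
        mock().checkExpectations();
        mock().clear();
    }
};

TEST(FlashDriver, WriteBeginsWithProgramCommand)
{
    /* The datasheet fixes the exact register sequence, so here the
       mock's strictness works in our favor (addresses invented). */
    mock().expectOneCall("IO_Write")
          .withParameter("addr", 0x0u)
          .withParameter("data", 0x40u);
    mock().expectOneCall("IO_Write")
          .withParameter("addr", 0x1000u)
          .withParameter("data", 0xBEEFu);

    Flash_Write(0x1000, 0xBEEF);
}
```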


Although refactoring has already been introduced in the previous sessions, in this last one James gives a demo focused on refactoring techniques, in which he expands, migrates and later contracts the code, with the aim of accommodating a new change that impacts the public API of our module. Since refactoring techniques are crucial for evolutionary and incremental design and development, James also has a comprehensive course on refactoring in C++.
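The following is my own minimal illustration of that expand/migrate/contract sequence, not James's demo code; the names are invented:

```cpp
typedef struct CircularBufferStruct* CircularBuffer;
typedef enum { BUFFER_OK, BUFFER_FULL } BufferStatus;

/* 1) Expand: the new signature is added alongside the old one, so every
      existing caller and test stays green. */
void         CircularBuffer_Put(CircularBuffer b, int value);        /* old */
BufferStatus CircularBuffer_PutChecked(CircularBuffer b, int value); /* new */

/* 2) Migrate: callers and tests move over to CircularBuffer_PutChecked
      one small, test-protected step at a time.

   3) Contract: once no caller remains, the old function is deleted and
      the public API change has landed without the suite ever going red. */
```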

You must slow down to be fast

- James Grenning

As a farewell, we are introduced to some techniques for dealing with Legacy Code and for bringing it under test in baby steps, as well as to the automated 'algorithm' that James has dubbed Crash To Pass. As with the refactoring techniques, he offers a workshop fully dedicated to Legacy Code.
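To give a flavor of the first steps, here is a sketch of the kind of 'exploding fake' that can be used to get a legacy module linking and running inside the test harness; the function names are invented:

```cpp
#include <stdio.h>
#include <stdlib.h>

/* Exploding fakes: just enough linker-satisfying stubs to get the legacy
   module into the harness. Each one fails loudly the first time a test
   actually reaches it, telling us exactly which dependency to replace
   with a proper test double next. */
extern "C" {

void Network_Send(const char* message, int length)
{
    fprintf(stderr, "EXPLODING FAKE: Network_Send not faked yet\n");
    exit(1);
}

int Eeprom_Read(unsigned int address)
{
    fprintf(stderr, "EXPLODING FAKE: Eeprom_Read not faked yet\n");
    exit(1);
}

}
```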

In the Self-Paced Course, both the lessons and the practical exercises are arranged incrementally, supported by audiovisual material in which pair programming with James is simulated.

Even though it is at your own pace, James has roughly divided it into 4 weeks, as follows:

Week 1 -- Module 1 -- Your first TDD
It has a practically direct correspondence with the first session of the Live Course described above.

Week 2 -- Module 2 -- TDD and code with Dependencies
It has a practically direct correspondence with the second session of the Live Course described above.

Week 3 -- Module 3 -- Test Doubles and Mocking the Hardware

Week 4 -- Module 4 -- Refactoring Legacy Code
These two modules correspond in content to those covered in the third session of the Live Course.
 

Are they worth it, even if I have already read the book?

Absolutely yes! Let me explain why.


I have been applying TDD in my daily work and providing training on it since 2016. Still, every time I read his book again (I am on my fourth reading right now), I discover a new and important detail that I had not noticed. Unfortunately for me, I have had to learn many of those details through trial and error over the years. Luckily, I have always put both the technique itself and my approach to it on trial. This last point has been crucial in not giving up. It is curious to discover how, once you understand what the underlying problem was and find a solution, when you re-read the book you find that, most of the time, the answer was already there, somehow waiting for you... waiting for your level of knowledge to be sufficient to understand the implications of that important detail applied in the appropriate context.


Accompanying the reading of the book with one of these courses can transform those years into months, eliminating the probability of abandonment due to unconscious ignorance. It is therefore a competitive advantage, with a return on investment that is difficult to improve.

In my opinion, to get the most out of the Live Course it is better to have read the book beforehand. That way you can directly ask James all the questions you have. The level of detail of his explanations is so extensive that each one will not only resolve that doubt, but also unlock thinking about the next one.

In the case of the Self-Paced modality, I think that, if you have enough time, you can organize reading the book and taking the course in parallel. James has established a learning path in which he perfectly simulates being by your side, through guided development of the code as if you were pair programming, as well as through audiovisual material and demonstrations. If you have any doubts, you always have the option of asking questions from the platform itself.

What will you learn

A way to develop embedded code iteratively and incrementally: faster, safer, with higher quality and at a sustainable pace. All this thanks to a development process based on an extremely short feedback cycle (less than a minute).

It's easier to keep a system working than to fix it after you break it

- James Grenning

The basics of TDD may be easy to understand. However, the important details hidden behind each of its phases, and their respective implications, are not at all easy to anticipate. From my own experience, and from what I usually hear and read, the vast majority of people who say they tried TDD and ended up giving up were never applying it correctly (almost always the failure lies in not knowing the true implications of the Refactor stage).

With James-In-The-Loop you will progressively learn, in a guided, fast and efficient way, all the intricacies that usually go unnoticed and whose knowledge will lead you to a correct application of TDD.

Conclusion

The vast majority of organizations responsible for the training of future software developers, whether embedded or not, do not include in their curriculum the study and application of any technical practice focused on enabling evolutionary and incremental design and development.

Additionally, according to my experience and that of many colleagues, and as supported by several studies, there does not seem to exist a critical mass of developers with sufficient experience in such technical practices within companies. In the embedded sector in particular, where their application brings the greatest benefits, the situation is, ironically, even worse.

That is why the chances that new developers get to learn about and analyze, with objective data, development approaches other than the traditional Waterfall, either during their academic training or during their work experience, are unfortunately very low.

I can't convince you to use TDD. You have to convince yourself

- James Grenning


What to do then?

James Grenning has been applying Extreme Programming (XP) techniques to the embedded world since 1999, as well as teaching others how to do it. Like him, many of us believe that it is absolutely necessary to modernize the way we develop software and, more specifically, embedded software. I doubt there is a better way to start down the path of this exciting way of understanding development than by the hand of this master teacher.

Last but not least, I want to say a big thank you to James for inviting me to savor these incredible learning experiences. I have been able to learn new things and verify, with deep satisfaction, how I have converged on approaches and solutions very similar to his, both in the way we develop them and in the way we try to pass them on in our respective trainings.

Thank you James for bringing so much light to our embedded gloom!