Towards a long-run embedded business: 'The' way to portable software
Some years ago, Kent Beck pointed out that software development should follow three steps:
1. Make it work.
2. Make it right.
3. Make it fast.
When we are in the "make it work" phase, we are highly concentrated on understanding how to solve the problem, not on engineering the code right. These first two steps need to be kept separate because hardly any human being, software developers included, is able to do both things properly at once.
The problem
I have wondered for a long time why embedded developers are so obsessed with jumping straight to the 3rd step (that fault is not covered in the following discussion and is left for a future one), which makes reaching the 1st even harder and the 2nd nearly impossible. When I asked this question on Twitter a year or so ago, some people pointed out that this also happens in the non-embedded world. I agree with them, but my experience suggests it is much more noticeable in the embedded world. Is there any major difference between the two worlds that could justify such a gap?
The more I think about the possible originating differences, the more I realize that the ones we find in the "make it work" phase are the most important. Making an embedded system do a 'simple' thing (imagine, for example, printing a 'hello world' message) is several orders of magnitude harder than in a non-embedded environment. Embedded developers must manage lots of little low-level details at once in order to accomplish such a 'simple' task. After several (sometimes very hard) attempts, when we finally manage to "make it work", we should then be starting the "make it right" phase ...
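To give an idea of what "lots of little low-level details" means in practice, here is a rough sketch of a bare-metal 'hello world' sent over a UART. Every register name, address and bit below is made up for illustration; a real part would have its own, plus clock, pin-mux and power-up details on top of these:

```c
#include <stdint.h>

/* Hypothetical memory-mapped registers of an imaginary UART peripheral. */
#define UART_CTRL  (*(volatile uint8_t *)0x4000u)   /* control register        */
#define UART_BAUD  (*(volatile uint8_t *)0x4001u)   /* baud-rate divisor       */
#define UART_STAT  (*(volatile uint8_t *)0x4002u)   /* status register         */
#define UART_DATA  (*(volatile uint8_t *)0x4003u)   /* transmit data register  */
#define TX_ENABLE  (1u << 0)
#define TX_READY   (1u << 5)

static void uart_putc(char c)
{
    while (!(UART_STAT & TX_READY)) { }   /* busy-wait until the transmitter is free */
    UART_DATA = (uint8_t)c;
}

void hello_world(void)
{
    UART_BAUD = 103u;                     /* divisor for 9600 baud at some clock rate */
    UART_CTRL |= TX_ENABLE;               /* switch the transmitter on                */

    const char *msg = "hello world\r\n";
    while (*msg) {
        uart_putc(*msg++);
    }
}
```

None of this ceremony exists in a desktop 'hello world', and every one of those lines is exactly the kind of detail we will later want to keep away from the business logic.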
But, wait! We all know that if we change any tiny detail, we could break something. Unfortunately, we are used to not having any kind of automatic, non-time-intrusive, fast and comprehensive suite of tests acting as a regression safety net at our disposal. Also unfortunately, we have been taught to use the debugger as 'the tool' to assure we have not broken anything. But debugging can make the system behave differently (mainly because interrupts or other time-related events are not serviced at their expected occurrence time), it is a highly time-consuming activity, and the failure scenarios are difficult to reproduce on demand. So we end up leaving the code 'working' as it is: highly coupled (both to the underlying hardware and to the running processor), with low-level implementation details scattered everywhere. As a result, the seed of code rot has been planted: the code will become harder and harder to change over time, until we eventually fear making any change to it at all. We, embedded developers, love to rule our systems, but we end up creating 'Frankensteins' which really rule us.
However, somehow, we keep fooling ourselves into believing that "this is how it goes, we are embedded, we write firmware, our development environment is different ..."
Does this situation sound familiar to you? I would bet it does, at least to some extent.
A solution ('the' solution so far)
The solution to this problem relies on finding a development process which facilitates the transition from the "make it work" phase to the "make it right" phase, while ensuring that the whole system keeps working all the way through as we make the code right. The only way I have found so far to provide such a development process is to adopt Test-Driven Development and follow its 'Red-Green-Refactor' cycle.
During the 'Red' phase, your aim is to "make it work" in the easiest and fastest way that you can. I really love how Ian Cooper explains it in his 'TDD, Where Did It All Go Wrong' talk.
What does that mean exactly? Well, it means doing things that maybe you are not familiar with: for example, you can go to Stack Overflow or the microcontroller vendor's forums, or use that 'marvelous' vendor code-generation framework, locate the solution they claim works, copy and paste those lines of code, one after another, and check whether they do their intended duty. As I anticipated, a procedure completely new and unknown to all of us.
If those lines work, then you know exactly what needs to be done to make that part of the system do its intended job. That behaviour then becomes your 'Green' state: the state to be preserved. But you are not DONE yet: we all MUST learn that we are not done when it works; we are done when it is right. We face that distinct goal when we enter the 'Refactor' phase of TDD. It is in this phase that you are supposed to engineer and craft your code right: apply Simple Design, SOLID, Design Patterns …
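To make this concrete, here is a minimal host-side sketch with hypothetical names (they are not taken from the course material): a plain variable stands in for the real port register so the test can run on the development machine, plain assert keeps the example self-contained, and the quick, vendor-example-style implementation that makes the test pass is the Green state the test will now preserve:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static uint8_t fake_port;   /* test double standing in for a real port register */

/* Production code: the quickest thing that works, e.g. a line lifted from a
   vendor example, still written directly against the 'register'. */
void led_driver_turn_on(volatile uint8_t *port, uint8_t pin)
{
    *port |= (uint8_t)(1u << pin);
}

int main(void)
{
    fake_port = 0x00u;
    led_driver_turn_on(&fake_port, 3u);
    assert(fake_port == (1u << 3));       /* this passing test *is* the Green state */
    puts("OK: led_driver_turn_on sets the expected pin");
    return 0;
}
```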
If you have arrived here by following the previous 'Red' and 'Green' TDD phases, you are now able to refactor the production code safely, since you have your tests acting as a safety net. You can do it step by step, towards a decoupled and easy-to-change solution, while you learn the intention and effect of each and every one of those initial ugly copy-pasted lines.
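Continuing the same hypothetical sketch, a first small refactor under the protection of that test could look like the following: the copied bit manipulation gets a name and a single home behind a tiny GPIO seam, so the driver itself no longer knows which register or mask is involved. In a real project the declaration and its target and host implementations would of course live in separate files:

```c
#include <stdint.h>

/* gpio.h -- the small seam the driver now depends on */
void gpio_set_pin(volatile uint8_t *port, uint8_t pin);

/* led_driver.c -- reads as intent, no bit-twiddling left */
void led_driver_turn_on(volatile uint8_t *port, uint8_t pin)
{
    gpio_set_pin(port, pin);
}

/* gpio_target.c -- the only place that still knows the copied vendor detail;
   the host test build can link a fake implementation instead. */
void gpio_set_pin(volatile uint8_t *port, uint8_t pin)
{
    *port |= (uint8_t)(1u << pin);
}
```

The test stays green after every such step, and that is what tells us the refactor did not change the behaviour we worked so hard to obtain.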
The goal
The final goal is not to enjoy the beauty of the code we will be able to produce (although I can assure you we will, a lot). That is a side effect that only developers can appreciate. The real goal is to be able to add value to the organization in the fastest and most sustainable way. I said the 'fastest' way. Yes, that is the fastest way, in a real and objective sense. Any other rushed shortcut we could take is a lie, a technical suicide. I fully agree with the following sentence from Robert C. Martin:
"The only way to go fast, is to go well."
In other words, the goal is to develop an embedded solution that can evolve to accommodate new requirements easily, without introducing defects, and which, additionally, can be ported to different platforms with minimal to no effort.
I put this whole process into practice during the development of the practical example we use in my 'Unembedding' Embedded Systems course, in which TDD, SOLID, several Design Patterns (Template Method, Observer, Factories, Strategy…) as well as Clean Architecture are applied to C code aimed to run on a tiny 8-bit bare-metal microcontroller. In the example, the underlying hardware, including the microcontroller itself, becomes a simple plugin to the high-level Entities, that is, to the modules where the long-term business logic resides. Any future system (PCB, embedded PC, microcontroller…) willing to execute those high-level Entities must implement the abstract interfaces defined by them.
The underlying hardware becomes a detail which depends on the high-level business Entities, not the other way around. Those software abstractions give us the portability of the business logic to different platforms, with minimal to no additional effort: an almost mandatory necessity today for keeping the organization in business over the long term.
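As a rough sketch of that dependency direction (with made-up names, not taken from the course example), a high-level Entity can define and own the abstract interface it needs, while the hardware, or a host-side fake, is wired in from the outside as a plugin:

```c
#include <stdbool.h>
#include <stdio.h>

/* --- High-level Entity: long-term business rule, no hardware knowledge --- */
typedef struct {
    void (*set_heater)(bool on);   /* abstract interface defined by the Entity */
} heater_output_t;

typedef struct {
    const heater_output_t *output;
    int set_point_celsius;
} thermostat_t;

static void thermostat_update(const thermostat_t *t, int measured_celsius)
{
    t->output->set_heater(measured_celsius < t->set_point_celsius);
}

/* --- Low-level plugin: one possible implementation of that interface.
       On a real target this would write a port register; on the host it
       just prints, which is enough to exercise the Entity. --- */
static void console_set_heater(bool on)
{
    printf("heater %s\n", on ? "ON" : "OFF");
}

static const heater_output_t console_output = { .set_heater = console_set_heater };

int main(void)
{
    thermostat_t t = { .output = &console_output, .set_point_celsius = 21 };
    thermostat_update(&t, 18);   /* prints: heater ON  */
    thermostat_update(&t, 23);   /* prints: heater OFF */
    return 0;
}
```

Swapping the console plugin for a real port driver, or for a test double, requires no change to the Entity at all: that is where the portability of the business logic comes from.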
Conclusion
That vital goal is reached as the end result of well-crafted (embedded) software, and it is nearly impossible to reach without following the proper development process and technical practices.
So, until a better approach to reach that ambitious goal is found, I have nothing left to say but: long live Test-Driven Development!