Last time, we discussed the general idea behind testing software and which participants in the development process are responsible for particular aspects of the final result. In short, we divided that responsibility into three categories. The first is the business responsibility to verify the integrity of the overall concept of the application. The second refers to the IT department, responsible for properly implementing the idea. Finally, security and availability are the main concerns of administrators and third-party experts.
Dividing testing activities by responsibility helps a given team focus on the development process’s main aspects and conduct it accordingly.
Today, I would like to move on to testing software as developers. We will focus on which code is worth testing and how the context of a project changes its testing requirements.
What to test?
Let’s start our discussion from a general, philosophical perspective. Are there any properties of the code that make it worth testing? Are there any attributes that tell us when testing is most valuable?
Certainly, we do not always test everything we write, so some code is left out of the spectrum. On the other hand, we do test some things to our great benefit, so that spectrum is not empty. As it usually goes, the truth is somewhere in the middle; thus, maybe we can agree on some common criteria under which the benefits of writing tests are highly likely. Below, we will walk through properties that make an implementation a good candidate for testing.
The first trait that seems sufficient to give testing a green light is that the code is considered a core module. The definition is not precise, because every project may have its own interpretation of crucial backbone functionality. Sometimes, it is a module that is vital for the business logic. Other times, it is an internal part of the app architecture. It may be used by many other parts of the system, but it may also be a deep-down dependency that no one explicitly relates to. In general, it is a part of the implementation that is critical for the project, and if it breaks, we will feel the pain.
You are the right person to define the core code of your project, but let’s think for a second about why testing that code is essential. Aside from finding bugs and providing good documentation, testing gives us a little more certainty. When debugging any functionality, we have to rely on something. We need confidence in our examination tools and in some parts of the system; otherwise, we must look everywhere, which costs a lot. If we can rely on our core code, whatever it is, we can limit the number of steps needed to find a bug. The implementation becomes more reliable and cheaper to maintain. That’s an excellent benefit.
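To make this concrete, here is a minimal sketch of what pinning down a core module looks like in practice. The `calculate_discount` function and its business rule are invented for illustration; substitute your own project’s backbone logic.

```python
def calculate_discount(order_total: float, loyalty_years: int) -> float:
    """Hypothetical core business rule: 1% discount per loyalty year,
    capped at 10%."""
    rate = min(loyalty_years, 10) / 100
    return round(order_total * rate, 2)


# The tests pin down the behavior that the rest of the system relies on.
# If any of them fails, we know the core rule changed, not a caller.
def test_discount_grows_with_loyalty():
    assert calculate_discount(100.0, 3) == 3.0


def test_discount_is_capped():
    assert calculate_discount(100.0, 25) == 10.0


def test_no_loyalty_means_no_discount():
    assert calculate_discount(100.0, 0) == 0.0
```

With this in place, a debugging session can start from the assumption that the discount rule itself is sound and look elsewhere first, which is exactly the cost reduction described above.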
Another characteristic that we should take into account is the frequent change of the implementation. We all know that. Some parts of the application have to be very flexible, and they transform a lot. It does not matter whether it is a new business requirement or an adjustment in some integration. The mere fact that our code is altered a lot is a good incentive to think about writing automated tests.
What may be the benefit of that approach? It is true that when we modify a module a lot, it is prone to errors. That’s why making sure we won’t fall into that trap is key. In addition, we may also appreciate another great benefit of testing. With proper coverage of the code in question, we may consider test-driven development (TDD). In this case, every evolution starts with modifying the tests and only then the code itself. That brings clarity and validity to the implementation. In general, having a lot of tests is costly to maintain, but when we can fully benefit from the advantages they bring, we lower the overall cost of development.
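The TDD rhythm described above can be sketched in a few lines. The `slug` helper and its requirement are hypothetical; the point is the order of the steps, not the function itself.

```python
# Step 1 — write the test first. It states the new requirement and
# initially fails, because slug() does not exist yet.
def test_slug_lowercases_and_replaces_spaces():
    assert slug("Hello World") == "hello-world"


# Step 2 — write the minimal implementation that makes the test pass.
def slug(title: str) -> str:
    return title.strip().lower().replace(" ", "-")


# Step 3 — refactor freely; the test acts as a safety net while the
# requirement keeps changing.
```

When the requirement changes again, the cycle repeats: the test is updated first, and the implementation follows.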
The last aspect of the implementation that makes the code highly eligible for testing is its tendency to generate problems. When we find out over and over again that something is wrong with a module, we had better learn from our mistakes and optimize our work. It is not always obvious why something breaks regularly, but we should accept that we do not see everything, and sometimes an extra layer of verification is the best investment for our team.
The pragmatic approach to testing software means creating automated validation wherever errors are likely to appear. Of course, we cannot foresee the future, but experience in spotting the places where bugs tend to appear is essential. It comes with time. When you regularly encounter and fix problems, sooner or later you will start detecting them ahead of time.
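One common way to apply this is a regression test: once a bug in a troublesome module is fixed, a test that reproduces the original failure keeps it from ever coming back. The `parse_price` helper and the bug it once had are invented examples.

```python
def parse_price(text: str) -> float:
    """Parse a user-entered price such as '1,99' or ' 1.99 '."""
    return float(text.strip().replace(",", "."))


# Regression test for a (hypothetical) past bug: comma-formatted input
# used to raise ValueError instead of being parsed as a decimal.
def test_comma_decimal_separator_is_accepted():
    assert parse_price("1,99") == 1.99


# Another past failure mode captured as a test: surrounding whitespace.
def test_whitespace_is_ignored():
    assert parse_price(" 2.50 ") == 2.5
```

Each fixed bug leaves a test behind, so the module that used to generate problems gradually accumulates the extra layer of verification mentioned above.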
The benefit seems obvious. The more code we test before it experiences bugs, the less debugging we have and the less time we waste.
When to test?
We now have three characteristics of code that make it eligible for automated testing. They are more hints than precise definitions, and we need to consider more context before deciding what to test.
Some teams completely disregard testing, and some crave one hundred percent coverage. We cannot arbitrarily say which one works more efficiently or whose application is better. Too many variables are in the equation to make generalizations like these.
For instance, the necessity for testing differs as the circumstances of a project change. To understand the context, we should consider the project size, the number of programmers involved, their level of experience, pure knowledge, code review quality, documentation, and technical planning. All of that and much more plays a tremendous role in deciding what to test, if anything at all. The code and the team are in constant flux, so there is probably no single unchanging rule for testing software. It may adjust one way or the other as our experience grows.
I hope you will find the right answer for your project and team. Cheers!