A thought occurred to me while I was working on some tests: ‘When Kolb’s Learning Cycle helps me transfer knowledge about new concepts to the developers in the team, can it also help me learn about the robustness, performance and correctness of our code?’ Testing can be viewed as a learning process. The main objective of testing is not to find bugs in the code, but to collect knowledge about the quality of the system and to determine whether the system is good enough to move to production.
I expected that I would not be the first to make this connection, so I did a quick search and found a nice post by Beren Van Daele about Kolb’s Testing Cycle. He writes that ‘Testing and learning have virtually the same process.’ and I firmly agree with that. I would like to add that testing is also about learning by the whole team or organisation.
My translation of the phases of Kolb’s Learning Cycle applied to testing goes something like this:
In this phase the activities are focused on getting a feel for the system under test. This can mean doing some exploratory testing: clicking through the system, taking notes, and collecting metrics and logs. It can also mean executing load tests to find out how the system behaves under different loads.
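A load test in this phase does not need heavy tooling to start collecting facts. The sketch below fires concurrent calls at a stand-in for the system under test and summarizes the observed latencies; `call_system` is a hypothetical placeholder that in practice would be an HTTP request or service call.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_system(_):
    """Stand-in for one request to the system under test (hypothetical)."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate a ~10 ms response; replace with a real call
    return time.perf_counter() - start

def run_load(concurrency, requests):
    """Fire `requests` calls using `concurrency` workers; return all latencies."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(call_system, range(requests)))

latencies = run_load(concurrency=5, requests=50)
p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile cut point
print(f"median={statistics.median(latencies) * 1000:.1f} ms, p95={p95 * 1000:.1f} ms")
```

The numbers collected here are not pass/fail verdicts yet; they are raw material for the reflection stage.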
It can help to have a taxonomy of errors and risks, to overcome forms of bias and to broaden the range of possible areas to explore. Input from the developers is useful, since they will know the parts of the system where the most pervasive changes were made.
The reflection stage is important for determining the priorities for the next stages. The facts collected in the exploration stage must be examined for deviations from expectations and for areas where there are concerns about quality. Questions that can arise are:
- Are there parts where operation is complicated or usability is low?
- Are there concerns about stability or performance? For the whole system or only parts of it?
- Is the software trustworthy?
- What will be the impact of failures?
The whole team should take part in the analysis. Developers can help explain patterns that were noticed while experimenting with the system.
The next step is to create models and hypotheses about the system under test. In this phase systematic tests are designed (and, if possible, scripted or automated). At this stage, it must be possible to define expectations and form hypotheses about the behavior of the system under different circumstances. This is the area most teams are familiar with: a large part will be the design of requirement-based tests.
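A requirement-based test is essentially a hypothesis written down as code. As a minimal sketch, assume a hypothetical business rule that shipping is free from an order total of 50 upward; the test states the expectation explicitly, including the boundary:

```python
def shipping_cost(order_total):
    """Hypothetical rule under test: free shipping from 50 and up, else 4.95."""
    return 0.0 if order_total >= 50 else 4.95

def test_shipping_threshold():
    # Hypothesis: shipping becomes free exactly at the 50 boundary.
    assert shipping_cost(49.99) == 4.95  # just below the boundary
    assert shipping_cost(50.00) == 0.0   # exactly on the boundary
    assert shipping_cost(120.0) == 0.0   # well above the boundary

test_shipping_threshold()
print("hypothesis holds")
```

When such a test fails, the team has learned something: either the system or the hypothesis was wrong, and both outcomes feed back into the cycle.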
Finally, we arrive at the phase where the bulk of the tests are executed. Tests should be automated as much as possible. This helps to get faster feedback and frees the team from repetitious work.
When tests are automated they are often run after each build, and you can question whether the concept of learning by testing still holds. What can one learn from a test that is repeated every day and always returns the same result? In a future post I will dive a little deeper into how to get more value from repetitive tests.
After all tests have been executed it is a good moment to update your (code) review checklists and risk taxonomies. They can provide valuable input for the next testing cycle.
The edge cases
The diagram above suggests a nice clean cyclic process, but there are many cases where the boundaries are fuzzy and phases overlap. The industry trend towards continuous deployment minimizes testing activities to the point that all testing is automated and changes to the code are pushed to production in seconds. In these cases, the testing cycle is not a separate step between development and deployment to production, but happens in parallel.
You can recognize it in the Netflix approach of forming hypotheses about the steady-state behavior of the system. They conduct experiments with the Chaos Monkey, simulating real-world incidents like high loads and server crashes. The experiments must answer whether the resulting offsets from the steady state are acceptable. For a good summary of their approach, see this post about the discipline of chaos engineering, or for an even shorter summary, this post about the principles of chaos engineering.
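The shape of such an experiment can be sketched without any chaos tooling. In this toy version (all names and numbers are illustrative, not Netflix’s actual method), the steady state is defined as the success rate of requests, failures are injected at a small rate, and the hypothesis is that the offset from the steady state stays within a tolerance:

```python
import random

random.seed(42)  # fixed seed so the toy experiment is repeatable

def request(failure_rate=0.0):
    """Hypothetical call: True on success; failures injected at `failure_rate`."""
    return random.random() >= failure_rate

def success_rate(n, failure_rate):
    return sum(request(failure_rate) for _ in range(n)) / n

baseline = success_rate(1000, failure_rate=0.0)     # steady state, no chaos
experiment = success_rate(1000, failure_rate=0.02)  # chaos: ~2% of calls fail

# Hypothesis: the offset from the steady state stays within a 5% tolerance.
offset = baseline - experiment
print(f"baseline={baseline:.3f} experiment={experiment:.3f} offset={offset:.3f}")
assert offset < 0.05, "steady-state hypothesis rejected"
```

A real chaos experiment would inject the failures into production infrastructure and read the steady-state metric from monitoring, but the learning loop is the same: state a hypothesis, disturb the system, check the offset.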
Other techniques embraced by the continuous deployment movement are release toggles and A/B testing. In effect, these shift the execution of the tests from the team to the customer.
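A release toggle can be as simple as deterministic bucketing on a user id, so each user consistently sees either the old or the new behavior while the team watches the metrics. A minimal sketch, with a hypothetical toggle name and rollout percentage:

```python
import hashlib

TOGGLES = {"new-checkout": 0.10}  # hypothetical: roll out to ~10% of users

def is_enabled(toggle, user_id):
    """Deterministic bucketing: the same user always lands in the same group."""
    digest = hashlib.sha256(f"{toggle}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") / 0xFFFF  # map hash to [0, 1]
    return bucket < TOGGLES[toggle]

users = [f"user-{i}" for i in range(1000)]
exposed = sum(is_enabled("new-checkout", u) for u in users)
print(f"{exposed} of {len(users)} users see the new checkout")
```

Hashing rather than random assignment matters here: it keeps the experiment stable across sessions, which is what makes the customer-facing results interpretable.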