The testing cycle

A thought occurred to me when I was working on some tests: ‘If Kolb’s Learning Cycle helps me to transfer knowledge about new concepts to the developers in the team, can it also help me to learn about the robustness, performance and correctness of our code?’ Testing can be viewed as a learning process. The main objective of testing is not to find bugs in the code, but to collect knowledge about the quality of the system and to determine whether the system is good enough to move to production.

I expected that I would not be the first to make this connection, so I did a quick search and found a nice post by Beren Van Daele about Kolb’s Testing Cycle. He writes that ‘Testing and learning have virtually the same process.’ and I firmly agree with that. I would like to add that testing is also about learning by the whole team or organisation.

My translation of the phases in Kolb’s Learning Cycle applied to testing is something like this:

[Diagram: the testing cycle]

Explore

In this phase the activities are focused on getting a feel for the system under test. This can mean doing some exploratory testing, clicking through the system, taking notes and collecting metrics and logs. It can also mean executing load tests to find out how the system behaves under different loads.
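
As an illustration, a minimal load-test sketch in Python; the URL and the load parameters are made up for the example and would come from your own system:

# Minimal load-test sketch: fire concurrent requests at an endpoint and record
# response times. The URL and the load parameters are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/orders"   # hypothetical system under test
REQUESTS = 200                       # number of requests to send
CONCURRENCY = 20                     # number of parallel workers

def timed_request(_):
    start = time.perf_counter()
    try:
        with urlopen(URL, timeout=10) as response:
            response.read()
            ok = 200 <= response.status < 300
    except Exception:
        ok = False
    return time.perf_counter() - start, ok

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(timed_request, range(REQUESTS)))

durations = sorted(duration for duration, _ in results)
errors = sum(1 for _, ok in results if not ok)
print(f"median: {durations[len(durations) // 2]:.3f}s, "
      f"p95: {durations[int(len(durations) * 0.95)]:.3f}s, errors: {errors}")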

It can help to have a taxonomy of errors and risks to overcome forms of bias and to broaden the range of possible areas to explore. Input from the developers is useful, since they know which parts of the system saw the most pervasive changes.

Reflect

The reflection stage is important to determine the priorities for the next stages. The facts collected in the exploration stage must be examined for deviations from the expectations and for areas where there are concerns about quality. Questions that can arise are:

  • Are there parts where operation is complicated or usability is low?
  • Are there concerns about stability or performance? For the whole system or only parts of it?
  • Is the software trustworthy?
  • What will be the impact of failures?

The whole team should take part in the analysis. Developers can help explain patterns that were noticed while experimenting with the system.

Design

The next step is to create models and hypotheses about the system under test. In this phase systematic tests are designed (and if possible scripted or automated). At this stage, it must be possible to define expectations and form hypotheses about the behavior of the system in different circumstances. This is the area most teams are familiar with: a large part will be the design of requirement-based tests.
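
For example, a hypothesis about expected behavior can be written down directly as a requirement-based test. A minimal pytest-style sketch; the place_order function and its rules are invented for the example:

# Requirement-based test sketch: each test states a hypothesis about the
# expected behavior. place_order and its rules are hypothetical stand-ins
# for a call into the system under test.
import pytest

def place_order(quantity):
    """Placeholder for the system under test."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return {"status": "accepted", "quantity": quantity}

def test_valid_order_is_accepted():
    # Hypothesis: a positive quantity leads to an accepted order.
    assert place_order(3)["status"] == "accepted"

def test_non_positive_quantity_is_rejected():
    # Hypothesis: a non-positive quantity is rejected with an error.
    with pytest.raises(ValueError):
        place_order(0)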

Execute

Finally, we arrive at the phase where the bulk of the tests are executed. Tests should be automated as much as possible. This helps to get faster feedback and frees the team from repetitious work.

When tests are automated, they are often run after each build, and you can question whether the concept of learning by doing tests still holds. What can one learn from a test that is repeated every day and always returns the same result? In a future post I will dive a little deeper into how to get more value from repetitive tests.

After all tests are executed, it is a good moment to update your (code) review checklists and risk taxonomies. They provide valuable input for the next testing cycle.

The edge cases

The diagram above suggests a nice clean cyclic process, but there are a lot of cases where the boundaries are fuzzy and phases overlap. The industry trend towards continuous deployment minimizes testing activities to the point that all testing is automated and changes to the code are pushed to production in seconds. In these cases, the testing cycle is not a separate step between development and deployment to production, but happens in parallel.

You can recognize it in the Netflix approach of forming hypotheses about the steady-state behavior of the system. They conduct experiments with the chaos monkey, simulating real-world incidents like high loads and server crashes. The experiments must answer whether the resulting offsets from the steady state are acceptable. For a good summary of their approach see this post about the discipline of chaos engineering, or for an even shorter summary this post about the principles of chaos engineering.
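
The steady-state hypothesis can be made concrete as a check on an observed metric before and during the experiment. A sketch of that idea; the metric source and the threshold are my own assumptions, not Netflix tooling:

# Steady-state check sketch: compare an observed metric with the hypothesised
# steady state while a failure is injected. error_rate() and the threshold are
# placeholders for a real monitoring source.
STEADY_STATE_MAX_ERROR_RATE = 0.01   # hypothesis: errors stay below 1%

def error_rate() -> float:
    """Placeholder: would read the current error rate from monitoring."""
    return 0.002

def steady_state_holds() -> bool:
    observed = error_rate()
    print(f"observed error rate: {observed:.3%}")
    return observed <= STEADY_STATE_MAX_ERROR_RATE

# Before the experiment: confirm the hypothesis holds under normal conditions.
assert steady_state_holds(), "system is not in a steady state to begin with"

# ... inject the failure here (for example, terminate an instance) ...

# During and after the experiment: is the offset from the steady state acceptable?
if not steady_state_holds():
    print("hypothesis rejected: the failure pushed the system out of its steady state")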

Other techniques embraced by the continuous deployment movement are the use of release toggles and A/B testing. In effect, this shifts the execution of the tests from the team to the customer.
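
A release toggle can be as small as routing a configurable share of users to the new code path while the rest stays on the old one. A minimal sketch; the toggle value and both code paths are invented for the example:

# Release-toggle sketch: route a configurable share of users to the new
# implementation. The rollout percentage and the two code paths are invented.
import hashlib

NEW_CHECKOUT_ROLLOUT = 0.10   # hypothetical toggle: 10% of users

def bucket(user_id: str) -> float:
    """Map a user id to a stable value in [0, 1) so the split stays consistent."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest[:8], 16) / 16**8

def checkout(user_id: str) -> str:
    if bucket(user_id) < NEW_CHECKOUT_ROLLOUT:
        return "new checkout flow"   # variant under test
    return "old checkout flow"       # current behavior

print(checkout("user-42"))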

Static Code Analysis for WSO2

Doing static code analysis is a good practice. It has helped me to create more robust and maintainable code, and therefore it is part of my regular routine when writing code. However, in the last few weeks I was not able to keep up that routine because I was working on the service bus parts. Although they are kept in XML files, the mediation sequences on the WSO2 service bus are code, just like the C# code for the services and APIs and the JavaScript code on the client.

A static code analysis tool for the WSO2/synapse files would have some important benefits:

  • It is much easier to check if the project/naming conventions are followed (that’s important to keep the code maintainable).
  • Since it can scan all code, even code that is rarely executed, it makes it easier to detect areas with code quality issues.
  • It helps to identify design issues like too complex sequences.
  • Code quality issues will be found earlier.

I searched the web for existing code analysis tools, but didn’t find any, so I decided to do a proof of concept. I created a small tool that scans a folder structure. All rules to check are hardcoded – no configuration options. The plain text output looks like this:

CancelOrders.xml: Warning: artifact name different from filename
OrderEntry.xml: Warning: Unexpected mediator. Drop, Loopback, Respond or Send 
should be the last mediator in a sequence
error.xml: Warning: filename should end with '.sequence'
prj: Warning: artifact CancelOrder not specified in artifact.xml
0 errors, 4 warnings.
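
To illustrate the approach, a sketch of the first check in Python; it assumes the artifacts are plain Synapse XML files with a name attribute on the root element, and it is not the actual tool:

# Sketch of the "artifact name different from filename" check: walk a folder
# structure, parse every Synapse XML file and compare the name attribute of
# the root element with the filename. Illustration only, not the actual tool.
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

def check_artifact_names(root_folder: str) -> int:
    warnings = 0
    for path in Path(root_folder).rglob("*.xml"):
        try:
            artifact_name = ET.parse(path).getroot().get("name")
        except ET.ParseError:
            print(f"{path.name}: Error: not well-formed XML")
            continue
        if artifact_name and artifact_name != path.stem:
            print(f"{path.name}: Warning: artifact name different from filename")
            warnings += 1
    return warnings

if __name__ == "__main__":
    count = check_artifact_names(sys.argv[1] if len(sys.argv) > 1 else ".")
    print(f"{count} warnings.")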

The rules implemented at the moment are a combination of the project naming conventions and some best practices as described here. This first version already helps in keeping the code base clean, but there is still a lot left to do, like:

  • detecting unused properties.
  • detecting when messages are sent to a JMS queue without specifying the transport as OUT_ONLY (a possible shape for this rule is sketched after the list).
  • applying the testability checklist to the WSO2 code
  • calculating code metrics
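
For the JMS rule in that list, a possible shape could look like the sketch below. It assumes the usual Synapse layout where a send to a jms: endpoint address should be preceded by an OUT_ONLY property; the XML handling is simplified and this is not part of the actual tool:

# Sketch of the JMS rule: warn when a sequence sends to a jms: endpoint
# without setting the OUT_ONLY property first. The Synapse XML structure is
# assumed and simplified.
import xml.etree.ElementTree as ET

SYNAPSE_NS = "{http://ws.apache.org/ns/synapse}"

def check_jms_out_only(path: str) -> None:
    out_only_set = False
    # Walk the mediators in document order.
    for element in ET.parse(path).getroot().iter():
        if element.tag == SYNAPSE_NS + "property" and element.get("name") == "OUT_ONLY":
            out_only_set = element.get("value", "").lower() == "true"
        elif element.tag == SYNAPSE_NS + "address":
            uri = element.get("uri", "")
            if uri.startswith("jms:") and not out_only_set:
                print(f"{path}: Warning: message sent to a JMS queue without OUT_ONLY")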