The testing cycle

A thought occurred to me when I was working on some tests: ‘If Kolb’s Learning Cycle helps me to transfer knowledge about new concepts to the developers in the team, can it also help me to learn about the robustness, performance and correctness of our code?’. Testing can be viewed as a learning process. The main objective of testing is not to find bugs in the code, but to collect knowledge about the quality of the system and to determine whether the system is good enough to move to production.

I expected that I would not be the first to make this connection, so I did a quick search and found a nice post by Beren Van Daele about Kolb’s Testing Cycle. He writes that ‘Testing and learning have virtually the same process.’ and I firmly agree with that. I would like to add that testing is also about learning by the whole team or organisation.

My translation of the phases in Kolb’s Learning Cycle applied to testing is something like this:



Concrete experience (explore)

In this phase the activities are focused on getting a feel for the system under test. This can mean doing some exploratory testing, clicking through the system, taking notes and collecting metrics and logs. It can also mean executing load tests to find out how the system behaves under different loads.

It can help to have a taxonomy of errors and risks to overcome forms of bias and broaden the range of possible areas to explore. Input from the developers is useful, since they will know the parts of the system where the most pervasive changes were made.


Reflective observation (reflect)

The reflection stage is important to determine the priorities for the next stages. The facts collected in the exploration stage must be examined for deviations from the expectations and for areas where there are concerns about quality. Questions that can arise are:

  • Are there parts where operation is complicated or usability is low?
  • Are there concerns about stability or performance? For the whole system or only parts of it?
  • Is the software trustworthy?
  • What will be the impact of failures?

The whole team should take part in the analysis. Developers can help explain patterns that were noticed while experimenting with the system.


Abstract conceptualization (model)

The next step is to create models and hypotheses about the system under test. In this phase systematic tests are designed (and, if possible, scripted or automated). At this stage, it must be possible to define expectations and form hypotheses about the behavior of the system in different circumstances. This is the area most teams are familiar with: a large part will be the design of requirement-based tests.


Active experimentation (execute)

Finally, we end at the phase where the bulk of the tests are executed. Tests should be automated as much as possible. This helps to get faster feedback and frees the team from repetitious work.

When tests are automated they are often run after each build, and you can question whether the concept of learning by doing tests still holds. What can one learn from a test that is repeated every day and always returns the same result? In a future post I will dive a little deeper into how to get more value from repetitive tests.

After all tests are executed it is a good moment to update your (code) review checklists and risk taxonomies. They can add valuable input to the next testing cycle.

The edge cases

Kolb’s cycle suggests a nice clean cyclic process, but there are a lot of cases where the boundaries are fuzzy and phases overlap. The industry trend to move to continuous deployment minimizes testing activities to a point where all testing is automated and changes to the code are pushed to production in seconds. In these cases, the testing cycle is not a separate step between development and deployment to production, but happens in parallel.

You can recognize it in the Netflix approach of forming hypotheses about the steady-state behavior of the system. They conduct experiments with the chaos monkey, simulating real-world incidents like high loads and server crashes. The experiments must answer whether the resulting deviations from the steady state are acceptable. For a good summary of their approach see this post about the discipline of chaos engineering or, for an even shorter summary, this post about the principles of chaos engineering.

Other techniques embraced by the continuous deployment movement are the use of release toggles and A/B testing. In effect this shifts the execution of the tests from the team to the customer.


Static Code Analysis for WSO2

Doing static code analysis is a good practice. It has helped me to create more robust and maintainable code and therefore it is part of my regular routine when writing code. However, in the last few weeks I was not able to keep up that routine because I was working on the service bus parts. Although they are kept in XML files, the mediation sequences on the WSO2 service bus are code, just like the C# code for the services and APIs and the JavaScript code on the client.

A static code analysis tool for the WSO2/synapse files would have some important benefits:

  • It is much easier to check if the project/naming conventions are followed (that’s important to keep the code maintainable).
  • Since it can scan all code – even the code that’s rarely executed – it makes it easier to detect areas with code quality issues.
  • It helps to identify design issues like overly complex sequences.
  • Code quality issues will be found earlier.

I searched the web for existing code analysis tools, but didn’t find any, so I decided to do a proof of concept. I created a small tool that scans a folder structure. All rules to check are hardcoded – no configuration options. The plain text output looks like this:

CancelOrders.xml: Warning: artifact name different from filename
OrderEntry.xml: Warning: Unexpected mediator. Drop, Loopback, Respond or Send 
should be the last mediator in a sequence
error.xml: Warning: filename should end with '.sequence'
prj: Warning: artifact CancelOrder not specified in artifact.xml
0 errors, 4 warnings.
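To give an impression of how little code such a hardcoded rule takes: the first warning above boils down to comparing the name inside the file with the filename. This is only a sketch of the idea, not the actual tool; the folder variable and the assumption that the artifact name is stored in the ‘name’ attribute of the root element are mine:

// Sketch of one hardcoded rule: the artifact name should match the filename.
// Assumes the name is kept in the 'name' attribute of the root element.
foreach (var file in Directory.EnumerateFiles(folder, "*.xml", SearchOption.AllDirectories))
{
    var root = XDocument.Load(file).Root;
    var artifactName = (string)root.Attribute("name");
    if (artifactName != null && artifactName != Path.GetFileNameWithoutExtension(file))
        Console.WriteLine("{0}: Warning: artifact name different from filename",
            Path.GetFileName(file));
}

Because the rules are plain C# code working on Linq-to-XML documents, adding a new rule is just adding another check in the loop.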

The rules implemented at this moment are a combination of the project naming conventions and some best practices as described here. This first version already helps in keeping the code base clean, but there is still a lot left to do, like:

  • detecting unused properties.
  • detecting when messages are sent to a JMS queue without specifying the transport as OUT_ONLY.
  • applying the testability checklist to the WSO2 code
  • calculating code metrics


Design for testability

Even in the agile world, testing is important to assure that the delivered software meets its expectations. On agile teams, testing provides the necessary feedback to move the work items to ‘Done’, but there is less time to prepare, execute and report than in a traditional development approach. Therefore, testing is a whole-team effort – not only the dedicated testers do the work, but also the developers and functional analysts. Since the team cannot deliver without having tested the software, the software must be testable.

As shown in this data, the defect detection rate of a single testing approach is limited. In most cases, unit testing alone will not uncover more than 30% of the errors introduced. This means manual tests, integration tests and exploratory testing are required as well. To cope with this, software must be designed for testability on all levels and in all contexts. For software designed for testability, testing takes less effort and the feedback comes faster.

The most important aspects for testability can be summarized by the acronym SOCKS. It stands for: Simplicity, Observability, Controllability, Knowledge and Stability.

  • Simplicity is an important quality because complex software is hard to test. When the system is less complex, setting up the test, executing it and interpreting the results is easier.
  • Observability means that the internal state and the results of the executed algorithms can be inspected. The tester must have access to the UI, reports, log files and diagnostics. Other ways of improving the observability are supplying a custom API or a dedicated test UI – without compromising the security of the system.
  • Controllability determines the extent to which the system can be put in a desired state. When testing, you do not want to execute a lot of (manual) steps to reach the initial state for the function under test. It is better to have an option to import the data, or to manipulate the data directly to do an isolated test. For test automation it is easier to use an API than having to manipulate (script and play back) the user interface.
  • Knowledge of the system under test and the technology used makes testing more effective. Good traceability from the requirements to the implemented code helps to select the essential parts of the software to test. Good documentation makes it easier to set up the tests and interpret the results.
  • Stability is required to prevent having to test the same components over and over again. Code following the SOLID principles will have a positive effect on the stability of the code. A second reason why stability is important is that errors in one component can propagate to another component, making it impossible to test. To increase testability, the design of the code could introduce bulkheads to confine errors to the component where they occurred.

Test phases

During the development of the system different types of tests must be executed. The design of the system can support the different phases in the test process. This is especially important when the system under test must connect with external systems. In these cases I would apply the adapter or facade pattern to the design and set up the connection to the external system through configuration rather than code. In the early developer and unit tests, mocks or stubs can be used. When moving to the integration test phase, the stub can be replaced by an actual test system, and finally the test is executed against the test environment on the target platform.

Note that you will have to think about observability in all these contexts. When doing a development-stage white box test, monitoring the state of the application is often straightforward, but moving to an actual external system makes inspecting the results much more complicated. It can be necessary to add fields to the UI or write identity information to the log files to be able to look up the results in the external system (for instance when you cannot use your own technical key to find the data you sent to the external system).
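A sketch of this setup (the interface and class names are made up for illustration): the calling code only knows an adapter interface, and the concrete implementation – stub or real – is selected through configuration instead of code.

public interface IOrderSystem
{
    XElement SubmitOrder(XElement order);
}

// Used in developer and unit tests.
public class OrderSystemStub : IOrderSystem
{
    public XElement SubmitOrder(XElement order)
    {
        return new XElement("ack", "ok");
    }
}

// Wraps the real external system behind the same interface.
public class OrderSystemAdapter : IOrderSystem
{
    public XElement SubmitOrder(XElement order)
    {
        // ... call the actual web service of the external system ...
        throw new NotImplementedException();
    }
}

// The concrete type is chosen through configuration, not hardcoded:
var typeName = ConfigurationManager.AppSettings["OrderSystemType"];
IOrderSystem orderSystem = (IOrderSystem)Activator.CreateInstance(Type.GetType(typeName));

Switching from the stub to the real adapter when moving to the integration test phase is then just a change in the configuration file.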

Designing for testability means different things for each phase in testing. For unit tests and developer tests the main focus will be on the design of the code. The way the code is structured can have a great impact on how well the code can be unit tested. Testability is increased by preventing anti-patterns like non-deterministic code, methods with side effects and the use of singletons, and by using patterns like Dependency Injection and Inversion of Control. A detailed discussion can be found here.
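A small (hypothetical) example of the difference: a method that reads the system clock directly is non-deterministic from a test’s point of view, while injecting the clock makes the same logic trivial to test.

// Hard to test: the current time cannot be controlled from a test.
public bool IsExpired()
{
    return DateTime.Now > this.ExpirationDate;
}

// Testable: the clock is injected, so a test can supply a fixed time.
public class Order
{
    private readonly Func<DateTime> clock;

    public Order(Func<DateTime> clock)
    {
        this.clock = clock;
    }

    public DateTime ExpirationDate { get; set; }

    public bool IsExpired()
    {
        return clock() > this.ExpirationDate;
    }
}

A test can now construct the Order with () => new DateTime(2020, 1, 1) and assert both outcomes deterministically.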

For testability in the integration and acceptance test phases, higher level design decisions are needed. Which components and APIs are defined at the architecture level can have a major impact on the testability.

Testability Checklist

Architecture & Design

  • Are export and import capabilities planned?
  • Can the complete system be restored from a backup set?
  • Can external dependencies be mocked?
  • Is there an API to inspect the internal state of the application?
  • Is there a standard test interface?
  • Are multiple installs with different configurations on the same machine possible?
  • Is installation/uninstallation scriptable?
  • Is the system partitioned in different compartments that prevent errors from propagating?
  • Is documentation available describing the components and their interaction?
  • Is there a logging framework to facilitate logging from the application?
  • Are control/observation points available where testing code can be injected?
  • Are built-in tests available?
  • Can the component receive test messages?
  • Are performance and usage metrics collected?
  • Are there options to provoke all exception conditions?
  • Is there distinct output per process/input (can all output be correlated to the input)?


Code

  • Are dependencies injected instead of using the service locator or singleton pattern?
  • Do pages/objects have unique names (is the system automation friendly)?
  • If methods alter data that is not an input argument, is the data returned as a function result?
  • Are all state transitions logged?
  • Are there useful comments in the code?
  • Is coupling to concrete implementations prevented?
  • Are external interfaces wrapped using the adapter or mediator pattern?
  • Is the API documentation in the source code (and published to service consumers)?


Using Web API for SOAP web services

The usual way to implement a SOAP based web service is to use WCF. When you are in charge of creating the contract, it is rather straightforward: create some data contracts, a service contract and some operations, and then create a class that implements the service. WCF will give you all the options you need to host the service in IIS, to support different transport protocols and to secure the service. So, why would you use Web API to implement a SOAP web service?

Actually, I made an implementation just out of curiosity, but it was triggered by a real issue with WCF services. Note that creating a service is easy when you are in charge of the contract. Things change when you have to comply with a standard that is created by another party or committee. In many cases individual suppliers only implement a small part of large standardized service contracts, and those contracts can contain constructions that are not processed very well by SvcUtil and other tools that create C# classes from the WSDL and XSD files. As an example: the ZKN0310 standard of KING (an organization related to the Dutch government) results in a generated file of more than 100,000 lines of code, and when you create objects with the generated code and serialize them to XML, the generated XML will not validate against the original XSD.

In this situation, it seemed more effective to manipulate the XML directly in my own code. In the past, I had built some REST-like services that used Linq-to-XML to process XML-based input. As the SOAP envelope is just an additional XML layer around the message, creating some basic SOAP support is not that difficult.

The SOAP mediatypeFormatter

Implementing a web service should not require you to manipulate the SOAP envelope in the actual methods. In order to accomplish this, a custom media type formatter can take care of stripping the SOAP envelope from the request and adding it to the response. A custom media type formatter is just a class that inherits from MediaTypeFormatter or BufferedMediaTypeFormatter:

public class SoapFormatter : MediaTypeFormatter
{
        private XmlMediaTypeFormatter wrappedXmlFormatter
            = new XmlMediaTypeFormatter();

        public SoapFormatter()
        {
            // Add the supported media type.
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/xml"));
        }

At the moment, the formatter only supports SOAP 1.1 messages. Support for SOAP 1.2 would at least require support for the “application/soap+xml” content type. The custom formatter is just a wrapper around the standard XmlMediaTypeFormatter to minimize the work needed.

The formatter must inform the Web API framework what types of objects it can serialize and deserialize by overriding the CanWriteType and CanReadType methods:

public override bool CanWriteType(System.Type type)
{
        if (type == typeof(XElement))
            return true;
        return false;
}

public override bool CanReadType(Type type)
{
        if (type == typeof(XElement))
            return true;
        return false;
}

The last step in implementing the formatter is to supply the methods for the actual manipulation of the SOAP envelope.

public override Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext)
{
       XNamespace soapenv = "http://schemas.xmlsoap.org/soap/envelope/";

       XElement antwoord = new XElement(soapenv + "Envelope",
                new XElement(soapenv + "Body", value));

       return wrappedXmlFormatter.WriteToStreamAsync(type, antwoord, writeStream, content, null);
}

public async override Task<object> ReadFromStreamAsync(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger)
{
       XNamespace soapenv = "http://schemas.xmlsoap.org/soap/envelope/";

       object input = await wrappedXmlFormatter.ReadFromStreamAsync(type, readStream, content, formatterLogger);
       XElement inputElement = input as XElement;

       XElement vraagBericht = inputElement.Element(soapenv + "Body").Elements().FirstOrDefault();
       return vraagBericht;
}

Routing on SOAP action

As a SOAP web service can provide more than one method on the same resource, a customization of the routing is required. I found out this is surprisingly easy to do using a custom ApiControllerActionSelector that reads the SOAP action from the HTTP headers and sets the corresponding action in the RouteData dictionary.

public class SoapActionSelector : ApiControllerActionSelector
{
	public override HttpActionDescriptor SelectAction(HttpControllerContext controllerContext)
	{
		if (controllerContext.Request.Headers.Contains("SOAPAction"))
		{
			var matchingHeaders = controllerContext.Request.Headers.GetValues("SOAPAction");
			var headerValue = (matchingHeaders == null) ? "" : (matchingHeaders.FirstOrDefault() ?? "");
			if (!string.IsNullOrEmpty(headerValue))
			{
				// Strip the namespace and double quotes from the soap action
				int nsSplit = headerValue.LastIndexOf("/");
				if (nsSplit >= 0)
					headerValue = headerValue.Substring(nsSplit + 1);
				headerValue = headerValue.Trim("\"".ToCharArray());

				// Set the new action
				controllerContext.RouteData.Values["action"] = headerValue;
			}
		}
		return base.SelectAction(controllerContext);
	}
}

The default route in Web API does not include the action. This means that a route must be added to the WebApiConfig:

config.Routes.MapHttpRoute(
	name: "SoapApi",
	routeTemplate: "{controller}/{action}",
	defaults: new { controller = "Services" }
);

Decorating the controller

In order to create a Web API controller that is able to handle SOAP requests, one final step is required. The controller must be configured to use the new media type formatter and the SoapActionSelector. Web API offers a nice extension point to do this with the IControllerConfiguration interface. In this case, I created a ‘SoapControllerConfiguration’ class with only an ‘Initialize’ method. In this method, I replace the standard action selector and register the custom media type formatter I created earlier.

public class SoapControllerConfiguration : Attribute, IControllerConfiguration
{
	public void Initialize(HttpControllerSettings controllerSettings, HttpControllerDescriptor controllerDescriptor)
	{
		controllerSettings.Services.Replace(typeof(IHttpActionSelector), new SoapActionSelector());
		controllerSettings.Formatters.Add(new SoapFormatter());
	}
}

The only thing needed to turn a controller into a SOAP service is to apply the SoapControllerConfiguration attribute to the controller.

[SoapControllerConfiguration]
public class ProjectServiceController : ApiController
{
	public XElement CreateProject(XElement createRequest)
	{
		XNamespace ns = "http://someHugeNamespace/project";

		string projectName = "unknown";
		var nameElement = createRequest.Element(ns + "ProjectName");
		if (nameElement != null)
			projectName = nameElement.Value;

		// ... 

		XElement response = new XElement(ns + "acknowledge",
			new XElement(ns + "Success", true),
			new XElement(ns + "Message", projectName + " created"));
		return response;
	}
}


Should I use this in a production situation? No. This implementation is far too limited. The better way to implement a SOAP service without using generated proxy classes is to use WCF. A simple service contract with operations that take an XElement as input parameter and produce an XElement as output will do the trick as well.

For me, the good thing about this exercise was that I learned some new things about Web API that will be usable in other situations as well:

  • Custom media type formatters can help to move repeating work out of the controller methods.
  • The routing options are not limited to the items in the query string. Custom action selectors can lead to much cleaner controller methods.

jQuery AJAX (and the shorthands)

Using AJAX (asynchronous JavaScript and XML) in your ASP.NET MVC application is made relatively easy by the jQuery .ajax method or one of the shorthand methods available. I found a nice introduction on jQuery Fundamentals which explains the concepts I used as the basis for my own experiments with MVC5 and AJAX.

$.ajax: basic usage

The $.ajax method takes a configuration object specifying things like the URL, the HTTP verb to use, the format, and the callback functions for the success and error situations. A straightforward way to use it looks like this:

@section Scripts {
    <script type="text/javascript">

        function addFav() {
            $.ajax({
                url: '@Url.Action("AddFavorite", "Test", new { id = Model.Id })',
                type: 'POST',
                dataType: 'json',
                success: function (resp) {
                    $('#notificationBox').text(resp);
                },
                error: function (jqXHR, status, err) {
                    errMsg = jQuery.parseJSON(jqXHR.responseText);
                    console.log('Oops: ', status, errMsg);
                }
            });
        }

        $(function () {
            $('#favMe').on('click', addFav);
        });
    </script>
}

Somewhere on the page, there is an element with id ‘favMe’ and another element with id ‘notificationBox’. This code will fire the addFav method when the user clicks the element with id ‘favMe’. When the AJAX call succeeds, the message that is returned from the server is written to the ‘notificationBox’ element.

The server side code looks like this:

public JsonResult AddFavorite(int id)
{
         if (!User.Identity.IsAuthenticated)
         {
                Response.StatusCode = (int)HttpStatusCode.BadRequest;
                return Json("You must be logged in to add a favorite!");
         }

         var fav = new Favorite() { UserName = User.Identity.Name, Id = id };

         // ... validate id and persist changes ...

         return Json("Added to favorites", JsonRequestBehavior.AllowGet);
}

Returning an HTTP status code like 400 (Bad Request) or 500 (Internal Server Error) will result in the error callback being called on the client side. Just throwing an exception from the action method will send an internal server error back to the client. By specifying the HTTP status code yourself you have more control over the data being returned to the client. One note though: a status code of 401 (Unauthorized) does not result in the error callback being called. Instead, the success callback is called.

Shorthand Methods

jQuery has a number of shorthand methods for the $.ajax method that can result in less JavaScript code:

jQuery.get() loads data from the server using an HTTP GET request. jQuery.post() loads data from the server using an HTTP POST request. The above code could also be written using the post method:

function addFav() {
     $.post('@Url.Action("AddFavorite", "Test", new { id = Model.Id })',
                function (resp) {
                    $('#notificationBox').text(resp);
                });
}

jQuery.getJSON() loads JSON-encoded data from the server using an HTTP GET request.
jQuery.getScript() loads JavaScript from the server using an HTTP GET request and executes it.

The shorthand methods are all of the form jQuery.methodname (or $.methodname). There is also a shorthand that you can apply to an element to load new data:

.load() loads data from the server and places the returned HTML into the matched element. This can be used in combination with a controller method that returns a partial view.
$('#data').load('@Url.Action("SomeData", "Test", new { id = Model.Id })');
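On the server side this can be matched by a controller method that returns a partial view. A minimal sketch (the view name and repository are hypothetical):

public PartialViewResult SomeData(int id)
{
	var model = repository.GetById(id); // hypothetical data access
	return PartialView("_SomeData", model);
}

The rendered HTML of the ‘_SomeData’ partial view is what .load() places inside the ‘#data’ element.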


Cross-domain requests

For security reasons many browsers block requests to other domains. For this reason jQuery offers support for the JSONP (‘JSON with Padding’ or ‘JSON with Prefix’) protocol.

Another option is to use CORS (Cross-Origin Resource Sharing), but this is not supported by older browsers. It also requires some work on the server side, either by adding a header programmatically:

Response.AppendHeader("Access-Control-Allow-Origin", "*");

or, by adding some configuration to the web.config:


<httpProtocol>
  <customHeaders>
    <add name="Access-Control-Allow-Origin" value="*" />
  </customHeaders>
</httpProtocol>


When using JSONP or CORS to post to another domain you should be very careful as this introduces a security risk to the application.

Tracing in MVC5

While studying for the Microsoft 70-486 certification I noticed there are two ways you can use tracing in your controller methods. The first one is by using the Trace class:

Trace.TraceInformation("View called for id = {0}", id);

When you have configured a TextWriterTraceListener, this will write a line to the log file containing the name of the executing process (in my case ‘iisexpress’), the switch value and the line itself.

The other way is to call a method on the TraceSource class. This gives you the opportunity to supply a name for the logger and create a dedicated configuration. The code for using a TraceSource looks like this:

TraceSource trace = new TraceSource(this.GetType().Name);
trace.TraceInformation("Index action called");

For completeness, here is the configuration as well:

<system.diagnostics>
  <sources>
    <source name="ArtworkController" switchType="System.Diagnostics.SourceSwitch" switchValue="Information">
      <listeners>
        <add name="textListener" />
      </listeners>
    </source>
  </sources>
  <sharedListeners>
    <add name="textListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="D:\Projects\Logs\trace.log" traceOutputOptions="Timestamp"/>
  </sharedListeners>
  <trace autoflush="true">
    <listeners>
      <add name="textListener" />
    </listeners>
  </trace>
</system.diagnostics>

The TextWriterTraceListener that comes out of the box in .NET does not give many options to format the text. In the above configuration I added a timestamp to the trace output. The TextWriterTraceListener did not add – as I was expecting – the timestamp on the same line, but added another line. For this reason I prefer to use a more mature logging framework like log4net.

When you need a more structured setup, you can also use the DelimitedListTraceListener. With this you can create a csv file with the trace output.
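A listener configuration for this could look like the following sketch (replacing the TextWriterTraceListener above; the delimiter attribute determines the field separator, and the file path is just an example):

<sharedListeners>
  <add name="csvListener"
       type="System.Diagnostics.DelimitedListTraceListener"
       initializeData="D:\Projects\Logs\trace.csv"
       delimiter=";"
       traceOutputOptions="DateTime" />
</sharedListeners>

Each trace call then produces one delimited record, which makes the output easy to import into a spreadsheet or log analysis tool.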

Tracing page information

Note that the old-style web forms tracing is also available. You have to set up some things in the web.config to make it work:

    <trace enabled="true" mostRecent="true" pageOutput="false"/>

When this is configured you can click through the application and then point the browser to http://server/sitename/trace.axd, and a page will be shown containing limited information about the last requests. In a web forms application you could use Trace.Write and the data would be written to the trace.axd result. However, in MVC this will not happen and you will need to use the tracing mechanisms described above.

What’s here?

This is the place for me to publish my notes on developing software. The main purpose of this blog is to structure my thoughts on software development, tools and techniques, but while doing so, others may benefit from it as well.

The title of this blog contains three words that are interrelated when developing software:

  • Build: Building working software is the ultimate goal.
  • Design: Some design is needed in order to be able to build it.
  • Learn: Before you can design or build, you have to learn about the problem domain, the tools and the techniques.

The relations also work the other way around. When building or designing, you learn new things and reinforce the things you have learned.

So, let’s start with it.