You could almost certainly draw your website as a series of boxes that talk to each other. If you zoom all the way in, you would see one method or function talking to another. If you zoom all the way out, you would see something like an e-commerce site using an external payment system, or an intranet collaborating with an external system to achieve single sign-on.
This division into boxes or modules is called modularisation. There are many benefits to modularisation, but there are also risks. As a developer or manager of developers, you need to be aware of these risks and have a plan for how to keep on top of them.
Where Communication Can Break Down Between Modules
If the communication between two of your modules isn't working correctly, it could be because:
- It's one side's fault
- It's the other side's fault
- It's both sides' fault
The general approach is to divide and conquer:
- Test one side on its own
- Test the other side on its own
- Test them together
However, the first two are easier said than done. The two sides form a partnership, like a pair of hands, and testing one side alone is like trying to hear the sound of one hand clapping. The way around this is to fake the other side of the partnership.
How to Diagnose a Broken Connection
Depending on the scale you're working at and which technology you use, different options will be available to you. For testing a single unit of code, such as a C# method or a stored procedure, frameworks like Moq and tSQLt can help. The bigger the modules, the more likely you are to have to write something from scratch. If one side of the interface is a commercial third-party product, it's worth checking whether the vendor offers a fake version of their product for development and testing.
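To make this concrete, here's a sketch in Python using the standard library's unittest.mock in the role a framework like Moq plays for C#. The `Checkout` class and `charge` method are hypothetical names invented for illustration; the point is that the dependency is replaced by a mock, so the unit under test runs in isolation.

```python
from unittest.mock import Mock

# Hypothetical unit under test: it depends on a payment gateway it doesn't own.
class Checkout:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return "confirmed" if result else "declined"

# Fake the other side of the partnership with a mock.
gateway = Mock()
gateway.charge.return_value = True

checkout = Checkout(gateway)
assert checkout.place_order(42.0) == "confirmed"

# Verify the unit called its collaborator correctly.
gateway.charge.assert_called_once_with(42.0)
```

The same shape applies whatever the framework: swap the real collaborator for a configurable stand-in, exercise the real code, then check what it did.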
How to Fake a Connection for Testing
There's more than one thing you could aim for in your fake module:
- The smallest and simplest is just the minimum that lets the not-faked thing work. If A sends stuff to B (A calls a method on B, sends it an HTTP request, etc.), then make a fake bit of B that receives stuff in the correct way, but then throws it on the floor.
- A more advanced fake builds on the minimum by logging what the not-faked thing sent it. That helps at the end of the test, where you can check that the not-faked thing sent the correct things, in the correct order, via the correct channels (method calls, etc.).
- Another advanced version (which can be combined with the logging enhancement above) is for the fake thing to return pre-set responses to the not-faked thing. That can be as simple as returning data that the not-faked thing depends on, all the way up to deliberately returning an error (throwing an exception, etc.). This is a handy way of testing the error-handling code in the not-faked thing, which is where a surprising number of bugs hide.
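The three levels above can be sketched with a hand-rolled fake. This Python example uses invented names (`FakePaymentGateway`, `charge`) purely for illustration:

```python
class FakePaymentGateway:
    """Stands in for the real module B: accepts calls, records what A
    sent it, and replays pre-set responses, including deliberate errors."""

    def __init__(self, responses=None):
        self.calls = []                    # level 2: log what was sent
        self.responses = list(responses or [])

    def charge(self, amount):
        self.calls.append(("charge", amount))
        if not self.responses:
            return None                    # level 1: accept the call, discard it
        response = self.responses.pop(0)
        if isinstance(response, Exception):
            raise response                 # level 3: force the error-handling path
        return response

# Drive the not-faked code against the fake.
fake = FakePaymentGateway(responses=[True, RuntimeError("gateway timeout")])
assert fake.charge(10.0) is True
try:
    fake.charge(20.0)
except RuntimeError:
    pass                                   # the real caller's error handling runs here

# At the end of the test, check what was sent, and in what order.
assert fake.calls == [("charge", 10.0), ("charge", 20.0)]
```

In a real test the calls to `charge` would come from the genuine module A, not from the test script; the fake neither knows nor cares.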
Once you have the two parts tested separately, you have cut down a lot on where you need to hunt for any remaining bugs that surface when you bring the real version of each part together. It could be down to a misunderstanding of the interface, or something subtle, like the correct things being done in the wrong order, or one side taking too long to respond.
The Cost of Not Testing Thoroughly
A fundamental question when you're designing or building some software is: How will I test this? The wrong answer is: I'll let my customers do that for me. Not only does inflicting avoidable bugs on your customers damage your reputation, it's also inefficient and hence costly. The shorter the feedback loop of writing the bug, detecting the bug and fixing the bug, the better. All the thoughts will still be in your head if an automated integration test flushes out your bug the night after you check your code in. If you wait days or weeks until your code gets into the hands of customers, it will take time to think your way back into the code before you can fix it.
"How will I test this?" has interesting answers if you're building something incomplete. For instance, a library rather than a whole executable, or something that just sits there waiting for requests to come in. You'll need to write a test harness (a particular version of fakery) to drive your real code so that you can test it.
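A test harness can be very small. This Python sketch assumes a hypothetical library function `parse_order` that would otherwise just sit there waiting to be called; the harness is the fake caller that drives it:

```python
# Hypothetical library code: it never runs on its own, it waits to be called.
def parse_order(line):
    sku, qty = line.split(",")
    return sku.strip(), int(qty)

# A minimal test harness: the fake "caller" that drives the real code
# through a table of inputs and expected outputs.
def run_harness(cases):
    failures = []
    for line, expected in cases:
        actual = parse_order(line)
        if actual != expected:
            failures.append((line, expected, actual))
    return failures

cases = [
    ("ABC123, 2", ("ABC123", 2)),
    ("XYZ999,10", ("XYZ999", 10)),
]
assert run_harness(cases) == []
```

The harness plays the role the missing executable would play in production: it supplies inputs and observes outputs, nothing more.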
You Need to Leave Time for Testing!
You need to allocate time for writing all these fakes, even though customers will probably never use them. Their value lies in ensuring that the code customers do use is fit for purpose, which, after all, is the whole point of the exercise.