Software development, as a profession, has evolved in fits and starts over the years. When I think back a couple of decades, I find myself a little amazed. During the infancy of the web, hand-coding PHP (or Perl) live on a production machine seemed perfectly fine.
At first blush, that might just seem like sloppiness. But don’t forget that the stakes were much lower at the time. Messing up a site that displayed song lyrics for a few minutes didn’t matter very much. Web developers of the time had much less incentive to establish pre-production verification processes. Just make the changes and see if anything breaks. Anything you don’t catch, your users will.
Of course, that attitude couldn’t survive much beyond the early days of dynamic web content. As soon as e-commerce gained steam in the web development world, the stakes went up. Amateurs walked the tightrope of production edits while professional shops started to create and test in development or sandbox environments.
As I said initially, this didn’t happen as some uniform move. Instead, it happened in fits and starts. Some lagged behind the curve, continuing to rely on their users for testing. Others moved testing into sandbox environments and pushed the envelope further. They began to automate.
Web development then took another step forward as automation worked its way into the testing strategy. Sophisticated shops had their QA environments as a check on production releases. But their developers also began to build automated test suites. They then used these to guard against regressions and to ensure proper application behavior.
Eventually, testing matured to a point where it spread out beyond straightforward unit test suites and record-playback-style integration tests. Organizations got to know the so-called test pyramid. They built increasingly sophisticated, nuanced test suites.
Building upon all of this backstory, we’ve seen the rise of the DevOps movement in recent years. This movement emphasizes automating the entire delivery pipeline, from writing the code to running it in production. So the stakes for automated testing are higher than ever. The only way to automate the whole thing is to have bulletproof verification.
This new dynamic shines a light on an oft-ignored element of the testing strategy. I’m talking specifically about performance testing for your web application. Automated unit and acceptance testing have long since become a de facto standard. But now automated performance testing is getting to that point.
Think about it. We got burned by hand-editing code on the production server. So we set up sandboxes and tested manually. Our applications grew too complex for manual testing to handle. So we built test suites and automated those checks. We needed production rollouts more frequently, so we automated the deployment process. Now we push code efficiently through build, test, and deployment. But we don’t know how it will behave in the wild.
Web application performance testing fixes that. If you don’t yet have such a strategy, you need one. Let’s take a look at the fundamentals for adding this to your testing approach. And I’ll keep this general enough to apply to your tech stack, whatever it may be.
First up, you have some homework. You need to figure out what sorts of conditions your application will actually face in production. Obviously, you’ll have an easier time of this if you have already released. But in either case, figure out what normal conditions and peak conditions look like.
You’re doing this to prep for the different kinds of performance testing you’ll do. These will include load testing, stress testing, endurance testing, and possibly others. With load testing, you approximate production conditions (e.g., number of users, traffic volume, etc.). Then you simulate that load to see how your system handles it. You can also increase the load to the breaking point to see just how much you can handle.
With stress testing, you throw adverse conditions at your app to see how it behaves. You want to verify that, even when it fails, it does so reasonably and gracefully. And with endurance testing, you observe behavior over an extended period of time, rather than under unusually heavy load.
All of these different tests involve simulating potential production conditions and testing the behavior. So you need to know what to expect in production.
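One way to make that homework concrete is to capture what you learn as explicit load profiles that your tests can share. Here’s a minimal sketch in Python; the numbers are hypothetical placeholders, stand-ins for whatever your own production research turns up:

```python
# A minimal sketch: capture your researched traffic conditions as explicit,
# versioned load profiles. All numbers below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class LoadProfile:
    name: str
    concurrent_users: int      # simultaneous simulated users
    requests_per_second: int   # target aggregate throughput
    duration_minutes: int      # how long to sustain the load

PROFILES = {
    "normal": LoadProfile("normal", concurrent_users=200,
                          requests_per_second=50, duration_minutes=30),
    "peak": LoadProfile("peak", concurrent_users=2_000,
                        requests_per_second=500, duration_minutes=30),
    # Stress testing pushes well past observed peaks to find the breaking point.
    "stress": LoadProfile("stress", concurrent_users=10_000,
                          requests_per_second=2_500, duration_minutes=15),
    # Endurance testing holds a realistic load for hours rather than minutes.
    "endurance": LoadProfile("endurance", concurrent_users=200,
                             requests_per_second=50, duration_minutes=480),
}
```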
Once you’ve done your due diligence, you need to start in earnest. And your first task will involve actually setting up your environment for testing.
Conceptually, this will differ from your previous setups. In the past, you’ve mainly concerned yourself with deploying the software somewhere neutral. In other words, as long as you test it somewhere besides a developer’s machine, you’re probably good. Once deployed into your environment, you execute tests to verify the correctness of functionality.
Performance testing requires something different. Here, you need to simulate production to the best of your ability. At the least, this will probably mean beefier servers. Depending on your company, you may need to put in requests and requisition machines. If you can, try leveraging cloud technologies to make your life easier.
But whatever you do, figure out how to get as close to production as possible.
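If you do go the cloud route, infrastructure APIs make it easy to stand up a production-sized environment just for the test run and tear it down afterward. Here’s a hedged sketch using AWS’s boto3 library; the AMI ID, instance type, and tags are hypothetical placeholders for your own setup:

```python
# A hedged sketch of on-demand provisioning with AWS's boto3 library.
# The AMI ID, instance type, and tag values are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch instances sized like production, tagged so they're easy to find later.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical: your production-like image
    InstanceType="m5.xlarge",          # match your production hardware class
    MinCount=2,
    MaxCount=2,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "perf-testing"}],
    }],
)
instance_ids = [i["InstanceId"] for i in response["Instances"]]

# Tear everything down when the run completes, so you only pay for what you use.
ec2.terminate_instances(InstanceIds=instance_ids)
```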
Once you have a production-like testing home for your app, you still have work to do. You’re going to need to figure out how to simulate production conditions for load and stress tests. How do you go about throwing thousands or millions of requests at your app?
You don’t want to do it by paying hordes of developers, testers, and data entry people to do it manually. Seriously, please don’t do this. It’s inhumane and ineffective. We’re not talking about a good old-fashioned bug bash here. We’re talking about a simulation of exposing your app to the internet.
Find yourself some tools and/or services to help you with this.
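Plenty of mature options exist, including JMeter, Gatling, k6, and Locust, along with hosted load-testing services. As one illustration, here’s a minimal Locust sketch; the endpoints and task weights are hypothetical, so model yours on your actual traffic research:

```python
# A minimal Locust sketch (https://locust.io). The endpoints and task weights
# below are hypothetical; base them on your real traffic patterns.
from locust import HttpUser, task, between

class SiteUser(HttpUser):
    # Each simulated user pauses 1-5 seconds between requests, like a human would.
    wait_time = between(1, 5)

    @task(10)  # browsing is weighted as the most common action
    def browse_products(self):
        self.client.get("/products")

    @task(3)
    def view_product(self):
        self.client.get("/products/42")

    @task(1)   # checkout is rare relative to browsing
    def checkout(self):
        self.client.post("/cart/checkout", json={"payment": "test"})
```

You’d then run something like `locust -f locustfile.py --host https://your-perf-env.example.com` and dial the simulated user count up through your normal, peak, and breaking-point profiles.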
Now that you have an environment and tooling for the testing, you need to automate the operation. I’m talking about a different sort of automation than in the last section. That involved automating the simulated requests and usage scenarios. I’m talking now about automating the bigger picture.
As part of your deployment pipeline, you need to automate deployment to this performance testing environment. You’ll then need to automate the kickoff of the performance tests, as well as the recording of the results, passes, and failures. Don’t underestimate the complexity here, particularly in the case of any endurance testing you might do.
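The shape of that orchestration might look something like the following sketch. The deploy and load-test commands here are hypothetical stand-ins for whatever your pipeline actually invokes:

```python
# A hedged sketch of pipeline orchestration. The shell commands are hypothetical
# stand-ins for your real deploy and load-test tooling.
import json
import os
import subprocess
import sys
from datetime import datetime, timezone

def run(cmd):
    # Fail the pipeline loudly if any stage fails.
    subprocess.run(cmd, check=True)

# 1. Deploy the freshly built artifact to the performance environment.
run(["./deploy.sh", "--env", "perf", "--version", sys.argv[1]])

# 2. Kick off the load test against that environment.
run(["./run-load-test.sh", "--profile", "peak", "--out", "results.json"])

# 3. Record the results with a timestamp so runs can be compared over time.
with open("results.json") as f:
    results = json.load(f)
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
results["recorded_at"] = stamp

os.makedirs("perf-history", exist_ok=True)
with open(f"perf-history/{stamp}.json", "w") as f:
    json.dump(results, f, indent=2)
```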
Traditional correctness testing is easy in a sense. “When I input X, I should get back Y.” You can easily test for such a thing. You just input X, check for Y, and fail on anything but Y.
With performance testing, the waters muddy a bit. You no longer have true and false, but rather thresholds and guidelines. For instance, you might say that a response should have a certain average time and a certain unacceptable maximum time. You might have runs that trouble you, but don’t stop you from shipping. And, frankly, you might start out not even knowing exactly what to expect.
For this reason, you should establish initial baselines. See how your app performs as-is, recording lots of data. Assuming you find the run acceptable, establish your results as baselines. (If it’s not acceptable, fix it until it is.) You’ll keep these around so that you can judge whether or not performance regresses over time. And you’ll also want this data if you decide to target sustained improvements over the course of future releases.
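In code, a baseline check might look something like this sketch. The metric names, file names, and 15% tolerance are all hypothetical; the point is that you’re testing against thresholds rather than booleans:

```python
# A minimal sketch of a baseline regression check. Metric names, file names,
# and the 15% tolerance are hypothetical -- tune them to your app.
import json

TOLERANCE = 1.15  # allow up to 15% degradation before failing the run

def check_against_baseline(current_path, baseline_path):
    with open(current_path) as f:
        current = json.load(f)
    with open(baseline_path) as f:
        baseline = json.load(f)

    failures = []
    for metric in ("avg_response_ms", "p95_response_ms", "max_response_ms"):
        allowed = baseline[metric] * TOLERANCE
        if current[metric] > allowed:
            failures.append(
                f"{metric}: {current[metric]:.0f}ms exceeds allowed {allowed:.0f}ms"
            )
    return failures

if __name__ == "__main__":
    problems = check_against_baseline("results.json", "baseline.json")
    if problems:
        print("Performance regression detected:")
        for p in problems:
            print(f"  - {p}")
        raise SystemExit(1)
    print("Run within baseline thresholds.")
```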
With all of the prep out of the way, you’re ready to incorporate the performance testing into your overall strategy. You have your environment, all surrounding automation, and your baselines. Now you just have to run your tests regularly, benchmarking against the baselines that you’ve established.
You may have improvement goals from the start. But you also might content yourself with holding steady for the time being. Either way, make sure to evaluate both your test results and your evolving needs. If your competitors later start to boast faster load times or more responsive applications, you should adjust your strategy accordingly.
We have indeed come a long way since the early days of my career, when people hand-coded things on servers in the cgi-bin folder. Testing techniques have evolved in surprising ways. But bear in mind that they will continue to do so. Make sure you keep adapting quickly enough to stay on the right side of the curve.