Four Reasons to Move Beyond Manual Testing

Posted Jun 7th, 2018

Does your organization still test its software by hand? If the answer is yes, it is time (if not long past time) to consider moving beyond manual testing to a fully automated, script-driven software test system. In this post, we'll talk about some of the key reasons why you should make the move to automated testing.

When the Only Way Was the Hard Way

For a long time, of course, the most (and often only) reasonable way to test software was by hand. Each platform on which an application was tested was a separate physical system, typically with few resources for automation beyond simple batch language files. Test technicians had to work from test regime documents, manually carrying out each step of a test, then entering the results. The time required to do this limited the number of tests which could be performed, so the test regime often consisted of the most basic and most important tests, with the goal of catching the most probable and damaging bugs and performance issues prior to release. Other less pressing or more subtle problems would then be reported by beta users, or post-release.

Triage as a Necessity

It's a sensible system, if all you can do is test manually. Essentially, though, it's triage turned into fundamental policy: you test for the things that are most likely to require attention, and which you can test for and fix, given limited time and resources. What this means is that less likely (but still potentially serious) problems, or problems which would require more time or other resources to detect, such as cumulative performance degradation, are likely to be left out of the test regime.

Automation Changes Everything

All of that changes, however, with automated testing. While you can (and in some situations, must) run automated tests on discrete physical systems, automated testing is typically performed on virtual systems, often in a cloud environment. This increases the speed of testing, reduces the time required for individual tests, and makes large-scale parallel testing possible. The result is not simply faster tests or more tests: automation allows you to run tests which would not have been practical or possible under a manual test regime, and to test for problems which would have previously gone undetected.

1. Speed Means More Time

Automated tests are pretty much by definition script-driven, and typically, a single script will consist of a sequence of tests. Since individual test scripts can be run automatically, you can run an entire test regime from a single script.
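As a minimal sketch of what "an entire test regime from a single script" can look like, consider a runner that executes a sequence of test functions and records each result. The test names and checks here are hypothetical placeholders, not any particular framework's API; real-world regimes would typically use a test framework such as pytest, which follows the same basic pattern.

```python
# Minimal sketch of a script-driven test regime: each test is a plain
# function, and one runner executes the whole sequence unattended.
# The tests below (login, checkout) are hypothetical placeholders.

def test_login():
    user = {"name": "demo", "authenticated": True}  # simulated login result
    assert user["authenticated"]

def test_checkout_total():
    prices = [19.99, 5.00, 3.50]
    assert round(sum(prices), 2) == 28.49

def run_regime(tests):
    """Run every test in order, collecting pass/fail instead of stopping."""
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "PASS"
        except AssertionError:
            results[test.__name__] = "FAIL"
    return results

if __name__ == "__main__":
    for name, outcome in run_regime([test_login, test_checkout_total]).items():
        print(f"{name}: {outcome}")
```

Because the whole sequence is one script, it can be triggered by a scheduler or a CI system with no human in the loop.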

This eliminates the time required to physically perform common manual testing tasks (clicking a mouse, pressing a key, reading a screen prompt, etc.), and with virtual test systems, reduces the time required to set up most tests to a small fraction of what it would otherwise be. For most tests, this means that the actual time required for the test is determined by such things as processor speed. Even highly repetitive performance tests take considerably less time.

Parallel is Even Faster

When you are testing in a virtualized environment (particularly one that's cloud-based), you can also take advantage of parallel testing to run a large number of tests simultaneously, performing multiple tests on the same platform, the same set of tests on multiple platforms, or a combination of the two.
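The parallel pattern can be sketched in a few lines: the same check dispatched concurrently to several platform configurations. The platform names below are made up, and the smoke test is a stub; in a real cloud grid, each worker would drive a separate virtual machine or device.

```python
# Sketch of parallel testing: one check run concurrently against several
# (hypothetical) platform configurations using a thread pool.
from concurrent.futures import ThreadPoolExecutor

PLATFORMS = ["chrome/windows", "firefox/linux", "safari/macos"]

def smoke_test(platform):
    # Placeholder for a real test session against `platform`.
    return platform, "PASS"

with ThreadPoolExecutor(max_workers=len(PLATFORMS)) as pool:
    results = dict(pool.map(smoke_test, PLATFORMS))

print(results)
```

The total wall-clock time approaches that of the single slowest test, rather than the sum of all of them.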

The overall result is an increase in test speed which can often be measured in orders of magnitude; a test regime which would otherwise have taken weeks can be completed in a few hours. This increase in speed doesn't just allow you to finish testing sooner: it also allows you to think about what other tests you'd like to run, since you know that even a large number of additional tests won't take that long.

2. Sometimes More is Much Better

Once you are free of the time constraints of manual testing, you can afford to set aside the triage approach to your test regime. You can add test cases to cover a large number of program functions which would previously have gone untested because they had lower priority, had little or no record of trouble, or appeared to be isolated from new or changed features, as well as multiple variations on an existing test case. This gives you a chance to detect hidden or previously unreported problems, as well as unsuspected interactions between widely separated parts of the application.

Testing In-Depth

When you do this, you probably will find issues that your users have known about, but that have gone unreported or underreported. You are also likely to detect functional and performance problems which your users may not even have recognized as bugs ("That feature? I've never figured out how to use it—I think I'm just stupid!"). You may also discover results like computed data, report contents, formatted output, and so on which fail to meet the application's requirements.

3. Covering More Platforms

How many different platforms can your application run on? Even if it's designed for a closely controlled proprietary system such as the iPhone, the number of possible OS/hardware configurations can be high. When the operating system doesn't require proprietary hardware (e.g., Android, Windows, Linux), the number of combinations can seem overwhelming.

Even with automation, it may turn out that you can't test all possible Android/mobile device combinations, if only because there are too many to keep track of. But with automated testing, you can test a large number of those combinations, and you can add to the list as new systems come on the market.

Virtual and Physical

With a cloud-based testing service such as Sauce Labs, there are options for both virtual/emulated devices and physical hardware. Cloud-based services take care of much of the overhead (such as managing both virtual and physical devices), while providing you with an environment that is optimized for script-driven, automated testing. This means that even with hardware-based tests, your setup time and associated overhead are reduced to a minimum.
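One way to drive a large platform matrix from a script is to generate the browser/OS combinations programmatically. The sketch below builds W3C-style capability sets of the general kind cloud grids accept; the specific capability names, browser versions, and platform strings are illustrative assumptions, so check your provider's current documentation before relying on them.

```python
# Hedged sketch: generating a matrix of capability sets for a cloud
# testing grid. Capability names follow the common W3C
# browserName/browserVersion/platformName convention; the exact values
# accepted (and the endpoint you send them to) vary by provider.
from itertools import product

BROWSERS = [("chrome", "latest"), ("firefox", "latest")]
PLATFORMS = ["Windows 11", "macOS 13"]

def capability_matrix():
    matrix = []
    for (browser, version), platform in product(BROWSERS, PLATFORMS):
        matrix.append({
            "browserName": browser,
            "browserVersion": version,
            "platformName": platform,
        })
    return matrix

caps = capability_matrix()
print(len(caps))  # 2 browsers x 2 platforms = 4 configurations
```

Adding a newly released OS version then means appending one string to a list, not writing a new test plan.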

4. Repeat, Then Repeat Again

Some performance problems take time to make their appearance. This is notoriously true of memory creep, but it can also be a factor in any situation where unwanted incremental processes result in cumulative loss of performance—or for that matter, in the sudden appearance of an otherwise undetected functional problem (such as an overflow).

In a manual test system, the amount of time required before such problems show up may make testing for them impractical, or only practical for the most obvious/most easily produced performance problems. This means that untested performance issues are likely to show up in the field, where they will be reported by users. In the case of unanticipated overflows, a lack of testing may lead to undetected system vulnerabilities.

Testing for Incremental Problems

Automated testing gives you the opportunity to include long-term incremental performance/functional issues in your test regime. Even if each test requires a significant amount of time to run, large-scale parallel testing allows you to take care of a large number of such tests simultaneously, potentially reducing the total test time to that required by the longest single test. In a virtualized test environment, the actual time required for even the most time-dependent endurance tests may be relatively small compared to equivalent tests performed manually.

Manual testing? There was a time when it was necessary, and we all learned to make the best of it. It served its purpose, and it served it well. But that was then, and this is now. Now is the time to make the move to automated testing.

Michael Churchman started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the ‘90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards. During that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues. He is a regular contributor.

Written by

Michael Churchman

