How do so many bad designs reach the market?
You have probably heard yourself asking the question “doesn’t anyone ever test this stuff?” upon discovering that something isn’t working well, or that it’s apparently suffering from a design flaw. It seems that “untested” products continue to reach consumers despite all the advances in design and manufacturing technologies.
With the sudden and untimely catastrophic failure of my iron, I quickly found a replacement which didn’t have quite the same features but which appeared a worthy substitute. I figured that it’s a mature market with few major leaps in technology, so “it’s hard to go wrong with an iron”. I was wrong. I bought a product with a fundamental design flaw – it is almost impossible to pick it up without inadvertently changing the temperature. Ironing is a task which (for me at least) requires the iron to be regularly put down and picked up. So almost every time I do this seemingly straightforward thing, I adjust the temperature – not just by a small amount, but to the maximum setting. Major flaw.
I recently visited the toilet in a hotel lobby. The plumbing had been arranged such that the automatic taps in the washbasin were only activated when they detected hands and the toilet had been flushed. In “normal” usage (toilet flush followed by hand-wash) this might not be a problem. But in the case of wanting to wash hands without using the toilet, the problem becomes evident.
Some years ago in the UK there was an issue with a particular supplier of Christmas cards [please add a comment below if you have evidence to support this]. The envelope was made of a waxy (or metallic, or otherwise slippery) material. All very nice but stamps wouldn’t stick to them. Thousands had been sold before the problem was noticed by unhappy customers. Not even sticky tape helped – the final recommendation was to attach the stamps with staples! Somehow, nobody had done the testing on this new material when considering its suitability for envelopes.
How can these things happen?
Somehow, either the idea or the implementation hasn’t been tested properly. So maybe it was a bad design in the first place but nobody noticed it. Or the original design was great, but it wasn’t properly delivered. Or perhaps (and I suspect this is often what happens) the problem was identified, but it was considered too late to do anything to resolve it.
Why is product testing so important?
- To uncover fundamental design flaws
- To identify usability problems
- To identify implementation errors, bugs etc.
- To verify product durability
It’s clearly important to be doing the right type of testing at the appropriate time. There’s little point in waiting until just prior to final release before doing basic usability testing. Similarly, it’s probably inappropriate to be doing heavy-load testing in the very early days of a sketchy prototype.
Prototype testing is an essential part of product design, delivering an early indication of problems or areas for further study.
Agile methods in software development have significantly reduced the incidence of fundamental errors being detected only at the last moment. But unless regression testing is rigorously applied, errors can creep in at the last minute and cause havoc with the final release.
I vividly remember helping a university colleague test the software for his final year project. He was studying Civil and Structural Engineering, and his project was a piece of software which calculated the size and position of reinforcing bars in a concrete slab. As I knew nothing about the subject myself, I was considered an ideal tester. After entering my name on the first screen, I was greeted with an input box something like this:
Enter slab length in metres (eg for a 17.45m slab, enter 17.45):
Not having a particular slab in mind, I decided to go with the example. I entered 17.45 and was immediately greeted with the message “Error line 218: Type mismatch”. My colleague had decided to change from using millimetres to metres (for my convenience) only minutes before the start of testing, and he had failed to do any regression testing. Oops! We both learned an important lesson that day.
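The bug is easy to reconstruct in modern terms. Here is a hypothetical sketch (the function names and the integer-millimetres assumption are my own invention, not my colleague’s actual code): if the original version parsed the slab length as a whole number of millimetres, then changing the prompt to metres without updating the parser makes any decimal input blow up – exactly the kind of thing even one regression test against the prompt’s own example would have caught.

```python
def parse_slab_length_mm(raw: str) -> int:
    """Hypothetical original behaviour: slab length entered in whole millimetres."""
    # int() rejects decimal input, so "17.45" raises ValueError --
    # the 1980s-era equivalent of "Error line 218: Type mismatch".
    return int(raw)

def parse_slab_length_m(raw: str) -> float:
    """Behaviour after the switch to metres: decimal input must be accepted."""
    value = float(raw)
    if value <= 0:
        raise ValueError("slab length must be positive")
    return value

def test_prompt_example() -> None:
    # Minimal regression test: the prompt's own example
    # ("for a 17.45m slab, enter 17.45") must always work.
    assert parse_slab_length_m("17.45") == 17.45

if __name__ == "__main__":
    test_prompt_example()
    print("regression test passed")
```

Re-running that one test after every change to the input handling – however trivial the change seems – is the whole point of regression testing.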
The ultimate test policy
Before any product or service finally ships, there must be an adequate period of testing by users. If done properly, this is the time when any problems should become evident, allowing the manufacturer, supplier, designer or whoever to analyse and (if necessary) rectify the problem.
If this is to be meaningful, it must be end-to-end testing, and include the following as appropriate:
- unpacking, assembly, charging
- installation, first-time-registration, account set-up etc
- use of different browsers, spoken languages, time-zones
- variation in gender, handedness, visual competence
- usage patterns including heavy, light etc
Who should do the testing, and how should they do it?
I list below the essential characteristics of testers, but it is important that the testing should be conducted by as many people as is practical. This often creates its own problems, including:
- Confidentiality may be a concern, often limiting the testing to employees. Needless to say, from an impartiality perspective the best employees are likely to be those not directly associated with the product itself. Agencies can also be valuable partners in undertaking this work.
- Geography may impose constraints in terms of physical distribution of product to appropriate locations.
- Configuration control and management of variants can be complex and time-consuming.
Essential characteristics of the test pool
Before recruiting people to the test pool, it’s important to ensure they have the right characteristics. These include:
- Representative. It’s critically important that the test pool reflects the real consumers of the product.
- Enthusiastic. There’s not much point in using testers who aren’t enthusiastic about testing!
- Understanding. Some of the testing will be “less than exciting”, and potentially a waste of testers’ time – so it’s important they understand that the work they do matters. It’s probably best not to use people who only like using fully functioning products, or who are easily irritated by product “niggles”.
- Motivated. People are motivated by a wide range of different things. For some it might be something as simple as seeing their name on a leaderboard; for others it might be the prospect of a prize. It is important that prospective testers understand what mechanism will be in place before they sign up so they can decide for themselves whether it’s right for them.
- Unreasonable. This might seem like a controversial characteristic to possess, but it is important that testers are not “reasonable” people who are happy to let things go without a mention. If something isn’t absolutely right, the tester should be reporting it, not sweeping it under the carpet with the thought that “well, it’s not ideal but perhaps it’s not important enough to mention”.
Motivation is a really big topic. If the test process is poorly managed, even the largest test pool can rapidly dwindle as testers become demotivated. Repeatedly responding to error reports with “Not an error” is likely to demotivate testers and ultimately result in fewer error reports being submitted.
So what happened in the case of my iron? I really don’t know, but here’s my suspicion:
- I’m not the first one to encounter the problem.
- I might be the first unreasonable user. At least I can’t find any complaints from other users on discussion boards about this particular flaw.
- The manufacturer doesn’t undertake adequate testing.
Whatever the reason, they released a product to the market which has a fundamental flaw, and this should never have happened.
Maybe there just aren’t enough unreasonable users around. What do you think? Please comment below.