A study by Terence Andre et al. of the US Air Force Academy on the effectiveness of the Usability Problem Inspector.
Andre's thesis is that expert reviews need a more reliable method than heuristic evaluation or cognitive walkthroughs alone. The Usability Problem Inspector combines elements of both.
Problems with heuristic evaluation: it tends to identify one-time, low-priority problems, and evaluators flag false alarms. Problems with cognitive walkthroughs: they require knowledge of cognitive science, are tedious to perform, and take excessive time.
Apparently taking the best of both worlds gives better results, at least according to their study of 20 people using the different methods. I am not sure how reliable their numbers are, but supposedly the UPI scored better on hits and misses than the other two methods.
Of course, a big problem with any expert review is that you have to decide what constitutes a problem.
Going back to the previous post on paper vs. computer prototype testing, one would think you could just spend your time doing a paper evaluation and probably get better results. Study after study seems to show that you shouldn't rely on expert evaluations as your sole source for identifying problems with your product.
We are very good at what we do, but don't hire anyone who claims they can tell you what's wrong just by looking. They may be right, but they may be wrong, and you will never know for sure until you implement. By then, fixing problems costs a lot more than it would have during the design phase.