Severity assessments enable prioritization of the problems encountered during usability evaluations and thereby provide a means of guiding the allocation of design resources. However, designers' response to usability evaluations is also influenced by other factors, which may overshadow severity. To enhance the impact of severity assessments, this study combines a field study of the factors that influence the impact of evaluations with an experimental study of severity assessments made during usability inspections. The results show that even in a project receptive to input from evaluations, their impact depended heavily on conducting the evaluations early. Early evaluation accorded with an informal method that blended elements of usability evaluation and participatory design and could be extended with user-made severity assessments. The major cost associated with the evaluations was not finding problems but fixing them, emphasizing that to be effective, severity assessments must be reliable, valid, and sufficiently persuasive to justify the cost of the fixes. For the usability inspections, evaluators' ratings of problem impact and persistence correlated only weakly with the number of evaluators reporting a problem, indicating either that different evaluators represent different subgroups of users or that evaluator-made severity assessments are of questionable reliability. To call designers' attention to the severe problems, halving the severity sum is proposed as a means of visualizing the large payoff of fixing a high-severity problem and, conversely, the modest return on spending resources on low-severity problems.
International Journal of Human-Computer Interaction, 2006, Vol. 21, Issue 2, pp. 125-146
usability evaluation methods, problem prioritization, severity assessments, test impact
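The proposed halving of the severity sum can be sketched as a simple computation: sort the reported problems by severity and count how many of the most severe ones must be fixed before the total severity sum drops to half. The severity scale (1-4), the example ratings, and the `problems_to_halve` helper below are illustrative assumptions for this sketch, not details taken from the paper.

```python
def problems_to_halve(severities):
    """Return how many of the highest-severity problems must be fixed
    to reduce the severity sum to half of its original value or less.

    `severities` is a list of per-problem severity ratings
    (here assumed to be on a 1-4 scale; any positive scale works).
    """
    total = sum(severities)
    remaining = total
    # Fix problems in descending order of severity and stop once
    # the remaining severity sum has been halved.
    for count, s in enumerate(sorted(severities, reverse=True), start=1):
        remaining -= s
        if remaining <= total / 2:
            return count
    return len(severities)

# Hypothetical ratings for ten problems found in an inspection:
ratings = [4, 4, 3, 2, 2, 1, 1, 1, 1, 1]  # severity sum = 20
print(problems_to_halve(ratings))  # -> 3: fixing 3 of 10 problems halves the sum
```

The point the visualization makes is visible in the numbers: fixing the three most severe of the ten hypothetical problems removes half of the total severity, while the seven low-severity problems together account for the other half.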