Evaluation techniques for interactive systems
WHAT IS EVALUATION?
Even if a sound design process is used to support the design of usable interactive systems, we still need to assess our designs and test our systems to ensure that they behave as we expect and meet user requirements. This is the role of evaluation.
Evaluation should not be thought of as a single phase in the design process (still less as an activity tacked on the end of the process if time permits). Ideally, evaluation should occur throughout the design life cycle, with the results of the evaluation feeding back into modifications to the design.
GOALS OF EVALUATION
Evaluation has three main goals:
· To assess the extent and accessibility of the system’s functionality — the design of the system should enable users to perform their intended tasks more easily.
· To assess users’ experience of the interaction — how easy the system is to learn, its usability, and the user’s satisfaction with it.
· To identify any specific problems with the system — aspects of the design which, when used in their intended context, cause unexpected results or confusion amongst users.
EVALUATION THROUGH EXPERT ANALYSIS
Evaluation should occur throughout the design process; usually, the later in the design process an error is discovered, the more costly it is to correct. In expert-based evaluation, a designer or HCI expert assesses a design against known cognitive principles or empirical results. These techniques are also referred to as expert analysis techniques.
· Cognitive walkthrough: Cognitive walkthroughs are used to examine the usability of a product. They are designed to see whether a new user can easily carry out tasks within a given system. It is a task-specific approach to usability (in contrast to heuristic evaluation, which is a more holistic usability inspection).
· Heuristic evaluation: A heuristic is a simple principle or “rule of thumb”. A heuristic evaluation is one where an evaluator or group of evaluators walks through the design of an interface and decides whether it complies with these “rules of thumb”.
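As an illustration, suppose evaluators check an interface against Nielsen's ten heuristics (one widely used set of such rules of thumb) and record each problem with a severity rating. The sketch below shows how findings might be collated; the interface locations and findings themselves are hypothetical:

```python
from dataclasses import dataclass

# Nielsen's ten usability heuristics -- one widely used set of "rules of thumb".
NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

@dataclass
class Finding:
    heuristic: str   # which rule of thumb is violated
    location: str    # where in the interface the problem appears
    severity: int    # 0 (not a problem) .. 4 (usability catastrophe)

def summarise(findings):
    """Group findings by heuristic, worst severity first."""
    by_heuristic = {}
    for f in findings:
        by_heuristic.setdefault(f.heuristic, []).append(f)
    return sorted(by_heuristic.items(),
                  key=lambda kv: -max(f.severity for f in kv[1]))

# Hypothetical findings from one evaluator's walkthrough:
findings = [
    Finding("Visibility of system status", "file upload screen", 3),
    Finding("Error prevention", "delete-account dialog", 4),
    Finding("Visibility of system status", "search results page", 2),
]
for heuristic, fs in summarise(findings):
    worst = max(f.severity for f in fs)
    print(f"{heuristic}: {len(fs)} finding(s), worst severity {worst}")
```

Collating findings from several independent evaluators in this way is what gives heuristic evaluation its power: each evaluator finds a different subset of the problems.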
· Review-based evaluation: Review-based evaluation is an expert-based evaluation method that relies on experimental results and empirical evidence from the literature (for instance from psychology, HCI, etc.) to support or refute parts of the user interface design.
Evaluation through user participation
User participation in evaluation tends to occur in the later stages of development, when there is at least a working prototype of the system in place. Techniques in this category include empirical or experimental methods, observational methods, query techniques, and methods that use physiological monitoring, such as eye tracking and measures of heart rate and skin conductance.
Styles of evaluation
There are two main styles of evaluation: laboratory studies and field studies.
Laboratory Study: In this style of evaluation, users are taken out of their normal work environment to take part in controlled tests, often in a specialist usability laboratory.
Field Study: This type of evaluation takes the designer or evaluator out into the user’s work environment to observe the system in action.
Empirical methods: experimental evaluation
· Controlled evaluation of specific aspects of interactive behavior.
· Evaluator chooses hypothesis to be tested
· Several experimental conditions are considered which differ only in the value of some controlled variable
· Changes in behavioral measures are attributed to different conditions
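To make the steps above concrete, the sketch below compares made-up task-completion times from two conditions that differ only in the controlled variable, using Welch's t statistic to quantify the difference between groups. The data, the menu-layout scenario and the condition labels are all illustrative assumptions, not results from any real study:

```python
import statistics
from math import sqrt

# Hypothetical task-completion times (seconds) under two experimental
# conditions that differ only in the controlled variable (menu layout).
condition_a = [41.0, 38.5, 44.2, 40.1, 39.7, 42.3]  # existing layout
condition_b = [35.2, 33.8, 37.1, 34.5, 36.0, 32.9]  # redesigned layout

def welch_t(xs, ys):
    """Welch's t statistic: difference in means scaled by the standard error."""
    ma, mb = statistics.mean(xs), statistics.mean(ys)
    va, vb = statistics.variance(xs), statistics.variance(ys)
    return (ma - mb) / sqrt(va / len(xs) + vb / len(ys))

t = welch_t(condition_a, condition_b)
print(f"mean A = {statistics.mean(condition_a):.1f}s, "
      f"mean B = {statistics.mean(condition_b):.1f}s, t = {t:.2f}")
```

A large t value would lead the evaluator to reject the null hypothesis that layout has no effect on completion time; in practice the statistic would be compared against the appropriate t distribution to obtain a significance level.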
The most common and powerful way to gather information about the actual use of a system is to observe users interacting with it. Users are asked to complete a set of predetermined tasks while the evaluator watches and records their actions. Observation is often more effective when it takes place in the users’ own environment while they carry out their normal duties. There are several such techniques, including think aloud, cooperative evaluation and protocol analysis.
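One simple way to support protocol analysis is to record each user action with a timestamp, so that the evaluator can replay and analyse the session afterwards. The sketch below is a minimal, hypothetical logger; the event names are invented and do not belong to any particular system:

```python
import time

class SessionLog:
    """Timestamped record of user actions for later protocol analysis."""

    def __init__(self):
        self.events = []

    def record(self, action, detail=""):
        # time.monotonic() cannot go backwards, so gaps are never negative
        self.events.append((time.monotonic(), action, detail))

    def durations(self):
        """Seconds elapsed between consecutive recorded actions."""
        ts = [t for t, _, _ in self.events]
        return [b - a for a, b in zip(ts, ts[1:])]

# A short, invented fragment of a user session:
log = SessionLog()
log.record("open_dialog", "Save As")
log.record("type_filename", "report.pdf")
log.record("click", "Save")
for (_, action, detail), gap in zip(log.events[1:], log.durations()):
    print(f"{action}({detail}) after {gap:.3f}s")
```

Long pauses between actions are often the interesting part of such a log: they can indicate points where the user hesitated or was confused, which the evaluator can then probe in a follow-up interview.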
Another set of evaluation techniques relies on asking the user about the interface directly. Query techniques can be useful in eliciting detail of the user’s view of a system. They embody the philosophy that states that the best way to find out how a system meets user requirements is to ‘ask the user’. They can be used in evaluation and more widely to collect information about user requirements and tasks. There are two main types of query technique:
interviews — Interviewing users about their experience with an interactive system provides a direct and structured way of gathering information.
questionnaires — An alternative method of querying the user is to administer a questionnaire. This is clearly less flexible than the interview technique, since questions are fixed in advance, and it is likely that the questions will be less probing.
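For example, the System Usability Scale (SUS) is a widely used ten-item questionnaire with fixed five-point responses (1 = strongly disagree, 5 = strongly agree); odd-numbered items are positively worded and even-numbered ones negatively, and the standard scoring maps answers onto a 0–100 scale. The sketch below implements that scoring rule; the sample responses are invented:

```python
def sus_score(responses):
    """Score one respondent's System Usability Scale (SUS) answer sheet.

    Standard scoring: odd items contribute (response - 1), even items
    contribute (5 - response); the total is scaled to the range 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # 0 (worst) .. 100 (best)

# One respondent's hypothetical answer sheet:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # → 85.0
```

Because the questions and scoring are fixed, such questionnaires trade the flexibility of an interview for results that are easy to administer to many users and to compare across systems.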
Evaluation through monitoring physiological responses
One of the problems with most evaluation techniques is that we are reliant on observation and on users telling us what they are doing and how they are feeling. More objective information can be gathered by monitoring the users’ bodily responses directly; the two main approaches are eye tracking and physiological measurement.
Physiological measurements: emotional response is closely tied to physiological changes, including changes in heart rate, breathing and skin secretions. Measuring these physiological responses may therefore be useful in determining a user’s emotional response to an interface. Physiological measurement involves attaching various probes and sensors to the user. The factors measured include:
Heart activity — indicated by blood pressure, volume, and pulse. These may respond to stress or anger.
Activity of the sweat glands — indicated by skin resistance or galvanic skin response (GSR). These are thought to indicate levels of arousal and mental effort.
Electrical activity in muscle — measured by the electromyogram (EMG). These reflect involvement in a task.
Electrical activity in the brain — measured by the electroencephalogram (EEG). This is thought to be associated with decision making, attention and motivation.