Participatory Heuristic Evaluation

Participatory Heuristic Evaluation (PHE) is an extension of traditional Heuristic Evaluation, in which usability experts apply a set of design guidelines to a design or prototype. PHE uses the same techniques, but users are also included as ‘work-domain expert inspectors’, and extra heuristics are added to cover the user experience. In addition to the 13 heuristics identified in heuristic evaluation, Participatory Heuristic Evaluation facilitates the checking of

  •     task flow
  •     suitability of design to task
  •     suitability of design to user
  •     quality of work produced
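
As a rough illustration of how PHE findings might be pooled across a mixed panel of expert and user inspectors, the sketch below (hypothetical data, inspector names and 0–4 severity scale are assumptions, not from the PHE paper) averages severity ratings per heuristic:

```python
from statistics import mean

# Hypothetical PHE session: each inspector (usability expert or
# 'work-domain expert' user) rates the severity of a problem found
# against a heuristic, 0 = no problem .. 4 = usability catastrophe.
findings = [
    {"heuristic": "task flow", "inspector": "user A", "severity": 3},
    {"heuristic": "task flow", "inspector": "expert B", "severity": 2},
    {"heuristic": "suitability of design to task", "inspector": "user C", "severity": 4},
]

def mean_severity(findings):
    """Average severity per heuristic across all inspectors."""
    by_heuristic = {}
    for f in findings:
        by_heuristic.setdefault(f["heuristic"], []).append(f["severity"])
    return {h: mean(s) for h, s in by_heuristic.items()}

print(mean_severity(findings))
```

Averaging across inspectors is one common way to prioritise which problems to fix first; the heuristic names here are taken from the list above.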


Muller, M. J., Matheson, L., Page, C., & Gallup, R. (1998). Participatory heuristic evaluation. Interactions, 5, 13–18.

Lecture – Evaluation (Theory)

Third stage of product design: Evaluation of prototype by users.

Iteratively developing prototypes and performing usability testing makes it possible to build usable products and applications. These activities identify potential problems so that they can be corrected before the final product or design is launched.

Evaluation is the process of properly understanding the usability and enjoyability of a product design; it involves a specified group of users performing specific activities or tasks in a specified environment or work context. Evaluation identifies flaws in the design, captures users’ views of it, and helps in making informed decisions to create a user-centred design. Different aspects of the design can be tested, such as functionality, aesthetics, safety, learnability, memorability and intuitiveness. The choice of evaluation technique depends on the purpose of the design or product and its users.

Evaluations are conducted in three different settings:

  1. Controlled settings: Users’ activities are controlled and carried out in labs, using usability testing and controlled experiments.
  2. Natural settings: There is no control over users’ activities; this involves field studies in public places, users’ homes or online communities.
  3. Settings not involving users: Experts, researchers or consultants analyse different aspects of the interface through inspection methods, heuristic evaluation or cognitive walkthroughs.

We need to consider the following during evaluation:

  • goals
  • questions for evaluation – what we are evaluating – functionality, aesthetics, etc.
  • evaluation method or approach
  • practical issues and drawbacks
  • ethical issues
  • evaluation, analysis, interpretation and representation of the data.

Evaluation is of three types:

  1. Formative evaluation: This is an ongoing process carried out during the design stage of the product lifecycle by internal teams or external experts. In the early stages of design it is done to predict the usability of the product, to check the identified users’ requirements by observing the use of an already existing system in the field, and to test ideas quickly. In the later stages of the design process, it is done more to identify the issues users face and to improve the product.
  2. Summative evaluation: This type of evaluation assesses the value or worth of the product and checks whether the product meets its objectives. It also considers any new information arising after the start of product development and the impact of formative evaluation on the product.
  3. Impact evaluation: This is done three to six months after the implementation of the design to check its effectiveness as circumstances change. It is mostly used to support the application of interactive media materials.


During an evaluation process, it is important to consider the users’ characteristics, the activities or tasks they perform, the environment of the study and the nature of the product being evaluated. To perform an evaluation, certain tools are required; four commonly used evaluation tools are:

  1. Observation and monitoring: Direct observation to understand how users interact with a product in their natural settings. Indirect observation like video recording or remotely observing users.
  2. Collecting users’ opinions: Structured, semi-structured or open-ended interviews to find out what users think about the product. Questionnaires and surveys with closed and open-ended questions to reach a large number of users.
  3. Interpreting situated events: Interpretive evaluation helps the designers to understand how users use the product in their natural environment and the effects of their surroundings on different tasks they perform.
  4. Predicting usability: Predictive evaluation helps to anticipate the problems users face when using a product without actually testing the product with users. This can be done with psychological modelling techniques such as keystroke analysis, or by getting experts to review the design and predict the problems a typical user might encounter. This technique requires specifications, mock-ups or prototypes.
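
The keystroke analysis mentioned in point 4 (the Keystroke-Level Model of Card, Moran and Newell) predicts an expert user’s task time by summing standard operator times. A minimal sketch; the operator times below are the commonly cited KLM estimates in seconds and should be treated as illustrative defaults, not measurements of any particular system:

```python
# Keystroke-Level Model sketch: predict expert task time by summing
# standard operator times (commonly cited KLM estimates, in seconds).
OPERATOR_TIMES = {
    "K": 0.20,  # keystroke (average skilled typist)
    "P": 1.10,  # point with mouse to a target on screen
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for an action
    "B": 0.10,  # mouse button press or release
}

def predict_time(operators):
    """Sum operator times for a sequence like 'MPBB' or ['M', 'K', 'K']."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Example: mentally prepare, point at a menu item, click (press + release).
print(round(predict_time("MPBB"), 2))  # → 2.65
```

This is the “without actually testing with users” aspect of predictive evaluation: the estimate comes entirely from a specification of the interaction sequence.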

A thorough understanding of the evaluation process was provided; this helped me to plan and develop an appropriate evaluation tool for my project. My evaluation tool was a structured questionnaire which evaluated the aesthetics, the flow and the ease of identification of elements of my design.
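
A structured questionnaire like this is typically analysed by averaging ratings per evaluated dimension. A minimal sketch, assuming hypothetical 5-point Likert responses (the dimension names follow the questionnaire described above; the data is invented):

```python
from statistics import mean

# Hypothetical responses: 5-point Likert ratings (1 = strongly
# disagree .. 5 = strongly agree) for three dimensions of the design.
responses = [
    {"aesthetics": 4, "flow": 5, "identification": 3},
    {"aesthetics": 5, "flow": 4, "identification": 4},
    {"aesthetics": 3, "flow": 4, "identification": 5},
]

def dimension_means(responses):
    """Mean rating per evaluated dimension across all respondents."""
    dims = responses[0].keys()
    return {d: round(mean(r[d] for r in responses), 2) for d in dims}

print(dimension_means(responses))
```

Comparing the per-dimension means highlights which aspect of the design (aesthetics, flow, or identification of elements) needs the most attention in the next iteration.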