Warning
This chapter is still in development
Warning
This chapter hasn’t been written yet, but in the meantime there are lesson plans that cover the material, available from the NZACDITT site (the level 1 plan and the level 2 plan). The content of this chapter will mainly support these lesson plans.
Note
For teachers
This chapter supports the user interfaces and usability part of the NZ achievement standard AS91074 (1.44) and the Human-Computer Interfaces and Usability Heuristics part of the NZ achievement standard AS91371 (2.44). The level 1 standard requires a general critical view of interfaces, whereas the level 2 standard requires the usability heuristics (section xxx) to be applied.
Brainstorming section moved to end of document
Computers are becoming thousands of times more powerful every (decade?), yet there is one important component of the computer system that hasn’t changed significantly in performance since the first computers were developed in the 1940s: the human. Our response times are typically between a tenth of a second and one second, our immediate range of movement is mainly within a metre, and the number of items that most people can deal with at a time is typically around 5 to 9. For a computer system to work really well, it needs to be designed by people who understand the human part of the system well.
For example, [sample good and bad interface]
In this section we’ll look at what typically makes good and bad interfaces. The idea is to make you sensitive to the main issues so that you can critique existing interfaces, and begin to think about how you might design good interfaces.
Key concepts that are likely to be encountered are:
Algorithms:
Techniques:
Applications:
Possible activities:
[some introductory activities/ideas to open up the topic]
[an area that is worth knowing about, including activities/exercises to explore it, and guidance for teachers (possibly to be separated) on how to help students use this topic for A/M/E]
[an area that is worth knowing about, including activities/exercises to explore it, and guidance for teachers (possibly to be separated) on how to help students use this topic for A/M/E]
[an area that is worth knowing about, including activities/exercises to explore it, and guidance for teachers (possibly to be separated) on how to help students use this topic for A/M/E]
[an area that is worth knowing about, including activities/exercises to explore it, and guidance for teachers (possibly to be separated) on how to help students use this topic for A/M/E]
In this chapter we’ve mainly looked at how to critique interfaces, but we haven’t said much about how to design good interfaces. That’s a whole new problem, although being able to see what’s wrong with an interface is a key idea. Many commercial systems are tested using the ideas above to check that people will find them easy to use; in fact, before releasing a new application, often they are tested many times with many users. Improvements are made, and then more tests need to be run to check that the improvements didn’t make some other aspect of the interface worse! It’s no wonder that good software can be expensive - there are many people and a lot of time involved in making sure that it’s easy to use before it’s released. [explain where the material above has oversimplified things, and if there are any well-known concepts or techniques that have been deliberately left out because they are too complex for this age group. This may include things that require advanced maths, advanced programming, or things where students have seen the problem but not a thorough solution]
How difficult an interface looks can affect how people perceive it, even if it’s easy to use: http://www.cs4fn.org/usability/importanceofsushi.php
RITE interface evaluation: http://www.cs4fn.org/usability/wixon.php
Ideas for material:
- Think-aloud process: the observer needs to press the participant to explain what they are doing. This can be embarrassing for the participant, and it is easy for them to get flustered.
- Co-operative experiment: with 2 people, the process turns into a dialogue, and the pair become critical of the process together.
Raw Text from 2.44 Guide

Human Computer Interaction

What is Human Computer Interaction?
According to Wikipedia, Human-Computer Interaction (HCI) “involves the study, planning, and design of the interaction between people (users) and computers. It is often regarded as the intersection of computer science, behavioral sciences, design and several other fields of study.”

Why do we care about Human Computer Interaction?
Being skillful at HCI has become essential to developing software that is effective and popular. It goes way beyond choosing layouts and colours for an interface; it is largely about the psychology of how people interact with digital devices, which means understanding many issues about how people behave, how they perceive things, and how their cognitive processes work, so that they feel that a system is working to help them and not hinder them. If you ask people whether they have ever been frustrated using a computer system, you’ll probably get a clear message that HCI isn’t always done well.

Human Computer Interaction in the Achievement Standard
Human computer interaction has requirements in the achievement standard for achieved, merit, and excellence. At this level students are using HCI principles to evaluate existing interfaces, not to design new ones. Sensitising students to weaknesses in interfaces, and especially to how the general public react to interfaces (rather than computer specialists), will put them in a much stronger position for designing great interfaces in the future. The standard is based around the widely used concept of “usability heuristics”, which provide a well-established benchmark for identifying weaknesses (and strengths) in interfaces. It helps if the teacher is already sensitive to these issues, and there are some approachable and enjoyable books about interface issues that provide a great background, e.g. Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Rules by Jeff Johnson.
The website useit.com provides a lot of information about usability and heuristics, and Nielsen’s book Usability Engineering is a detailed reference that includes a description of the heuristics in Chapter 5. The classic HCI book (although it isn’t really about computers) is The Design of Everyday Things by Donald A. Norman.

For Achieved, students must: “provide examples from human-computer interfaces that illustrate usability heuristics”
For Merit, students must: “evaluate a given human-computer interface in terms of usability heuristics”
For Excellence, students must: “suggest improvements to a given human-computer interface based on an evaluation in terms of usability heuristics.”

This section of work outlines the process of carrying out a usability evaluation that will enable students to gain merit for human computer interaction, and then extends it to identifying possible improvements to the interface to enable them to gain excellence. The achieved level is covered implicitly by the requirements for merit. In order to carry out a usability evaluation, students will need a basic understanding of human computer interaction and Nielsen’s heuristics, which are covered prior to the usability evaluation.

Additional Resources for Human Computer Interaction
A set of PowerPoint slides accompanies this document, and can be used to introduce examples of the heuristics. A glossary for the whole standard accompanies this document, and includes HCI terms.

The basics of HCI
Some students may have already done some HCI in AS91074 (1.44) the year before, so some of this material will be revision for them. In this case, the material can be covered faster than if it were new, although it is still probably best to revise it in case the students have forgotten important details. The key ideas are covered in the first few slides of the PowerPoint presentation that accompanies this plan (which includes notes on each slide for the teacher).
The main points to get across are:
- The “system” that has to work well is the computer and the human together.
- Many people get frustrated with digital devices. Sometimes they will put up with it because it’s the only option, but in other cases devices and software with good interfaces sell far better, or can be priced higher, because they help the user get their job done.
- The worst person to evaluate an interface is the person who designed it. They know exactly how it should work; but if someone else tries it, you’ll find out how it looks to a typical user. (For this reason a student should not design an interface as a submission for this standard; it would be evidence that they don’t understand HCI evaluation!)
- An interface is used to do a task, so it makes the most sense to identify the tasks that a particular interface is for, and then consider how difficult those tasks are using that interface. The common mistake is to focus on features of an interface, but in the real world the question is whether or not those features can be used to achieve a task from beginning to end.

Nielsen’s Heuristics
There are various sets of heuristics (rules of thumb) for evaluating interfaces, but the set used in this plan is from Jakob Nielsen’s http://useit.com site; this particular set is widely used. The heuristics can be introduced to the class using the PowerPoint presentation that accompanies this plan. Each of the heuristics is given with examples of well-known systems that illustrate a positive or negative use of the heuristic. You or the students may identify your own examples (typically any time an interface frustrates or confuses you) that you can use as more meaningful examples. The idea of the heuristics is that an evaluator can look at an interface and detect errors even before they get a user to try it out.
This approach is only one of many that would be used for a careful evaluation in industry, but it is enough to highlight obvious problems, and it illustrates ideas from HCI well.

Applying Nielsen’s Heuristics to HCI Evaluations

Materials required for this lesson
In this lesson, students will learn how to evaluate the usability of a digital device or piece of computer software using Nielsen’s heuristics as a guideline. Choose a digital device to use for a class demonstration of usability, or a piece of software that can be shown on a data projector. If the chosen device/software is very complex (like a cell phone that is able to do many different things, such as make calls, send texts, play music, take photos, etc.) then constrain the evaluation to a specific component of the system. In the case of a cell phone, this might be “sending text messages with the cell phone” or “playing music with the cell phone”. To make sure students’ reports are personalised, they will need to do their own exercise later, on a different task and/or device to the one covered in class.

Carrying out a usability evaluation with the class
Start by explaining to the students that for the achievement standard they will be carrying out a usability evaluation of a digital device, similar to the demonstration evaluation that follows. Throughout this activity, students should be taking their own notes about the tasks, the interface, and the relevant heuristics. Tell the students that it is very important that they take good notes, as the notes will give them ideas for the usability evaluation they’ll be doing by themselves.

Step 1 of usability evaluation – identifying common tasks
The first thing to do in a usability evaluation is to identify the common tasks that will be carried out with the device/software.
Ask the students something like “What tasks are commonly carried out with this device/software?” Word the question appropriately for the chosen device/software and the component to focus on. As the students come up with ideas, encourage class discussion, and make a list on the whiteboard or data projector of the tasks the students think of. Each task on the list should be very simple; if a task is complex, then it can probably be broken down into a few simpler tasks. Students may take a while to think of all the tasks done on a device (for example, a TV remote is obviously used to change channels and volume, but an important task is switching the TV on, and in some cases that can be complicated because it’s not immediately obvious whether the TV has responded, or perhaps the button to press isn’t obvious). A task that involves 3 button presses might seem trivial, but if there are 5 buttons to choose from at each step, there are 5 x 5 x 5 possible outcomes, and only one of the 125 outcomes is the desired one! Exploring the right and wrong sequences that a user can go through can become quite detailed even for small tasks.

As an example, if a cell phone was the chosen device, and the focus was on text messaging, the question you might ask the students is “What tasks are commonly carried out when using a cell phone for sending text messages?” Below is a list of the kinds of tasks students might come up with.
- Open a new blank message
- Enter a few words into the message
- Enter a number, such as “23”
- Correct an error
- Select a person to send the message to
- Select more than one person to send a message to
- Send the completed message

Just saying “Send a text message to a friend” is too general, and would be too broad to evaluate. For some devices, significant tasks might include switching the device on or opening a file. These may seem easy, but users can often struggle even with such simple tasks if the interface is confusing.
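The 5 x 5 x 5 arithmetic above can be checked with a short sketch. The button labels and the “correct” sequence here are invented purely for illustration; the point is only the count of possible outcomes:

```python
from itertools import product

# A hypothetical device with 5 buttons; the task takes exactly 3 presses.
buttons = ["menu", "up", "down", "ok", "back"]

# Every possible 3-press sequence: 5 x 5 x 5 = 125 outcomes.
sequences = list(product(buttons, repeat=3))
print(len(sequences))  # 125

# Suppose only one (made-up) sequence achieves the task.
correct = ("menu", "down", "ok")
wrong = [s for s in sequences if s != correct]
print(len(wrong))  # 124 ways for the user to go wrong
```

With 4 presses the count jumps to 625, which is why even short tasks can be surprisingly easy to get lost in.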
Step 2 of usability evaluation – applying Nielsen’s heuristics to common tasks
The next step in the usability evaluation is to carry out each of the tasks identified in the previous step, in order to identify usability problems with the interface. To make this easier, start by reordering the tasks into the order in which a person using the device might carry them out (the bullet points above are already in such an order). For each of the tasks, choose a volunteer in the class to try to carry out the task on the device or software. It can be easier to find interface issues if each volunteer is not familiar with the specific model or software that the device uses. Get the volunteer to carry out the task in the way that they think is the right way to go about it. They should say what they are trying to do and what they are thinking about the device, so that the class can hear. This includes saying if they are confused, or if they are guessing about what to do, and why they are trying what they are trying. It’s important to treat any difficulty the volunteer has with the interface as a fault in the interface, not in the volunteer.

Each time one of the following happens, ask the class to explain it. Both good and bad points of the interface should be identified. Remind the students that they need to be taking notes about this.
- The volunteer was unsure what to do, but after a bit of trial and error figured it out. Discuss what the interface could have done to prevent confusion. Which heuristic(s) are relevant?
- The volunteer made a mistake, did something irrelevant to the task, and was surprised. Discuss what might have caused the volunteer to do this. What assumptions was the volunteer making? Which heuristic(s) was the volunteer assuming were being followed? How did the device violate those heuristic(s)?
- The volunteer says they are doing something because it is obviously the correct thing to do.
An example of this would be the volunteer saying “I’m selecting the envelope icon as it is normally associated with sending a message”. Discuss what heuristic(s) the interface was following that made this easy for the volunteer. In this example, the heuristic would probably be “Consistency and standards”, as they probably knew this from other cellphones they have used before, or “Match between system and the real world”, because physical envelopes are used to send messages. The volunteers should keep trying to complete their task, although with a bad interface you may harvest plenty of examples before they complete it. If identifying the relevant heuristic is difficult, go slowly through each of the heuristics, trying to relate them to the problem, until a suitable one is identified. Often more than one heuristic will be relevant.

Advice for teachers
Make sure the students understand that if a volunteer finds a task tricky with the device, it is because of the device and not the volunteer. It is best to choose volunteers who are fairly well respected in the class, unlikely to be hassled by their classmates for making mistakes, and confident talking in front of the class, as they are more likely to explain why they’re doing what they’re doing, which is an important part of the activity.

Once all the tasks have been completed, several usability problems with the interface should have been identified. Choose two or three of the identified problems (ones for which a heuristic was clearly identified), and ask the students how the interface could be changed so that it no longer violates that heuristic. Encourage class discussion of different ideas. Unfortunately, fixing one problem can introduce another, so once a few solutions have been identified, go through each of the heuristics and consider whether the solution could violate any of them. A usability evaluation such as the one above is exactly what students need to do for the achievement standard.
If they choose their own device to do a usability evaluation on, and follow the same procedure as above, they should sufficiently fulfil the requirements for Excellence in the Human Computer Interaction section of the achievement standard. Having each student choose a different device, or at least a different task for each device, will personalise their report. If a student isn’t familiar with a device they can do the evaluation on their own; otherwise they could do it in pairs, observing the difficulties that their partner has. Having a parent or grandparent try the interface can be a rich source of examples, although it requires a high level of maturity and patience from the student.

Common pitfalls in usability evaluations
There are several common mistakes made when carrying out usability evaluations, or when doing the human computer interaction section of the achievement standard, that should be avoided.
- Focussing on features instead of tasks – a camera might have a clever special effects feature, but if users aren’t reminded to save the results, the overall task of adding an effect to a photo might be frustrating.
- Blaming the user rather than the device – if a reasonably sensible adult makes mistakes with an interface, it’s probably the interface that’s at fault, and that means there’s a market for an improved device.
- Evaluating an interface that you have designed yourself – the designer knows exactly how their system should work. A different user’s first action may well be to do something the designer didn’t anticipate!
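The note-taking that runs through the evaluation above can be kept in a simple structure, which makes it easy to pull out the problems for the report afterwards. This is only an illustrative sketch: the tasks and notes are invented, and only the heuristic names are taken from Nielsen’s published set.

```python
# Minimal sketch of recording heuristic-evaluation findings.
# The tasks and notes below are made-up examples.
findings = []

def record(task, heuristic, note, positive=False):
    """Log one observation about the interface against a heuristic."""
    findings.append({"task": task, "heuristic": heuristic,
                     "note": note, "positive": positive})

record("Send the completed message", "Visibility of system status",
       "No feedback to show whether the message was actually sent")
record("Open a new blank message", "Match between system and the real world",
       "Envelope icon matched the volunteer's expectation", positive=True)

# Problems (negative findings) are what the evaluation report focuses on;
# positive findings show the heuristics being followed.
problems = [f for f in findings if not f["positive"]]
print(len(problems))  # 1 problem recorded so far
```

Even a flat list like this is enough to group findings by heuristic or by task when writing up the evaluation.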