Human Computer Interaction

Slides for the "Human Computer Interaction" module of the second-year course on Software Engineering. Based mostly on "Human Computer Interaction" by Dix, Finlay, Abowd, and Beale.
by Pedro Gonnet, 11 September 2013


Human Computer Interaction "Human Computer Interaction", or HCI, involves the design, implementation, and evaluation of interactive computing systems for human use, and with the study of major phenomena surrounding them. Introduction Foundations Design practice Computer Human Interaction The fundamental components of an interactive system are: The human user, the computer system itself, and the nature of the interactive process. Vision Touch Hearing Memory Reasoning and problem solving Input/Output Storage Processing Involves knowledge of psychology and cognitive science, sociology, computer science and engineering, graphical design, and many more. HCI is an interdisciplinary science which also involves artistic aspects which are difficult to define and/or quantify. The goal of this module is to provide an overview of the basic concepts and methodologies involved in the design of good user interfaces. EC Directive 90/270/EEC defines a good user interface as one that: Is suitable for the task.
Is easy to use and adaptable to the user's knowledge and experience.
Provides feedback on performance.
Displays information in a format and at a pace that is adapted to the user.
Conforms to the 'principles of software ergonomics'.

Understanding the basic capabilities and limitations of each component is essential for designing good interfaces. This requires stepping out of computer science, and into psychology and the cognitive sciences, for a short while. This module is based largely on "Human Computer Interaction", second edition, by Dix, Finlay, Abowd, and Beale.

Vision

Human vision is a highly complex activity with a range of physical and perceptual limitations. It is the primary source of information for the average person. Vision happens in two stages: the physical reception of an image, and the processing of the received stimulus. Both stages have their own set of limitations which influence what can be visually perceived.

The human eye

The eye transforms light into electrical signals which are processed by the brain. ~6 million "cones", concentrated in the centre of the fovea, react to intensity and wavelength. ~120 million "rods" around the fovea react only to intensity, but are more sensitive. Specialized nerve cells on the retina react directly to patterns (near the fovea) and movement (at the periphery).

Visual perception

"Rods" are the receptors responsible for perceiving brightness. Although ~130 million cells absorb light, only ~1.2 million transmit signals to the brain, hence a lot of pre-processing occurs in the eye itself. This pre-processing compensates for movement and for changes in colour, brightness, and size. It is also the cause of several involuntary reflexes which control what we shift our visual attention to, e.g. movement at the periphery. Since there are fewer rods in the fovea, objects in low lighting can be seen less easily when fixated upon.

A person with normal vision can detect a single line of 0.5 arc seconds width, and spaces between lines as small as 30-60 arc seconds; at a viewing distance of 1 m, 30 arc seconds corresponds to roughly 0.15 mm. Visual acuity increases with luminance, yet so does flicker, especially in peripheral vision.

Reading

During reading, the eye makes jerky movements called "saccades", backwards and forwards across the text, followed by fixations. Adults can read ~250 words per minute. Words are not read character by character; instead, they are often recognized by shape.

Colour

Colour is perceived by the "cones" in the centre of the fovea. Hue: the spectral wavelength of the light, of which ~150 can be discriminated. Intensity: the brightness of the colour, the amount of light. Saturation: the amount of whiteness/blackness in the colour. By varying these three values, ~7 million different colours can be perceived, yet usually only ~10 can be identified.
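A minimal illustrative sketch of these three dimensions, using only the Python standard library: colorsys models intensity as the HSV "value" component, and the six sample hues below are arbitrary.

    import colorsys

    def perceived_colour(hue, saturation, intensity):
        """Convert a hue/saturation/intensity triple (each in [0, 1]) to RGB."""
        r, g, b = colorsys.hsv_to_rgb(hue, saturation, intensity)
        return tuple(round(c * 255) for c in (r, g, b))

    # Vary only the hue while holding saturation and intensity fixed:
    for step in range(6):
        print(perceived_colour(step / 6.0, 1.0, 1.0))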
Context

What we see in an image is strongly influenced by the context we put it into. This is the basis of several well-known optical illusions. The same holds for reading: Ofetn, it is engouh to see the lettres in a word to recgonize it. It is easy to miss miss duplications. USING CAPS DESTROYS MOST OF THE VISUAL CUES WE USE TO RECOGNIZE WORDS.

In summary

Knowing how human vision works is indispensable for designing "good" user interfaces. Visual acuity is highest at the point of focus. Colour changes at the periphery will not be noticed. Movement at the periphery can catch attention, or distract. Reading is not a straightforward process: it is robust in many ways, but can still be disrupted quite easily. Context is very important for what we will actually perceive.

Hearing

Hearing is usually underestimated and considered only second to vision. Humans with normal hearing can perceive a wide range of frequencies (20-15000 Hz), as well as determine the source of a sound. Sounds can be described in terms of pitch (frequency), loudness (amplitude), and timbre (waveform). As with vision, a lot of pre-processing occurs before we perceive a sound, e.g. filtering background noise, or perceiving distance. Despite its potential, sound is only rarely considered in interface design, with a few notable exceptions, e.g. Bitching Betty/Barking Bob/Sonya, or paragliding variometers.

Touch

Touch, or haptic perception, is an important means of feedback and provides information such as temperature, pressure, or weight. As opposed to vision and hearing, touch is not localized, i.e. it can be felt all over the body, albeit with varying precision in different places. There are three types of receptors: thermoreceptors respond to temperature, nociceptors respond to intense pressure, heat, and pain, and mechanoreceptors respond to pressure. There are two types of mechanoreceptors, slowly and rapidly adapting, which react to absolute and changing pressure respectively. Kinesthesis, due to receptors in the joints, provides information on the position of the body and limbs.

Memory

Memory comes in three forms: sensory memory, working memory, and long-term memory.

Sensory memory acts as a buffer for stimuli received through the senses, i.e. iconic, echoic, and haptic memory. It can be perceived as a persistence of the image/sound/touch after the stimulus has been removed, e.g. residual images after a flash, or acoustic play-back of sounds. Sensory memory is very brief and lasts on the order of 0.5 seconds. Information can be passed on to short-term or working memory by attention, i.e. by filtering for the stimuli of current interest, and is otherwise lost and/or overwritten. Attention can usually be focused selectively, i.e. we choose to attend to one thing rather than another.

Short-term memory, or working memory, can be considered a scratch-pad for the temporary recall of information, for example to store intermediate values when performing a calculation, or as a buffer to store an entire sentence while reading it. The average person can remember 7 +/- 2 chunks of information, e.g. digits or words. Longer sequences can be more easily remembered by partitioning them into larger chunks, e.g. telephone numbers (see the sketch below). Closure, e.g. completing a task, can commit chunks to long-term memory, but also flushes short-term memories.
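A minimal sketch of chunking: the same ten digits are much easier to hold in working memory as three chunks than as ten separate items. The grouping sizes and the sample number are arbitrary.

    def chunk(digits, sizes):
        """Split a digit string into chunks of the given sizes."""
        chunks, start = [], 0
        for size in sizes:
            chunks.append(digits[start:start + size])
            start += size
        return chunks

    number = "0794563421"
    print(list(number))              # ten items: beyond the 7 +/- 2 limit
    print(chunk(number, [3, 3, 4]))  # three chunks: well within it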
Long-term memory is the long-term storage of everything we "know". Recall is slower than from working memory, but the capacity is practically unlimited and there is very little/slow decay. Episodic memory stores events and experiences in a serial form. Semantic memory is a structured record of facts, concepts, and acquired skills, which can be derived from episodic memory. It is structured in a way that allows access to information, representation of relationships, and inference, e.g. as a semantic network with links of varying length and/or strength. Information is stored through rehearsal of short-term memories, and memories are made more persistent by linking them in more than one way. Memories are retrieved via recall and/or recognition.

Reasoning

Reasoning is the process by which we use the knowledge we have to draw conclusions or infer something new. Deductive reasoning derives logical conclusions from a given set of premises, e.g. if A -> B, and B -> C, then A -> C; it is often falsely short-circuited by assumed knowledge. Inductive reasoning generalizes or extrapolates from things we know to things we have not actually seen, e.g. all dogs have four legs. Abductive reasoning infers a reason or explanation for an observation, e.g. if A -> B, and we observe B, then we might assume A.

Problem solving

Whereas reasoning infers new information from what we already know, problem solving is the process of finding a solution to an unfamiliar task, e.g. adapting the information we have to deal with new situations. Gestalt theory: problem solving is both reproductive (applying known solutions) and productive (deriving new solutions), e.g. Maier's pendulum problem. Problem space theory: the problem is modeled as states and transitions between them, and solved using heuristics to get from the initial state to the solution state; the problem as a whole may be new, but the individual steps are not.
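Problem space theory lends itself to a small sketch: states, transitions (operators), and a heuristic guiding the search from the initial state to the goal. The toy problem below (reach 14 from 1 using "+3" and "*2") and its distance heuristic are invented purely for illustration.

    import heapq

    def solve(initial, goal, operators, heuristic):
        """Best-first search through the problem space."""
        frontier = [(heuristic(initial), initial, [])]
        visited = set()
        while frontier:
            _, state, path = heapq.heappop(frontier)
            if state == goal:
                return path
            if state in visited or state > 2 * goal:
                continue  # skip revisited and hopeless states
            visited.add(state)
            for name, op in operators:
                new = op(state)
                heapq.heappush(frontier, (heuristic(new), new, path + [name]))
        return None

    operators = [("+3", lambda s: s + 3), ("*2", lambda s: s * 2)]
    print(solve(1, 14, operators, lambda s: abs(14 - s)))  # -> an operator sequence from 1 to 14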
Analogical mapping: maps knowledge relating to a similar, known domain onto the new problem, e.g. the Gick and Holyoak experiment: A doctor is treating a malignant tumor. In order to destroy it she needs to blast it with high-intensity rays. However, these will also destroy the healthy tissue surrounding the tumor. If she lessens the rays' intensity, the tumor will remain. How does she destroy the tumor? Compare: A general is attacking a fortress. He can't send all his men in together, as the roads are mined to explode if a large number of men cross them. He therefore splits his men into small groups and sends them in on separate roads.

Skill acquisition

When a problem, or set of problems, is solved repeatedly, we acquire skill in that domain area, which differentiates novices from experts. Skill acquisition is a hierarchical process in which rules are refined and/or abstracted, e.g. Anderson's Adaptive Character of Thought (ACT) model:

Beginner: Uses general-purpose rules which interpret facts about a problem; slow and demanding on memory access.
Learner: Develops new rules specific to the task.
Expert: The rules are tuned to speed up performance.

Once skill has been acquired, it is often difficult to break down the individual steps in a task, e.g. playing chess, or typing your password.

Errors

Despite the rather impressive human capability for storing and processing information, things do go wrong quite often. Most problems are due to the same mechanisms that otherwise make things work so well:

False deductions/inductions/abductions due to interference from assumed knowledge and short-cuts.
Slips: If a pattern of behaviour has become automatic and we change some aspect of it, the more familiar pattern may break through.
Incorrectly inferred causation or rules may lead to incorrect analogies.

Errors can be avoided by designing processes such that they match our way of reasoning, i.e. such that they implicitly avoid the above problems.

Input and output devices

Input devices cover text entry, pointing, and other modes of input; the main output device is the visual display.

Keyboards

Keyboards are the most commonly used input devices, both for entering textual data and for commands. The QWERTY layout, first used in the 1870s, is the de-facto standard in English-speaking countries. It is the result of mechanical and other constraints, and is neither the most ergonomic nor the most economical layout possible, but it is still used by force of habit/training. Some early hand-held devices used alphabetic keyboards to appeal to users with no typing experience. The DVORAK keyboard was designed to improve typing speed by alternating left/right-hand keys and placing the most common letters in the middle row. Users can train to become fluent with any keyboard; DVORAK offers a speed improvement of only 10-15% over QWERTY and alphabetic layouts.

Other keyed text entry

Chorded keyboards, which preceded regular keyboards, use key combinations to compose individual characters or groups of characters. They were originally used for keying in telegraph messages, and stenotypes are still commonly used for recording court proceedings or real-time closed captioning. They have several advantages: extremely compact, easy to learn, and fast, e.g. ~5x faster than a regular keyboard, or ~225 wpm / 19 cps. Chorded keyboards are also available for hand-held devices, but have not gained much popularity.
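A minimal sketch of chord-based entry: each character is produced by a set of keys pressed simultaneously, rather than by a single key. The three-key chord table is invented for illustration; real chorded keyboards such as stenotypes use far richer layouts.

    CHORDS = {
        frozenset("a"): "a",
        frozenset("b"): "b",
        frozenset("ab"): "c",
        frozenset("ac"): "d",
        frozenset("abc"): "e",
    }

    def decode(chord_sequence):
        """Translate a sequence of simultaneous key-press sets into text."""
        return "".join(CHORDS.get(frozenset(keys), "?") for keys in chord_sequence)

    print(decode(["ab", "a", "abc"]))  # -> "cae"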
Handwriting recognition

Handwriting recognition was popular with early hand-held devices on which the screen was not sufficiently large/sensitive to provide a regular keyboard. The most successful approaches are based on recognizing character strokes, as opposed to recognizing the finished character; the most successful of these was the Graffiti alphabet, originally developed for PalmOS, in which the system recognizes pre-defined single-stroke characters. The difficulties lie in recognizing/adapting to different handwriting styles, and in context-dependent ways of writing the same character. Even if perfect character recognition were possible, it would still be much slower than keyed input, even for inexperienced users.

Speech recognition

Research in speech recognition has been ongoing for several decades and, despite a potential market worth billions, has yet to produce any really viable results. Major problems include accents, different intonations, and background noise. Despite successful use in some applications, e.g. Siri for iOS, recognition rates are not sufficient for larger-scale text entry. Still, sometimes speech recognition is the only alternative, e.g. for disabled users, or in environments where no other means of entry is available.

Pointing

Most computer interfaces rely on a pointing device, e.g. a graphical cursor, through which the user can interact with a graphical user interface by clicking, selecting, and dragging. The most common way of controlling the cursor is via a mouse with one or more buttons which map to different functions, e.g. for copying and pasting text on X-windows systems. Early mouse-based systems used combinations of the three mouse buttons to produce a wide range of actions, e.g. on the Oberon system. More recently, touch pads are used to move the cursor and/or emulate the actions of the mouse buttons. Although the use of the cursor may seem counter-intuitive at first, the required hand-eye coordination skills can be quickly acquired.

Touchscreen devices have mostly done away with the visual cursor and instead rely on direct physical interaction with the user interface elements, e.g. using your fingers to push buttons or scroll text. This form of interaction, although much more natural, has some disadvantages, e.g. precision, smudges, and visually blocking the user interface while using it. Interaction with touchscreens and touchpads need not always have physical analogs, e.g. gesture recognition for zooming, panning, and scrolling, as sketched below. As with the mouse and keyboard, although the interactions may not be intuitive at first, they quickly become part of our motor reflexes.
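Gesture recognition need not be complicated; here is a minimal sketch of pinch-to-zoom, where the zoom factor is the ratio of the current to the initial distance between two touch points (the coordinates are hypothetical sample data):

    import math

    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def pinch_zoom_factor(start_touches, current_touches):
        """Scale factor: >1 if the fingers move apart, <1 if they move together."""
        return distance(*current_touches) / distance(*start_touches)

    start = ((100, 200), (140, 200))      # fingers 40 px apart
    now = ((80, 200), (160, 200))         # fingers 80 px apart
    print(pinch_zoom_factor(start, now))  # -> 2.0, i.e. zoom in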
Output devices

In almost all interactive computing interfaces, output occurs principally via a rasterized display. A decade ago, display types, quality, and ergonomics were an important issue, but current technology has made that point moot. Many low-cost embedded devices, e.g. washing machines, now also allow for graphical displays. Some variations on this scheme exist, e.g. head-up or augmented-reality displays which overlay information on a real-world view. 3D displays can provide a more realistic working environment, but are much more complex than flat displays, and our perception is easily put off by minor imperfections.

The interaction

The interaction can be described as the communication between a human user and a computer. The difficulty comes from both having drastically differing ways of communicating and processing information. Interaction models are used to describe and evaluate these interactions qualitatively; interaction styles describe common modes of human-computer interaction.

The interactions between a user and a system can be modeled as a form of communication between the two. The user communicates via a task language, the computer via a core language, and the user interface works as a translator between the two languages. The interaction can be described in terms of domain-specific goals. Intentions are specific actions required to meet those goals, and are realized by performing specific tasks. E.g. when using CAD software to draw a house: the domain is drawing, the tasks are shape manipulations, and the goal is to produce something that looks like a house.

The execution-evaluation cycle

Norman's execution-evaluation cycle, probably the most influential and intuitive human-computer interaction model, consists of seven stages:

Establishing the goal.
Forming the intention.
Specifying the action sequence.
Executing the action.
Perceiving the system state.
Interpreting the system state.
Evaluating the system state with respect to the goals and intention.

Each stage is an activity of the user alone, i.e. the cycle describes the interaction from a strict user perspective. The first four stages make up the execution phase, the last three the evaluation phase. Problems occur with the gulfs of execution and evaluation, e.g. when the user and the system do not use the same terms to describe the domain and goals.

The interaction framework

The same cycle as before, yet now including the system. Each of the four components uses its own language: the System (core language), the User (task language), and the Input and Output, which together form the interface. Each step of the cycle is a translation: the user's task formulation is articulated into the input language, performed on the system core, presented at the output, and observed by the user. [Figure: the interaction framework, with the four translations labeled articulation, performance, presentation, and observation.]

The closer the input and output languages are to the task language, the easier, or more straightforward, the translation becomes. The input language must be able to map to all functions of the system. E.g. moving a file to the trash to delete it is a simple translation, but batch-renaming files on such a system is not that easy. No single interface is good for all tasks, i.e. the domain of the interaction is important in assessing a task!
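The four translations can be sketched as a chain of functions; everything below is a hypothetical placeholder (in a real system, each translation is a non-trivial, potentially lossy mapping between languages):

    def articulation(goal):        # task language -> input language
        return {"command": "delete", "target": goal}

    def performance(inp):          # input language -> core language (execution)
        return {"core_state": inp["target"] + " removed"}

    def presentation(core):        # core language -> output language
        return "Trash contains: " + core["core_state"]

    def observation(out):          # output language -> task language (evaluation)
        return "goal met" if "removed" in out else "goal not met"

    # One pass around the execution-evaluation cycle:
    print(observation(presentation(performance(articulation("old report")))))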
Interaction styles

Command line: The first interactive dialog style, still widely used on UNIX-based systems. Provides a powerful means of access to the system's functionality, but usually only for experienced users. Flexibility in combining commands and options comes at the cost of ease-of-use. The system provides no cues or intuition as to which command is needed, i.e. recall vs. recognition.

Menus: In menu-driven interfaces, the user is presented with a set of options on how to proceed; selection is via keypad, pointer, touch, etc. The ordering is hierarchical, and individual items are usually grouped in a meaningful and/or logical way. All possible interactions with the system can be presented, i.e. recognition vs. recall. Slow or tedious for experienced users, but very suitable for first-time users, e.g. ATMs.

Natural language: Input via keyboard or voice recognition. The ambiguity of natural language makes it difficult for a machine to understand, both in terms of syntax and structure, e.g. "The man hit the boy with the stick". Individual words may also be ambiguous, e.g. "set", and mapping literal instructions to concrete tasks can be inconsistent. Due to these limitations, most systems attempt to recognize specific commands or syntactic structures, which defeats the purpose, as it requires the same recall skills as command-line systems.

Query dialogs and forms: Similar to filling out forms on paper; usually used when data, and not commands, are required from the user. A somewhat limited form of interaction, yet appropriate for both expert and novice users.

WIMP interfaces: Windows, Icons, Menus, Pointer: the most commonly used paradigm for interactive desktop computing. Evolved from the Xerox Alto; used in different flavours by X-windows, Apple, and Microsoft. Emulates, to a certain extent, a real desktop or physical environment. Easy to use for experts and novices alike, but not always the best paradigm for non-desktop-like applications. Although some software designers are trying to move away from it, most developers are locked in by habit and availability.

Point-and-click interfaces: A newer paradigm, commonly used in web browsers and touchscreen-based applications, in which user input is reduced to a single form of interaction, i.e. a visual or physical cursor. Interaction may also involve gestures, e.g. multi-touch controls. Simplified, intuitive interaction, suitable for many types of interactive applications, but not so for text entry.

Interaction paradigms, interaction principles, and design rules

There are two main questions when designing a user interface: How can an interactive system be designed to ensure its usability? And how can the usability of an interactive system be demonstrated or measured? The former is described in terms of paradigms, the latter in terms of principles; design rules provide direction for the actual design of the system.

Interaction paradigms

Interaction paradigms are similar to design patterns in that they embody individual tried-and-proven concepts in human-computer interaction. They are usually evolutionary in nature, i.e. most paradigms build on earlier paradigms, and are driven mainly by advances in computer performance and specific features. The reasons for the success/failure of different paradigms are difficult to evaluate.

Time-sharing: The first computers were single-user/single-program machines that ran in batch mode, i.e. non-interactively, as it made little sense to waste cycles waiting for user input.

Video display units: Until the late 1950s, computer output was restricted to lights, punchcards, and electronic typewriters.

Programming toolkits: An important step for both software engineering and user interface design was the introduction of re-usable software libraries, or programming toolkits.

Personal computing: In this context, personal computing does not just refer to the concept of personal computer ownership, but also to that of computing directed at the masses, i.e. the not necessarily computer literate.

WIMP interfaces: Windows, Icons, Menus, and Pointer based interfaces introduced the concept of non-linear workflows.

The metaphor: LOGO had the turtle, the Alto had the office desktop, Apple introduced the trash can: the use of metaphors to make interactions more intuitive is an important concept in user interface design.

Direct manipulation: Direct manipulation of data and/or objects depends on rapid feedback, which was not always available on early computers.

Language vs. action: The exact opposite of the direct manipulation paradigm, i.e. using commands instead of actions.

Hypertext: First introduced, conceptually at least, by Vannevar Bush in 1945 for the Memex system.
Information, until then mostly considered something static, is more useful and accessible if interlinked and made dynamic, e.g. active text.
Several projects from the 1970s onwards provided the basic functionality, preceding Tim Berners-Lee's HTML in 1990.
The rest is history: hypertext is now completely commonplace when presenting textual information.

Multi-modality: A multi-modal interactive system is a system that relies on the use of multiple human communication channels, e.g. visual and haptic for keyboard and mouse-based systems.

Cooperative work: Human-computer-human interaction, i.e. the system mediates interactions between two or more humans working towards a common goal.

World-Wide Web: A merger of several other paradigms, paired with the possibility for anyone to share and link data.

Agent-based systems: Agent systems are based on real-world agents, e.g. travel agents, who are given a rough description of a task by the user, usually formulated in abstract terms, and who query several other systems and perform some processing to complete it.

Ubiquitous computing: Ubiquitous computing is the concept that computing, or computing devices, are now everywhere, not just on the desktop.
Cheap complex logic means that computers can be embedded in virtually anything, allowing for more complex interactions with most devices.
Different types of devices imply drastically different approaches to modeling human-computer interactions.
The interaction types are mostly driven by the environment in which the devices are used, which can vary significantly.

In more detail: The introduction of time-sharing systems in the 1960s changed the batch-mode model: the time spent waiting for one user's input could be used by other users' computations. This was a shift from planned activity, i.e. batch processes, to truly interactive use. Time-sharing faded out in the 1980s as computing power became cheap enough to waste, surviving in time-sharing between tasks, on "login nodes", and conceptually in Cloud Computing.

The Semi-Automatic Ground Environment (SAGE) was the first computer system to use cathode ray tubes for user input/output. In 1963, Ivan Sutherland, a PhD student at MIT, introduced Sketchpad, a CAD software that used an oscilloscope for both input and output. Sketchpad not only introduced the concept of a Graphical User Interface (GUI), but also several concepts of OOP.

The toolkit concept was introduced in the 1960s by Douglas Engelbart's oN-Line System (NLS), a complex system which used a CRT, keyboard, chorded keypad, and mouse. Toolkits make it easier to create complex user interfaces, and make user interfaces uniform.

LOGO was created in 1967 by Daniel Bobrow, Wally Feurzeig, Seymour Papert, and Cynthia Solomon as a constructivist teaching aid, e.g. for problem-based learning, and is remembered mainly for its turtle-based graphics. Alan Kay put together the ideas behind NLS and LOGO in his work at the Xerox Palo Alto Research Center (PARC), where he developed Smalltalk and, conceptually, the Dynabook.

WIMP interfaces were only possible thanks to personal computers, i.e. a machine dedicated to a single user. The earliest example is the Xerox Alto/Star, developed in 1973 and commercially introduced in 1981. It was followed in 1983 by the Apple Lisa, developed as an office system, which introduced menus and window controls, and quickly thereafter by X-windows in 1984, and the Atari ST, Amiga, and finally Microsoft Windows in 1985.

Not all metaphors are good, e.g. using a typewriter metaphor for word-processing software. Metaphors often become outdated when new technology or forms of user interaction become available, and they also suffer from problems of cultural bias.

The term "direct manipulation" was coined in 1982 by Ben Shneiderman: basically actions over commands, which removes the problems associated with incorrect/illegal commands. It has five basic features:

Visibility of the objects of interest.
Incremental action at the interface with rapid feedback on all actions.
Reversibility of all actions, so that users are encouraged to explore without severe penalties.
Syntactic correctness of all actions, so that every user action is a legal operation.
Replacement of complex command languages with actions.

However, not all commands can be represented conveniently, or at all, by intuitive actions. User-level scripting languages provide a simplified programmatic way of interacting with the system. This may sound like a regression to the human-computer interactions of the earliest days, but it is mainly concerned with empowering non-expert users to perform complex tasks via commands.

Most human-human interaction is multi-modal, and system designers have tried to mimic this to a certain extent. There is lots of interesting research, e.g. on three-dimensional virtual environments, but not much yet in terms of consumer products.

Cooperative work is generally referred to as Computer-Supported Cooperative Work (CSCW), which comes in synchronous and asynchronous flavours, e.g. collaborative text editing and e-mail respectively. Several products, such as IBM's Lotus Notes and Domino, are specifically designed to increase office productivity and cross-site collaboration.

Several factors contributed to the Web's exponential growth, e.g. the spread of PCs, the availability of the internet, the availability of practical user interfaces, a critical mass of users in academia, etc. It enabled whole new types of services that can be provided via a computer, and many computers are now seen, or marketed, mainly as devices for accessing/using the web.

Agents often try to learn from their users, whether the latter like it or not, using simple logic-based rules. Several forms are commonly in use, e.g. flight booking sites, news aggregators, direct marketing/advertising, Clippy.

Interaction principles

As opposed to paradigms, usability or interaction principles describe general concepts that are not based on specific implementations.

Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

System/real-world match: The system should speak the user's language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue.

Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing.

Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place.

Recognition over recall: Minimize the user's memory load by making objects, actions, and options visible.

Flexibility and efficiency of use: Accelerators, unseen by the novice user, may often speed up the interaction for the expert user, such that the system can cater to both inexperienced and experienced users.

Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed.

Error recognition, diagnosis and recovery: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation.

Summary

Nielsen's ten principles cover several aspects of user interaction and may even overlap in certain areas.
There are many catalogs and hierarchies of principles, but the most commonly used/referenced are Nielsen's usability heuristics, originally published in 1994. Principles are not based on past examples, but on theories and/or results from psychology and the cognitive sciences. Principles apply to all aspects of user interfaces and interactions, e.g. they could equally be applied to cars or cooking recipes. In more detail:

Visibility of system status: Allows users to evaluate the internal state of the system by means of its perceivable presentation at the interface; important for the execution-evaluation cycle interaction model. Communication should be unambiguous. Software: a status bar for the current activity, a progress bar for long tasks to keep the user informed/assured that something is going on. Cars: speedometer, display and signal lights/sounds, and engine noise. Cooking recipe: visible indications of progress, e.g. cook until brown.

System/real-world match: Guessability of the system; intuition to guide novice users. Software: buttons that can be pressed actually look like buttons, output is in plain language. Cars: the standard indicator for lights is a pictogram of a light. Recipe: descriptive terms, e.g. regarding colour or consistency, to describe different states of cookedness.

User control and freedom: Encourages users to try things out intuitively without having to fear an irreversible error; the user feels more comfortable with, and trusting of, the user interface. Software: Undo/Back buttons, now commonplace in most interfaces. Cars: the reverse gear. Cooking: add water, or reduce, to achieve the desired consistency in sauces and soups.

Consistency and standards: Users should not have to wonder if different words or symbols mean the same or different things. Consistency is relative, and subject to contextual differences. Software: consistent use of platform-specific GUIs and conventions, so that users do not have to learn a new interaction form for each piece of software. Cars: stick-shift gears are always in the same order. Cooking: cup, tablespoon, and teaspoon are actually standard, defined measures.

Error prevention: Either eliminate error-prone conditions, or check for them and present users with a confirmation option before they commit to the action. Software: smaller buttons or links for less commonly used options; only present available options. Cars: warnings for pre-error states, e.g. empty tank, seatbelt. Cooking: preemptive warnings for the most common mistakes.

Recognition over recall: The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate. Overlaps with several previous principles, and ties in nicely with mental models. Software: tooltips, Google keywords, the new Microsoft Office menus.

Flexibility and efficiency of use: Allow users to tailor frequent actions; multi-modality can play an important role. Software: menu icons and accelerator keys. Cars: automatic lights and/or window wipers with an override function. Cooking: use of both specific terms and detailed instructions for standard procedures.

Aesthetic and minimalist design: Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility. Users should be able to find the information they need as quickly and painlessly as possible. Software: the Google vs. Altavista search engines. Cars: dashboard design. Cooking: Japanese food.

Error recognition, diagnosis and recovery: The user should be able to learn from his/her errors in the least frustrating way possible. Software: "Did you..." error messages. Cars: ABS, airbag. Cooking: how to fix a broken Hollandaise sauce.
Help and documentation: Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large. Software: a context-sensitive help function, available in most interfaces. Cars: unfortunately no example. Cooking: recipes are, by definition, a form of help and documentation.

In summary, the principles can be used both as a framework/checklist for design, as well as to evaluate user interfaces. They are based on humans, and not on existing software or hardware, and should apply to any type of future system.
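A minimal sketch of such a checklist-based evaluation: walk through an interface and record a severity for each heuristic that is violated. The severity scale and the sample findings are invented for illustration.

    HEURISTICS = [
        "Visibility of system status",
        "System/real-world match",
        "User control and freedom",
        "Consistency and standards",
        "Error prevention",
        "Recognition over recall",
        "Flexibility and efficiency of use",
        "Aesthetic and minimalist design",
        "Error recognition, diagnosis and recovery",
        "Help and documentation",
    ]

    def report(findings):
        """Print each violated heuristic with its severity (0 = none, 4 = severe)."""
        for name in HEURISTICS:
            if name in findings:
                print("[severity %d] %s" % (findings[name], name))

    report({"Error prevention": 3, "Help and documentation": 1})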
Design rules

Design rules provide designers with the ability to determine the usability consequences of their design decisions, i.e. they are provided to increase usability. Design rules can be classified along two dimensions: generality and authority. These rules are supported by psychological, cognitive-science, ergonomic, sociological, economical, and computational theories, but still often lack empirical evidence; design rules are the practical application of theories in these areas. Since designers often lack a background in all of these areas, rules provide a basis for some design decisions. Many rules overlap and/or conflict, and can thus not all be applied. The more general a rule is, the more likely it is to conflict with other rules, and the more theory we need to understand to apply it correctly. The more authoritative a rule is, the less specific knowledge is needed to apply it.

Standards

Standards, which have a long history in other areas, are usually set by national/international bodies to ensure compliance with a set of rules by a large community. Hardware standards are usually driven by physiological reasons, which are well understood; this is good, as hardware is expensive to change/adapt to new rules. Software standards are based on psychology or cognitive science, which is less well formed and continuously evolving, but that is acceptable, since software is relatively inexpensive to change. Standards have "strength in masses", and may evolve from "de-facto" standards.

Principles and guidelines

Principles and guidelines are usually the result of the incompleteness of the underlying theories, which cannot produce authoritative results. The more abstract design principles/guidelines resemble the usability principles discussed earlier. They usually apply to abstract software concepts or interaction styles, e.g. the order of the buttons in a dialog box, as opposed to their size. Most standards, e.g. for GUIs, have associated "style guides" describing the more abstract principles of how to apply them in the best way possible. There are massive amounts of published design principles/guidelines out there, and much conflicting advice too.