Usability Evaluation

by Robert Griffin, 5 February 2014


Transcript of Usability Evaluation

Evaluation & design are closely integrated in user-centered design.
Some of the same techniques are used in evaluation as in requirements gathering, but they are used differently (e.g., interviews & questionnaires).

Triangulation involves using a combination of techniques to gain different perspectives.

Dealing with constraints is an important skill for evaluators to develop.
Key points

“Iterative design, with its repeating cycle of design and testing, is the only validated methodology in existence that will consistently produce successful results. If you don’t have user-testing as an integral part of your design process you are going to throw buckets of money down the drain.”

See AskTog.com for topical discussion about design and evaluation.
Bruce Tognazzini tells you why you need to evaluate

Discuss how developers cope with real-world constraints.

Explain the concepts and terms used to discuss evaluation.

Examine how different techniques are used at different stages of development.
The aims

Throughout design
From the first descriptions, sketches, etc. of users’ needs through to the final product.
Design proceeds through iterative cycles of ‘design-test-redesign’.

Evaluation is a key ingredient for a successful design.
When to evaluate
What to evaluate

Iterative design & evaluation is a continuous process that examines:

Early ideas for conceptual model
Early prototypes of the new system
Later, more complete prototypes

Designers need to check that they understand users’ requirements.

Formative evaluation is done at different stages of development to check that the product meets users’ needs.

Summative evaluation assesses the quality of a finished product.

Our focus is on formative evaluation.
Two main types of evaluation
Introducing evaluation
“The Butterfly Ballot: Anatomy of a Disaster” is a very interesting account written by Bruce Tognazzini, which you can find by going to AskTog.com and looking through the 2001 columns.
Alternatively, go directly to: http://www.asktog.com/columns/042ButterflyBallot.html
More early prototypes can be found at:
http://webdesignledger.com/inspiration/18-great-examples-of-sketched-ui-wireframes-and-mockups
Task
Go to the following link and read and review the parts relevant to your assignment:
http://courses.csail.mit.edu/6.831/wiki/index.php?title=Projects/Mobility_On_Demand
How data is analyzed & presented depends on the paradigm and techniques used.
The following also need to be considered:
- Reliability: can the study be replicated?
- Validity: is it measuring what you thought?
- Biases: is the process creating biases?
- Scope: can the findings be generalized?
- Ecological validity: is the environment of the study influencing it? (e.g., the Hawthorne effect)
Evaluate, interpret & present data
What are the high-level goals of the evaluation?
Who wants it and why?
The goals influence the paradigm for the study
Some examples of goals:
Identify the best metaphor on which to base the design.
Check to ensure that the final interface is consistent.
Investigate how technology affects working practices.
Improve the usability of an existing product.
Determine the goals
Experts apply their knowledge of typical users, often guided by heuristics, to predict usability problems.
Another approach involves theoretically based models.
A key feature of predictive evaluation is that users need not be present.
Relatively quick & inexpensive; a small modeling sketch follows this section.
Predictive evaluation
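To make the modeling approach concrete, here is a minimal sketch of a Keystroke-Level Model (KLM) prediction in Python. The operator times are the classic Card, Moran & Newell averages; the task breakdown and function name are illustrative assumptions, not part of the original slides.

```python
# Minimal Keystroke-Level Model (KLM) sketch: predict an expert's task
# time without any users present. Operator times (seconds) are the
# classic Card, Moran & Newell averages; treat them as rough estimates.
KLM_OPERATORS = {
    "K": 0.20,  # press a key or button (average skilled typist)
    "P": 1.10,  # point with a mouse at a target on screen
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mentally prepare for the next action
}

def predict_task_time(operator_sequence: str) -> float:
    """Sum operator times for a task written as a string like 'MHPKK'."""
    return sum(KLM_OPERATORS[op] for op in operator_sequence)

# Hypothetical task: think, move hand to mouse, point at a field,
# return to the keyboard, then type a 4-character entry.
print(f"Predicted time: {predict_task_time('MHPH' + 'K' * 4):.2f} s")
```

Because the estimate comes entirely from the model, it can be produced from a design sketch before any prototype exists, which is why predictive evaluation is relatively quick and inexpensive.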
‘quick and dirty’
usability testing
field studies
predictive evaluation
Four evaluation paradigms
User studies involve looking at how people behave in their natural environments, or in the laboratory, both with old technologies and with new ones.
User studies
Explain key evaluation concepts & terms.

Describe the evaluation paradigms & techniques used in interaction design.

Discuss the conceptual, practical and ethical issues that must be considered when planning evaluations.

Introduce the DECIDE framework.
The aims
Apply the ideas presented in this presentation to your assignment.
Task
An evaluation paradigm is an approach that is influenced by particular theories and philosophies.
Five categories of techniques were identified: observing users, asking users, asking experts, user testing, modeling users.
The DECIDE framework has six parts:
- Determine the overall goals
- Explore the questions that satisfy the goals
- Choose the paradigm and techniques
- Identify the practical issues
- Decide on the ethical issues
- Evaluate ways to analyze & present data
Do a pilot study
Key points
A small trial run of the main study.
The aim is to make sure your plan is viable.
Pilot studies check:
- that you can conduct the procedure
- that interview scripts, questionnaires, experiments, etc. work appropriately
It’s worth doing several to iron out problems before doing the main study.
Ask colleagues if you can’t spare real users.
Pilot studies
Develop an informed consent form
Participants have a right to:
- know the goals of the study
- know what will happen to the findings
- privacy of personal information
- not be quoted without their agreement
- leave when they wish
- be treated politely
Decide on ethical issues
For example, how to:
- select users
- stay on budget
- stay on schedule
- find evaluators
- select equipment
Identify practical issues
The evaluation paradigm strongly influences the techniques used and how data is analyzed and presented.
For example, field studies do not involve testing or modeling.
Choose the evaluation paradigm & techniques
All evaluations need goals & questions to guide them so time is not wasted on ill-defined studies.
For example, the goal of finding out why many customers prefer to purchase paper airline tickets rather than e-tickets can be broken down into sub-questions:
- What are customers’ attitudes to these new tickets?
- Are they concerned about security?
- Is the interface for obtaining them poor?
What questions might you ask about the design of a cell phone?
Explore the questions
Determine the goals the evaluation addresses.
Explore the specific questions to be answered.
Choose the evaluation paradigm and techniques to answer the questions.
Identify the practical issues.
Decide how to deal with the ethical issues.
Evaluate, interpret and present the data.
DECIDE:
A framework to guide evaluation
- observing users
- asking users their opinions
- asking experts their opinions
- testing users’ performance
- modeling users’ task performance
Overview of techniques
Field studies are done in natural settings
The aim is to understand what users do naturally and how technology impacts them.
In product design, field studies can be used to:
- identify opportunities for new technology
- determine design requirements
- decide how best to introduce new technology
- evaluate technology in use
Field studies
Usability testing involves recording typical users’ performance on typical tasks in controlled settings. Field observations may also be used.
As users perform these tasks, they are watched and recorded on video, and their key presses are logged.
This data is used to calculate performance times, identify errors, and help explain why the users did what they did; a minimal analysis sketch follows this section.
User satisfaction questionnaires & interviews are used to elicit users’ opinions.
Usability testing
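As a rough illustration of the analysis step just described, the sketch below (Python) computes per-participant completion times and error counts from a logged event stream. The log format, field order, and event names are assumptions invented for this example; real logging tools differ.

```python
# Illustrative analysis of a usability-test event log. Each event is
# (timestamp_seconds, participant, event_type); this format is an
# assumption made for the example, not a standard.
from collections import defaultdict

events = [
    (0.0,  "P1", "task_start"),
    (4.2,  "P1", "error"),      # e.g., clicked the wrong menu item
    (31.5, "P1", "task_end"),
    (0.0,  "P2", "task_start"),
    (18.9, "P2", "task_end"),
]

completion_times = {}            # participant -> task time (seconds)
error_counts = defaultdict(int)  # participant -> number of errors
task_starts = {}

for timestamp, participant, event_type in events:
    if event_type == "task_start":
        task_starts[participant] = timestamp
    elif event_type == "error":
        error_counts[participant] += 1
    elif event_type == "task_end":
        completion_times[participant] = timestamp - task_starts[participant]

for participant in sorted(completion_times):
    print(f"{participant}: {completion_times[participant]:.1f} s, "
          f"{error_counts[participant]} error(s)")
```

Summaries like these feed directly into the later activity of evaluating, interpreting, and presenting the data.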
‘quick & dirty’ evaluation describes the common practice in which designers informally get feedback from users or consultants to confirm that their ideas are in-line with users’ needs and are liked.
Quick & dirty evaluations are done any time.
The emphasis is on fast input to the design process rather than carefully documented findings.
Quick and dirty
Any kind of evaluation is guided, explicitly or implicitly, by a set of beliefs, which are often underpinned by theory. These beliefs and the methods associated with them are known as an ‘evaluation paradigm’.
Evaluation paradigm
An evaluation framework
Wikipedia: Usability testing
Setting the Scene: http://vimeo.com/4049134
Editing Makes Me Feel Stupid: http://vimeo.com/4502130
The Good, the Bad and the Ugly: http://vimeo.com/15726413
More user tests can be found on Vimeo: http://vimeo.com/boltpeters
Field testing an App: http://vimeo.com/28612263