Papers review + ideas (multimodal, gesture, voice, touch, Kinect, pie menu, micro-interactions)

Keywords: HCI, human-computer interaction, RWTH, HumTec, Counter Entropy, micro-interactions, Kinect, pie menu

by Ivan Golod, 22 May 2013


Free hand "To date we have demonstrated g-stalt to over 250 people, many of these demonstrations took place during the MIT Media Lab’s open house events. Some people were concerned that the gesture set was too complicated." g-stalt chirocentric - specific configurations of the hands and fingers in space Charade For example, the hand should move upward for a "move up" command. Widely used iconic gestures (e.g., stop, go back) can be associated with corresponding commands. Favor ease of learning Use hand tension for the initial position "Immersion Syndrome":
A hand gesture input system must therefore provide well-defined means to detect the intention of the gestures. 2010 1993 Multimodal Comparison http://dl.acm.org/citation.cfm?id=159562&dl=ACM&coll=DL&CFID=67335549&CFTOKEN=17289334 http://dl.acm.org/citation.cfm?id=1709939 "Participants generally preferred TwoHanded to OneHanded techniques (8/12) and Linear to Circular gestures (10/12)." "When participants were given the choice of using any method of controlling the lights, they chose to use their voice.
While this finding is notable, it is possible that this result is due to the novelty effect. Additionally, all tests were done with only one person in the room." “ Let There Be Light ” Examining Interfaces for Homes of the Future http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.20.7781 Mid-air Pan-and-Zoom on Wall-sized Displays http://dl.acm.org/citation.cfm?id=1978969 Degree of guidance (1d vs. 2d vs. 3d) "Involving smaller muscle groups improves performance; providing higher guidance further contributes to this. However, this effect is less pronounced in TwoHanded conditions. This con- firms the previous observation that a higher degree of guid- ance is especially useful when a single hand is involved." Unimanual vs. Bimanual Input Linear vs. Circular Gestures Experimental Evaluation of Vision and
Experimental Evaluation of Vision and Speech based Multimodal Interfaces (2001)
http://dl.acm.org/citation.cfm?id=971481

Selection strategies: Point and Wait, Point and Shake, Point and Speak.

Visual feedback: "This strategy has a particular visual feedback. It consists of a particle system of small circles as shown on the figure. Circles appear wherever there is movement in front of the camera. The amount of movement defines the number of particles. The color of each particle is initially red and opaque; then, it gradually turns to blue and transparent in a linear fade that lasts 500 milliseconds. If the amount of movement is low, the particles are not created completely opaque, and the fade time is shorter. Although this visual representation does not suggest any specific position of the hand."

Mean selection time results: "The mean selection time for all the trials was 1.39 seconds. The mean for the selection error was 22.35%. Point and Speak had the smallest mean selection time with 1.34 seconds, followed by Point and Shake with 1.36 seconds, and Point and Wait was the slowest with 1.47 seconds. On the other hand, Point and Wait was the most accurate strategy with a mean error rate of 11.91%. The next most accurate strategy was Point and Speak with a 20.6% error rate. Point and Shake registered the highest error rate, 34.94%."

Best selection strategy: "Results did not show an absolute winner in all the measured variables. Still, if we have to choose the best strategy for this task, it would be Point and Wait; this strategy was the best in terms of error rate and user preference."

Design implications:
1. Use large selection areas
2. Select carefully the position of the selection area
3. Prevent involuntary selections
4. Avoid fatigue
5. Maximize tracking precision
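As a concrete illustration of the Point and Wait strategy (and of design implication 1), here is a minimal dwell-time selection sketch in Python. This is not the paper's implementation; the dwell threshold, the Target class and the update loop are assumptions made for illustration.

    import time

    DWELL_TIME = 1.0   # seconds the cursor must rest on a target to select it


    class Target:
        def __init__(self, name, x, y, radius):
            self.name, self.x, self.y, self.radius = name, x, y, radius

        def contains(self, px, py):
            return (px - self.x) ** 2 + (py - self.y) ** 2 <= self.radius ** 2


    class DwellSelector:
        def __init__(self, targets, dwell=DWELL_TIME):
            self.targets = targets
            self.dwell = dwell
            self.current = None      # target the cursor is currently resting on
            self.enter_time = None   # when the cursor entered it

        def update(self, px, py, now=None):
            """Feed one tracked cursor sample; returns the selected target or None."""
            now = time.monotonic() if now is None else now
            hit = next((t for t in self.targets if t.contains(px, py)), None)
            if hit is not self.current:      # entered a new target (or empty space)
                self.current, self.enter_time = hit, now
                return None
            if hit is not None and now - self.enter_time >= self.dwell:
                self.enter_time = now        # re-arm so we don't re-fire every frame
                return hit                   # dwell completed: select
            return None

Note how the sketch reflects implication 1: larger selection areas directly reduce the chance of drifting out of contains() before the dwell time elapses.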
Speech and Gesture

The limits of speech recognition (2000)
http://dl.acm.org/citation.cfm?id=348941.348990
"Since speaking consumes precious cognitive resources, it is difficult to solve problems at the same time. Proficient keyboard users can have higher levels of parallelism in problem solving while performing data entry."

"Put-that-there" (1980)
http://dl.acm.org/citation.cfm?id=807503
"Create a pink circle .... there"
"Where the feedback cursor is residing on the screen at the time the spoken 'there' occurs becomes the spot where the to-be-created item is placed."

Imaginary Interfaces (2010)
http://dl.acm.org/citation.cfm?id=1866033
"Promising directions for future work include investigating methods of learning an imaginary interface, extending Imaginary Interfaces to allow annotation of interactions with speech, adding auditory cues as a feedback channel that allows users to explore an imaginary space and extending Imaginary Interfaces to 3D."
Motion-path based gesture interaction with smart home services (2009)
http://dl.acm.org/citation.cfm?id=1631407
"We also integrated gesture interaction to access various services such as lamp, media players in a smart home environment. In our experiments we found that the proposed gesture recognition approach is robust and accurate. Usability studies show that our gesture capturing technique and its integration for accessing Ambient Media Services are interesting and appealing to the users."

Ubi-Finger: Gesture Input Device for Mobile Use (2002)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.83.9989
"By using Ubi-Finger, users can select target appliances with 'pointing', and control the appliances with gestures of fingers. In our approach, users can control multiple devices without bothering about complicate control methods. Moreover, they can sensuously control various appliances by using existing metaphors and corporeality. Almost all users valued our approach in the evaluation, and we have confirmed the effectiveness of our approach."

Touch

T-GALLERY
http://www.artcom.de/projekte/projekt/detail/t-gallery/

Light Widgets: Interacting in Every-day Spaces (2002)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.139.6308
OmniTouch: Wearable Multitouch Interaction Everywhere (2011)
http://dl.acm.org/citation.cfm?id=2047255

The Everywhere Displays Project (2001)
http://www.research.ibm.com/ed/
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.15.6595

A Brief Overview of Hand Gestures Used in Wearable Human Computer Interfaces (2003)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.8170

Sensor based

XWand: UI for Intelligent Spaces (2003)
http://dl.acm.org/citation.cfm?id=642611.642706
"This user study suggests that users are most comfortable with a system that incorporates good tracking, but a lack of precise tracking may be compensated for by concise audio feedback, or judicious spacing of the targets, or both. This has implications in the design of larger intelligent environments where users are likely to roam among areas with varying degrees of tracking resolution and intelligence, and yet are likely to expect a high degree of functionality everywhere."

Pointing in Intelligent Environments with the WorldCursor (2003)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.88.4174
Clutching: "The WorldCursor has been demonstrated to several hundred people, some of which have used the device to turn lights on and off, manipulate the cursor on the display, and control the Media Player. All users found the use of the WorldCursor immediately understandable, including the clutching operation. Many are captivated by the fluid and responsive motion of the laser dot. However, it is interesting to note that many people do not find its implementation apparent. If the WorldCursor is clutched so that the wand and laser spot is aligned, many external observers at first conclude that a laser pointer is mounted on the wand, and that some external sensing mechanism is employed. Those that recognize that the laser pointer is on the ceiling often ask why the laser is not instead mounted on the wand itself."
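The clutching idea can be made concrete with a small sketch: a relative pointer that accumulates hand motion only while the clutch is engaged, so the user can release, reposition the hand, and continue. This is an illustrative toy, not the XWand/WorldCursor code; the gain value and all names are assumptions.

    class ClutchedPointer:
        def __init__(self, width, height, gain=1.5):
            self.x, self.y = width / 2, height / 2   # cursor starts at screen center
            self.width, self.height = width, height
            self.gain = gain                         # control-display gain
            self.engaged = False
            self.last = None                         # last hand sample while engaged

        def clutch(self, engaged):
            self.engaged = engaged
            self.last = None                         # forget the stale hand position

        def update(self, hx, hy):
            """Feed one hand-position sample; returns the current cursor position."""
            if self.engaged:
                if self.last is not None:
                    dx, dy = hx - self.last[0], hy - self.last[1]
                    self.x = min(max(self.x + self.gain * dx, 0), self.width)
                    self.y = min(max(self.y + self.gain * dy, 0), self.height)
                self.last = (hx, hy)
            return self.x, self.y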
Accelerometer-based gesture control for a design environment (2005)
http://dl.acm.org/citation.cfm?id=1149819
Figures: most popular gestures for VCR control; percentage of user types preferring certain control modalities; percentage of users preferring certain control modalities for controlling certain applications.
"Accelerometer-based gesture recognition was studied as an emerging interaction modality, providing new possibilities to interact with mobile devices, consumer electronics, etc. A user questionnaire was circulated to examine the suitability and type of gestures for controlling a design environment and selected home appliances. The results indicated that people prefer to define personal gestures, implying that the gestures should be freely trainable. An experiment to evaluate gesture training and recognition based on signals from 3D accelerometers and machine learning methods was conducted. The results validated that gesture training and accurate recognition is feasible in practice. The usefulness of the developed gesture recognition system was evaluated with a user study with a Smart Design Studio prototype that had a multimodal interface. The results indicated that gesture commands were natural especially for simple commands with spatial association. Furthermore, for this type of gestures a common set of gestures, agreed upon by a group of users, can be found, but this requires hands-on experimentation with real applications. Test results were based on multiple test sessions over a short period of time. In the future, more extensive sessions are required to acquire more detailed results of the long-term usefulness of the system."
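To make the "freely trainable" gesture idea concrete, here is a minimal template-matching sketch using dynamic time warping (DTW) over 3D accelerometer traces. The paper itself used machine learning methods for training and recognition; this toy nearest-template classifier only illustrates the record-then-match structure, and all names are assumptions.

    def dtw_distance(a, b):
        """DTW distance between two traces of (ax, ay, az) accelerometer samples."""
        INF = float("inf")
        n, m = len(a), len(b)
        d = [[INF] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = sum((p - q) ** 2 for p, q in zip(a[i - 1], b[j - 1])) ** 0.5
                d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
        return d[n][m]


    class GestureRecognizer:
        def __init__(self):
            self.templates = {}          # gesture name -> list of recorded traces

        def train(self, name, trace):
            """Record one user-defined example of a personal gesture."""
            self.templates.setdefault(name, []).append(trace)

        def classify(self, trace, reject_above=None):
            """Return the best-matching gesture name, or None if nothing is close."""
            best, best_d = None, float("inf")
            for name, traces in self.templates.items():
                for t in traces:
                    dist = dtw_distance(trace, t)
                    if dist < best_d:
                        best, best_d = name, dist
            if reject_above is not None and best_d > reject_above:
                return None
            return best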
Soap: a pointing device that works in mid-air (2006)
http://dl.acm.org/citation.cfm?id=1166261
a) "When the user releases the fabric, the hull returns to its original position. This type of self-centering behavior is reminiscent of a joystick, hence its name. Joystick interaction is soap's fastest and most precise interaction style."
b) Clutching: "The belt interaction allows users to position a pointer without the self-centering behavior of the joystick interaction. Repeated belt interactions allow users to move across large distances."

User-defined gestures for surface computing (2009)
http://dl.acm.org/citation.cfm?id=1518866
"We have presented a study of surface gestures leading to a user-defined gesture set based on participants' agreement over 1080 gestures. Beyond reflecting user behavior, the user-defined set has properties that make it a good candidate for deployment in tabletop systems, such as ease of recognition, consistency, reversibility, and versatility through aliasing. We also have presented a taxonomy of surface gestures useful for analyzing and characterizing gestures in surface computing. In capturing gestures for this study, we have gained insight into the mental models of non-technical users and have translated these into implications for technology and design."
Imaginary Phone: Learning Imaginary Interfaces by Transferring Spatial Memory from a Familiar Device (2011)
http://www.hpi.uni-potsdam.de/baudisch/projects/imaginary_phone.html

SpeckleSense (2011)
http://specklesense.media.mit.edu/
Accuracy: "The mouse sensor and our algorithm integrate a huge amount of small shifts each second (10 kHz). This results in an accumulative error which increases over time."
"Participants liked the concept but found the public display prototype the hardest to use. The lack of actuation made it difficult to get in/out of tracking mode or reposition the hand."
"Participants appreciated the Mobile Viewport (see Figure 15), where they panned around and zoomed in a 1920×1200 screenshot of a news website on the 480×854 pixel mobile display. All participants were able to pan and zoom into different areas when asked. Several participants especially liked the single-handed zoom when moving the mobile phone closer or further from the surface."
"The TouchController for controlling a 3D model on a projected wall was the most appreciated prototype. The combination of multi-touch input and spatial manipulation worked well for the participants, and two of them suggested that we also added depth control."

Strengths and weaknesses of common interface technologies

HCI Principles

Visibility:
- detach scale (labels) and control
- provide at-a-glance overview of possible states (what can I do?)
- design control to show how it can be operated
- make sure current state is easy to determine
- use natural ordering of settings
- KEEP IT SIMPLE

Note: the Point and Wait strategy is used in the Xbox menu.

"How easy is it to...?" (7 Stages of Action)

- Perceive system functions? (Goal)
- Determine executable actions? (Intentions)
- Determine their mapping to basic actions? (Action sequence)
- Perceive the system state? (Perception)
- Map it to an interpretation? (Interpretation)
- Decide if the goal was reached? (Comparison)

Design principles:
- Visibility (state and actions easy to determine)
- Good conceptual model
    Operations and results are presented consistently
    User gets a coherent image of the system
- Good (natural) mappings
    Use physical analogies
    Use cultural standards
    Between actions and results
    Controls and their effects
    System state and its visualization
- Good feedback
    About results, complete and continuous

Knowledge in the World and in the Head

Constraints are in effect:
- Physical constraints limit the actions possible
- Cultural constraints are in effect
Humans can minimize the amount/precision/depth of information to remember.

Seven Principles of Design

• Use knowledge in the world and in the head
• Simplify task structures
• Make things visible, bridge the gulfs of execution and evaluation
• Use natural mappings
• Use natural and artificial constraints
• Design for error
• When all else fails, standardize

The Gesture Pendant (2000)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.7749
"Control gestures should be simple because they need to be interactive and will be used more often. User defined gestures, on the other hand, can be more complicated and powerful since they will be used less frequently."
"Eight gestures were defined for the Gesture Pendant. The gestures are determined by continual recognition of hand poses and the hand movement between frames. These hand poses consist of: 'vertical pointed finger' (vf), 'horizontal pointed finger' (hf), 'horizontal flat hand' (hfh), and 'open palm' (op). The gestures were 'horizontal pointed finger up', 'horizontal pointed finger down', 'vertical pointed finger left', 'vertical pointed finger right', 'horizontal flat hand down', 'horizontal flat hand up', 'open palm hand up', and 'open palm hand down'."

Gesture-controlled user interfaces, what have we done and what's next? (2009)
http://www.glyndwr.ac.uk/computing/research/pubs/SEIN_BP.pdf

Gesture Theory

A Framework for Gesture Generation and Interpretation (1998)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.6983

Kinds of gestures found in human-human communication:

"When we reflect on what kinds of gestures we have seen in our environment, we often come up with a type of gesture known as emblematic. These gestures are culturally specified in the sense that one single gesture may differ in interpretation from culture to culture."


"Another conscious gesture that has been the subject of some study in the interface community is the so-called ’propositional gesture’ (Hinrichs & Polanyi, 1986). An example is the use of the hands to measure the size of a symbolic space while the speaker says "it was this big"."


"The spontaneous unplanned, more common gestures are of four types:

Iconic gestures depict by the form of the gesture some feature of the action or event being described; such as the gesture of holding a tube with a handle that accompanies "Press the [handle of the caulking gun slowly as you move the nozzle across the window ledge that needs caulk]""

"Metaphoric gestures are also representational, but the concept they represent has no physical form; instead the form of the gesture comes from a common metaphor. An example is "the meeting went on and on" accompanied by a hand indicating rolling motion."

"Deictics spatialize, or locate in the physical space in front of the narrator, aspects of the discourse; these can be discourse entities that have a physical existence, such as the tube of caulk that the narrator pointed to on the workbench, or non-physical discourse entities"


"Beat gestures are small baton like movements that do not change in form with the content of the accompanying speech. They serve a pragmatic function, occurring with comments on one’s own linguistic contribution, speech repairs and reported speech" A Survey of Design Issues in Spatial Input 1994 http://dl.acm.org/citation.cfm?id=192501 "This paper represents a first attempt to extract design issues from a large body of work. We have identified common themes in what has worked well for spatial input, and what has not. The issues we have presented are not formally proven principles
guide to designers who are getting started in spatial input, and should not be expected to serve as a substitute for user testing of any spatial interface based upon strategies we have suggested." Human Perception Understanding 3d space vs.
experiencing 3d space

Relative gesture vs. absolute gesture

Two-handed interaction

Physical constraints and affordances

Control metaphors

Issues in dynamic target acquisition

Recalibration mechanisms

A morphological analysis of the design space of input devices (1991)
http://dl.acm.org/citation.cfm?id=128726

LightBeam: Nomadic Pico Projector Interaction with Real World Objects (2012)
http://www.tk.informatik.tu-darmstadt.de/de/research/tangible-interaction/lightbeam/

Pie Menu / Learning Gestures

+ visual:
- GestureBar: Improving the Approachability of Gesture-based Interfaces (2009)
- ShadowGuides: Visualizations for In-Situ Learning of Multi-Touch and Whole-Hand Gestures (2009)
- Gesture Play: Motivating Online Gesture Learning with Fun, Positive Reinforcement and Physical Metaphors (2010)
- Continuous Marking Menus for Learning Cursive Pen-based Gestures (2011)
- Combining and Measuring the Benefits of Bimanual Pen and Direct-Touch Interaction on Horizontal Interfaces (2008)
- OctoPocus: A Dynamic Guide for Learning Gesture-Based Command Sets (2008)

Whole-hand Input (PhD thesis, 1992)
http://xenia.media.mit.edu/~djs/thesis.ftp.html

A three-state model of graphical input (1990)

A Real-Time Framework for Natural Multimodal Interaction with Large Screen Displays (2002)
http://dl.acm.org/citation.cfm?id=847776
"This paper describes issues related to the development of a robust real-time framework that exploits natural gestures and spoken commands as input, and a large screen display for visual feedback. The framework has been validated by implementing a number of prototype systems that transfer real-world interactions to novel metaphors, thus bridging the gap between digital environments and user interactions. It was found that careful design, integration and due consideration of possible aspects of a system's interaction cycle can yield a successful system."

"Room for improvement still exists for the speech recognition module that can perform unreliably in very noisy public environments even if using advanced sound acquisition devices such as microphone arrays or -domes. The introduction of multiple users introduces additional challenges, especially when users are spatially close to each other. The system must resolve ambiguity in identifying and attaching motion and spoken command to the right user. Model based head tracking for extracting lip motion, and gaze tracking to localize attention are currently investigated to improve disambiguation. Also, model-based articulated tracking is being developed to extract reliable information from visual data. Finally, a prosody based speech-gesture co-analysis is under inves- tigation to improve on continuous gesture recognition" Gesture Registration , Relaxation , and Reuse for Multi-Point Direct-Touch Surfaces 2006 http://xenia.media.mit.edu/~djs/thesis.ftp.html Scratch Input : Creating Large , Inexpensive , Unpowered and Mobile Finger Input Surfaces 2008 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.60.2583 http://www.chrisharrison.net/index.php/Research/ScratchInput http://dl.acm.org/citation.cfm?id=725582 deictic gesture to point control gesture in 2d for the better Ubiquitous interaction - using surfaces in everyday environments as pointing devices 2002 http://dl.acm.org/citation.cfm?id=1765451 Mogees Project 2012 http://www.brunozamborlin.com/mogees/ "Mogees is a project that uses microphones to turn any surface into an interactive board, which associates different gestures with different sounds." "So will touch interfaces of the future rely on sounds as well as capacitance? Perhaps sound would be a cheaper, more-durable option for certain kinds of interfaces, making touch interactions all the more ubiquitous" From http://www.technologyreview.com/blog/mimssbits/27458/
"This paper addresses the problem of helping users to learn, execute and remember gesture-based command sets. We examined existing feedforward and feedback systems that provide on-screen guidance in gesture-based interfaces and classified them along six dimensions in a design space. We then introduced the concept of dynamic guides, which combine dynamic feedforward and feedback to directly guide novice user’s performance, without penalizing expert users. We describe OctoPocus, a dynamic guide that continuously updates the state of the recognition algorithm by gradually modifying the thickness of possible gesture paths, based on its ‘consumable error rate’. We have shown that users can better learn, execute and remember gesture sets if we reveal, during input, what is normally an opaque process i.e. the current state of recognition, and represent gestures in a graphical form that shows the optimal path for the remaining alternative" http://dl.acm.org/citation.cfm?id=1449724 Micro-Interactions http://xrds.acm.org/article.cfm?aid=1764856 Korg Wavedrum mini http://www.korg.com/wavedrummini https://www.lucidchart.com/documents/view#40f8-8534-4f4648f4-aae0-3a5c0abeb66a?branch=22d0bc5f-0144-40b3-af08-7448bc7da4b3 Kinect system chart http://channel9.msdn.com/Series/KinectSDKQuickstarts/Working-with-Depth-Data http://channel9.msdn.com/Series/KinectSDKQuickstarts/Skeletal-Tracking-Fundamentals degree of guidance Interesting but: "The methods of light control that required speech
Micro-Interactions
http://xrds.acm.org/article.cfm?aid=1764856

Korg Wavedrum mini
http://www.korg.com/wavedrummini

Kinect system chart
https://www.lucidchart.com/documents/view#40f8-8534-4f4648f4-aae0-3a5c0abeb66a?branch=22d0bc5f-0144-40b3-af08-7448bc7da4b3
http://channel9.msdn.com/Series/KinectSDKQuickstarts/Working-with-Depth-Data
http://channel9.msdn.com/Series/KinectSDKQuickstarts/Skeletal-Tracking-Fundamentals

Interesting, but: "The methods of light control that required speech and/or gesture recognition were conducted using the 'Wizard of Oz' technique: although participants were told that the computer was responding to their commands, a researcher sitting behind the one-way mirror controlled the lights."

Point and Wait is used in the Xbox 360 Kinect Hub: Fun Labs menu, but Dance Central for Kinect (Xbox 360) uses another menu strategy. (Kinect Xbox 360 videos.)

Thesis topic: Gesture interface for micro-interactions in domestic spaces.
Bimanual gesture vs. unimanual serial gestures.

2D gesture paths are drawn on the paper or are projected on the surface, so a new user can learn the gestures and an expert can use the learned gestures on every surface (with no visual help).

We don't need an extra mode of interaction (speech, or time as in the point-and-wait strategy) to activate the input with a one-handed gesture.
Maybe ;) it is possible for the future system to remember the deictic gesture and the object it was pointed to. After that, the system is activated with a 2D gesture (a double "click" with the wrist).
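A sketch of this activation idea as a small state machine: remember the last pointed-at object, then arm 2D gesture input on a double wrist click. Event names and the timing window are hypothetical.

    DOUBLE_CLICK_WINDOW = 0.4   # seconds between the two wrist clicks


    class ActivationStateMachine:
        def __init__(self):
            self.target = None        # object last selected by a deictic gesture
            self.last_click = None    # time of the previous wrist click
            self.active = False       # is 2D gesture input enabled?

        def on_point(self, obj):
            """Deictic gesture: remember the pointed-at object; don't activate yet."""
            self.target = obj

        def on_wrist_click(self, t):
            """A quick double wrist click arms 2D gesture input for the target."""
            if self.last_click is not None and t - self.last_click <= DOUBLE_CLICK_WINDOW:
                self.active = self.target is not None
                self.last_click = None
            else:
                self.last_click = t

        def on_gesture_end(self):
            self.active = False       # drop back to idle after the 2D gesture


    sm = ActivationStateMachine()
    sm.on_point("floor lamp")
    sm.on_wrist_click(t=1.0)
    sm.on_wrist_click(t=1.3)
    print(sm.active, sm.target)   # -> True floor lamp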
Guidance through Passive Haptic Feedback - degree of guidance (1D vs. 2D vs. 3D):
"Two main categories of techniques have been studied for mid-air interaction on wall-sized displays: freehand techniques based on motion tracking; and techniques that require the user to hold an input device. Input devices provide some guidance to the user in terms of what gesture to execute, as all of them provide some sort of passive haptic feedback: a finger operating a knob or a mouse wheel follows a specific path; gestures on touch-enabled devices are made on planar surfaces. Freehand techniques, on the contrary, provide essentially no feedback to the user, who can only rely on proprioception to execute the gesture. We call this dimension the degree of guidance. Gestures can be guided to follow a particular path in space (1D path); they can be guided on a touch-sensitive surface (2D surface); or they can be totally free (3D free). These three values correspond to decreasing amounts of passive haptic feedback for the performance of input gestures."

Question for us all (Counter Entropy team): what kind of micro-interactions do we need?
- light: on/off - discrete and continuous (dimming)
- windows/doors: on/off - discrete and continuous
- volume (TV, music): on/off - discrete and continuous

For deictic pointing gesture:
- select one light/TV/HiFi center/speaker/door/window
- select a set of lights/speakers/doors/windows
- navigate through the spatial hierarchy of objects (window → curtain)

For 2D touch gesture (or other interaction modes):
- set timer/notification (speech, sound feedback, visual feedback???)
- make audio note (speech)
Open questions:
- What gestures are better? (time, fatigue, learnability, differentiability from other gestures, ...)
- Special gesture for Undo??
- How many additional gestures do we get?

Dance Central menu - PDF by its developer, Ryan Challinor:
http://sijm.ca/2010/wp-content/uploads/2010/11/ryanchallinor.pdf

Proposal

The project proposal is a short description of your initial ideas about what you would like to do, and how you intend to achieve the overall goal of the project.

Subject area. What is the topic and scope of your project?
Aim. What is the goal of your project?
Arguments. Why is it important to investigate the chosen topic?
Objectives. Preliminary ideas for how you intend to achieve the aim.

Title of project?

Introduction
To the subject area (e.g., XML documents).
To the problem within the subject area (e.g., preserving links when transforming XML documents to another data format).

Reasons why it is important to investigate the chosen problem.

Aim of project
A short description of what you intend to do.

Objectives
How (by what steps) do you intend to achieve the aim of the project?

Name
Contact information (email, phone)

Interacting with smart walls: a multi-dimensional analysis of input technologies for augmented environments (2011)
http://dl.acm.org/citation.cfm?id=1959827

Visual

Ubiquitous projector vs. wall-sized display vs. TV vs. pads vs. smartphones

Kinect Hands: Finger Tracking and Voxel UI (US)
http://www.microsoft.com/download/en/details.aspx?id=27977

Consistent menu (independent of size)
Consistent interaction (independent of size and interaction mode)

Visualization (GUI): gesture path feedback vs. menu vs. micro-interaction widgets

Menu strategies:
- Point and wait (fields, Microsoft Windows)
- List (Dance Central)
- Zooming UI
- Limited set of full-hand gestures
- 2D "touch" gesture, learned with paths + projector on a nearby surface

Tabletop / pads / smartphones; direct touch vs. remote trackpad; display size; interaction quality.

Trackpad of different sizes mapped to a display of different sizes.
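The trackpad-to-display mapping above is essentially control-display scaling. A minimal absolute-mapping sketch (function and parameter names are illustrative):

    def trackpad_to_display(tx, ty, pad_w, pad_h, disp_w, disp_h):
        """Map an absolute trackpad touch (tx, ty) to display coordinates."""
        return tx * disp_w / pad_w, ty * disp_h / pad_h

    # A 100x60 mm pad driving a 1920x1080 display: the same finger travel covers
    # more pixels as the display grows, which is one reason interaction quality
    # depends on both sizes.
    print(trackpad_to_display(50, 30, 100, 60, 1920, 1080))  # -> (960.0, 540.0)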
Example micro-interactions: music navigation, navigating through a spatial hierarchy, timer, music navigation as a list (like in Dance Central).

Ubiquitous Displays in Dynamic Environments: Issues and Opportunities (2004)
http://eprints.lancs.ac.uk/13053/1/2004-UbiquitousDisplay.pdf
"Future displays proposed such as the projector based systems envisioned by Welch tend to involve a paradigm shift away from computers having their own fixed display and towards making use of the multiple surfaces that already exist in the environment. Welch et al. see every-day surfaces such as desks, tables, cupboards, walls and even floors all becoming the displays of the future. These potential display surfaces should be accessible wherever and whenever we need them, unobtrusively available, thus enabling new and creative ways of interacting with information. The fact that these displays will be ubiquitous makes them powerful, even if they only convey small amounts of information.

Obviously we cannot embed displays into each surface of every object manufactured in the future - this is not a sensible or obtainable goal. We therefore need to examine other solutions to creating viable ubiquitous display solutions. In this paper we discuss new developments in ubiquitous displays. We survey ubiquitous display technologies and identify issues in developing ubiquitous display systems for dynamic environments."

"While careful mounting of projectors can reduce the problem of occlusion in front projected displays, this only holds for static environments. In dynamic environments, different furniture configurations could easily create a case where there are large portions of possible display area obscured from fixed projectors by bookshelves or other large furniture. There is also the problem of the office inhabitants, who will not just sit passively at their desks, but will move through the display environment, creating dynamic occlusions of projected displays."

(Interaction quality depends on recognition quality.)

Different links (videos, frameworks) about Kinect in pearltrees: http://pear.ly/6gpr

Strengths and weaknesses:

Deictic pointing:
+ robust (Win SDK)
+ intuitive
+ quick for big targets?

- error with small targets
- fatigue
- less degree of guidance

2D touch gesture:
+ guidance (2D path)
+ direct touch (Menu)
+ compactness

- occlusion
- brightness
- only horizontal (in Counter Entropy)
- subjective acceptance (hedonism, aesthetics)
- static environment (depends on surfaces)

Projector:
+ vertical
+ fits a dynamic environment


- menu
- remote trackpad vs. direct touch

Display (distant):
+ robust?
+ 2D degree of guidance
+ less fatigue vs. pointing
+ more familiar than 3D gestures

Micro-interactions vs. macro-interactions
http://www.centigrade.de/en/blog/article/micro-interactions-vs-macro-interactions/

Another example micro-interaction: telephone/Skype - receive or cancel (to mailbox) an incoming call.

A distinction between macro-interaction design and micro-interaction design can be helpful to prevent team members from simply speaking of "interaction design" while referring to diverging concepts. Entries in a "project glossary" could look something like this:

Macro-interactions are concerned with the way that users interact with the user interface in order to carry out meaningful key tasks, usually by navigating the interface and using a series of system functionalities and several widgets/controls. Macro-interactions are about reaching goals that are related to the users’ work.

Micro-interactions focus on the behavior of individual widgets/controls without taking into account interactions of a larger scale; they are "blind" to the semantics of users' workflows, so to speak. Micro-interactions are concerned with the generic behavior of components in reaction to users' actions on an "atomic" and context-free level.

In a separate line of work, Daniel Ashbrook of Georgia Institute of Technology measured the overhead associated with mobile interactions and found that just getting a phone out of the pocket or hip holster takes about 4 seconds, and initiating interaction with the device takes another second or so [1]. He proposes the concept of micro-interactions: interactions that take less than 4 seconds to initiate and complete, so that the user can quickly return to the task at hand.

More example micro-interactions:
- change light themes (morning, day/working, evening, cooking, reading, watching TV, party, etc.)
- change closing themes (all doors and windows, only doors, etc.)
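Ashbrook's 4-second budget can be checked mechanically while prototyping. A minimal sketch; the helper and its logging are my own, not from the article:

    import time
    from contextlib import contextmanager

    MICRO_BUDGET = 4.0   # seconds, per the definition above

    @contextmanager
    def interaction(name):
        """Time one end-to-end interaction and report whether it stayed micro."""
        start = time.monotonic()
        yield
        elapsed = time.monotonic() - start
        verdict = "micro" if elapsed <= MICRO_BUDGET else "too slow for micro"
        print(f"{name}: {elapsed:.2f}s ({verdict})")

    # Usage: wrap one complete interaction, e.g. toggling a light.
    with interaction("toggle living room light"):
        time.sleep(0.2)   # stand-in for pointing + wrist click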
Some Articulatory and Cognitive Aspects of "Marking Menus": An Empirical Study (1993)
http://dl.acm.org/citation.cfm?id=1461839
"As predicted, hiding the pie menus both slows performance and increases the error rate, especially for large menu sizes. As menu size increases, added to the problems of articulation is the difficulty of successfully mentally reconstructing the menu layout or remembering the necessary strokes to make menu selections. However, when menu size is small (up to 5 slices), there is little or no performance difference, even early in practice."

"The pattern for hidden menus was different however. Instead of a monotonically increasing response time and error rate as a function of menu size, certain menu sizes facilitated performance, while others were particularly difficult. Specifically, even-numbered menu sizes allowed subjects to easily find the target slice (sizes 4, 8, and 12). For example, a 12- slice menu facilitated performance probably by association with the clock-face metaphor. Subjects reported that the 8-slice menu was easy to learn because they could easily mentally subdivide the pie and infer the position of the target slice. On the contrary, menu sizes 5, 7, and 11 presented subjects with more difficulty." http://dl.acm.org/citation.cfm?id=1943455 The limits of expert performance using hierarchic marking menus 1993 http://dl.acm.org/citation.cfm?id=169426&CFID=88597513&CFTOKEN=65549480 "Q1: Are hierarchic marking menus a feasible idea? Even if using a marking to access an item is too hard to draw or cannot be remembered, a user can perform a selection by displaying the menus. Nevertheless, since the subjects could perform the experiment, it is feasible that markings could be used to select hierarchic menu items.

Q2: How deep can one go using a marking? Our data indicates that increasing depth increases response time linearly. The limiting factor appears to be error rate. For menus of four items, even up to four levels deep, the error rate was less than ten percent. This is also true for menus of eight items, up to a depth of two. However, when using markings for menus with eight items or more, at depths greater than two, selection becomes error-prone, even for the expert. However, our "off-axis" analysis indicates that the source of poor performance at higher breadths and depths is due to selecting "off-axis" items. Thus, when designing a wide and deep menu, the frequently used items could be placed at "on-axis" locations. This would allow some items to be accessed quickly and reliably with markings, despite the breadth and depth of the menu."
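To connect these results to implementation: selecting a pie/marking-menu slice from a stroke is an angle-to-sector mapping. A minimal sketch, with slice 0 at 12 o'clock and clockwise numbering as an assumed layout (matching the clock-face metaphor above):

    import math

    def pick_slice(start, end, n_slices):
        """Map a stroke from `start` to `end` (x, y) to one of n_slices slices."""
        dx = end[0] - start[0]
        dy = start[1] - end[1]        # flip y: screen coordinates grow downward
        angle = math.degrees(math.atan2(dx, dy)) % 360.0   # 0 degrees = straight up
        width = 360.0 / n_slices
        return int(((angle + width / 2.0) % 360.0) // width)

    # A straight-right stroke in an 8-slice menu lands in slice 2
    # (0 = up, 1 = up-right, 2 = right, ...).
    assert pick_slice((0, 0), (10, 0), 8) == 2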
Scenarios:
1. Projected on the surface
2. iPad on the wall
3. iMac

Different feedback:
1. Visual feedback (new user)
2. Audio feedback (experienced user)

Different micro-interactions:
1. Pointing interactions (context menu)
2. Standard interactions (context independent)
- set timer/notification (speech, sound feedback, visual feedback???)
- make audio note (speech)
- music navigation (list like in Dance Central)
- telephone/Skype: receive or cancel (to mailbox) an incoming call
- change light themes (morning, day/working, evening, cooking, reading, watching TV, party, etc.)
- change closing themes (all doors and windows, only doors, etc.)
- light: on/off - discrete and continuous (dimming)
- windows/doors: on/off - discrete and continuous
- volume (TV, music): on/off - discrete and continuous

Thesis Description
http://prezi.com/m7juhyghkozd/gesture-interface-for-micro-interactions-in-domestic-spaces-thesis-description/

RWTH Aachen University
i10: http://hci.rwth-aachen.de/
HumTec: http://www.humtec.rwth-aachen.de/index.php?article_id=7&clang=0

Mapping strategies (visual, sound):
- day time - circle - pitch (4 or 8 discrete)
- volume - visual gradient
http://www.lifehacker.com.au/2009/03/polarclock_tracks_the_time_with_concentric_circles-2/
http://www.englishclub.com/vocabulary/time-day-night.htm
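These mapping strategies can be prototyped with a few pure functions. A minimal sketch; the pitch set, bin boundaries and gray-gradient choice are all assumptions:

    PITCHES_HZ = [261.6, 329.6, 392.0, 523.3]   # C4, E4, G4, C5: four discrete levels

    def time_to_angle(hour, minute=0):
        """Angle on a 24-hour polar clock: 0 degrees at midnight, clockwise."""
        return 360.0 * ((hour % 24) + minute / 60.0) / 24.0

    def time_to_pitch(hour, levels=PITCHES_HZ):
        """Quantize the day into len(levels) bins (e.g. night/morning/day/evening)."""
        return levels[int((hour % 24) // (24.0 / len(levels)))]

    def volume_to_gray(volume):
        """Map a 0..1 volume onto a dark-to-bright gray for a visual gradient."""
        level = int(255 * max(0.0, min(1.0, volume)))
        return (level, level, level)

    print(time_to_angle(18), time_to_pitch(18), volume_to_gray(0.5))
    # -> 270.0 523.3 (127, 127, 127)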
Nomic mappings
http://arrow.uws.edu.au:8080/vital/access/manager/Repository/uws:4421
http://opensiuc.lib.siu.edu/cgi/viewcontent.cgi?article=1342&context=tpr

Abstract: In developing a theoretical framework for the field of ecological acoustics, Gaver (1993b) distinguished between the experience of musical listening (perceiving sounds) and everyday listening (perceiving sources of sounds). Within the everyday listening experience, Gaver (1993a) proposed that the frequency of an object results from, and therefore specifies, the size of that object. The relation in which frequency and object size stand to one another is an example of a nomic mapping. A symbolic mapping involves the pairing of unrelated dimensions and, relative to a nomic mapping, requires an additional step in recognition and learning. Using a perceptual identification task, an experiment investigated the hypothesis that nomic mappings are identified more easily than symbolic mappings. It was predicted that the advantage manifests only during the everyday listening experience, and that the initially superior recognition of nomic mappings is equaled by symbolic mappings after extended exposure. The results provided support for the hypotheses. Theoretical implications of the differential recognition of nomic and symbolic mappings are discussed, together with practical applications of nomic relations.

Trade-offs: pie menu size vs. time vs. fatigue

Counter Entropy House
http://solar.arch.rwth-aachen.de/

ToDo
- First real prototype
- Second prototype
- Implementation of the system