augmented reality vocabulary


Angel Sanchez

on 24 May 2016


Transcript of augmented reality vocabulary

Augmented Reality
By Angel Sanchez

A technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view.
Chroma Key Video
Popular for AR experiences, Chroma Key Video allows for the projection of video content into live environments. It is video shot against a uniform, brightly colored background (often called a “green screen”); best practice in video production is to use a green or blue background for the best results. The technique lets the video editor remove the brightly colored areas in the footage and replace those regions with transparent pixels. It is often used in weather broadcasts on local television news.
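The core of the keying step can be sketched in a few lines. The following is a minimal difference key in plain Python, purely illustrative (production pipelines use GPU shaders or dedicated video tools; the pixel format, `threshold` value, and function name are assumptions for this sketch):

```python
def chroma_key(frame, background, threshold=60):
    """Composite a green-screen frame over a background.

    frame, background: same-sized lists of rows of (r, g, b) tuples.
    A pixel is keyed out (treated as "transparent") when its green
    channel exceeds both red and blue by more than `threshold`.
    """
    out = []
    for frame_row, bg_row in zip(frame, background):
        row = []
        for (r, g, b), bg_pixel in zip(frame_row, bg_row):
            if g - max(r, b) > threshold:
                row.append(bg_pixel)    # keyed out: background shows through
            else:
                row.append((r, g, b))   # foreground pixel kept
        out.append(row)
    return out
```

Real keyers also soften edges and suppress green spill, which a hard threshold like this cannot do.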
Extended Tracking
Allows you to launch an AR experience with an Image Target in the camera’s view and persist the experience even when the image is no longer being tracked. This is particularly useful for visualizing large or complex objects that may be larger than the Image Target.
Image Target (also: trackable, trigger, marker, AR target)
The image recognized by the app, which launches the AR experience. Images with high contrast, unique features, and sharp edges are recognized most reliably.
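Because contrast drives recognition, a crude way to screen a candidate Image Target is to measure its intensity variance; flat, low-contrast images score near zero. A minimal sketch (the scoring function is an illustrative assumption, not part of any vendor's toolchain):

```python
def contrast_score(gray):
    """Rough proxy for Image Target quality: the variance of pixel
    intensities. Higher variance usually means more high-contrast
    detail for the recognizer to latch onto.

    gray: a list of rows of intensity values (0-255).
    """
    pixels = [p for row in gray for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)
```

A checkerboard-like patch scores far higher than a blank one, matching the guidance above; real target-rating tools also consider how features are distributed across the image, not variance alone.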
AR overlay

An image or graphic superimposed over an Image Target (see “Image Target”).
AR Video Playback
A video anchored in 3D space (typically superimposed on an Image Target) while maintaining the view of the physical environment as opposed to full screen playback.
Interactive Video
A video with features, such as hotspots, that call for the user to interact with the video.
Hotspot
Tappable spots within the AR experience that reveal more content or options. Hotspots can be animated and are often shown as a glowing orb.

Markerless AR (also: dead reckoning)
Augmented Reality that maps the physical environment in real time. Often uses a smartphone camera and sensors to position a virtual object in a room without the need for Image Targets.
Virtual Reality
The computer-generated simulation of a three-dimensional image or environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as the Oculus Rift.
Augmented Reality viewer app
An AR app, such as Layar or Aurasma, designed to provide Augmented Reality viewing experiences across multiple brands and content types.
Custom Campaign App
An AR App with 1 to 5 experiences that is published and available through the Apple App Store and/or Google Play.
Custom Event App
An AR App with 1 to 5 experiences, used for a tradeshow or live event, that is distributed to dedicated event iPads. Event Apps are not available on the Apple App Store or Google Play and typically have a life span of 3 to 6 months.
Custom App vs. Universal App
A custom app is client branded, developed, and licensed for use by one client. A universal app is a vendor-branded app (e.g. Aurasma or Layar) that uses a single, universal app to view all clients’ AR experiences. Marxent is a custom app Augmented Reality vendor.

Qualcomm® Vuforia™
Vuforia is a software platform for computer vision-based image recognition, offering a wide set of AR features and capabilities. Marxent is a Qualcomm Vuforia preferred vendor.
Unity 3D
Unity 3D is a cross-platform game engine developed by Unity Technologies. It is used to develop video games for web plugins, desktop platforms, consoles, and mobile devices. Marxent uses Unity 3D to implement 3D models within Augmented Reality apps.
VisualCommerce® (V-Commerce)
The ‘unlimited AR’ platform for hosted, downloadable content. Visual Commerce apps are typically developed as a sales support tool or virtual product trial, with the option to purchase. The V-Commerce™ platform is intended for AR apps that will be expanded over time, adding new content or experiences.
Android
An open-source operating system for mobile devices created by Google.
App Publishing
Publishing an app allows users to download the app via the Apple App Store and/or Google Play.
Apple App Store
Apple’s app market for purchasing and downloading apps for use on Apple devices such as the iPhone and iPad.
Google Play
Google’s online store for purchasing and downloading apps for use on Android-powered smartphones, tablets, Google TV, and other similar devices.
iOS
An operating system for mobile devices created by Apple.

Hardware components for augmented reality are: processor, display, sensors, and input devices. Modern mobile computing devices like smartphones and tablet computers contain these elements, often including a camera and MEMS sensors such as an accelerometer, GPS, and solid-state compass, making them suitable AR platforms.
Various technologies are used in Augmented Reality rendering, including optical projection systems, monitors, handheld devices, and display systems worn on the human body.
A head-mounted display (HMD) is a display device paired to a headset such as a harness or helmet. HMDs place images of both the physical world and virtual objects over the user's field of view. Modern HMDs often employ sensors for six-degrees-of-freedom monitoring that allow the system to align virtual information to the physical world and adjust accordingly with the user's head movements.[12][13][14] HMDs can provide users immersive, mobile, and collaborative AR experiences.
AR displays can be rendered on devices resembling eyeglasses. Versions include eyewear that employ cameras to intercept the real world view and re-display its augmented view through the eye pieces[16] and devices in which the AR imagery is projected through or reflected off the surfaces of the eyewear lens pieces.
Near eye augmented reality devices can be used as portable head-up displays as they can show data, information, and images while the user views the real world. Many definitions of augmented reality only define it as overlaying the information.[20][21] This is basically what a head-up display does; however, practically speaking, augmented reality is expected to include tracking between the superimposed information, data, and images and some portion of the real world.
Contact lenses
Contact lenses that display AR imaging are in development. These bionic contact lenses might contain the elements for display embedded into the lens, including integrated circuitry, LEDs, and an antenna for wireless communication. The first contact lens display was reported in 1999,[27] with further prototypes following in 2010/2011.[28][29][30][31] Another version of contact lenses, in development for the U.S. Military, is designed to function with AR spectacles, allowing soldiers to focus on close-to-the-eye AR images on the spectacles and distant real-world objects at the same time.[32][33] The futuristic short film Sight features contact lens-like augmented reality devices.
Virtual retinal display
A virtual retinal display (VRD) is a personal display device under development at the University of Washington's Human Interface Technology Laboratory. With this technology, a display is scanned directly onto the retina of a viewer's eye. The viewer sees what appears to be a conventional display floating in space in front of them.
The EyeTap (also known as Generation-2 Glass[37]) captures rays of light that would otherwise pass through the center of a lens of the wearer's eye, and substitutes synthetic computer-controlled light for each ray of real light. The Generation-4 Glass[37] (Laser EyeTap) is similar to the VRD (i.e. it uses a computer-controlled laser light source) except that it also has infinite depth of focus and causes the eye itself to, in effect, function as both a camera and a display, by way of exact alignment with the eye and resynthesis (in laser light) of the rays of light entering the eye.
Handheld displays employ a small display that fits in a user's hand. All handheld AR solutions to date opt for video see-through. Initially handheld AR employed fiducial markers,[39] and later GPS units and MEMS sensors such as digital compasses and six-degrees-of-freedom accelerometer–gyroscopes. Today SLAM markerless trackers such as PTAM are starting to come into use. Handheld display AR promises to be the first commercial success for AR technologies. The two main advantages of handheld AR are the portable nature of handheld devices and the ubiquitous nature of camera phones. The disadvantages are the physical constraint of the user having to hold the handheld device out in front of them at all times, as well as the distorting effect of classically wide-angled mobile phone cameras when compared to the real world as viewed through the eye.
Spatial Augmented Reality (SAR) augments real world objects and scenes without the use of special displays such as monitors, head mounted displays or hand-held devices. SAR makes use of digital projectors to display graphical information onto physical objects. The key difference in SAR is that the display is separated from the users of the system. Because the displays are not associated with each user, SAR scales naturally up to groups of users, thus allowing for collocated collaboration between users.

Examples include shader lamps, mobile projectors, virtual tables, and smart projectors. Shader lamps mimic and augment reality by projecting imagery onto neutral objects, providing the opportunity to enhance the object’s appearance using a simple unit: a projector, camera, and sensor.

Other applications include table and wall projections. One innovation, the Extended Virtual Table, separates the virtual from the real by including beam-splitter mirrors attached to the ceiling at an adjustable angle.[41] Virtual showcases, which employ beam-splitter mirrors together with multiple graphics displays, provide an interactive means of simultaneously engaging with the virtual and the real. Many more implementations and configurations make spatial augmented reality display an increasingly attractive interactive alternative.

A SAR system can display on any number of surfaces of an indoor setting at once. SAR supports both graphical visualisation and passive haptic sensation for the end users: users are able to touch physical objects in a process that provides passive haptic sensation.
Modern mobile augmented reality systems use one or more of the following tracking technologies: digital cameras and/or other optical sensors, accelerometers, GPS, gyroscopes, solid-state compasses, RFID, and wireless sensors. These technologies offer varying levels of accuracy and precision. Most important is the position and orientation of the user's head. Tracking the user's hand(s) or a handheld input device can provide a 6DOF interaction technique.
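As a concrete illustration of how such sensors are fused, a complementary filter is a common lightweight technique: the gyroscope is smooth but drifts, the accelerometer-derived angle is noisy but drift-free, and blending the two stabilizes orientation. A minimal sketch, with the function name and `alpha` value chosen for illustration rather than taken from any specific AR SDK:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step fusing two orientation estimates.

    angle:       previous orientation estimate (radians)
    gyro_rate:   angular velocity from the gyroscope (rad/s)
    accel_angle: orientation implied by the accelerometer (radians)
    dt:          time step (s)
    alpha:       trust placed in the integrated gyro signal
    """
    gyro_estimate = angle + gyro_rate * dt   # smooth, but drifts over time
    return alpha * gyro_estimate + (1 - alpha) * accel_angle
```

Full head tracking runs a filter per axis (or a Kalman filter over quaternions, as many AR systems do), but the blending idea is the same.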
Input devices
Techniques include speech recognition systems that translate a user's spoken words into computer instructions and gesture recognition systems that can interpret a user's body movements by visual detection or from sensors embedded in a peripheral device such as a wand, stylus, pointer, glove, or other body wear.[47][48][49][50] Products that aim to serve as controllers for AR headsets include Wave by Seebright Inc. and Nimble by Intugine Technologies.
The computer analyzes the sensed visual and other data to synthesize and position augmentations.
A key measure of AR systems is how realistically they integrate augmentations with the real world. The software must derive real-world coordinates, independent of the camera, from camera images. This process, called image registration, uses different methods of computer vision, mostly related to video tracking.[51][52] Many computer vision methods of augmented reality are inherited from visual odometry. Usually those methods consist of two stages.

The first stage detects interest points, fiducial markers, or optical flow in the camera images, using feature detection methods such as corner detection, blob detection, edge detection, thresholding, and/or other image processing methods.[53][54] The second stage restores a real-world coordinate system from the data obtained in the first stage. Some methods assume objects with known geometry (or fiducial markers) are present in the scene; in some of those cases the scene's 3D structure should be precalculated beforehand. If part of the scene is unknown, simultaneous localization and mapping (SLAM) can map relative positions. If no information about scene geometry is available, structure-from-motion methods like bundle adjustment are used. Mathematical methods used in the second stage include projective (epipolar) geometry, geometric algebra, rotation representation with exponential map, Kalman and particle filters, nonlinear optimization, and robust statistics.
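The two stages can be sketched in miniature. This illustrative Python keeps only the skeleton: a crude horizontal edge detector stands in for stage one, and a pure-translation recovery stands in for stage two (real systems use corner or blob detectors and full epipolar geometry, filtering, or SLAM; all names here are assumptions for the sketch):

```python
def detect_interest_points(gray, thresh=40):
    """Stage one (simplified): mark a pixel as an interest point when
    the intensity step from its left neighbor exceeds `thresh`.
    Crude edge detection; production systems use corner or blob
    detectors.  gray: list of rows of intensity values."""
    points = []
    for y, row in enumerate(gray):
        for x in range(1, len(row)):
            if abs(row[x] - row[x - 1]) > thresh:
                points.append((x, y))
    return points

def estimate_translation(matches):
    """Stage two (simplified): recover the apparent 2D translation as
    the mean displacement between matched points.  Real systems solve
    for a full 6DOF pose with epipolar geometry, Kalman/particle
    filters, or bundle adjustment.
    matches: list of ((x0, y0), (x1, y1)) point pairs."""
    dx = sum(bx - ax for (ax, _), (bx, _) in matches) / len(matches)
    dy = sum(by - ay for (_, ay), (_, by) in matches) / len(matches)
    return dx, dy
```

Feeding the stage-one points of consecutive frames into a matcher and then into stage two is, in skeletal form, the registration loop the paragraph above describes.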

Augmented Reality Markup Language (ARML) is a data standard developed within the Open Geospatial Consortium (OGC),[55] which consists of an XML grammar to describe the location and appearance of virtual objects in the scene, as well as ECMAScript bindings to allow dynamic access to properties of virtual objects.

To enable rapid development of Augmented Reality applications, some software development kits (SDKs) have emerged.[56][57] A few SDKs, such as CloudRidAR,[58] leverage cloud computing for performance improvement. Some of the well-known AR SDKs are offered by Vuforia,[59] ARToolKit, Catchoom CraftAR, Mobinett AR,[60] Wikitude,[61] Blippar,[62] and Layar.[63]
Software and algorithms
AR can be used to aid archaeological research, by augmenting archaeological features onto the modern landscape, enabling archaeologists to formulate conclusions about site placement and configuration.[65]

Another application of AR in this field allows users to rebuild ruins, buildings, landscapes, or even ancient characters as they formerly existed.
AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed onto a real-life local view of a property before the physical building is constructed there; this was demonstrated publicly by Trimble Navigation in 2004. AR can also be employed within an architect's work space, rendering into their view animated 3D visualizations of their 2D drawings. Architecture sightseeing can be enhanced with AR applications allowing users viewing a building's exterior to virtually see through its walls, viewing its interior objects and layout.
AR technology has helped disabled individuals create art by using eye tracking to translate a user's eye movements into drawings on a screen.[72] An item such as a commemorative coin can be designed so that when scanned by an AR-enabled device it displays additional objects and layers of information that were not visible in a real-world view of it.[73][74] In 2013, L'Oreal used CrowdOptic technology to create an augmented reality experience at the seventh annual Luminato Festival in Toronto, Canada.[24]

AR in art opens the possibility of multidimensional experiences and interpretations of reality. Augmenting people, objects, and landscapes is becoming an art form in itself. In 2011, artist Amir Bardaran's Frenchising the Mona Lisa infiltrated Da Vinci's painting using an AR mobile application called Junaio. Aiming a Junaio-loaded smartphone camera at any image of the Mona Lisa, the viewer can watch as Leonardo's subject places a scarf made of a French flag around her head.[75] The AR app allows the user to train his or her smartphone on Da Vinci’s Mona Lisa and watch the mysterious Italian lady loosen her hair and wrap a French flag around her in the form of a (currently banned) Islamic hijab.
Greeting cards
One of the more innovative uses of Augmented Reality relates to greeting cards. Cards can be implemented with digital content that users discover by viewing the illustrations with certain mobile applications or devices using augmented reality technology. The digital content can be 2D and 3D animations, standard video, and 3D objects with which users can interact.

In 2015, the Bulgarian startup iGreet developed its own AR technology and used it to make the first premade “live” greeting card. It looks like a traditional paper card, but contains hidden digital content which only appears when users scan the card with the iGreet app.
AR can enhance product previews, such as allowing a customer to view what's inside a product's packaging without opening it.[79] AR can also be used as an aid in selecting products from a catalog or through a kiosk. Scanned images of products can activate views of additional content such as customization options and additional images of the product in use.[80][81] AR is also used to integrate print and video marketing: printed marketing material can be designed with certain "trigger" images that, when scanned by an AR-enabled device using image recognition, activate a video version of the promotional material. A major difference between Augmented Reality and straightforward image recognition is that multiple media can be overlaid in the view screen at the same time, such as social media share buttons, in-page video, audio, and 3D objects. Traditional print-only publications are using Augmented Reality to connect many different types of media.
With the continual improvements to GPS accuracy, businesses are able to use augmented reality to visualize georeferenced models of construction sites, underground structures, cables and pipes using mobile devices.[87] Augmented reality is applied to present new projects, to solve on-site construction challenges, and to enhance promotional materials.[88] Examples include the Daqri Smart Helmet, an Android-powered hard hat used to create augmented reality for the industrial worker, including visual instructions, real time alerts, and 3D mapping.[89]

Following the Christchurch earthquake, the University of Canterbury released CityViewAR, which enabled city planners and engineers to visualize buildings that had been destroyed in the earthquake.[90] Not only did this provide planners with tools to reference the previous cityscape, but it also served as a reminder of the magnitude of the devastation caused, as entire buildings had been demolished.