(e)merge --Body trace group project
Contents: Theory, Inspiration, Story Board, Interaction Preview, Theoretical Context, Technical Explanation, Methodology, Settings, Future Consideration, References, Figures
The body schema is an internal representation of the posture and extension of a body in space (Maravita, Spence, Sergent & Driver, 2002). This should not be confused with the body image.
The body schema can be observed in action through the mirror test, which is used to test for self-recognition in infants and non-human animals. While most infants succeed in this task, the only non-human animals known to succeed are chimpanzees and orangutans (Gallup, Anderson and Shillito, 2002).
Is this self-recognition the result of a bodily sense of self or of an innate conceptual one? Tsakiris and Haggard (2005) suggest that the former is the more likely case, and there are experiments that further attest to this.
Maravita et al. (2002) showed that as one gains experience using a tool, the tool starts to form part of that individual's body schema, and that cross-modal interactions depend not so much on the visual hemifield in which the tool is placed as on which appendage the tool is considered an extension of. Again, this applies only after the user has gained experience with the particular tool. This is further evidence that the body schema is flexible, and not conceptually fixed from the moment of birth.
Ritchie and Carlson (2010), in their mirror-box experiment, further reinforce the suggestion that this cross-modal interaction between vision and proprioception results from a sense of body ownership rather than from online visual feedback. An interesting observation from this experiment is that the alteration of the body schema when faced with a mirror image of ourselves is almost instantaneous, as opposed to the rubber-hand and bodily-displacement experiments, in which extensive training of the individual is required before an alteration in the body schema can be observed.
Pavani and Castiello (2004) show that, given this flexibility, the body schema can also be altered to include the human shadow. Since our body shadows are projected onto our environment, this alteration of the body schema would bridge the perceptual discrepancy we naturally experience between our personal space and our environment.
Space is a very important factor in the animal kingdom. While animals need to keep to their herd, overcrowding will quickly result in a variety of physiological and psychological factors working together to reduce the population to manageable proportions again (Hall, 1966).
The same goes for human society. History shows that when overpopulation grows to the extent that personal and social distances can no longer be respected, factors such as disease and tension rise above what is considered normal and, inevitably, population control occurs (Hall, 1966).
That being said, closeness and intimacy are equally important to humans. With the rise of online social networking and expansive virtual environments, our conception of personal, social and public distance is being skewed; a number of people now spend as much time living in virtual worlds as in the real one, and the situation doesn't seem to be getting any better. The obvious result of this trend is that people will become less and less familiar and comfortable with real-world interactions, and will therefore experience increased personal and social distances.
A self-organising system has been defined as “a set of entities that obtains an emerging global system behaviour via local interactions without centralized control” (Wilfried, 2010).
Second-order cybernetics concerns the way cybernetic systems are composed. Fundamentally, while a system may be governed by only a small number of rules, or programmed pieces of behaviour, these can work together to produce a potentially infinite number of variations. A classic example is Reynolds' Boids algorithm, in which a very narrow set of rules results in a different, constantly fluctuating flock every time.
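To make the flocking example concrete, the sketch below is a minimal boids-style update in plain Java (the installation itself is written in Processing). The rule weights and the inverse-square falloff in the separation term are illustrative choices of ours, not values from any published implementation.

```java
// Minimal boids-style flocking: three local rules (cohesion, separation,
// alignment) acting on each boid produce emergent flock behaviour with no
// central control. All weights are illustrative.
public class Boids {
    public static double[][] pos;   // pos[i] = {x, y}
    public static double[][] vel;   // vel[i] = {vx, vy}

    // Advance every boid by one time step using only neighbour state.
    public static void step(double cohesion, double separation, double alignment) {
        int n = pos.length;
        double[][] next = new double[n][2];
        for (int i = 0; i < n; i++) {
            double cx = 0, cy = 0, sx = 0, sy = 0, ax = 0, ay = 0;
            for (int j = 0; j < n; j++) {
                if (j == i) continue;
                cx += pos[j][0]; cy += pos[j][1];          // cohesion: steer to flock centre
                double dx = pos[i][0] - pos[j][0];
                double dy = pos[i][1] - pos[j][1];
                double d2 = dx * dx + dy * dy + 1e-9;
                sx += dx / d2; sy += dy / d2;              // separation: push away from close boids
                ax += vel[j][0]; ay += vel[j][1];          // alignment: match neighbour velocity
            }
            int m = n - 1;
            next[i][0] = vel[i][0] + cohesion * (cx / m - pos[i][0])
                       + separation * sx + alignment * (ax / m - vel[i][0]);
            next[i][1] = vel[i][1] + cohesion * (cy / m - pos[i][1])
                       + separation * sy + alignment * (ay / m - vel[i][1]);
        }
        vel = next;
        for (int i = 0; i < n; i++) {
            pos[i][0] += vel[i][0];
            pos[i][1] += vel[i][1];
        }
    }
}
```

With only the cohesion weight switched on, two distant boids drift towards each other; adding the other two weights yields the familiar fluctuating flock.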
A relevant theory explaining self-organising systems is Gordon Pask's conversation theory (1976). Conversation theory holds that socially organised systems, such as the one we live in, are symbolic systems in which people's interpretations of others' behaviour result in possible shifts in the organisation. This resembles a kind of Pavlovian conditioning, in which the nodes of a network condition each other to act in an optimal way.
Learning, according to Pask, happens through conversations about a particular subject, which makes the knowledge easy to obtain. This notion of conversation also applies to the conditioning aspect mentioned previously: differences between nodes in a social network may be reduced through 'conversation' until an agreement is reached.

Theoretical Context
This section provides context to the decisions we made with respect to our real-time installation and the themes related to it: the body schema, proxemics, and self-organising systems and conversation theory.

Silk: http://www.chromeexperiments.com/detail/silk/

Methodology
The live installation combines two aspects that need to be developed in tandem: art and technology. Furthermore, each group member is required to bridge the gap between themselves and the participants. For these reasons, the methodology for the live installation will be neither a top-down nor a bottom-up approach, but a combination of both, since the higher and lower levels of abstraction need to be considered simultaneously. The methodology can thus be seen as a continuously evolving multi-level approach whose stages merge into one another. The following stages are depicted in the accompanying diagram (figure 6):
1. Representation
2. Modelling
3. Simulation
4. Application
5. Evaluation
Figure 6: Diagram depicting the different stages of the proposed methodology (Gershenson, C., 2006)

This section describes the implementation of our real-time installation.

Thank you

Background image: http://www.opengraphicdesign.com/wp-content/uploads/2010/06/vector-background-part2-01.jpg

This section describes the relationship between the theory and our real-time installation.

The Body Schema
Since we exert a sense of bodily ownership over our shadows, it follows logically that any appendages growing seamlessly out of this shadow would effortlessly be included in our body schema.
Since the alteration of the body schema when faced with a shadow takes more training than when faced with a mirror reflection, visual cues and similarity to prior experience will presumably aid the transition. The realistic movement of the floating seeds across the screen when a shadow is detected will reinforce the notion that our shadow, and by extension our body, is in an environment which may be considered real, as it demonstrates some of the behaviour we have come to expect of our physical environments. All of this will help achieve the final aim of the installation: to bridge the perceptual gap between our body and our natural environment.

An important question to consider is: why do we assume that visual cues and prior experience play an important role in the alteration of the body schema? In Petkova and Ehrsson's body-swapping experiment (2008), a particularly remarkable outcome was observed when participants experienced the illusion of having swapped bodies with an object that does not possess an anthropomorphic shape, such as a cardboard box. While all other factors were kept constant, the alteration of the body schema did not take place in this case. This might explain why a shadow can be adapted into the schema: apart from being proportionally identical to our body, it also moves in exactly the same way. Furthermore, it could be argued that the element out of which all of this emerges is previous experience. If one were, hypothetically, born with an irregularly shaped body, say with four arms and three legs, it is extremely debatable whether these experiments would work in their current form; the mannequin used in the body-swapping experiment would have to conform to this image, as the participant's previous experience of their body would have altered their body image (note the word image) to include three legs and four arms.
Since shadows bridge the gap between body and environment, we assume that the environment should also conform to previous experience as much as possible, so as not to shatter the illusion being created. It is important to note, however, that this increased element of realism may not be entirely necessary. Holmes et al. (2006) noted that changing the colour of the rubber hand so that it differs from the subject's own does not produce any remarkable difference in the final outcome of the experiment; it appears that "the limits of bodily awareness appear to be set by a categorical representation of what people’s bodies are like in general" (Longo, Haggard, 2012), as opposed to what the subject's own body is like.

Proxemics
In our live installation, the viewer is forced to enter another person's personal distance to get the full experience. This acts as a kind of commentary on present-day society: it emphasises the importance of social interaction while superimposing it over the implied link created between humans and their environment, thereby suggesting that everything we know and have is a result of interaction.

Self-Organising Systems and Conversation Theory
Our live installation's system will itself be self-organising: a structure with a function that has no known "solution" beforehand and is constantly changing. The structure in this case is composed of the system's components (blob detection and recursive tree generator). The components communicate with one another, creating a sense of integration within the system while remaining separate enough to avoid interference. The live interaction's end result is thus worked on dynamically by the system's components, so the system can adapt quickly to the participants' unpredictable behaviour.
With respect to second-order cybernetics, its central idea is that the person examining a system can never truly find out how it works by examining it externally. In Pask's words, they enter into a conversation: the user and the system exist in a mutually dependent structure in which both affect each other. The user affects the system and the system, in turn, affects the user, so that our message is conveyed through a kind of explorative interaction.

Story Board
Scene 1: The participant enters the interactive space and their shadow is projected onto the screen.

This section describes our aims and goals for the real-time installation. As our body shadows move through our environment, new meanings are constantly being created. The omnipresent perceptual gap between self and surroundings is bridged, and the dividing role of space starts to weaken and lose its meaning.
Our real-time installation aims to redefine the way in which we see our natural environment through body shadows and metaphoric representations. Our project will therefore make use of the concepts of seeds and tree-like structures. We merge the body with nature by extending its shadow in the digital space via generative natural forms such as fractals. In this way, a new body schema emerges, together with new methods of interaction with its surrounding space.
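As an illustration of the kind of generative, tree-like structure described above, the following is a hypothetical sketch, in plain Java, of a recursive branching rule. The branching angle (0.4 rad), shrink factor (0.67) and length cutoff are invented for illustration and are not the installation's actual parameters.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical recursive tree generator: each branch spawns two shorter
// child branches until the branch length falls below a cutoff.
public class TreeSketch {

    // Appends line segments {x0, y0, x1, y1} for a branch and its subtree.
    public static List<double[]> branch(double x, double y, double angle,
                                        double len, List<double[]> out) {
        if (len < 2.0) return out;                      // recursion cutoff
        double x2 = x + len * Math.cos(angle);
        double y2 = y + len * Math.sin(angle);
        out.add(new double[]{x, y, x2, y2});
        branch(x2, y2, angle - 0.4, len * 0.67, out);   // left child branch
        branch(x2, y2, angle + 0.4, len * 0.67, out);   // right child branch
        return out;
    }
}
```

Seeding one trunk pointing "up" (angle -PI/2) produces a full binary tree of segments; in the installation the segments would be drawn growing out of the detected shadow.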
The following series of images and their respective descriptions give a brief outline of the progression of the real-time installation.
Seeds float aimlessly around the screen; the background music layer begins to play.
Scene 2: The participant reaches out to touch one of the seeds. The shadow interacts with a seed and a tree-like structure grows out from the shadow; the first sound layer begins to play.
Another participant enters the interactive space and their body is projected as a shadow onto the screen. The shadows touch each other and a tree-like structure grows into the shadows themselves; the second sound layer begins to play.
The shadows continue interacting with the seeds around them; the third sound layer begins to play.

Silk - Yuri Vishnevsky (2012)
Silk is an interactive piece of work that depicts a natural-looking effect created using a generative algorithm. To test it, go to the following link.

Compliant (2003) - Scott Sona Snibbe
'Compliant' is an example of a real-time installation that uses the shadows of the participants' bodies in order to distort, push, and chase elements on the screen.

Body Dysmorphic Disorder (2011) - Robert Hodgin
This video demonstrates how the author takes data from a Kinect sensor and manipulates it to mirror back a puffy or emaciated view of the participant in real time.

Shadow Sound - Willy Chyr
"Shadow Sound" is a real-time installation that creates a link between audio and visuals. It is composed of five balls, each representing a unique instrument. The participant uses their own shadow to combine the different balls at various heights in order to mix an assortment of melodies.

Very Nervous System (1986-1990) - David Rokeby
This video shows David Rokeby interacting with his sound installation. He makes use of video cameras, image processors, computers, synthesizers and a sound system to create a space in which a participant uses their own body to create sound and/or music.

Body Movies (2001)
'Body Movies' is a real-time installation that alters public space with interactive projections. Participants use their bodies to cast shadows on the projections, unveiling photographic portraits.

This section provides the images, videos and works that were used as inspiration for the real-time installation.

This section gives a general idea of what we aim to achieve through our installation.

Sound
While scientists may refer to sound as a series of transverse and longitudinal waves, it is important to understand its importance in our daily lives. We exist in a world that infinitely and exhaustingly produces and reproduces sound. Sound is always there, even when it is not there; total silence, the absence of sound, has its own sound. The study of how sounds are used to induce an emotional reaction by stimulating the human brain has fascinated people since ancient Greece (Budd, 1985).
In a report called 'Sound, Mind and Emotion' (2009), a group of researchers investigate the composition of a descriptive model of how sounds can stimulate the human brain to arouse emotions. The model is made up of a number of psychological components: “brain stem reflexes, evaluative conditioning, emotional contagion, visual imagery, episodic memory, and musical expectancy” (Mossberg, 2009). Each component is structured and classified in the model with respect to its impact on human evolution; the components are then subjected to arbitrary factorial changes and the respective regions of the brain are further examined. The researchers moreover evaluate the theoretical infrastructure and how it embodies an understanding of the interaction between sound/music and emotions.

Synaesthesia
Synaesthesia (or synesthesia) can be defined as an “involuntary” and “consistent” condition in which the stimulation of one sense is experienced through the concurrent perception of another (Cytowic, 1996). More than 65 different forms of synaesthesia can be experienced by a human being (Simner et al., 2006), and it has been estimated that about 4.4% of the general population have some form of it (Simner et al., 2006). For the purpose of our real-time installation, the key observation to be drawn from synaesthetes is that the material world non-synaesthetes perceive as reality can be perceived in a variety of different forms. Non-synaesthetes should therefore expose themselves to reality in as many ways as possible in order to seize the richness it has to offer. One way they can do so is by having their sensory exchanges heightened through digital media.
Over recent years, digital media have come to emulate us, our desires, and our daily activities (Van Den Boomen et al., 2009). We have come to view them not only as conceptual tools but also as perceptual ones. Pursuits have already been made towards developing digital media that intensify our emotional charge. For example, the phenomenon of auditory-to-visual synaesthesia is being simulated in digital music players, on VJ screens in clubs, and in robot light shows; these media convert sound into visual patterns. Likewise, in a number of music videos, such as Justin Timberlake's Lovestoned video (2007), audio visualisations are combined into the narrative and performative assemblages of the genre.

Sound
The implementation of sound (be it speech, sound effects or music) is a major aspect of interactive installations, and its design and effectiveness must always be taken into great consideration because of the additional cognitive and emotional depth it can offer to the audience's or user's experience. Sound is crucial because it excites the brain in addition to visual stimulation and can positively extend the immersion of the interactivity. Sound can be adaptive to the environment of the installation and/or can be designed to focus on the user's standpoint.
The influence of abstractionism on architectural trends and designs over the years has been tremendous, both in the installations hosted within these spaces and, subsequently, in the way artists tend to interact with sound in its entirety. The Greek mathematician Pythagoras' proposition, according to which universal harmony is reflected in the similarities between harmonious proportions in architecture and harmonious consonance in music, is being revisited; hence, spatial design is being aesthetically inspired by sound and music.
When creating interactive sound installations, artists consistently strive to generate environments in which sound is the active subject of design. They aim to intensify the soundscape of the given space; the foremost purpose of this conscious design is the remodelling of our responsive behaviour within the sonic space itself.

Importance of Sound
According to José Iges, a Spanish sound artist, "Sound sculptures and sound installations are intermedia works, and they behave like expansions of sculpture and installation" (1999). Iges makes the connection between sound and visual aspects by suggesting two structural potentials:
1. “perceptive reality, dialectic or complementary” (Iges, 1999);
2. visual works emulated as instruments, providing fluidity to the sound discourse.
It is this importance of sound that we try to rethink and incorporate in our project: the creation of sound through interaction, the constant growing of sounds into music and soundtracks, and the void when sound is no longer there. The fascination, excitement and emotional splurge with which sounds enrich us every day are the reasons we want to manipulate them and use them as we please.

Sonic Similarities to our Inspirations
Our installation is similar to David Rokeby's installation in that the interface with which participants interact is not the body itself, as in the "Very Nervous System", but a projection, and hence an extension, of it. In addition, the movements of the participants within the interactive space alter their shadows, and the shadows' interactions in turn cause both sound and visual alterations within the system.
The concept of our installation is also similar to Willy Chyr's "Shadow Sound Interactive Installation 3" in that it includes the interaction of the participants' shadows with elements projected on a screen. What is really different, though, is that through the interactions implemented in our installation there is growth (of "pollens" into abstract tree formations), whereas in Willy Chyr's installation there is no growth or expansion of the existing entities (balls); only an alteration of their position is achieved.
Though there is a resemblance in the technology used for the interaction of shadows, there is a major difference in the sound as well. While Willy Chyr's installation uses the sounds of common instruments, our implementation of sound is a series of layers of abstract melodic patterns that add to the overall abstract sense of the installation.

This section provides a rationale for the research strategy that will be adopted for the development of the real-time installation.

This section explains how our equipment will be set up.

Future Consideration
This section contains points that the group still needs to discuss and work on to ensure the live installation is improved upon.
1. The number of participants that can play with the real-time installation at any one time.
2. The size of the white screen/wall and the interactive space.
3. Testing for the ideal settings of the position of the camera, the lighting, the projector, and the four speakers.
4. The reconsideration of the graphics visualisation and audio sounds with respect to the progress/development of the concept.
5. The participants' sense of control in the real-time installation; mainly their navigation, anticipation, autonomy, and learnability.
6. Testing whether the simultaneous incorporation of visual and audio effects will positively affect the participants' engagement with the real-time installation.
7. The number of sound layers to be created, whether it will be a finite or an infinite number.
8. Whether to include the option of headphones to create binaural hearing with respect to the progress/development of the concept.
9. The progression of the layering depending on the number of interactions.
10. The volume levelling and panning effects to be added.

Orbiter - Rafael Lozano-Hemmer, Vera-Maria Glahn (2007)
A comprehensive example of an interactive sound environment that uses several sound layers. It consists of orbiting blobs, or "orbs", accompanied by sounds of various tonal properties. The pitch of each tone is defined by how far from the centre the corresponding blob is; the duration of the tone is determined by the length of the orbit it sits on.

References
Andy. (Oct 10, 2011). "Technical-recipes.com." Retrieved in February, 2013 from: http://www.technical-recipes.com/2011/tracking-coloured-objects-in-video-using-opencv/
Apple Inc. (2007). Apple GarageBand (Version 4.1.2) [Software]. Retrieved June, 2009 from: http://www.apple.com/ ilife/garageband/
Apple Inc. (July 23, 2009). Apple Logic Pro (Version 9) [Software]. Available via the University of Edinburgh October, 2012 from: www.apple.com/logicpro/
Avid Technologies Inc. (October 21, 2011). Pro Tools (Version 10.1) [Software]. Available via the University of Edinburgh October, 2012 from: http://www.avid.com/US/products/pro-tools-software/
Bekoff, M., Allen, C., and Burghardt, G. M. (2002). "Gallup, Anderson, Shillito: The Mirror Test." The Cognitive Animal: Empirical and Theoretical Perspectives on Animal Cognition. Cambridge, MA: MIT Press. Print.
"BlobDetection Library." BlobDetection Library / V3ga. N.p., n.d. Retrieved in February, 2013 from: http://www.v3ga.net/processing/BlobDetection/index-page-home.html
CA&V. (n.d.) "Surround Sound." N.p. Retrieved in February 2013 from http://www.caav.com/learn/surround-sound.
Chu-Carroll, M. C. (July 11, 2007). "Science Blogs." Good Math Bad Math. Retrieved in February, 2013 from: http://scienceblogs.com/goodmath/2007/07/11/the-mandelbrot-set-1/

Iges, J. (1999). "El Espacio. El Tiempo En La Mirada Del Sonido". Spain: Kulturanea. Print.
istevaras. (Jan 8, 2013). “Color Tracking w/ Processing.” YouTube. Retrieved in February 2013, from:
JBox2d. (2007). JBox2d (version 126.96.36.199) [Software]. Retrieved in February, 2013 from: http://code.google.com/p/jbox2d/
Kirn, P. (Feb 6, 2009). “Processing Tutorials: Getting Started with Video Processing via OpenCV”. Retrieved in February, 2013 from: http://createdigitalmotion.com/2009/02/processing-tutorials-getting-started-with-video-processing-via-opencv/
Kalogianni, D., & Castorina, G. (2010). “Body as interface module”. Msc adaptive architecture and computation: University College London.
Lastowka, A. (May 8, 2012). Roots. [Application]. Retrieved in February 2013, from: http://www.openprocessing.org/sketch/60845
Longo, M.R., and Haggard, P. (2012) "What Is It like to Have a Body?" Current Directions in Psychological Science 21 Print.
Maravita, A., Spence, C., Sergent, C., and Driver, J. (2002). "Seeing Your Own Touched Hands in a Mirror Modulates Cross-modal Interactions." Psychological Science 13.4. Print.
Marxer, R. (2010). fisica (v0.1.13) [Software]. Retrieved in February, 2013 from: http://www.ricardmarxer.com/fisica/

Tarbell, J. (Mar 6, 2001). Root.System. [Application]. Retrieved in February 2013, from: http://www.levitated.net/daily/levRootSystem.html
Toida, S. (Aug 2, 2009) "Recursive Algorithm." CS381 Discrete Structures/Discrete Mathematics Web Course Material. Old Dominion University. Retrieved in February, 2013 from: http://www.cs.odu.edu/~toida/nerzic/content/web_course.html
Tsakiris, M., and Haggard, P. (2005). "The Rubber Hand Illusion Revisited: Visuotactile Integration and Self-Attribution." Journal of Experimental Psychology: Human Perception and Performance 31.1. Print.
Van Den Boomen, M., Lammes, S., Lehmann, A. S., Raessens, J., & Schafer, M. T. (2009). "From the Virtual to Matters of Fact and Concern". Digital Material: Tracing New Media in Everyday Life and Technology, pp. 7.
Wilfried, E. (Nov 24, 2010). “So When Do We Call a System Self-organizing?” Self- Organizing Networked Systems: Retrieved in February 2013, from http://demesos.blogspot.co.uk/2010/11/so-when-do-we-call-system-self.html
Windle, J. (n.d.). Super Recursion Toy. [Application]. Retrieved in February 2013, from: http://soulwire.co.uk/experiments/recursion-toy/

Cytowic, R. E. (1996). Synaesthesia: Phenomenology and Neuropsychology – a Review of Current Knowledge. In Baron-Cohen, S., & Harrison, J. E. (1997), Synaesthesia: Classic and Contemporary Readings (pp. 17-39). Oxford: Blackwell Publishers Ltd.
Fullerton, J. (Oct 31, 2010). "Fractals and the Passing of Benoit Mandelbrot." Capital Institute. N.p. Retrieved in February, 2013 from: http://www.capitalinstitute.org/node/345
Fry, B., and Casey, R. (n.d.). Processing.org. Retrieved in February, 2013 from: http://www.processing.org/
Gershenson, C. (Feb 9, 2006). A General Methodology for Designing Self-Organizing Systems. Vrije Universiteit Brussel. Print.
Hall, E. T. (1966). The Hidden Dimension. Garden City, NY: Doubleday. Print.
Haynes, C., & Mouaness, O. (2007). Lovestoned. Salford: Robert Hales/HSI. Retrieved in October from: http://www.laurenindovina.com/Justin-Timberlake-Lovestoned
Hogendoorn, H., Kammers, M., Carlson, T., and Verstraten, F. (2009). "Being in the Dark about Your Hand: Resolution of Visuo-proprioceptive Conflict by Disowning Visible Limbs." Neuropsychologia 47. Print.
Holmes, N. P., Snijders, H. J., and Spence, C. (2006). "Reaching with Alien Limbs: Visual Exposure to Prosthetic Hands in a Mirror Biases Proprioception without Accompanying Illusions of Ownership." Perception & Psychophysics 68.4. Print.
Hummel, K. A., and Sterbenz, J. P. G. (2008). Self-organizing Systems: Third International Workshop, IWSOS 2008, Vienna, Austria. Proceedings. Berlin: Springer. Print.

Mossberg, F. (2009). "Sound, Mind and Emotion" - texts from a series of interdisciplinary symposiums arranged 2008 by the Sound Environment Centre at Lund University, Sweden. Retrieved in February from: http://www.ljudcentrum.lu.se/upload/Ljudmiljo/rapport8_sound_mind.pdf
OpenCV. (Oct 19, 2006). OpenCV (Version 2.4.4-BETA) [Software]. Retrieved in February, 2013 from: http://opencv.org/
Pask, G. (1976). “Conversation Theory: Applications in Education and Epistemology”. Amsterdam: Elsevier. Print.
Pavani, F., and Castiello, U. (2003). "Binding Personal and Extrapersonal Space through Body Shadows." Nature Neuroscience 7.1. Print.
Petkova, V. I., and Ehrsson, H. (2008). "If I Were You: Perceptual Illusion of Body Swapping." Ed. Justin Harris. PLoS ONE 3.12. Print.
Ritchie, J., and Carlson, T. (2010). "Mirror, Mirror, on the Wall, Is That Even My Hand at All? Changes in the Afterimage of One’s Reflection in a Mirror in Response to Bodily Movement." Neuropsychologia 48 Print.
ScienceProg. (March 28, 2007). "Fractal Antenna Constructions." Do It Easy With ScienceProg. N.p. Retrieved in February, 2013 from: http://www.scienceprog.com/fractal-antenna-constructions/
Shiffman, D. (Sept 2, 2008). "Learning Processing: A Beginner's Guide to Programming Images, Animation, and Interaction". Morgan Kaufmann Series in Computer Graphics. Print.

Settings
The diagram on the right (figure 1) expresses the dialogue between the casual participant and our real-time installation; the functional requirements are also specified.
Figure 1: Use case diagram of the live interaction.
The installation will be conducted in a dark room with one light source. It will provide the participant(s) with an interactive space in which to explore its functional requirements. Their shadow will be captured by the camera and its navigational information fed to the installation's central processor. The resulting output will then be projected back onto the white screen/wall using a projector. The operational workflow of the real-time installation, and the events that cause it to be in a particular state with respect to the participant, can be viewed in the diagram on the left (figure 2).
Figure 2: Activity diagram of the live installation.

Technical Explanation
The program for the installation will be written in the open-source programming language Processing (Fry, et al., n.d.). It will be split into three main components: blob detection, recursive tree generator, and sound. The following sections describe each component's implementation.

Blob Detection
The participants' shadow blobs will be detected using real-time computer vision techniques, in particular colour detection, brightness detection and contour detection. One library that implements these techniques is OpenCV (OpenCV, 2006). This section presents an analysis of the different algorithms being considered for the blob detection; each algorithm needs to be further analysed and tested.

Brightness Detection Algorithm (Kalogianni, D., et al., 2010):
1. Read the image from the camera
2. Blob detect the image from the camera by:
• going through all the pixels on the source image and checking if a pixel and its close neighbours have brightness above a specific threshold.
• any pixels that are above the threshold are painted white and those below the threshold are painted black in a new image.
• going through all the pixels in the new image and painting each pixel white or black according to the previous step.

Colour detection algorithm (Shiffman, D, 2008):
1. Read the image from the camera
2. Search through the image's pixels and compare the colour of each pixel with the colour under detection.
3. Calculate the difference between the colour of the pixel and the required colour.
4. If that difference is less than the specified threshold, save the location of the pixel.

Background subtraction algorithm
This algorithm makes use of the blob detection library (BlobDetection Library, n.d.). It works against a white background: an image of the empty scene is taken at the start of the installation, and the participant(s) are then introduced onto the scene. The algorithm follows:
1. Read the image from the camera.
2. Store the background brightness information.
3. Calculate the brightness difference between the stored background and the current frame.
4. Convert the image to grayscale.
5. Search for the pixels whose brightness value is above a set threshold.
6. Locate those pixels in 2D space (x, y).
7. Calculate and draw a bounding box framing the shadow blob from those points.

Colour detection algorithm (Andy, 2011)
This algorithm uses the HSV representation of the image, because HSV encodes the colour to be detected as a single number (the hue), whereas in RGB a colour is produced by mixing red, green and blue components.
1. Read the image from the camera
2. Convert the image format from BGR (blue, green,red) into HSV(Hue, Saturation,Value).
3. A copy of the source image is created and turned into black and white according to a threshold.
4. The threshold is given as a range of the colour to be detected (a lower and upper bound in HSV).
5. The areas of the color to be detected turn into white whilst the rest turns into black.
6. The white blobs of the appropriate size are saved in an array.

Movement detection algorithm (Kirn, P, 2009)
This algorithm detects only the parts of the image that change between frames, depending on the colour of the shadow blobs detected.
1. Grab a frame from the camera.
2. Convert the image to black and white according to a threshold filter.
3. Calculate the absolute difference between the current and previous frame.
4. Display the difference image (movement).
5. Store the current frame in memory.

Another possible implementation for blob detection is edge/contour detection, which identifies the points in a digital image where brightness changes sharply; these points are then typically organised into a collection of curved line segments termed edges. Since the images captured by the camera will be black and white, the best solution for the project is to use colour/brightness detection; edge/contour detection might be adopted in the future. The following video depicts an example of colour tracking with Processing (istevaras, 2013).

Recursive Tree Generator

Physics libraries will be used to give physical behaviour to the non-human visual objects (the seeds and the tree-like structure). The libraries that will be used are 'JBox2D' (JBox2D, 2007) and 'fisica' (Marxer, R, 2010). These were chosen because 'fisica', a wrapper around the open-source physics engine 'JBox2D', aims to create physical models by exposing an object-oriented API. This will help structure the code and ensure that all the components interact with each other seamlessly. Daniel Shiffman (2008) describes a general algorithm to emulate the physical behaviour between objects:
1. Create all the objects in our world.
2. Calculate all the forces in our world.
3. Apply all the forces to our objects (force = mass × acceleration).
4. Update the locations of all the objects based on their acceleration.
5. Draw all of our objects.

The following video depicts an application that uses the fisica library to emulate floating balls (Marxer, R, 2010).

A recursive algorithm will be used to create the generative visuals that extend from the participants' shadows in the installation, because the most efficient way of implementing generative visuals under different conditions is to utilise "solutions to smaller versions of that same problem" (Toida, S, 2009). This method makes it possible to have repeated instances of the same generative visual which produce complementary results. Such structures are known as fractals. Mandelbrot (1975) defines a fractal as "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole." (Fullerton, J, 2010). An example can be seen in the figure below.

Figure 3: The Mandelbrot fractal pattern (Chu-Carroll, M. C., 2007)

The aim for the installation's generative visual is to use the same fractal concept to produce an abstract tree branching structure, as shown in the figure below.

Figure 4: An abstract tree branching structure (ScienceProg, 2007)

The figure depicts two branches connected at either end of the root; each of these branches has two branches at its end, and so on. The end result is a recursive tree-like structure. Mandelbrot describes this as self-similarity.
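Mandelbrot's definition can be made concrete with a short escape-time sketch (illustrative Python, not part of the Processing implementation): a point c belongs to the Mandelbrot set if iterating z → z² + c from z = 0 stays bounded.

```python
def in_mandelbrot(c, max_iter=100, escape_radius=2.0):
    """Escape-time test: iterate z -> z^2 + c from z = 0 and report
    whether the orbit stays within the escape radius."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > escape_radius:
            return False  # the orbit escaped: c is outside the set
    return True  # still bounded after max_iter steps: c is (approximately) inside

# The origin is inside the set; points far from it escape quickly.
print(in_mandelbrot(0j))      # True
print(in_mandelbrot(2 + 2j))  # False
```

Colouring each pixel by how quickly its c escapes is what produces the familiar pattern in figure 3.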
In other words, each part is a "reduced-size copy of the whole" (Fullerton, J, 2010). The following videos depict open-source examples of three recursive tree structure algorithms in action:

Sound

Surround sound is defined as "a technique for enriching the sound reproduction quality of an audio source with additional audio channels from speakers that surround the listener (surround channels), providing sound from a 360° radius in the horizontal plane (2D) as opposed to "screen channels" (center, [front] left, and [front] right) originating only from the listener's forward arc." (CA&V, n.d.) This term must not be confused with 3D sound, which produces sound not only from the left and right of the participant but also from above and below them.

In order to create a surround-sound effect, the space in which the participant(s) interact, the 'interactive space', will be surrounded by four speakers. A sound territory will thus be created, immersing the participant(s) in a soundscape, while the direction in which the sound travels to the participant(s) will be manipulated using sound channels. By doing so, the participants' perception of the sound will be heightened. The diagram below depicts the settings for the surround sound (figure 5).

Figure 5: Settings for the surround sound

The sound design will be constructed using sound layering. In essence, layering is the "stacking" of different synths and sounds on top of each other in order to produce a variety of sonic components and structures that would not be possible with a single instrument or synthesized sound. Layering will be used because it is a subtle way of intuitively building the soundtrack of the real-time installation's interactions. The software that will be used to create the layering effect is 'Pro Tools' and 'Apple Logic Pro'.
The video in the section 'Interaction Preview' is an example of the layering effect desired for the real-time installation. For this video a calm, tuneful ambient melody was selected, and the effect complemented the video's visuals. For each of the interactions, sound layers play on top of each other. The output of the "floating embers" synth pad was used to compose each of the layers' melodies; these were then further processed using a compressor to add spatial characteristics to the sound. The software used was Apple 'GarageBand'.
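At its core, layering is just the summation of independently synthesized signals. A minimal sketch (illustrative Python, not the Pro Tools/Logic workflow) generates two sine-tone layers and stacks them sample-by-sample into one mix:

```python
import math

SAMPLE_RATE = 8000  # samples per second (deliberately low for brevity)

def sine_layer(freq, n_samples, amplitude):
    """One 'layer': a plain sine tone at the given frequency."""
    return [amplitude * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n_samples)]

def mix(*layers):
    """Stack layers by summing them sample-by-sample."""
    return [sum(samples) for samples in zip(*layers)]

# Two layers an octave apart, stacked into a single richer sound.
low = sine_layer(220.0, 80, 0.5)
high = sine_layer(440.0, 80, 0.3)
mixed = mix(low, high)
print(len(mixed))  # 80: the mix is as long as the shortest layer
```

A real soundtrack would stack many such layers, each carrying its own theme, which is exactly the effect described above.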
The aim for the real-time installation will be to create each individual sound layer as a unique musical theme. In this way, each sound layer will be able to stand by itself and describe the interactions occurring between the different entities depicted on the projected interactive screen. The sounds will focus on depicting the beauty and significance of the abstract interaction, creation and growth. As the interactions begin to multiply, the soundtrack will thicken and the sound layers will mix to produce a harmonic musical ensemble.

Figures

Figure 1: Use case diagram of the live interaction.
Figure 2: Activity diagram of the live installation.
Figure 3: The Mandelbrot fractal pattern
Figure 4: An abstract tree branching structure
Figure 5: Settings for the surround sound
Figure 6: Diagram depicting the different stages of the proposed methodology

Luke Vella (s1262617)
Marie-Jose' Zammit (s1248357)
Manqing LU (s1140586)
Qingyan CHEN (s1216966)
Yanjie HE (s1220234)

Simner J., Mulvenna C., Sagiv N., Tsakanikos E., Witherby S. A., Fraser C., Scott K., & Ward J. (2006). "Synaesthesia: The prevalence of atypical cross-modal experiences". Perception, 35, 1024–1033.

The following video is an example of a real-time installation that shows how the blob detection library (BlobDetection Library, n.d.) can be used to implement the interaction between a participant's shadow and the interactive screen.
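The brightness-thresholding idea shared by the blob-detection algorithms described earlier can be sketched as follows (illustrative Python over a toy greyscale grid, not the OpenCV or BlobDetection API): pixels above a threshold are marked as part of the blob, and a bounding box is fitted around the marked pixels.

```python
def detect_blob(image, threshold):
    """Threshold a greyscale image (list of rows of 0-255 values) and
    return the bounding box (min_x, min_y, max_x, max_y) of the bright
    pixels, or None if no pixel exceeds the threshold."""
    points = [(x, y)
              for y, row in enumerate(image)
              for x, value in enumerate(row)
              if value > threshold]
    if not points:
        return None
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

# A dark frame with one bright 2x2 patch standing in for the shadow blob.
frame = [
    [10,  10,  10, 10],
    [10, 200, 210, 10],
    [10, 220, 205, 10],
    [10,  10,  10, 10],
]
print(detect_blob(frame, 128))  # (1, 1, 2, 2)
```

The real implementation would additionally group pixels into separate connected blobs and filter them by size, as the algorithms above describe.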
1. 'Root System' is an example of recursive construction in Flash. With every click the user makes, the seed depicted "produces an off-shoot of recursive growth" (Tarbell, J., 2001).
2. 'Roots' is an example of recursive construction in Open Processing. Like the previous example, the user clicks on the white screen and a root growth is created (Lastowka, A., 2012).
3. 'Super Recursion Toy' is an example that explores branching algorithms and rendering techniques (Windle, J., ?).
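The branching recursion these examples share can be sketched as follows (illustrative Python; the installation itself would be written in Processing, and the angle and scale factors here are arbitrary choices): each branch spawns two shorter child branches until the length falls below a cutoff.

```python
import math

def grow(x, y, angle, length, min_length, segments):
    """Recursively grow a tree: record one branch as a line segment,
    then spawn two shorter branches rotated away from the parent."""
    if length < min_length:
        return  # base case: branch too short to draw
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    # Two children: rotated left/right, reduced-size copies of the whole.
    grow(x2, y2, angle - math.pi / 6, length * 0.66, min_length, segments)
    grow(x2, y2, angle + math.pi / 6, length * 0.66, min_length, segments)

segments = []
grow(0.0, 0.0, -math.pi / 2, 100.0, 40.0, segments)
# Branch lengths: 100, 66, 43.56, then 28.75 < 40 stops the recursion,
# giving 1 + 2 + 4 = 7 segments.
print(len(segments))  # 7
```

Rendering each segment, and seeding the recursion from points on the participant's shadow contour, would yield the tree-like growth the installation aims for.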