Mapping Object Perception Using fMRI
Visual Recognition of Shapes & Textures: An fMRI Study (2010) - Maria Stylianou-Korsnes, Miriam Reiner, Svein J. Magnussen, Marcus W. Feldman

PARADIGM
Eight shape, eight texture and eight fixation-baseline blocks were used in random order. Blocks of each type lasted 2.5 s and were separated by 3.5 s fixation crosses.
Each shape or texture block consisted of a 1 s visual instruction followed by 4 recognition trials.
Each recognition trial consisted of an encoding item for 0.5 s, followed by a 2.5 s fixation cross, then a memory probe for 0.5 s, again followed by 2.5 s of fixation.

STIMULI
4 abstract shapes and 4 abstract textures of no particular organization.

IMAGING
A top-hat elliptical quadrature birdcage head coil was positioned around the participant's head to acquire the activation signal.
Head movement was stabilized with a bite-bar formed from each participant's dental impressions.
768 functional volumes were acquired per participant.

FINDINGS
Activation was found in inferior frontal regions, particularly the left inferior prefrontal sulcus, and in the right inferior parietal lobe (BA 40).
Lateral occipital cortex showed activation in response to shape stimuli.
As did clusters adjacent to the right inferior parietal lobe (BA 40).
*Using the same stimuli and paradigm

Reference
Maria Stylianou-Korsnes, Miriam Reiner, Svein J. Magnussen and Marcus W. Feldman (2010). Visual recognition of shapes and textures: an fMRI study. Brain Structure and Function, Volume 214, Number 4, Pages 355-359.

Coding of Multiple Images
Decoding the Representation of Multiple Simultaneous Objects in Human Occipitotemporal Cortex - Sean P. MacEvoy, Russell A. Epstein (2009)

QUESTION
How are objects occurring simultaneously in the visual field coded by the visual system?

STIMULI
Participants viewed single objects from 4 categories (shoes, chairs, cars & brushes), as well as object pairs containing images from two of the object categories.

PARADIGM
Linear models were applied to multi-voxel patterns to test whether responses to single objects could predict the voxel patterns evoked when two objects were presented together.

RESULTS
Responses in the lateral occipital cortex to paired objects were well approximated by averaging the patterns evoked by their constituent objects.
Classifiers were able to identify the patterns evoked by object pairs from single-object averages. It would be interesting to see whether the same holds for textures.
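The averaging analysis described above can be sketched with synthetic data. This is not the authors' code: the voxel count, category names, noise level, and nearest-mean classifier are all illustrative assumptions.

```python
# Hypothetical sketch of pattern-averaging MVPA: synthetic voxel patterns
# stand in for real fMRI data; all sizes and values are made up.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# Mean response pattern for each single-object category.
singles = {c: rng.normal(size=n_voxels) for c in ["shoe", "chair", "car", "brush"]}

# Simulate a pattern for a pair as the average of its constituents plus noise.
def pair_pattern(a, b, noise=0.3):
    return (singles[a] + singles[b]) / 2 + rng.normal(scale=noise, size=n_voxels)

# Predicted pair patterns: plain averages of the single-object patterns.
pairs = [("shoe", "chair"), ("car", "brush"), ("shoe", "car")]
predicted = {p: (singles[p[0]] + singles[p[1]]) / 2 for p in pairs}

# Nearest-mean classification: match an observed pair pattern to the
# predicted average with the highest correlation.
def classify(observed):
    return max(predicted, key=lambda p: np.corrcoef(observed, predicted[p])[0, 1])

observed = pair_pattern("shoe", "chair")
print(classify(observed))  # prints ('shoe', 'chair')
```

The point of the sketch is only that a linear average of single-object patterns is enough to identify which pair was shown, which is the relationship the study reports for lateral occipital cortex.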
The article also reports that salient objects lead to activation in the inferior parietal lobe; it would also be interesting to investigate this in the current study.

Reference
Sean P. MacEvoy, Russell A. Epstein (2009). Decoding the Representation of Multiple Simultaneous Objects in Human Occipitotemporal Cortex. Curr Biol. 2009 Jun 9;19(11):943-7. Epub 2009 May 14.

Scene Identification
Constructing Scenes from Objects in Human Occipitotemporal Cortex - MacEvoy, Epstein (2011)

QUESTION
How and where are objects vs. scenes coded in the brain?

PARADIGM & STIMULI
Neural activity was recorded while subjects viewed four categories of scenes and eight categories of signature objects strongly associated with those scenes, across 3 experiments. fMRI data were analyzed using multi-voxel pattern analysis.

FINDINGS
Lateral Occipital Cortex: multi-voxel patterns evoked by scenes were predicted by the averages of the patterns elicited by their signature objects.
Parahippocampal Place Area: believed to respond strongly to scenes and to be crucial in scene identification, yet scene patterns here could not be predicted from object patterns.

Reference
MacEvoy, Epstein (2011). Constructing Scenes from Objects in Human Occipitotemporal Cortex. Nat Neurosci. 2011 Sep 4;14(10):1323-9. doi: 10.1038/nn.2903.

Depth
Coding of Stereoscopic Depth Information in Visual Areas V3 and V3A - Anzai A, Chowdhury SA, DeAngelis GC (2011)

QUESTION
Where are absolute and relative binocular disparities coded in the brain for depth perception?
Absolute Binocular Disparity: the difference in position of corresponding features in the left- and right-eye images with respect to the point of fixation.
Relative Disparity: the difference between 2 absolute disparities. The brain is thought to compute both for depth perception.

Stereoscope Information
A stereoscope is an optical instrument for combining 2 pictures taken from different viewpoints, giving the appearance of solid form. Previous fMRI studies in humans and monkeys have shown that areas V3 and V3A are crucial in stereoscopic depth processing.

PARADIGM
In this study, neurons in V3 and V3A were recorded in fixating monkeys, and their basic tuning properties were compared with those previously found in other visual areas.
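The two disparity definitions above reduce to simple subtractions. A minimal numerical sketch (the angles and values are invented for illustration, not taken from the study):

```python
# Disparities as angular differences between the two eyes' images
# (angles in degrees; all numbers are made up for illustration).
def absolute_disparity(left_angle, right_angle):
    """Difference in a feature's position between the left- and right-eye images."""
    return left_angle - right_angle

def relative_disparity(disp_a, disp_b):
    """Difference between two absolute disparities."""
    return disp_a - disp_b

near = absolute_disparity(1.2, 0.4)   # 0.8 deg: feature in front of fixation
far = absolute_disparity(-0.1, 0.2)   # -0.3 deg: feature behind fixation
print(relative_disparity(near, far))  # 1.1 (up to floating point)
```

Relative disparity is useful precisely because it stays constant when the eyes move: a change of fixation shifts both absolute disparities by the same amount, so their difference is unchanged.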
Eye coils were implanted, the brain was mapped using structural MRI, and a small craniotomy was made over the lunate sulcus to allow microelectrodes to be inserted.
An epoxy-coated tungsten microelectrode was inserted into a guide tube and advanced through the cortex with an oil hydraulic micromanipulator to record extracellular signals from single neurons.

FINDINGS
V3 and V3A are not specialized for depth perception; rather, all of the visual areas measured were responsive to depth.

Reference
Anzai A, Chowdhury SA, DeAngelis GC (2011). Coding of Stereoscopic Depth Information in Visual Areas V3 and V3A. J Neurosci. 2011 Jul 13;31(28):10270-82.

Depth
The Integration of Motion and Disparity Cues to Depth in Dorsal Visual Cortex - Ban, Preston, Meeson and Welchman (2012)

QUESTION
How are visual cues combined in the brain to create a perception of the world? The challenge: the brain must fuse quantitatively different signals.

PARADIGM
Cue integration: a central plane was presented nearer or farther than its surround. Different neurons respond accordingly, producing a pattern of activity measured by multivoxel pattern analysis.

STIMULI
Random patterns of black and white dots with a fixation marker, defining depth in four ways:
1) Depth by disparity
2) Depth by motion
3) Depth by disparity and motion consistent with each other
4) Depth by disparity and motion inconsistent with each other

FINDINGS
Performance in combined-cue settings exceeded quadratic summation, suggesting a region that represents depth from combined cues, whose activity may underlie improved behavioural performance in multi-cue settings.
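Quadratic summation is the standard benchmark here: if the two cues were carried by independent channels, combined sensitivity should be about the square root of the sum of the squared single-cue sensitivities, and anything above that hints at genuine fusion. A sketch with invented sensitivity values (not figures from the study):

```python
import math

# Independent-channels benchmark for cue combination: if disparity and
# motion were processed separately, combined sensitivity should not
# exceed sqrt(d**2 + m**2). All sensitivity values are illustrative.
def quadratic_summation(d_sensitivity, m_sensitivity):
    return math.sqrt(d_sensitivity ** 2 + m_sensitivity ** 2)

disparity_only = 1.0
motion_only = 1.0
combined_observed = 1.6  # hypothetical measured combined-cue sensitivity

benchmark = quadratic_summation(disparity_only, motion_only)
print(round(benchmark, 3))            # 1.414
print(combined_observed > benchmark)  # True: exceeds quadratic summation
```

With equal single-cue sensitivities the benchmark is sqrt(2) times either one, so an observed combined sensitivity of 1.6 would count as super-quadratic, the signature of fusion the study reports.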
Dorsal visual area V3B/KO was found to fuse these quantitatively different signals: fMRI responses there were more discriminable when more than one depth cue was being fused. This suggests a dorsal-stream basis for depth perception.

Reference
Ban, Preston, Meeson and Welchman (2012). The Integration of Motion and Disparity Cues to Depth in Dorsal Visual Cortex. Nat Neurosci. 2012 Feb 12;15(4):636-43. doi: 10.1038/nn.3046.

Temporal Object Learning
Shaping Representations in Human Medial Temporal Lobe Based on Temporal Regularities - Schapiro AC, Kustner LV, Turk-Browne NB. (2012)

QUESTION
Where, within the cortical and hippocampal areas of the human medial temporal lobe (which can represent information from even a single exposure), are representations of objects presented with temporal regularities shaped?

STIMULI
Participants viewed a 40-minute stream of colourful fractals presented one at a time, while performing an orthogonal cover task of detecting greyscale patches that appeared infrequently on the fractals. The fractals were "temporally categorized": incidental exposure to objects paired in time increases the similarity of their voxel patterns, and such changes occur in both cortical areas and hippocampal subfields of the human medial temporal lobe.

FINDINGS
Object representations in the MTL rapidly changed based on incidental exposure to temporal regularities. Multivoxel representations of strongly paired objects became more similar in PRC, PHC, subiculum, CA1 and CA2/3/DG, with the increase occurring symmetrically in CA2/3/DG.
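The "increased similarity" of paired objects' voxel patterns is typically quantified as a correlation between multivoxel patterns. A minimal synthetic sketch (not the authors' analysis; the voxel count and the drift-toward-the-mean model of learning are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 100

# Synthetic voxel patterns for two objects before temporal pairing.
a_pre = rng.normal(size=n_voxels)
b_pre = rng.normal(size=n_voxels)

# Toy model of learning: after pairing, each pattern drifts halfway
# toward the pair mean (an illustrative assumption, not the paper's model).
mean = (a_pre + b_pre) / 2
a_post = 0.5 * a_pre + 0.5 * mean
b_post = 0.5 * b_pre + 0.5 * mean

def similarity(x, y):
    """Pattern similarity as the Pearson correlation of two voxel patterns."""
    return np.corrcoef(x, y)[0, 1]

# Pairing should raise the between-object pattern correlation.
print(similarity(a_pre, b_pre) < similarity(a_post, b_post))  # True
```

This captures only the direction of the reported effect: representations of temporally paired objects become more correlated after exposure.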
This may underlie the tendency for objects encountered in naturalistic settings to activate memories of objects from similar contexts.

Reference
Schapiro AC, Kustner LV, Turk-Browne NB. (2012). Shaping Representations in Human Medial Temporal Lobe Based on Temporal Regularities. Curr Biol. 2012 Aug 9. [Epub ahead of print].

Topological Properties
Topological Change Disrupts Object Continuity in Attentive Tracking - Zhou K, Zhou T, Zhou Y, Chen L. (2010)

QUESTION
What is a perceptual object? Intuitively, it is an object's holistic identity, preserved over shape-changing transformations, that constitutes an object.
This identity can be characterized by topological invariance, and the extraction of topological properties serves as the starting point for the formation of an object representation.
Topological transformations can be imagined as rubber-sheet deformations such as bending, twisting, stretching and shrinking, without tearing. The main properties of interest are those that remain unchanged, such as the number of holes. Object perception, by contrast, may survive non-topological changes such as changes in colour.

PARADIGM
Multiple object tracking paradigm (MOT),
specifically manipulating topological changes and comparing their effects on tracking performance with those of changes in form properties, luminous flux and colour.

RESULTS
The findings support a topological definition of objects: all topological changes, such as the addition of holes, disrupted object continuity, producing the emergence of new objects.
Object continuity survived a number of non-topological changes, such as massive shape transformations, salient colour change, and the (subjective) emergence of new objects.

Reference
Zhou K, Zhou T, Zhou Y, Chen L. (2010). Topological Change Disrupts Object Continuity in Attentive Tracking. Proc Natl Acad Sci U S A. 2010 Dec 14;107(50):21920-4. Epub 2010 Nov 29.

Luminance
Temporal and Spatial Properties of the BOLD fMRI Response to First and Second Order Contrast - Thompson, Serena Kainoe Au (2010)
The study compared the effects of first-order contrast (differences in local pixel luminance) and second-order contrast (differences in local pixel luminance variance).

PARADIGM
BOLD fMRI was used to measure V1 modulation to the outside edge and inside regions of a disk defined by either first-order (luminance) or second-order (variance) contrast.

STIMULI
Images presented to the observer in the scanner were composed of dynamic white noise; disks were presented in a block-alternating paradigm by modifying either the mean or the variance of the noise outside a central disk of 4 degrees of visual angle.

RESULTS
Responses of V1 and other early cortical areas to surface luminance changes, reported in previous studies, were not found here.
This could be because brightness perception was suppressed as the disk moved concomitantly with the luminance change; furthermore, previous studies used static uniform luminance.

Reference
Thompson, S. K. A. Temporal and spatial properties of the BOLD fMRI response to first and second order contrast in V1. Dissertation Abstracts International: Section B: The Sciences and Engineering, 845. Retrieved from https://www.lib.uwo.ca/cgi-bin/ezpauthn.cgi/docview/760235602?accountid=15115. (760235602; 2010-99160-065).

Andrew Nicholson