
ORIGINAL ARTICLE

Visual and Haptic Shape Processing in the Human Brain: Unisensory Processing, Multisensory Convergence, and Top-Down Influences

Haemy Lee Masson1, Jessica Bulthé2, Hans P. Op de Beeck2, and Christian Wallraven1

1Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713, Korea, and 2Laboratory of Biological Psychology, KU Leuven, Leuven 3000, Belgium

Address correspondence to Christian Wallraven, Department of Brain and Cognitive Engineering, Cognitive Systems Lab, Korea University, Anam-Dong 5ga, Seongbuk-gu, Seoul 136-713, Korea. Email: [email protected]

Abstract

Humans are highly adept at multisensory processing of object shape in both vision and touch. Previous studies have mostly focused on where visually perceived object-shape information can be decoded, with haptic shape processing receiving less attention. Here, we investigate visuo-haptic shape processing in the human brain using multivoxel correlation analyses. Importantly, we use tangible, parametrically defined novel objects as stimuli. Two groups of participants first performed either a visual or haptic similarity-judgment task. The resulting perceptual object-shape spaces were highly similar and matched the physical parameter space. In a subsequent fMRI experiment, objects were first compared within the learned modality and then in the other modality in a one-back task. When correlating neural similarity spaces with perceptual spaces, visually perceived shape was decoded well in the occipital lobe along with the ventral pathway, whereas haptically perceived shape information was mainly found in the parietal lobe, including frontal cortex. Interestingly, ventrolateral occipito-temporal cortex decoded shape in both modalities, highlighting this as an area capable of detailed visuo-haptic shape processing. Finally, we found haptic shape representations in early visual cortex (in the absence of visual input), when participants switched from visual to haptic exploration, suggesting top-down involvement of visual imagery on haptic shape processing.

Key words: haptics, multisensory, multivoxel analysis, shape perception

Introduction

Shape is one of the most important object properties: Updating and manipulating shape information is necessary for humans to efficiently interact with the world. Accordingly, humans are experts in shape processing—from simple to complex, and from familiar to unfamiliar objects (Rosch 1999). Despite the fact that shape information in general is acquired by various sensory systems, previous studies have almost exclusively focused on the visual modality. Nevertheless, haptic identification of a wide range of objects is remarkably fast and accurate, making us haptic experts as much as we are visual experts (Klatzky et al. 1985). Although much is known about the behavioral and neural processing of shape in vision (Malach et al. 1995; Grill-Spector et al. 2001; Kourtzi and Kanwisher 2001), comparatively little is known about the haptic modality (Bodegård et al. 2001; Reed et al. 2004), and even less about how the 2 modalities may interact in object-shape processing (Amedi et al. 2001, 2002; James et al. 2002; Kassuba et al. 2013). The goal of the present study is therefore to provide a detailed investigation of the neural mechanisms of shape processing in vision and touch, focusing on unisensory and multisensory pathways.

For this, we use the framework of perceptual spaces, first proposed by Shepard (1987), in which internal representations are

© The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: [email protected]

Cerebral Cortex, August 2016;26: 3402–3412

doi:10.1093/cercor/bhv170

Original Article. Advance Access Publication Date: 28 July 2015

Downloaded from https://academic.oup.com/cercor/article-abstract/26/8/3402/2428026 by guest on 11 April 2018


described as a space with distances between entities defined by their perceptual similarity. Importantly, the topology of this perceptual space conforms to the real-world, physical properties of these objects. Cutzu and Edelman (1996) provided an elegant experiment which tested that, indeed, visual perceptual space—as measured by similarity ratings and reconstructed using multidimensional scaling (MDS)—conformed to a complex mathematical parameterization of object shape. Our previous studies have also demonstrated how such physical spaces can be reconstructed well not only in vision but also in touch. In addition, the perceptual spaces acquired from the different sensory modalities were also highly congruent (Cooke et al. 2007; Gaißert et al. 2010, 2011; Gaissert and Wallraven 2012; Wallraven and Dopjans 2013). Hence, the perceptual space framework has already proven useful for understanding shape representations in both vision and touch at the behavioral level.

In the visual modality, this framework has also been used to relate behavioral results with neuroimaging data analyzed through multivoxel methods to investigate how the visual perceptual space is created in the brain using familiar objects (Eger et al. 2008; Peelen and Caramazza 2012; Mur et al. 2013; Peelen et al. 2014; Snow et al. 2014; Smith and Goodale 2015) and unfamiliar objects (Op de Beeck et al. 2008; Drucker and Aguirre 2009). These studies have provided evidence that shape representations can be decoded in lateral occipital cortex (including regions BA19 to BA37 in the ventral visual pathway) regardless of object familiarity. Furthermore, Smith and Goodale (2015) showed involvement of the parietal lobe (including area BA7, or intraparietal sulcus) in shape decoding. Given that the previously mentioned perceptual studies have shown that "haptic" processing is capable of reconstructing physical shape spaces with high accuracy as well, one major goal of this study is to extend this research to determine the brain areas involved in creating this haptic perceptual space.

In addition, these imaging experiments have reliably identified one region in the occipital cortex—the lateral occipital complex (LOC)—for visual object processing in general (for example, Malach et al. 1995; Kourtzi and Kanwisher 2001) and processing of shape in particular (for example, Op de Beeck et al. 2008; Drucker and Aguirre 2009). A seminal study by Amedi et al. (2001) has further shown that a subpart of this region [the so-called lateral occipital tactile visual region (LOtv)] becomes active not only for visual processing but also for haptic processing of objects. More recently, Snow et al. (2014) provided evidence that haptic shape sensitivity is not only limited to LOtv but also covers the entire LOC [including the lateral occipital (LO) area and posterior fusiform gyrus]—in addition to V1 and V4. We therefore hypothesize that LOC may be a candidate region that is able to encode visual as well as haptic properties of shape, and that it may be possible to find both visual and haptic representations of the perceptual spaces in this area.

In summary, in the present study, we investigate the neural correlates of visual and haptic perceptual shape spaces as well as their interaction. We first created a parametrically defined space of three-dimensional, novel objects as our physical space. Two groups of participants performed similarity ratings to yield visual or haptic perceptual spaces. Neural correlates of these perceptual spaces were then derived from multivoxel analyses of fMRI data.

Materials and Methods

Participants

Twenty-five (male = 14: mean age = 25, age range = 18–34; female = 11: mean age = 23.45, age range = 18–27) healthy, right-handed adults with no prior diagnosis of neurological or perceptual deficits were recruited as participants for a behavioral and an fMRI experiment. Participants were randomly assigned to one of the 2 groups, the visual group (N = 11, male = 6) and the haptic group (N = 14, male = 8). All participants provided written informed consent, and the experiment received prior approval by the Korea University Institutional Review Board [1040548-KU-IRB-12-30-A-1(E-A-1)].

Stimuli

The stimuli used in this experiment were generated by a parametric shape model ("superformula"; Gielis 2003). This model allowed us to create a parameter space of novel, three-dimensional objects with complex shape characteristics [see Cooke et al. (2007); Gaißert et al. (2010, 2011); Gaissert and Wallraven (2012) and Wallraven and Dopjans (2013) for similar approaches with different parameter spaces]. Three-dimensional shapes are generated by a complex combination of different trigonometric functions (Fig. 1A). In the present study, we used 3 parameters (n1, m1, and m2) that were varied to create a two-dimensional cross within a three-dimensional cube of the parameter space [see also Cutzu and Edelman (1996) for a similar configuration]. The resulting 9 stimuli in the parameter subspace are shown in Figure 1A.

The stimuli were then printed out as MR-compatible, tangible objects (average measurements: 7.59 × 8.56 × 7.56 cm, average weight: 196.66 g) on a 3D printer (Zprinter650, 3D Systems, USA) for use in both visual and haptic experiments.
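The exact equations are only shown in Figure 1A of the article. As a rough illustration, the standard Gielis (2003) superformula in 2D, and its spherical-product extension to 3D surfaces, can be sketched in Python; the parameter names below follow the usual superformula convention (a, b, m, n1, n2, n3) and are not the specific n1, m1, m2 settings used for the stimuli:

```python
import numpy as np

def superformula(phi, m, n1, n2, n3, a=1.0, b=1.0):
    """Gielis superformula: radius r as a function of angle phi."""
    t1 = np.abs(np.cos(m * phi / 4.0) / a) ** n2
    t2 = np.abs(np.sin(m * phi / 4.0) / b) ** n3
    return (t1 + t2) ** (-1.0 / n1)

def shape_3d(theta, phi, params_lon, params_lat):
    """Spherical product of two superformula curves -> one 3D surface point.

    theta: longitude in [-pi, pi], phi: latitude in [-pi/2, pi/2].
    """
    r1 = superformula(theta, *params_lon)
    r2 = superformula(phi, *params_lat)
    x = r1 * np.cos(theta) * r2 * np.cos(phi)
    y = r1 * np.sin(theta) * r2 * np.cos(phi)
    z = r2 * np.sin(phi)
    return np.array([x, y, z])

# With m = 0 both curves degenerate to circles, so the surface is a unit sphere.
point = shape_3d(0.3, 0.2, (0, 2, 2, 2), (0, 2, 2, 2))
```

Sampling theta and phi over a grid yields a closed mesh that can be exported for 3D printing, as was done for the stimuli here.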

Behavioral Experiments

Participants of either group first performed a behavioral experiment to determine their visual or haptic perceptual space based on a standard similarity-rating task (Fig. 1B). This experiment was conducted exactly 2 days before the fMRI experiment and gave participants unimodal experience with the stimuli. During the experiment, the experimenter and the participant sat on opposite sides of a table. For the visual group, the experimenter placed objects at random orientations on the table (visual angle: ∼10°). Participants were instructed not to move their head too much, nor to touch the objects. Since random orientations were used, participants were not able to use simple image-based or accidental cues to make the similarity judgment [see Cooke et al. (2007) and Gaißert et al. (2010) for similar protocols]. For the haptic group, the experimenter placed the object into the participant's right hand (at random orientation) for haptic exploration. The hand was extended through a curtain, thus blocking any visual input.

First, as an introduction, participants of either group were asked to look at (up to 10 s) or to haptically explore (up to 10 s) all 9 objects presented in a random order. This served to anchor their scales and to acquaint them with shape variability across objects. After this, participants performed a pair-wise similarity-rating task. Similarities were rated between all possible object pairs including same-object pairs, with pairs being consecutively presented in a random order. Each object was presented for 6 s, ensuring ample time for haptic (and visual) exploration. Timing and object presentation were controlled via spoken commands presented to the experimenter via headphones. There were 4 repetitions of all object pairs, yielding a total of 180 trials [((9 × 8/2) + 9) × 4 repetitions; note that this includes presentations of identical objects for each block]. After each pair, participants verbally reported the perceived similarity on a Likert-type scale (1 = fully dissimilar to 7 = identical). There was one mandatory break

Visual and Haptic Shape Processing in the Human Brain Lee Masson et al. | 3403



after 2 repetitions. Overall, the experiment took approximately 1 h to finish. Participants were encouraged to focus on shape properties when making their judgment and to use the full range of the scale.
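The trial count stated above follows directly from the pairing scheme:

```python
n_objects = 9
# All unordered pairs of distinct objects, plus the 9 identical pairs.
pairs_per_repetition = n_objects * (n_objects - 1) // 2 + n_objects  # 36 + 9 = 45
n_trials = pairs_per_repetition * 4                                  # 4 repetitions
print(n_trials)  # 180
```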

Scanning

MRI data were acquired on a SIEMENS Trio 3-T scanner (Siemens Medical Systems, Erlangen, Germany) with a 32-channel SENSE head coil (Brain Imaging Center, Korea University, Seoul, South Korea).

Structural MR images of all participants were collected using a T1-weighted sagittal high-resolution MPRAGE sequence [repetition time (TR) = 2250 ms, echo time (TE) = 3.65 ms, flip angle (FA) = 9°, field of view (FOV) = 256 × 256 mm, in-plane matrix = 256 × 256, voxel size = 1 × 1 × 1 mm, 192 axial slices]. Functional imaging was performed with a gapless, echo planar imaging sequence (TR = 3000 ms, TE = 30 ms, FA = 60°, FOV = 250 × 250 mm, in-plane matrix = 128 × 128, voxel size = 2 × 2 × 4 mm, 30 slices). The first 9 s of each functional run consisted of dummy scans to allow for steady-state magnetization.

Localizer Run for Visual Object-Selective Cortex

Before the main visual and haptic runs, participants completed a standard localizer task for determining object-selective cortical areas. For this, intact and scrambled images of familiar objects (including pictures of vehicles, fruits, animals, daily objects, furniture, etc.) and unfamiliar objects [including Greebles (Gauthier and Tarr 1997), three-dimensional computer graphics-generated objects (Wallraven et al. 2013), and random line-drawing figures] were shown in a rapid serial visual presentation paradigm. The localizer consisted of 16 randomized blocks (4 repetitions × familiar/unfamiliar × scrambled/intact) with each block starting with a 15-s fixation-cross baseline followed by rapid presentation of 15 images for 1 s each. During the stimulus presentation, participants performed a one-back task to ensure attention. Whenever the current image was the same as the previous one, participants had to press a button on an fMRI-compatible button box that was held in their left hand. Participants were instructed to fixate the middle of the screen during the task. Images were shown at a visual angle of 10° on an MRI-compatible LCD display (BOLDscreen; Cambridge Research Systems, UK) that was viewed through a mirror mounted on the head coil. The scanning parameters were the same as for the main visual and haptic runs. The total length of the visual localizer was 480 s [16 blocks × (15 s baseline + 15 images × 1 s)].

Visual and Haptic Runs During fMRI

The main experiment consisted of a visual run and a haptic run in which participants did a one-back task to ensure sustained attention (Fig. 1C). Participants from the visual group performed the visual run first, followed by the haptic run, and vice versa for participants from the haptic group. This specific order was selected so as to allow investigation of effects of visual and haptic prior exposure from the previous behavioral experiment. Each run contained a randomized set of trials with 6 repetitions of the 9 objects, yielding 54 trials. Each trial consisted of 6 s presentation time, followed by a 9-s pause to allow for BOLD relaxation and a 9-s response period. The pauses after object presentations and the long duration of the answering period were employed to ensure BOLD relaxation from object exploration and button pressing, since these were related to hand movement and touch. Baseline times were inserted before trial 1 and after trials 27 and 54. A run took 1341 s to finish [15 s baseline × 3 + 54 trials × (6 s exploration time + 9 s BOLD relaxation time + 9 s response time = 24 s)]. After a full run, there was a break of 120 s in which the experimenter prepared for the next presentation mode.

Baselines in the visual run consisted of a 15-s fixation cross in the middle of the screen. Baselines in the haptic run consisted of 15 s of texture stimulation in which participants palpated a piece of cloth with their right hand.

In the visual run, the experimenter placed stimuli on top of a table such that they projected through a mirror mounted on the

Figure 1. (A) Cross-like physical parameter space showing the 9 objects used in the experiments at the location defined by the parameter values. The equations for creating three-dimensional objects are shown below. (B) Design of the similarity-rating task. (C) Design of the visual and haptic runs in the fMRI experiment.




participant's head coil (visual angle: ∼10°). Objects were placed at random orientations by the experimenter so as to vary both object orientation as well as the experimenter's hand posture. This was done to prevent simple image-based or hand-posture-based cues (Kaplan and Meyer 2012) to object identity.

In the haptic run, participants were instructed to close their eyes to restrict visual input and to explore the object in their "right" hand. In this condition, the experimenter placed the object into the participant's hand for palpation, similar to the behavioral experiment.

During the runs, participants had to perform a one-back task and were instructed to report whether the previous object was the same as the current object or a different object by pressing either of 2 buttons with the index and middle fingers of their left hand. Timing of each trial section was ensured by short, spoken audio cues, synchronized to the beginning of each section ("Start" for the experimenter to present the object, "Stop" for the experimenter to remove the object, and "Answer" for the participant to press the button). Cues were presented via loudspeaker to both experimenter and participant with the sound volume set so as to be clearly audible over the scanning noise. Since the audio cues did not contain information about object identity, the experimenter followed a printed list of trial orders to ensure proper trial randomization.

Since the duration of the runs was quite long, care was taken to avoid excessive head movements and to make participants comfortable: Participants were first instructed to limit their head and shoulder movements during scanning. In addition, participants' heads were comfortably fixed by fitting foam padding in the head coil to limit head movement. Finally, the elbow was supported by a foam pad during the runs in order to minimize arm fatigue and to reduce movement of the elbow and shoulder.

Localizer Run for Haptic Object-Selective Cortex

A haptic localizer run to determine haptic object-selective cortical areas was performed by 12 additional participants. The haptic localizer consisted of shape and texture blocks. The shape blocks contained either familiar objects (selected from 8 everyday, hand-held objects, such as a spoon, cup, comb, etc.) or unfamiliar objects (selected from 8 three-dimensional novel objects; Lee and Wallraven 2013). The texture blocks contained stimuli selected from 16 flat textures spanning a wide variety of materials and texture granularities, mounted on a piece of cardboard. There were a total of 4 shape and 4 texture blocks presented in a random order with a 15-s break in between blocks. The block type (shape or texture) was announced before the start of each block via an audio cue to both the experimenter and the participant. During the task, participants were handed a shape or a texture by the experimenter and asked to explore it for 6 s using their right (dominant) hand. Similar to the main experiment (see above), participants had to perform a one-back task in which they were required to answer for each trial whether it contained the same or a different object than before. Answers were given on a button box held in the left hand of the participant. Each block started with a 15-s baseline, followed by 5 trials and a 15-s break. The total length of the localizer run was 585 s [8 blocks × (15 s baseline + 5 × (6 s exploration time + 3 s response time)) + 7 × 15 s break].
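The stated run durations are consistent with the block and trial structure described in the scanning sections above; a quick arithmetic check:

```python
# Visual localizer: 16 blocks x (15 s baseline + 15 images x 1 s each)
visual_localizer = 16 * (15 + 15 * 1)

# Main visual/haptic run: 3 x 15 s baselines + 54 trials x (6 + 9 + 9) s
main_run = 3 * 15 + 54 * (6 + 9 + 9)

# Haptic localizer: 8 blocks x (15 s baseline + 5 x (6 + 3) s) + 7 x 15 s breaks
haptic_localizer = 8 * (15 + 5 * (6 + 3)) + 7 * 15

print(visual_localizer, main_run, haptic_localizer)  # 480 1341 585
```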

Analysis of Results From the Similarity-Judgment Task

Individual ratings were first correlated across participants to analyze rating consistency. Ratings were then averaged across participants to obtain group similarity matrices for the visual or haptic group. MDS was applied to these matrices to reconstruct perceptual spaces using the MATLAB (R2014a, The MathWorks, Natick, MA, USA) built-in function mdscale. We determined the optimal number of dimensions for the fit based on standard criteria for the stress value of the MDS solution (e.g., Gaißert et al. 2010). The resulting spaces were compared using "Procrustes analysis," which maps spaces onto each other without distorting interobject distances of the data. The standardized average Euclidean distances between corresponding points after Procrustes transformation were used to determine the correlation between 2 spaces as a goodness-of-fit measure (Gaißert et al. 2010).
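The original analysis used MATLAB's mdscale and Procrustes routines. An equivalent pipeline can be sketched in Python; scikit-learn's MDS and SciPy's procrustes are stand-ins for the authors' code, and the conversion from similarity ratings to dissimilarities (subtracting from the maximum rating) is an assumption, as the paper does not specify it:

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial import procrustes

def perceptual_space(similarity, n_dims=2, seed=0):
    """Reconstruct an n_dims perceptual space from a group similarity matrix."""
    dissimilarity = similarity.max() - similarity      # ratings -> distances
    np.fill_diagonal(dissimilarity, 0.0)               # identical pairs: distance 0
    mds = MDS(n_components=n_dims, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(dissimilarity)

def procrustes_fit(space_a, space_b):
    """Map two spaces onto each other; return a goodness-of-fit in [0, 1]."""
    _, _, disparity = procrustes(space_a, space_b)     # disparity = residual SS
    return 1.0 - disparity
```

The stress-based choice of dimensionality corresponds to fitting MDS solutions with increasing n_dims and looking for the "elbow" reported in the Results.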

Analysis of Imaging Data

Imaging data were analyzed using the Statistical Parametric Mapping software package (SPM8, Wellcome Department of Cognitive Neurology, London, UK), as well as custom MATLAB code for selecting regions of interest (ROIs) and conducting ROI-based correlational analysis and whole-brain correlational searchlight analysis (Op de Beeck et al. 2008; Bulthé et al. 2014).

Preprocessing

Participants' head movements were checked for excessive values in both translation and rotation, and none of the data had to be discarded. MR images were corrected for slice-timing differences, followed by realignment to the mean of the images, and functional images were normalized to the Montreal Neurological Institute (MNI) template with a re-sampling size of 2 × 2 × 2 mm. Images were then spatially smoothed (5-mm full-width half-maximum kernel).

Univariate Analysis for Visual and Haptic Runs

A standard general linear model (GLM) was used to obtain participants' object-specific brain activation in the one-back task during visual and haptic runs. Since there were 9 different objects, the GLM contained 9 variables as well as 6 standard head motion-related covariates. We fitted one GLM for the visual run, and one GLM for the haptic run. The resulting beta-estimates were later used as the basis for correlational analyses to yield visual and haptic "neural" spaces. It should be noted that the purpose of this analysis was solely to obtain these beta-estimates (e.g., Op de Beeck et al. 2008) and not to determine object-baseline contrasts. All whole-brain analyses used thresholds of P < 0.05 with family-wise error correction.

ROI Selection

To enable ROI-based analysis, we first selected standard visual-processing areas along the ventral stream from the occipital lobe to the temporal lobe: Brodmann area (BA) 17 as early visual cortex, and BA18, 19, and 37 as higher-level visual-processing areas. ROIs were then defined based on group brain activation obtained from the contrast of all 4 kinds of images versus baseline in the visual localizer run (using a 2 × 2 factorial design), masked by anatomically defined BA images generated from the PickAtlas software (Maldjian et al. 2003). Since previous studies showed no differences in visual processing across hemispheres (Op de Beeck et al. 2008; Peelen et al. 2014), ROIs were combined across hemispheres (except for BA37, see below). In addition, an object-selective ROI was defined individually by the contrast of intact versus scrambled images from the visual localizer task (for corresponding results using the group-based object-selective ROI, see




Supplementary Material). The resulting visual object-selective ROI was located in the LO and occipito-temporal areas (commonly referred to as the LOC; Malach et al. 1995; Kourtzi and Kanwisher 2001). Since previous studies have implied functional differences between left and right LOC (Zhang et al. 2004), we used separate ROIs in the following. Resulting ROI sizes in voxels were as follows: BA17 = 245; BA18 = 628; BA19 = 253; BA37(left) = 333; BA37(right) = 369; LOC(left) = 233 ± 44; LOC(right) = 259 ± 41 (there was partial overlap of LOC with areas BA19 and BA37).

For haptic processing, we selected early somatosensory areas (BA1, BA2, and BA3), associative high-level somatosensory areas (BA5, BA7, BA39, and BA40), and BA6 as premotor cortex as ROIs. All ROIs were defined based on group brain activation obtained from the contrast of object shape versus baseline in the haptic localizer run (except for BA6, which used an object shape versus texture contrast), masked by anatomically defined BA images generated from the PickAtlas software. To check for lateralization effects, we split all areas into left- and right-hemispheric regions. ROI sizes were as follows: BA1(left) = 86; BA1(right) = 96; BA2(left) = 237; BA2(right) = 177; BA3(left) = 289; BA3(right) = 160; BA5(left) = 102; BA5(right) = 84; BA6(left) = 294; BA6(right) = 255; BA7(left) = 204; BA7(right) = 230; BA39(left) = 201; BA39(right) = 176; BA40(left) = 311; BA40(right) = 229. For an illustration of the selected ROIs, see Supplementary Figure 1.

Generating Neural Similarity Matrices

Individual visual and haptic neural matrices were determined from extracted beta-values for object contrasts in the visual and haptic tasks. Beta-values were normalized for each voxel in each ROI by subtracting the average beta-value across all objects (note that by doing so, many extraneous effects such as the experimenter's hand movement as well as different visual and haptic baseline structures were eliminated, since the contrast consisted of one object vs. the other objects in that run). Next, the multivoxel pattern of normalized beta-values for each object was Pearson-correlated with the multivoxel pattern for each other object, resulting in 9 × 9-element similarity matrices. This was done separately for each participant, yielding individual visual and haptic neural similarity matrices. Finally, we averaged all individual neural matrices to create one group neural matrix for each participant group in each ROI. These group neural matrices were correlated with group behavioral matrices to determine which ROI corresponded to the behavioral results.
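The construction of a neural similarity matrix from ROI beta-estimates amounts to a few lines. A NumPy sketch, assuming (hypothetically) that the betas for one participant and ROI come as an objects-by-voxels array:

```python
import numpy as np

def neural_similarity_matrix(betas):
    """betas: (n_objects, n_voxels) beta-estimates for one ROI and participant.

    Each voxel is normalized by subtracting its mean across all objects; the
    normalized multivoxel patterns are then Pearson-correlated pairwise.
    """
    normalized = betas - betas.mean(axis=0, keepdims=True)
    return np.corrcoef(normalized)          # (n_objects, n_objects) matrix

def group_matrix(individual_matrices):
    """Average individual neural matrices into one group neural matrix."""
    return np.mean(individual_matrices, axis=0)
```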

Permutation analyses (e.g., Op de Beeck et al. 2008) were used to determine the statistical validity of correlations between a group neural matrix and a group behavioral matrix (based on permuting the object index in the neural matrix) and for determining differences between the 2 groups (based on permuting the group membership of participants).
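The object-index permutation test can be sketched as follows (off-diagonal entries only; the choice of 10,000 permutations is an assumption, as the paper does not state the number used):

```python
import numpy as np

def permutation_p(neural, behavioral, n_perm=10000, seed=0):
    """P-value for the neural-behavioral matrix correlation.

    The null distribution is built by permuting the object index of the
    (symmetric) neural similarity matrix and re-correlating its upper
    triangle with the behavioral matrix.
    """
    rng = np.random.default_rng(seed)
    n = neural.shape[0]
    iu = np.triu_indices(n, k=1)
    observed = np.corrcoef(neural[iu], behavioral[iu])[0, 1]
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        permuted = neural[np.ix_(p, p)]     # permute rows and columns together
        if np.corrcoef(permuted[iu], behavioral[iu])[0, 1] >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)      # add-one correction
```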

Searchlight Analysis

We also performed a whole-brain searchlight analysis to augment the ROI-based correlational analysis (Kriegeskorte et al. 2006; Bulthé et al. 2014). For this, we computed voxel-wise correlations with the behavioral matrices for beta-values averaged in a 2-mm³ cube. This was done for each participant, followed by a standard random-effects group analysis to identify significant voxels at the group level. Since these analyses were used to verify to what degree the ROI analyses captured the cortical distribution of shape representations, the threshold for significance was set to P < 0.001 (uncorrected) to ensure broader coverage.
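A minimal correlational searchlight over a beta-map volume might look like the sketch below. This is illustrative only: real implementations work on masked brain volumes and handle edge voxels, the array shapes are hypothetical, and each searchlight here uses the full multivoxel pattern within a 3 × 3 × 3 cube rather than the cube-averaged beta-values described above.

```python
import numpy as np

def searchlight_correlation(beta_vol, behavioral, radius=1):
    """beta_vol: (n_objects, X, Y, Z) beta maps; behavioral: (n_obj, n_obj).

    For each interior voxel, build a neural similarity matrix from the cube
    neighborhood and correlate its upper triangle with the behavioral matrix.
    """
    n_obj, X, Y, Z = beta_vol.shape
    iu = np.triu_indices(n_obj, k=1)
    out = np.zeros((X, Y, Z))
    for x in range(radius, X - radius):
        for y in range(radius, Y - radius):
            for z in range(radius, Z - radius):
                cube = beta_vol[:, x - radius:x + radius + 1,
                                   y - radius:y + radius + 1,
                                   z - radius:z + radius + 1].reshape(n_obj, -1)
                centered = cube - cube.mean(axis=0)
                neural = np.corrcoef(centered)
                out[x, y, z] = np.corrcoef(neural[iu], behavioral[iu])[0, 1]
    return out
```

The per-participant correlation maps would then enter a random-effects group analysis, as described above.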

Results

Visual Perceptual Space

Correlation analyses of individual ratings across participants showed high consistency with a mean correlation of r = 0.897 (minimum = 0.818, maximum = 0.953), already indicating that participants represented the 9 object shapes in a similar fashion. The average group similarity matrix is shown in Figure 2A. Stress values of the MDS on the group matrix yielded a clear drop (or "elbow") for two-dimensional solutions [stress(1) = 0.220, stress(2) = 0.067, and stress(3) = 0.004], showing that 2 dimensions were sufficient to explain the data. The resulting two-dimensional perceptual space after Procrustes transformation is shown in Figure 2C and conforms well to the underlying topology of the superformula parameter space (which should show a cross-like shape—cf. Fig. 1A). The average fit quality with the physical parameter space was high, with a correlation of r² = 0.739 [see also Gaißert et al. (2010) for comparable reconstruction values with different novel objects].

Haptic Perceptual Space and Comparison to Visual Space

Participants' haptic similarity ratings were also highly consistent (mean r = 0.903, minimum = 0.721, maximum = 0.965). Similarly, the underlying topology of the superformula parameter space was also recovered well for the haptic group in 2 dimensions as shown by MDS [stress(1) = 0.112, stress(2) = 0.044, and stress(3) = 0.001]: Results of the Procrustes analysis showed a similar fit quality to the physical parameter space with r² = 0.701 (Fig. 2C).

Importantly, we observe that the 2 perceptual spaces not only capture the physical space, but also match each other very well: Average fit quality between the visual and haptic perceptual spaces indicates a good match with r² = 0.938. This confirms earlier results for 2 different sets of novel objects, for which good fits of perceptual spaces to physical spaces were observed and for which fits between the 2 perceptual spaces were also better than between the perceptual spaces and the physical space (Cooke et al. 2007; Gaißert et al. 2010, 2011). These results show that the brain is capable of extracting complex shape parameterizations [see also Edelman et al. (1998)], but that there are some biases in how the shape information is extracted. More importantly for the present study, however, these biases occur in the same fashion in both vision and haptics, indicating highly similar shape representations across the 2 modalities.
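A Procrustes-style comparison of two configurations can be sketched as follows (a NumPy-only version under our own naming; the r² reported in the paper was computed from the authors' full pipeline, so this is only a schematic of the alignment-and-fit step):

```python
import numpy as np

def procrustes_r2(A, B):
    """Procrustes-align B onto A and return a squared-correlation fit.

    Removes translation, scale, and rotation/reflection (via SVD), then
    correlates the aligned coordinates point by point.
    """
    A0 = A - A.mean(axis=0)
    B0 = B - B.mean(axis=0)
    A0 = A0 / np.linalg.norm(A0)
    B0 = B0 / np.linalg.norm(B0)
    U, s, Vt = np.linalg.svd(B0.T @ A0)   # optimal rotation of B0 onto A0
    B_aligned = s.sum() * B0 @ (U @ Vt)
    r = np.corrcoef(A0.ravel(), B_aligned.ravel())[0, 1]
    return r ** 2

# A rotated, scaled, shifted copy of a 9-point configuration should be
# recovered almost perfectly (simulated coordinates, illustration only).
rng = np.random.default_rng(3)
A = rng.normal(size=(9, 2))
theta = 0.6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
B = 1.7 * A @ R + 3.0
fit = procrustes_r2(A, B)
```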

Behavioral Results from Localizer Runs and fMRI Runs

For the visual localizer, accuracy was 91.22% (SD = 5.1). For the haptic localizer, the overall accuracy was 89.84% (SD = 6.5). Both results confirm that participants were focused during the localizer tasks.

For the functional runs, mean accuracy for the one-back task was high (>94% on average), indicating that all participants concentrated on the task. There were no differences in accuracy between the 2 groups in the visual run (t(21) = 1.285, P = 0.213) or the haptic run (t(21) = 0.154, P = 0.879). Furthermore, task performance between vision and haptics was not significantly different within groups [visual group: 96.80% for the visual task vs. 96.13% for the haptic task (t(10) = 0.256, P = 0.803); haptic group: 94.60% for the visual task vs. 96.45% for the haptic task (t(11) = −2.105, P = 0.059); due to potential ceiling effects, all tests used arcsine-transformed accuracy values].
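The arcsine-transformed paired comparison can be sketched as follows (a minimal NumPy version with hypothetical accuracies for 5 participants; the function name and data are ours, for illustration only):

```python
import numpy as np

def arcsine_paired_t(acc_a, acc_b):
    """Paired t statistic on arcsine-square-root transformed accuracies.

    The arcsine transform stabilizes variance for proportions near
    ceiling, which is why it is applied before the t-tests.
    Returns the t statistic and the degrees of freedom (n - 1).
    """
    ta = np.arcsin(np.sqrt(np.asarray(acc_a, dtype=float)))
    tb = np.arcsin(np.sqrt(np.asarray(acc_b, dtype=float)))
    d = ta - tb
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Illustrative accuracies for 5 hypothetical participants in 2 conditions.
t_stat, df = arcsine_paired_t([0.95, 0.96, 0.97, 0.94, 0.98],
                              [0.93, 0.95, 0.96, 0.92, 0.97])
```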


Neural Representation of Visual Perceptual Space

As expected from earlier work, results from the correlation analysis between averaged group neural matrices and visual behavioral matrices show good correlations along the ventral pathway (for the full list of results for each ROI, see Table 1). Specifically, early visual cortex (BA17) was able to capture visual information for both groups: for the visual group, in which the experiment started with the visual presentation of the objects, as well as for the haptic group, in which the haptic data acquisition preceded the visual presentation. Along the ventral pathway, BA18, BA19, and BA37 also showed high correlations. Moreover, both groups also had significant correlations in individually defined bilateral LOC (note that, in addition, when using LOC ROIs defined by both the group visual and haptic localizer as ROIs, we found similar results; see Supplementary Table 1).

The searchlight analysis results shown in Figure 3A support this ROI-based analysis, charting the reconstruction of the perceptual space in the human occipital lobe along the ventral pathway (see also Supplementary Fig. 1, showing the overlap between the functionally selected ROIs and the searchlight results). As a further illustration, Figure 3C shows the reconstructed neural space from area BA18 (as reconstructed by MDS) together with the behavioral space for the visual group and the visual task.

Neural Representation of Haptic Perceptual Space

The correlational analyses in the somatosensory areas indicated that several areas represented the haptic perceptual space. This included early somatosensory areas of bilateral BA3 and bilateral BA2. In addition, shape representations were found in higher-level haptic shape-processing areas in bilateral BA7 and left BA39 in both groups. Finally, bilateral BA6, a premotor area, also had positive correlations for both groups in the haptic task. Results from the whole-brain searchlight analysis confirming the ROI-based analysis are shown in Figure 3B (see Supplementary Fig. 1, showing the overlap between the functionally selected ROIs and the searchlight results). Note that the searchlight results revealed bilateral correlations in the parietal lobe similar to the ROI-based analysis. In addition, the left inferior frontal gyrus (IFG) was shown to represent haptic shape well. As an illustration, Figure 3D shows the reconstruction of the neural similarity space of the haptic group for the haptic task from left BA7 in comparison with the behavioral data.

Neural Representation of Haptic Perceptual Similarity Space in the Human Ventral Pathway

Interestingly, in the ventral "visual" stream, left LOC also showed a significant correlation with the haptic perceptual space in both groups in the haptic task (again, results were observed not only for individually defined, but also for group-level LOC based on both visual and haptic localizer results; see Supplementary Table 1). In addition, bilateral BA19 showed weak, yet significant correlations with the haptic perceptual space. This indicates that a visually defined, object-selective area as well as BA19 not only encode the perceptual topology of "visually presented" shapes (see above), but also that of "haptically presented" objects. Figure 4 illustrates the similar topology of the averaged neural space (across both groups and tasks) for left LOC and the averaged perceptual space (across both groups).

Figure 2. (A and B) Similarity matrices for visual (A) and haptic (B) similarity judgments. Numbers on axes refer to objects in Figure 1 (blue = dissimilar and red = similar). (C) MDS results for visual (red solid line) and haptic (blue solid line) similarity judgments after Procrustes alignment. Black, thin lines connect corresponding object locations of the 2 perceptual spaces. The physical parameter space is shown in gray, and the objects aligned in the cross-shape are shown as an inset. Note the high topological similarity between the visual and haptic perceptual spaces.


Group Differences

Since participants were tested with both modalities in the scanner but only had prior experience in one of the 2 modalities, we can also investigate how previous experience modulates this cross-modal information transfer. Permutation analyses on the correlations compared across groups revealed that haptically explored objects were better reconstructed for the visual group than for the haptic group in early visual cortex (BA17, visual group: r = 0.482, P = 0.007; haptic group: r = 0.055, P = 0.374; group difference: P = 0.01) and left BA37 (visual group: r = 0.625, P < 0.001; haptic group: r = 0.191, P = 0.148; group difference: P = 0.04). A similar trend was observed in right LOC (visual group: r = 0.412, P = 0.020; haptic group: r = 0.079, P = 0.317; group difference: P = 0.059) for the haptic task. Overall, these results indicate that prior visual experience enhances the representations in visual cortex for the shape of haptically presented objects.

Interestingly, prior haptic experience did not lead to significant modulations of correlations across groups in somatosensory cortex for either visual or haptic stimuli (group differences for the visual task in somatosensory areas: all P > 0.09; group differences for the haptic task in somatosensory areas: all P > 0.11). Whole-brain searchlight analyses restricted to the participants from the same groups (Fig. 5A) found that correlations of the neural similarity data of visually presented stimuli with the visual perceptual space peaked at (x, y, z = −10, −92, −14) for the visual group and at (x, y, z = −10, −94, −4) for the haptic group, both of which are located in the occipital lobe (BA17). The extent of activation tended to be different in the 2 groups (red and blue colors in Fig. 5A), but its general location and peak activity were similar. In contrast, correlations of the neural similarity data of haptically presented stimuli with the haptic perceptual space peaked at (x, y, z = −46, −62, −12), located in the inferior occipital gyrus (the nearest gray matter BA is BA37) in the ventral pathway, for the visual group, whereas they peaked in the superior parietal lobe (the nearest gray matter BA is BA7), located in associative somatosensory areas (x, y, z = −20, −56, 66), for the haptic group (Fig. 5B). These results also confirm that previous experience modulates which brain regions are involved in representing haptic information.

Discussion

In this study, we investigated visual and haptic shape representations in the brain using parametrically defined, novel shapes. We first showed that both visual and haptic perceptual spaces conform well to the physical parameter space. Importantly, the 2 perceptual spaces are also highly similar across modalities. This finding extends our previous studies with different types of stimuli in different task environments and highlights the ability of the brain to analyze and faithfully represent even complex shape spaces (Cooke et al. 2007; Gaißert et al. 2010, 2011; Gaissert and Wallraven 2012; Wallraven et al. 2013).

We then analyzed fMRI data on visual and haptic shape processing using a multivoxel approach similar to representational similarity analysis (Kriegeskorte et al. 2006; Peelen and Caramazza 2012; Mur et al. 2013; Peelen et al. 2014), as well as ROI-based analyses, and were able to pinpoint areas in the brain that represent this perceptual space.

Along with the ventral stream, we found strong positive correlations between neural patterns and the visual perceptual space in bilateral areas BA17, 18, 19, and 37, as well as in LOC. In terms of visual processing, these findings confirm previous studies highlighting the role of the occipital lobe in object-shape processing (Eger et al. 2008; Haushofer et al. 2008; Op de Beeck et al. 2008; Drucker and Aguirre 2009; Peelen and Caramazza 2012; Mur et al. 2013). Interestingly, whereas most of these studies implicated higher-level, object-selective brain areas such as LOC in processing fine details of object-shape properties, our results also include early visual cortex with high correlations. Although Peelen and Caramazza (2012) have shown previously that pixel-wise information about shape is reflected in BA17, 18, and 19, the present study used visual presentation of stimuli rotated randomly in depth, which cannot be modeled well by direct pixel-based comparisons of "images." Indeed, going further than simple pixel-wise similarity, a recent study has implicated V1 in the processing of fine details of visually presented shape properties, claiming that the involvement of early visual cortex may be caused by top-down influences (Smith and Goodale 2015). Similarly, a recent computational model of visual shape processing (Tschechne and Neumann 2014) suggests that feedback from higher areas can, for example, enhance task-relevant contour integration and curvature representation in early visual cortex, thus creating a rich shape representation.

In the case of haptic processing, our results for the first time map out how a perceptual shape space is constructed throughout the brain: We found veridical shape representations in several somatosensory areas, including bilateral early somatosensory cortex (BA3 and 2), as well as contralateral higher-level areas such as left BA39 and bilateral BA7 and 6. Very few, if any, studies so far have investigated higher-level, haptic shape processing of complex shapes in the brain; our results extend those of previous studies based on univariate analyses of haptic processing

Table 1 Results of correlations of the perceptual space with the neural space for all ROIs

ROI           Visual task               Haptic task
              Visual      Haptic        Visual      Haptic
              group       group         group       group
Left BA1      −0.017       0.179        0.525***    0.252
Right BA1      0.243      −0.004        0.570**     0.258
Left BA2       0.096       0.210        0.390*      0.318*
Right BA2      0.150      −0.142        0.447*      0.431*
Left BA3      −0.026       0.042        0.535***    0.431**
Right BA3      0.342*      0.173        0.470*      0.356*
Left BA5       0.224       0.343*       0.201       0.458**
Right BA5      0.175       0.210        0.462**     0.210
Left BA6      −0.068       0.224        0.584**     0.518**
Right BA6      0.080       0.365*       0.430*      0.448**
Left BA7       0.394*      0.430**      0.632***    0.614***
Right BA7      0.193       0.052        0.600***    0.358*
BA17           0.726***    0.679***     0.482**     0.055
BA18           0.701***    0.702***     0.496*      0.175
BA19           0.648***    0.573***     0.440*      0.383*
Left BA37      0.579***    0.518**      0.625***    0.191
Right BA37     0.583***    0.322*       0.096       0.027
Left BA39      0.316*      0.180        0.364*      0.660***
Right BA39     0.529***    0.216        0.271       0.354*
Left BA40      0.275       0.499**      0.211       0.514**
Right BA40     0.206      −0.031        0.695***    0.248
Left LOC       0.614***    0.726***     0.481**     0.497**
Right LOC      0.596***    0.467**      0.412*      0.079

Note: Asterisks denote P-values as determined in permutation analyses (*P < 0.05; **P < 0.01; ***P < 0.001). Results are reported in detail in the main text for the visual and haptic tasks only if both areas correlate significantly; in addition, group differences are reported whenever the correlations across groups differ significantly.


of non-parametric objects that indicated the involvement of several somatosensory areas (Bodegård et al. 2001; Reed et al. 2004; Snow et al. 2014). Interestingly, our results did not implicate SII in shape processing, confirming earlier results that show SII to be mostly involved in texture, hardness, and material processing (James et al. 2007).

Importantly, haptic object-shape properties were, in addition to somatosensory areas, also well represented in the occipital lobe in left LOC and BA19 (Amedi et al. 2002; James et al. 2002; Lacey et al. 2009). The ROI results highlight both LOC and BA19 as multisensory convergence areas, whereas the searchlight result shows correlations in the inferior temporal gyrus mostly for the haptic task. In addition, visual shape was also represented to some degree in the superior parietal lobe in left BA7 (Zhang et al. 2004; James et al. 2007; Lacey et al. 2009; Smith and Goodale 2015), an area that the searchlight does not highlight for the visual task. Note, however, that searchlight analyses are in general prone to issues of parameter selection in terms of their sensitivity (such as searchlight radius; Op de Beeck 2010; Etzel et al. 2013). Additionally, the ROI-based results are consistent with some earlier studies that localize visual representations in broad regions of LO [see, e.g., Op de Beeck et al. (2008)], whereas haptic stimuli are confined to a smaller area in LO [LOtv; see Amedi et al. (2001, 2002) and James et al. (2002)] and even in the inferior

Figure 3. Whole-brain searchlight analysis for the visual (A) and haptic (B) task mapped on inflated cortices using the CARET software (Van Essen et al. 2001). (A) Significant clusters of positive correlation are visible along the ventral pathway up until inferior temporal cortex (P < 0.001, uncorrected). The labels are defined as follows: IFG, inferior frontal gyrus; PreCG, precentral gyrus; LOC, lateral occipital cortex; ITG, inferior temporal gyrus; SFG, superior frontal gyrus; MFG, medial frontal gyrus; SMA, supplementary motor area; PoCG, postcentral gyrus; SPL, superior parietal lobe; MOG, middle occipital gyrus; AG, angular gyrus. (B) Significant clusters of positive correlation are visible in both early and associative somatosensory areas, as well as in ITG, premotor cortex, and IFG (P < 0.001, uncorrected). The labels are defined as follows: LOC, lateral occipital cortex; ITG, inferior temporal gyrus; OL, occipital lobe. (C) MDS reconstruction comparing the visual perceptual space (red, white numbers, solid line) and the neural space in the visual task (pink, black numbers, broken line) for area BA18 for the visual group, the area with the highest correlations in both visual tasks. VG, visual group; VT, visual task. Black, thin lines connect corresponding object locations of the neural and perceptual spaces. The original physical space is shown in gray (white numbers), and the original objects are shown in their cross-pattern as an inset. Goodness-of-fit values for physical space to neural space: r² = 0.623, and for perceptual to neural space: r² = 0.765. (D) MDS reconstruction comparing the haptic perceptual space (blue, white numbers, solid line) and the neural space in the haptic task (sky blue, black numbers, broken line) for left BA7 for the haptic group, the area with the highest correlations in both haptic tasks. HG, haptic group; HT, haptic task. Black, thin lines connect corresponding object locations of the neural and perceptual spaces. The original physical space is shown in gray (white numbers), and the original objects are shown in their cross-pattern as an inset. Goodness-of-fit values for physical space to neural space: r² = 0.575, and for perceptual to neural space: r² = 0.652.


temporal lobe, as shown in our study [see Amedi et al. (2001, 2002) and James et al. (2002)]. Adding to this, a recent study provided evidence that, in addition to visual and haptic information, LOC and ITG may also encode auditory information related to shape as well as mental imagery of shape representations (Peelen et al. 2014). These results highlight the importance of these areas as multisensory convergence areas for "detailed" shape processing. Note that since this study used separate, unimodal runs, the results so far demonstrate clear spatial convergence of visual and haptic processing. The degree to which this generalizes to combined visuo-haptic processing, and in how far neurons in these areas possess true multisensory response characteristics, remains to be studied [using, for example, cross-modal matching paradigms as in Tal and Amedi (2009) and Kassuba et al. (2013)]. Also, it should be noted that in the present study, haptic perceptual shape space was represented well only in left LOC, whereas correlations were found bilaterally for visual processing. The question of lateralization in visual and haptic object-shape processing therefore needs further investigation (Crutch et al. 2005; Hömke et al. 2009; Deshpande et al. 2010).

The whole-brain searchlight analysis also revealed significant correlations for haptic shape processing in IFG, a region that extends over BA44, 45, and 46 and hence cannot be captured well using a standard ROI-based analysis. A few other studies have also implicated IFG in haptic object tasks (Binkofski et al. 1999; Stoeckel et al. 2003; Reed et al. 2004; Lacey et al. 2010). One explanation for the involvement of IFG in these and in our results comes from a recent meta-analysis, which has shown that IFG is often involved in the sequential ordering of motor execution, especially when the task requires selective attention (Liakakis et al. 2011). This finding is an excellent fit with our shape-processing task, which requires participants to explore the object in a controlled fashion so as to extract discriminative information in an efficient way.

Our results also highlight some differences in cross-modal information transfer: Whereas the visual group also employed early visual cortex to represent the haptic perceptual space, the haptic group did not recruit the occipital lobe (except for left LOC), suggesting that previous visual exposure involves early visual cortex "even in the absence of visual input" in the form of top-down activation. Similarly, although the haptic group also activated early visual cortex in the visual task, correlations were weaker overall than for the visual group. In addition, prior visual experience also recruited right LOC for the haptic perceptual space, whereas only left LOC was activated for the haptic group. One potential explanation for this dissociation may be that right LOC is associated more with visual imagery than left LOC (Zhang et al. 2004).

Interestingly, Snow et al. (2014) also found visual cortex (including the primary visual areas reported here) to be activated during haptic processing of highly "familiar" objects. Although our objects are comparatively simple conceptually and hence

Figure 4. MDS reconstruction of the averaged neural space for left LOC (averaged across both groups and both tasks; light green, black numbers, broken line) compared with the averaged perceptual space (averaged across both groups; dark green, white numbers, solid line). Black, thin lines connect corresponding object locations of the neural and perceptual spaces. The original physical space is shown in gray (white numbers), and the original objects are shown in their cross-pattern as an inset.

Figure 5. (A and B) Results of the whole-brain searchlight analysis comparing the visual (red) and the haptic (blue) group mapped on inflated cortices using the CARET software (Van Essen et al. 2001) for the visual task (A) and the haptic task (B) (P < 0.001, uncorrected). VG refers to the visual group, whereas HG refers to the haptic group. Labels are defined as follows: (A) LOC, lateral occipital cortex; IOG, inferior occipital gyrus; MOG, middle occipital gyrus; CG, calcarine gyrus; and (B) PreCG, precentral gyrus; LOC, lateral occipital cortex; ITG, inferior temporal gyrus; PoCG, postcentral gyrus; SPL, superior parietal lobe; MOG, middle occipital gyrus; AG, angular gyrus.


participants may have acquired a certain stimulus familiarity during the short exposure time before the fMRI experiment (1.5 h), the objects do not share the same familiarity level as the everyday objects used in the study of Snow and colleagues. Overall, this may suggest that such top-down activation is indicative of more general shape processing in early visual areas. Further studies are needed, however, to track the acquisition of expertise and the accompanying changes in neural representations.

Recently, Vetter et al. (2014) demonstrated decoding of auditory patterns from early visual cortex and suggested that the visual activation may be due to visual imagery when fine details of shape information are required. Since the present study used a one-back task that explicitly required participants to keep one stimulus in mind for comparison, visual imagery may be able to explain these results at least in part, although more evidence is needed to resolve the issue.

In contrast to the effect of previous visual experience on neural processing of haptic stimuli, prior "touch" experience with shape showed no significant modulation. Even though Smith and Goodale (2015) provided evidence that early somatosensory cortex can deliver some visually obtained information about familiar objects with rich touch information, in the present study, left BA5 and left BA40 in the parietal lobe and right BA6 in premotor cortex showed mild correlations with the visual perceptual space due to prior haptic experience. These correlations, however, were not strong enough to result in significant group differences. There may be several reasons for this result: First, differences due to task difficulty between the 2 cross-modal blocks in the scanner. Since performance in the cross-modal run was equally good (and at a high level) for both groups, however, we conjecture that this cannot be the reason for the differences. Second, as stated above, our training time was not enough to "deeply" familiarize participants with the novel objects; perhaps haptic processing requires much longer periods of exposure to prime visually obtained object-shape information in somatosensory areas that can only be activated for familiar objects. In this context, a recent review discussed cross-modal transfer between vision and haptics, suggesting that performance may be better when visual encoding is followed by haptic retrieval than for the reverse (Lacey and Sathian 2014); see also Dopjans et al. (2009). In addition, Kassuba et al. (2013) used cross-modal matching of familiar objects in fMRI and found that activation in the lateral occipital gyrus and the fusiform gyrus was higher for visual presentation followed by haptic presentation than for the reverse, indicating asymmetric information transfer. Such asymmetries, however, seem to depend on the task and the stimuli, since other behavioral studies have demonstrated symmetric cross-modal transfer for the categorization of novel, unfamiliar objects (Wallraven et al. 2013). Further investigations are necessary to better understand the role of training effects and familiarization, especially for haptic processing in the brain.

In summary, our study provides evidence that the human brain is able to reconstruct complex shape spaces remarkably well using both visual and haptic processing. Furthermore, these 2 different sensory modalities create highly similar perceptual spaces. Both visual and haptic object-shape information are reconstructed well already in early sensory areas (V1 for visual input and S1 for haptic input), as well as in higher-level areas. In addition, visual and haptic perceptual spaces are represented well in ventrolateral occipito-temporal cortex (LOC), suggesting this area as a candidate for a multisensory convergence area, or even a supramodal shape representation. Moreover, we were able to demonstrate that prior visual experience activates early visual cortex during haptic processing even in the absence of visual input. The framework of representational spaces, which originated in Shepard's program, hence provides a powerful instrument to investigate the full processing pipeline that underlies our capability to understand and represent the rich world of shapes around us.

Supplementary Material

Supplementary Material can be found at: http://www.cercor.oxfordjournals.org/.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2013R1A1A1011768) and the Brain Korea 21plus program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education. We gratefully acknowledge the help of Nicky Daniels for assistance with the searchlight analysis, funded by European Research Council grant ERC-2011-Stg-284101 and federal research action grant IUAP/PAI P7/11.

Notes

The authors thank the anonymous reviewers for providing helpful comments. Conflict of Interest: None declared.

References

Amedi A, Jacobson G, Hendler T, Malach R, Zohary E. 2002. Convergence of visual and tactile shape processing in the human lateral occipital complex. Cereb Cortex. 12:1202–1212.

Amedi A, Malach R, Hendler T, Peled S, Zohary E. 2001. Visuo-haptic object-related activation in the ventral visual pathway. Nat Neurosci. 4:324–330.

Binkofski F, Buccino G, Posse S, Seitz RJ, Rizzolatti G, Freund HJ. 1999. A fronto-parietal circuit for object manipulation in man: evidence from an fMRI-study. Eur J Neurosci. 11:3276–3286.

Bodegård A, Geyer S, Grefkes C, Zilles K, Roland PE. 2001. Hierarchical processing of tactile shape in the human brain. Neuron. 31:317–328.

Bulthé J, De Smedt B, Op de Beeck HP. 2014. Format-dependent representations of symbolic and non-symbolic numbers in the human cortex as revealed by multi-voxel pattern analyses. Neuroimage. 87:311–322.

Cooke T, Jäkel F, Wallraven C, Bülthoff HH. 2007. Multimodal similarity and categorization of novel, three-dimensional objects. Neuropsychologia. 45:484–495.

Crutch SJ, Warren JD, Harding L, Warrington EK. 2005. Computation of tactile object properties requires the integrity of praxic skills. Neuropsychologia. 43:1792–1800.

Cutzu F, Edelman S. 1996. Faithful representation of similarities among three-dimensional shapes in human vision. Proc Natl Acad Sci USA. 93:12046–12050.

Deshpande G, Hu X, Lacey S, Stilla R, Sathian K. 2010. Object familiarity modulates effective connectivity during haptic shape perception. Neuroimage. 49:1991–2000.

Dopjans L, Wallraven C, Bülthoff HH. 2009. Cross-modal transfer in visual and haptic face recognition. IEEE Trans Haptics. 2:236–240.


Drucker DM, Aguirre GK. 2009. Different spatial scales of shape similarity representation in lateral and ventral LOC. Cereb Cortex. 19:2269–2280.

Edelman S, Grill-Spector K, Kushnir T, Malach R. 1998. Toward direct visualization of the internal shape representation space by fMRI. Psychobiology. 26:309–321.

Eger E, Ashburner J, Haynes J-D, Dolan RJ, Rees G. 2008. fMRI activity patterns in human LOC carry information about object exemplars within category. J Cogn Neurosci. 20:356–370.

Etzel JA, Zacks JM, Braver TS. 2013. Searchlight analysis: promise, pitfalls, and potential. Neuroimage. 78:261–269.

Gaißert N, Bülthoff HH, Wallraven C. 2011. Similarity and categorization: from vision to touch. Acta Psychol (Amst). 138:219–230.

Gaißert N, Wallraven C, Bülthoff HH. 2010. Visual and haptic perceptual spaces show high similarity in humans. J Vis. 10:2.

Gaissert N, Wallraven C. 2012. Categorizing natural objects: a comparison of the visual and the haptic modalities. Exp Brain Res. 216:123–134.

Gauthier I, Tarr MJ. 1997. Becoming a "Greeble" expert: exploring mechanisms for face recognition. Vision Res. 37:1673–1682.

Gielis J. 2003. A generic geometric transformation that unifies a wide range of natural and abstract shapes. Am J Bot. 90:333–338.

Grill-Spector K, Kourtzi Z, Kanwisher N. 2001. The lateral occipital complex and its role in object recognition. Vision Res. 41:1409–1422.

Haushofer J, Livingstone MS, Kanwisher N. 2008. Multivariate patterns in object-selective cortex dissociate perceptual and physical shape similarity. PLoS Biol. 6:e187.

Hömke L, Amunts K, Bönig L, Fretz C, Binkofski F, Zilles K, Weder B. 2009. Analysis of lesions in patients with unilateral tactile agnosia using cytoarchitectonic probabilistic maps. Hum Brain Mapp. 30:1444–1456.

James TW, Humphrey GK, Gati JS, Servos P, Menon RS, Goodale MA. 2002. Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia. 40:1706–1714.

James TW, Kim S, Fisher JS. 2007. The neural basis of haptic object processing. Can J Exp Psychol. 61:219–229.

Kaplan JT, Meyer K. 2012. Multivariate pattern analysis reveals common neural patterns across individuals during touch observation. Neuroimage. 60:204–212.

Kassuba T, Klinge C, Hölig C, Röder B, Siebner HR. 2013. Vision holds a greater share in visuo-haptic object recognition than touch. Neuroimage. 65:59–68.

Klatzky RL, Lederman SJ, Metzger VA. 1985. Identifying objects by touch: an "expert system". Percept Psychophys. 37:299–302.

Kourtzi Z, Kanwisher N. 2001. Representation of perceived object shape by the human lateral occipital complex. Science. 293:1506–1509.

Kriegeskorte N, Goebel R, Bandettini P. 2006. Information-based functional brain mapping. Proc Natl Acad Sci USA. 103:3863–3868.

Lacey S, Flueckiger P, Stilla R, Lava M, Sathian K. 2010. Object familiarity modulates the relationship between visual object imagery and haptic shape perception. Neuroimage. 49:1977–1990.

Lacey S, Sathian K. 2014. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol. 5:730.

Lacey S, Tal N, Amedi A, Sathian K. 2009. A putative model of multisensory object representation. Brain Topogr. 21:269–274.

Lee H, Wallraven C. 2013. The brain's "superformula": perceptual reconstruction of complex shape spaces. J Vis. 13:445.

Liakakis G, Nickel J, Seitz RJ. 2011. Diversity of the inferior frontal gyrus—a meta-analysis of neuroimaging studies. Behav Brain Res. 225:341–347.

Malach R, Reppas J, Benson R, Kwong K, Jiang H, Kennedy W, Ledden P, Brady T, Rosen B, Tootell R. 1995. Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proc Natl Acad Sci USA. 92:8135–8139.

Maldjian JA, Laurienti PJ, Kraft RA, Burdette JH. 2003. An automated method for neuroanatomic and cytoarchitectonic atlas-based interrogation of fMRI data sets. Neuroimage. 19:1233–1239.

Mur M, Meys M, Bodurka J, Goebel R, Bandettini PA, Kriegeskorte N. 2013. Human object-similarity judgments reflect and transcend the primate-IT object representation. Front Psychol. 4:128.

Op de Beeck HP. 2010. Against hyperacuity in brain reading: spatial smoothing does not hurt multivariate fMRI analyses? Neuroimage. 49:1943–1948.

Op de Beeck HP, Torfs K, Wagemans J. 2008. Perceived shape similarity among unfamiliar objects and the organization of the human object vision pathway. J Neurosci. 28:10111–10123.

Peelen MV, Caramazza A. 2012. Conceptual object representations in human anterior temporal cortex. J Neurosci. 32:15728–15736.

Peelen MV, He C, Han Z, Caramazza A, Bi Y. 2014. Nonvisual and visual object shape representations in occipitotemporal cortex: evidence from congenitally blind and sighted adults. J Neurosci. 34:163–170.

Reed CL, Shoham S, Halgren E. 2004. Neural substrates of tactileobject recognition: an fMRI study. Hum Brain Mapp.21:236–246.

Rosch E. 1999. Principles of categorization. In: Margolis E, Laurence S, editors. Concepts: core readings. Cambridge (MA): MIT Press. p. 189–206.

Shepard RN. 1987. Toward a universal law of generalization for psychological science. Science. 237:1317–1323.

Smith FW, Goodale MA. 2015. Decoding visual object categories in early somatosensory cortex. Cereb Cortex. 25:1020–1031.

Snow JC, Strother L, Humphreys GW. 2014. Haptic shape processing in visual cortex. J Cogn Neurosci. 26:1154–1167.

Stoeckel MC, Weder B, Binkofski F, Buccino G, Shah NJ, Seitz RJ. 2003. A fronto-parietal circuit for tactile object discrimination: an event-related fMRI study. Neuroimage. 19:1103–1114.

Tal N, Amedi A. 2009. Multisensory visual–tactile object related network in humans: insights gained using a novel crossmodal adaptation approach. Exp Brain Res. 198:165–182.

Tschechne S, Neumann H. 2014. Hierarchical representation of shapes in visual cortex—from localized features to figural shape segregation. Front Comput Neurosci. 8:93.

Van Essen DC, Drury HA, Dickson J, Harwell J, Hanlon D, Anderson CH. 2001. An integrated software suite for surface-based analyses of cerebral cortex. J Am Med Inform Assoc. 8:443–459.

Vetter P, Smith FW, Muckli L. 2014. Decoding sound and imagery content in early visual cortex. Curr Biol. 24:1256–1262.

Wallraven C, Bülthoff HH, Waterkamp S, van Dam L, Gaißert N. 2013. The eyes grasp, the hands see: metric category knowledge transfers between vision and touch. Psychon Bull Rev. 21:976–985.

Wallraven C, Dopjans L. 2013. Visual experience is necessary for efficient haptic face recognition. NeuroReport. 24:254–258.

Zhang M, Weisser VD, Stilla R, Prather S, Sathian K. 2004. Multisensory cortical processing of object shape and its relation to mental imagery. Cogn Affect Behav Neurosci. 4:251–259.
