
Influence of LIP on visual cortex


I understand that the lateral intraparietal cortex (LIP) maps various features of targets onto an eye-centered reference frame. I also know that this mapping predicts where visual attention (in the form of saccades) will be directed and helps maintain the integrity of the mapping. Finally, I've also noted that the LIP projects to both the ventral and dorsal streams of the visual cortex.

What is the function of these projections to the visual cortex? How is the visual cortex influenced by the aforementioned functions of the LIP?


Our brain on clutter

Clutter can be described as anything that is kept even though it is not used, needed, or wanted; it can also be defined as a disorganized and overwhelming amount of possessions in our living spaces, cars, or storage areas. Clutter creates stress that has three major biological and neurological effects on us: on our cortisol levels, on our creativity and ability to focus, and on our experience of pain.

But clutter isn't just physical. "When you have to-do items constantly floating around in your head, or you hear a ping every few minutes from your phone, your brain doesn't get a chance to fully enter creative flow or process experiences," says Mark Hurst, author of Bit Literacy, a New York Times bestseller on controlling the flow of information in the digital age.

The overconsumption of digital stuff—like social media notifications, news feeds, games and files on our computer—competes for our attention, creating a digital form of clutter that has the same effect on our brain as physical clutter.

Neatness and order support health—and oppose chaos.

So, what is going on? Our brains love order. The human body consists of thousands of integrated and interdependent biological and neurochemical systems, all organized and operating along circadian rhythms, without which our bodies would disintegrate into chaos. It's no wonder that the organization within our very own bodies naturally extends to the desire for order and tidiness in our homes. And, "order feels good, in part, because it's easier for our brains to deal with and not have to work so hard," says psychotherapist and professional organizer Cindy Glovinsky.


Methods

Participants.

Participants were 17 healthy volunteers (age range 21–32 years; 7 females). All participants gave written informed consent in accordance with the National Health Service Office for Research Ethics Committees (ref 07/Q1603/11).

Model.

The model was a normative Bayesian learner implemented numerically in MATLAB (MathWorks). It estimated the probability density distribution from which each target location, αt, was drawn, based on the current and previous observations (i.e., α1:t), by assigning probabilities to putative parameters of the underlying Gaussian distribution p(μt, σt | α1:t).

The posterior probabilities assigned to each parameter pair p(μt, σt | α1:t) were calculated as follows. If the trial type is one-off, no updating occurs; that is,

p(μt, σt | α1:t) = p(μt−1, σt−1 | α1:t−1).

Otherwise, probabilities are updated using Bayes’ rule:

p(μt, σt | α1:t) ∝ p(αt | μt, σt) pprior(μt, σt),

where pprior(μt, σt) denotes the prior, whose form depends on trial type as follows. If the trial type is update, a uniform prior over (μt, σt) is used:

pprior(μt, σt) = 1 / (360 × 50),

where the denominator 360 × 50 represents the number of possible values for (μt, σt) probed in the numerical implementation of the model. If the trial type is expected (i.e., any trial that is not one-off or update), then the prior on trial t is obtained without modification from the posterior on trial t − 1:

pprior(μt, σt) = p(μt−1, σt−1 | α1:t−1).

Full details are given in SI Methods.

A prior probability density function for the subsequent trial, t + 1, was obtained from the posterior on trial t and expressed as a function of target location (rather than of the parameters μ and σ) as follows:

p(αt+1 | α1:t) = Σ(μ, σ) p(αt+1 | μ, σ) p(μ, σ | α1:t).

The probabilities of each trial type occurring on trial t + 1, p(one-offt+1), p(updatet+1), and p(expectedt+1), were simply set to the true probability of each trial type in the generative model [i.e., p(one-offt+1) = 1/4, p(updatet+1) = 1/15, and p(expectedt+1) = 1 − (p(updatet+1) + p(one-offt+1))].
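A minimal Python sketch of this grid-based update scheme is given below. It is illustrative only (the original model was implemented in MATLAB); the σ grid range and the wrapped-Gaussian likelihood are assumptions not specified in this extract.

```python
import numpy as np

# Illustrative 360 x 50 grid over the Gaussian parameters (mu, sigma);
# the sigma range is an assumption not given in this extract.
MU = np.arange(360.0)                     # candidate means (degrees)
SIGMA = np.linspace(1.0, 50.0, 50)        # candidate SDs (degrees)
mu, sigma = np.meshgrid(MU, SIGMA, indexing="ij")

def likelihood(alpha, mu, sigma):
    """Wrapped-Gaussian likelihood of a target location (degrees)."""
    d = (alpha - mu + 180.0) % 360.0 - 180.0      # wrapped angular distance
    return np.exp(-0.5 * (d / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

def update_posterior(posterior, alpha_t, trial_type):
    """One trial of the learner; posterior has shape (len(MU), len(SIGMA))."""
    if trial_type == "one-off":
        return posterior                           # no updating
    if trial_type == "update":
        prior = np.full_like(posterior, 1.0 / posterior.size)   # uniform prior
    else:                                          # "expected"
        prior = posterior                          # carry posterior forward
    post = likelihood(alpha_t, mu, sigma) * prior  # Bayes' rule
    return post / post.sum()

def predictive(posterior):
    """Prior pdf over the next target location, marginalizing out (mu, sigma)."""
    alphas = np.arange(360.0)
    p = np.array([(likelihood(a, mu, sigma) * posterior).sum() for a in alphas])
    return p / p.sum()

# Start from a uniform posterior and update on an observed target location:
posterior = np.full((MU.size, SIGMA.size), 1.0 / (MU.size * SIGMA.size))
posterior = update_posterior(posterior, alpha_t=137.0, trial_type="expected")
```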

Output of the model is illustrated in Figs. S9 and S10.

Information Theoretic Regressors.

For the analyses presented in Fig. 3 (analysis of saccadic RTs), Fig. 4 (analysis of pupil area), and the fMRI whole-brain analysis and ROI analysis presented in Figs. 5 and 6, respectively, we used a GLM analysis with information theoretic regressors based on the learning model.

The four task regressors were main effect of task, entropy of the model, IS, and DKL. Each regressor used in fMRI analysis was defined as a series of 300 delta functions (events of 0.1 s in duration) at the times of target appearance (for 300 trials), weighted by task parameters and convolved with the hemodynamic response function. The weighting of each regressor on each trial was defined as follows.
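As an illustration of this construction, the sketch below (hypothetical Python, not the original analysis code) builds one weighted event regressor on a 0.1-s time grid and convolves it with a canonical double-gamma HRF; the specific HRF shape is an assumption, since the text does not name the basis function used.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(dt, length=32.0):
    """Double-gamma HRF (SPM-style canonical shape; an assumption here)."""
    t = np.arange(0, length, dt)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def build_regressor(event_times, weights, n_scans, tr, dt=0.1):
    """Series of 0.1-s events at target onsets, weighted by a task
    parameter and convolved with the HRF, then sampled at the TR."""
    t_max = n_scans * tr
    stick = np.zeros(int(t_max / dt))
    for t_ev, w in zip(event_times, weights):
        stick[int(t_ev / dt)] = w                  # 0.1-s "delta" event
    conv = np.convolve(stick, canonical_hrf(dt))[: len(stick)]
    idx = (np.arange(n_scans) * tr / dt).astype(int)
    return conv[idx]                               # regressor at scan times
```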

All trials took value 1 as the main effect of task. Entropy of the model on trial t was expressed as

H(t) = −Σα p(αt = α | α1:t−1) log p(αt = α | α1:t−1),

where p(αt | α1:t−1) is defined analogously as in Eq. 8. The DKL on trial t is defined as

DKL(t) = Σα p(α | α1:t) log [ p(α | α1:t) / p(α | α1:t−1) ],

where p(α | α1:t) is defined analogously as in Eq. 8 and p(α | α1:t−1) is defined analogously to the definition of p(αt | α1:t−1) in Eq. 8.

The output of the model and resulting regressors are illustrated in Figs. S9 and S10.

The shared variance between each pair of regressors was relatively low; the R² value for each pair was as follows: prior entropy and IS, R² = 0.14; prior entropy and DKL, R² = 0.14; and IS and DKL, R² = 0.15.
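In discrete form, the entropy and DKL regressors above, and the pairwise shared variance, can be computed as in this illustrative Python sketch. The definition of IS as the negative log prior probability of the observed target is the standard one but is an assumption here, since the extract omits its equation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of the model's predictive distribution."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def surprise(p_prior, alpha_idx):
    """IS: negative log prior probability of the observed target
    (standard definition; assumed here, as the extract omits it)."""
    return -np.log(p_prior[alpha_idx])

def d_kl(p_post, p_prior):
    """KL divergence between posterior and prior predictive distributions."""
    m = p_post > 0
    return (p_post[m] * np.log(p_post[m] / p_prior[m])).sum()

def shared_r2(x, y):
    """Shared variance between two trialwise regressors."""
    return np.corrcoef(x, y)[0, 1] ** 2
```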


Glossary

Multisensory integration

The neural processes that are involved in synthesizing information from cross-modal stimuli. It should not be confused with the particular underlying neural computation that determines the relative magnitude of multisensory integration (superadditive, additive or subadditive).

Cross-modal stimuli

Stimuli from two or more sensory modalities, or an event providing such stimuli. This term should not be confused with the term 'multisensory'.

Multisensory enhancement

A situation in which the response to the cross-modal stimulus is greater than the response to the most effective of its component stimuli.

Multisensory depression

A situation in which the response to the cross-modal stimulus is less than the response to the most effective of its component stimuli.

Qualia

The qualities of sensation, such as the subjective impression that a sensation gives.

Multisensory neuron

A neuron that responds to, or is influenced by, stimuli from more than one sensory modality.

Receptive field

The area of sensory space in which presentation of a stimulus leads to the response of a particular neuron.

Inverse effectiveness

The phenomenon whereby the degree to which a multisensory response exceeds the response to the most effective modality-specific stimulus component declines as the effectiveness of the modality-specific stimulus components increases.

Additive

A neural computation in which the multisensory response is not different from the arithmetic sum of the responses to the component stimuli.

Subadditive

A neural computation in which the multisensory response is smaller than the arithmetic sum of the responses to the component stimuli.

Event-related potentials (ERPs)

Electrophysiological studies in which the electrical activity (that is, the electrical potential) of the brain in response to a stimulus is measured using scalp-surface electrodes.

Blood-oxygen-level-dependent (BOLD) signal

An index of brain activation based on detecting changes in blood oxygenation with functional MRI.

Superadditive

A neural computation in which the multisensory response is larger than the arithmetic sum of the responses to the component stimuli.

Additivity

A neural computation in which the response to a multisensory stimulus (for example, a number of action potentials) equals the sum of the responses to each of the modality-specific component stimuli presented individually.


Materials and Methods

Participants

Participants were recruited at Stockholm University via online advertisements. Participation was compensated with either gift vouchers worth 200 SEK or course credit. Power analyses based on behavioral and ERP pilot data and simulations (see Supplementary Materials) showed that a sample size of 40 individuals would be sufficient (power > 0.8) even for small effect sizes (standardized βs = 0.2). The initial sample therefore consisted of 46 healthy adults who reported normal or corrected-to-normal vision and a normal sense of smell, and who were screened for their ability to correctly identify the four stimulus odors. We excluded data from 10 participants who had fewer than 75% correct trials in either the congruent or incongruent conditions and data from 1 participant with missing background data. In the analyses of the behavioral data, the effective sample size therefore consisted of 35 individuals (M age = 31.3 years; range, 19–59 years; 16 females). In the ERP data analyses, data from an additional five participants were also excluded due to EEG artifacts, resulting in a sample size of 30 individuals (M age = 32.3 years; range, 19–59 years; 14 females). In order to preserve statistical power, we chose to keep these five participants for the behavioral analyses. However, the results of these analyses also held with these participants excluded. All participants gave written informed consent in accordance with the Declaration of Helsinki. The study was approved by the regional ethics board (ref 2014/2129-31/2).

Stimuli

The stimuli consisted of four visual and four olfactory objects belonging to the categories fruit (lemon and pear) and flower (lavender and lilac). Odors were presented with an olfactometer that was controlled by the stimulus computer. We used 1–3 ml of odor essences and oils from Stockholms Aeter and Essencefabrik AB (pear “päronessens” and lilac “syrenolja”) and Aroma Creative (lemon “citron” and lavender “lavendel”). The odor identification and rating tasks indicated that the odors were easy to identify and perceived as similar in intensity, pleasantness, and specificity but differed in edibility (see Supplementary Materials). The pictures were presented on a computer screen and consisted of photographed images that were matched in size, brightness, and hue. All pictures were 10.5 cm high (subtending 6.01° vertical visual angle at a 1-m distance). The lavender and lilac pictures were 5.5 cm wide (3.15° horizontal visual angle), and the lemon and pear pictures were 8 cm wide (4.58° horizontal visual angle). The auditory targets consisted of two 0.5-s-long sine tones that were presented in earphones. Tone amplitudes were adjusted to match loudness. The low tone was a 630-Hz tone at 60.8 dB, and the high tone a 1250-Hz tone at 62.2 dB.
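The reported visual angles follow from the standard formula θ = 2·arctan(size / (2 × distance)); a quick check in Python:

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Full visual angle (degrees) subtended by a stimulus."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

visual_angle_deg(10.5, 100)   # ~6.01 deg: the 10.5-cm picture height at 1 m
visual_angle_deg(8.0, 100)    # ~4.58 deg: the 8-cm fruit picture width
```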

Procedure

The experiment was conducted in a brightly lit and well-ventilated olfactory testing room at the Gösta Ekman Laboratory, Department of Psychology, Stockholm University. Participants were informed about the experiment and told that they could abort it at any time. They were seated at a 1-m distance from the stimulus computer screen. Participants performed a training protocol in which they identified the experimental stimuli. Participants also performed perceptual odor ratings (see Supplementary Materials for details).

In the main experimental task, participants categorized visual or olfactory stimuli as fruit or flower. In order to investigate modality dominance, we used a categorization task with cross-modal interference. Visual and olfactory stimuli were presented concurrently in order to achieve a simultaneous bimodal percept (see below for details). On congruent trials, the same visual and olfactory objects were used (e.g., the picture and odor of pear), yielding a total of four different odor–picture pairings. On incongruent trials, objects from each of the two categories were used (e.g., the picture of pear and the odor of lilac), resulting in eight different odor–picture pairings (see, e.g., Olofsson et al. 2012, 2014, for a similar protocol). Importantly, in order to remove bias due to processing-speed differences between visual and olfactory perception, the auditory target cues that indicated the target object (i.e., picture or odor) were delayed by a varying interval of 1000–2000 ms after stimulus offset. This allowed for statistical analyses of the possible effect of lag time. Further, the cue onset timing and the delayed auditory target minimized the risk that sensory processing speed would influence the results (see Supplementary Materials). A categorization trial is illustrated in Figure 2.

Trial structure of the categorization task. During training trials (but not in experimental trials), the object modality to be categorized was also displayed on the final screen, written above the fixation cross, in order to learn the meaning of the two target tones.


Each trial began with the presentation of the odor. First, a black fixation cross appeared for 1500 ms in the center of the screen. It indicated that it was time to exhale and prepare to sniff. Following a 200-ms blank screen, a sniff cue (red fixation cross) appeared. The odor was simultaneously released by the olfactometer. At 400 ms after olfactory stimulation (and sniff cue) onset, the picture appeared in the center of the screen. It was presented together with the odor for 1500 ms. In other words, pictures were presented with a lag of 400 ms relative to the olfactometer trigger. This was done in order to compensate for the processing-time difference between visual and olfactory stimuli. Whereas visual detection RTs are on average about 300 ms (e.g., Collins and Long 1996; Amano et al. 2006), olfactory detection RTs are around 800 ms when following the current protocol (Olofsson et al. 2013, 2014). Moreover, visual ERPs occur about 300–400 ms before olfactory ERPs (Geisler and Polich 1994; Alexander et al. 1995; Pause et al. 1996; Romero and Polich 1996). Taken together, these findings are highly suggestive of a 350–500-ms delay in olfactory processing times and prompted our 400-ms odor–picture lag time. However, as our dependent measures (ERPs and RTs) were linked to the onset of a delayed auditory target stimulus, the exact timing of the cue onset should not be critical.

After stimulus presentation, the screen turned blank for 1000–2000 ms. Following this delay, participants were presented with the target cue (i.e., a low/high sine tone, presented together with a black fixation cross at the center of the screen) and performed the categorization task. The fixation cross was flanked by two text boxes that reminded participants of the button assignment (i.e., whether the left button was used for fruits and the right for flowers or vice versa). The position of the boxes (left vs. right) corresponded to the button assignment. The task was performed by pressing either the leftmost or the rightmost button of a four-button response box. Participants were encouraged to respond as quickly and accurately as possible. The four possible combinations of tone and button assignment were counterbalanced across participants. Each trial ended with a delay (minimum 1000 ms) that ensured that at least 10 s had passed since the start of the trial.

Participants completed 128 trials, 2 (congruence) × 2 (modality) × 2 (category) × 2 (stimuli) × 8 (repetitions), evenly distributed across 4 blocks. In the incongruent trials, the target stimulus object (i.e., the sensory object to be categorized) co-occurred with either of the two incongruent stimuli equally often (e.g., the pear odor co-occurred with lilac and lavender pictures on an equal number of trials). Trial presentation order within a block was randomized. At the beginning of each block, a visual display informed participants that the next block was about to start. The block started with a button press. To familiarize themselves with the task, participants performed a training session consisting of 16 trials, 2 (congruence) × 2 (modality) × 2 (category) × 2 (stimuli), before the actual experiment started. In the training trials, but not the experimental trials, the sensory modality to categorize was displayed on the screen (directly above the fixation cross, see Fig. 2), in order for the participants to learn the meaning of the target tones. Participants were encouraged to take short breaks in between blocks. They were told to avoid blinking and moving from the time the sniff cue was presented to the time of their response.
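A hypothetical Python sketch of this trial-list structure follows; note that the simple global shuffle used here does not enforce the even per-block distribution or the balanced incongruent pairings described above, which a full trial generator would add as constraints.

```python
import itertools
import random

# Hypothetical reconstruction of the 2 x 2 x 2 x 2 x 8 = 128-trial list.
conditions = list(itertools.product(
    ("congruent", "incongruent"),    # congruence
    ("visual", "olfactory"),         # target modality
    ("fruit", "flower"),             # object category
    ("stimulus_1", "stimulus_2"),    # exemplar within category
))
trials = conditions * 8              # 8 repetitions of each condition
random.shuffle(trials)               # randomized presentation order
blocks = [trials[i * 32:(i + 1) * 32] for i in range(4)]  # 4 blocks of 32
```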

Apparatus

Odors were presented birhinally with a custom-built, continuous-flow olfactometer described in detail in Lundström et al. (2010). The olfactometer was controlled by the PsychoPy experiment software through a parallel port. In order to evaluate the timing of odor presentation in our experimental setup, we measured the temporal performance of the olfactometer (presented in detail in the Supplementary Materials). These measurements showed that the onset of the odor output occurs approximately 54 ± 7 ms after the presentation of the visual sniff cue. The olfactometer has also been shown to emit a stable odor concentration over time (approximately a 0.5% decrease over a 10-min period) and to be suitable for recording olfactory ERPs (Lundström et al. 2010). The continuous airflow was set to 0.5 L/min and individual channel airflows to 2.5 L/min.

Visual stimuli were presented on a 24″ Benq XL2430-B TN-screen with 100-Hz refresh rate and a resolution of 1920 × 1080 pixels. The experiment was run on a Windows 7 PC. In the odor identification and rating tasks, participants responded with the mouse. In the categorization task, they responded with a Cedrus RB-740 Response Box (Cedrus Corporation).

EEG Recording

EEG was recorded with a 64-electrode BioSemi ActiveTwo system (BioSemi), using EEG caps (Electro-Cap International). In addition to the 64 electrodes placed according to the 10–20 system, the BioSemi system uses an internal reference electrode (CMS), positioned between PO3 and POz, and a ground electrode (DRL), positioned between POz and PO4. EOG was recorded with two flat electrodes attached with adhesive disks, one positioned at the outer canthus of the right eye and the other directly below the right eye. Data was sampled at 2048 Hz with a hardware low-pass filter at 410 Hz but downsampled to 512 Hz offline.

Data Analysis

EEG Preprocessing

We performed all offline EEG preprocessing in EEGLAB (Delorme and Makeig 2004) in MATLAB (MathWorks, Inc.). The raw EEG data was downsampled to 512 Hz and band-pass filtered between 0.2 and 40 Hz, using an FIR filter with a cutoff frequency of 0.1 Hz. Irrelevant parts of the filtered data were then removed to select experimental trial segments ranging from 1000 ms prior to the start of the trial (i.e., the presentation of the black fixation cross) to 1000 ms after the response, with training session trials included. Channels were defined as bad if the amplitude difference exceeded 500 μV in more than 50% of 1000-ms time windows, if their correlation with their robust estimates as calculated from the signal of the 16 neighboring channels was less than 0.75, or if their signal-to-noise ratio deviated by more than 4 standard deviations (SDs) from the mean signal-to-noise ratio of all channels. On average, five channels in each participant's data set were bad (min: 0, max: 27). Bad channels were interpolated using spherical splines, and the data was re-referenced to the average of all channels, using a full-rank average. We then performed ocular artifact rejection using independent component analysis (ICA). The data used for ICA decomposition was high-pass filtered at 1 Hz, trimmed of noisy data by visual inspection, and analyzed with the AMICA EEGLAB plugin. The resulting ICA components were transferred back to the original 0.2–40.0-Hz band-pass-filtered data. Ocular artifact ICA components were automatically identified and removed using the icablinkmetrics plugin (Pontifex et al. 2017), based on their relationships with activity in the vertical EOG channel, the horizontal EOG channel, or the mean of channels Fp1, AF7, FCz, Fp2, and AF8. We used a correlation coefficient threshold of 0.9, a convolution coefficient threshold of 0.1, and an artifact reduction threshold of 10% that had to be statistically significant at the .001 alpha level. On average, three components were identified as artifactual in each participant's data set (min: 0, max: 6).

The artifact-corrected data from the experimental trials were again re-referenced to the full-rank average of all channels and then divided into −200 to 1000-ms epochs relative to the onset of the visual stimuli and the onset of the auditory target. Epochs were baseline-corrected by subtracting the mean of the 200-ms prestimulus period. Epochs were removed if they had a ±120-μV amplitude difference in any channel, if the amplitude difference in any channel deviated by more than 6 SDs from the mean channel amplitude difference in all epochs, or if the amplitude difference of four channels deviated by more than 4 SDs from their mean channel amplitude differences in all epochs. We removed 16% of all epochs using these criteria (per-participant minimum, 5%; maximum, 58%). We also excluded epochs from trials with RTs below 200 ms or above 5000 ms (similar to Olofsson et al. 2012). Five participants with fewer than 15 epochs remaining in any condition were excluded from subsequent EEG data analyses.
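As an illustration, the numpy sketch below implements two of the rejection criteria described above (the bad-channel amplitude criterion and the ±120-μV epoch criterion, both interpreted here as peak-to-peak differences, which is an assumption; the correlation-, SNR-, and SD-based criteria are omitted for brevity):

```python
import numpy as np

def bad_channel_mask(data, sfreq, amp_thresh=500.0, win_s=1.0):
    """Flag channels whose peak-to-peak amplitude exceeds amp_thresh (uV)
    in more than 50% of 1-s windows. data: (n_channels, n_samples)."""
    n = int(sfreq * win_s)
    n_win = data.shape[1] // n
    wins = data[:, : n_win * n].reshape(data.shape[0], n_win, n)
    ptp = wins.max(axis=2) - wins.min(axis=2)      # peak-to-peak per window
    return (ptp > amp_thresh).mean(axis=1) > 0.5   # True = bad channel

def reject_epoch_amplitude(epochs, thresh=120.0):
    """Flag epochs with a peak-to-peak difference above thresh (uV) in any
    channel. epochs: (n_epochs, n_channels, n_samples)."""
    ptp = epochs.max(axis=2) - epochs.min(axis=2)
    return ptp.max(axis=1) > thresh                # True = reject epoch
```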

Statistical Analyses

All results were analyzed in the statistical software R (R Core Development Team 2018), using custom-made analysis scripts (available at osf.io/7qnwu/). As stated in the preregistration, we performed Bayesian mixed-effect modeling in the Stan modeling language (Stan Development Team 2017), using the R package Rstan (Stan Development Team 2018). Response times and ERP amplitudes were analyzed with linear mixed-effect modeling and accuracy with logistic mixed-effect modeling. Full model specifications and model priors are presented in the Supplementary Materials. Inferences about parameter effects (e.g., the congruence × modality interaction) were made on the basis of the parameter credibility intervals (CIs). We considered a parameter 95% CI not including zero as evidence for an effect of the parameter at hand. We also report the posterior probability (P) of a parameter being zero or taking on values in the opposite direction of the mean parameter estimate, multiplied by 2. All Bayesian analyses were complemented with frequentist mixed-effect modeling (see Supplementary Materials). All models contained fixed effects for the independent variables congruence (congruent vs. incongruent) and modality (visual vs. olfactory), and for the congruence × modality interaction. The models also included fixed effects for the following potential confounders.
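The decision rule described above can be illustrated with a small sketch operating on posterior draws for a single fixed effect (hypothetical Python; the draws shown are stand-ins, not study data):

```python
import numpy as np

def summarize_parameter(draws):
    """Given MCMC draws for one fixed effect, compute the 95% credibility
    interval and the two-sided posterior probability described above
    (probability of zero or a sign opposite to the mean, times 2)."""
    lo, hi = np.percentile(draws, [2.5, 97.5])
    sign = np.sign(draws.mean())
    p = 2 * np.mean(sign * draws <= 0)
    return (lo, hi), p

# Example: evidence for an effect if the 95% CI excludes zero.
draws = np.random.normal(0.3, 0.1, 4000)    # stand-in for posterior draws
(ci_lo, ci_hi), p = summarize_parameter(draws)
effect = not (ci_lo <= 0 <= ci_hi)
```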

Trial number. In order to control for any learning effects remaining after the initial training session, we included trial number as a control variable.

Delay. The delay between olfactory–visual cues and auditory targets varied randomly between 1000 and 2000 ms, in steps of 200 ms. A longer delay gives participants more time for stimulus processing and response preparation and might thus result in shorter response times and higher accuracies. We also used this varying delay as a control variable, to test whether longer delay times would be more beneficial for any particular sensory system.

Object category. The object category, fruit or flower, was also included in order to control for any potential differences in categorization.

Gender. The gender of the participant was also included as some studies have found women to have somewhat better olfactory perceptual abilities than men (e.g., see Doty and Cameron 2009 for a review).

Similarity index. In order to control for a potential influence of between-modality differences in perceived stimulus similarity, we also included a between-category similarity index as a control variable. This index aimed to capture the participant-specific between-category similarity, that is, the individually perceived similarity between a cue category stimulus (e.g., lemon of the fruit category) and the two stimuli of the other, competing category (i.e., lilac and lavender of the flower category). We wanted to quantify whether a high between-category similarity could render the categorization task more difficult, as the cue stimulus should be harder to differentiate from the stimuli of the competitor category. Within-category similarity, on the other hand, should not influence categorization, since a confusion (e.g., confusing pear and lemon) would not affect the categorical decision (fruit). This index was calculated on the basis of between-category similarity ratings of the stimuli (see Supplementary Materials). First, in order to make similarity ratings comparable across participants, ratings were standardized within participants, ensuring that rating means and SDs were the same for each participant. The between-category similarity index was then calculated within each participant, cue stimulus, and modality as the mean of the standardized similarity ratings involving the stimulus at hand. Since participants rated the stimuli for their similarity to each of the two competitor-category stimuli twice (e.g., two similarity ratings of lemon–lilac and two of lemon–lavender), this index was the mean of four ratings.
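A sketch of this index computation (hypothetical Python/pandas; the column names are assumptions):

```python
import pandas as pd

def similarity_index(ratings: pd.DataFrame) -> pd.DataFrame:
    """ratings columns: participant, cue_stimulus, modality, rating, where
    each row is one between-category similarity rating (four per cue
    stimulus and modality). Column names are illustrative."""
    z = ratings.copy()
    # Standardize within participants so rating means/SDs are comparable.
    z["z"] = z.groupby("participant")["rating"].transform(
        lambda r: (r - r.mean()) / r.std()
    )
    # Mean of the four standardized between-category ratings per cell.
    return (z.groupby(["participant", "cue_stimulus", "modality"])["z"]
              .mean().rename("similarity_index").reset_index())
```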

All models also included random intercepts for participants and items, the latter differentiating between all possible visual and olfactory stimulus combinations. Thereby, we controlled for any systematic differences between subjects and stimulus pairs. We also included a by-participant random slope for trial number, thereby controlling for any differences in learning between participants. RT data was log-transformed in order to ensure normality. All continuous covariates were standardized by subtracting the mean and, following Gelman and Hill (2006), dividing by 2 SDs. Categorical variables were effect-coded through centering. The main effects are therefore tested against the grand mean of the data. In the event of congruence × modality interaction effects, we conducted simple-effect follow-up analyses, testing the effect of modality in the congruent and the incongruent conditions separately. This was done by including three dummy-coded predictors, either for visual-congruent, olfactory-congruent, and visual-incongruent (testing modality within incongruent trials), or for visual-incongruent, olfactory-incongruent, and visual-congruent (testing modality within congruent trials).
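The variable transformations described above might look as follows in a hypothetical Python/pandas sketch (column names are assumptions; the ±0.5 effect coding assumes a balanced factor):

```python
import numpy as np
import pandas as pd

def prepare_predictors(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative preprocessing: log-transform RTs, scale continuous
    covariates by 2 SDs (Gelman & Hill 2006), and effect-code a binary
    factor by centering."""
    out = df.copy()
    out["log_rt"] = np.log(out["rt"])                  # normalize RTs
    for col in ("trial_number", "delay", "similarity_index"):
        out[col] = (out[col] - out[col].mean()) / (2 * out[col].std())
    # Effect coding through centering: congruent = +0.5, incongruent = -0.5,
    # so main effects are tested against the grand mean.
    out["congruence_c"] = (out["congruence"] == "congruent").astype(float) - 0.5
    return out
```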

ERP Analyses

We investigated corresponding ERP effects, time-locked to the auditory target, that were related to the increased processing demands of incongruent compared to congruent trials and their observed interactions with sensory modality. However, since this is the first study of its kind to include ERP data, we did not make any specific predictions in terms of exact time windows and regions of interest (ROIs). These were chosen on the basis of previous literature and visual inspection of the data. In order to further confirm our choice of time windows, we also performed cluster-based permutation analysis (Maris 2012; similar to, e.g., Maris and Oostenveld 2007), using custom-made analysis scripts (available at osf.io/7qnwu/). Although this method does not provide evidence for whether an ERP effect occurs in a particular spatiotemporal region (i.e., a specific cluster), it allows for the identification of regions of interest for further investigation and provides evidence for a difference in the ERP response to two conditions more generally (i.e., by rejecting the null hypothesis that the ERP data of both conditions come from the same probability distribution; see Maris and Oostenveld 2007; Sassenhagen and Draschkow 2019). Importantly, cluster-based permutation does not suffer from the multiple-comparisons problem, which inflates the risk of falsely rejecting the null hypothesis (e.g., Benjamini and Hochberg 1995; Dunnett 1955; Hochberg 1988). Our implementation of the cluster-based permutation test is highly similar to that of the EEG analysis tool FieldTrip (Oostenveld et al. 2011). First, t-values for the ERP condition differences at each spatiotemporal location are calculated. t-values above 2 or below −2 from neighboring spatiotemporal locations are then grouped into positive and negative clusters and summed for each cluster. Two probability distributions of summed t-values are then calculated on the basis of cluster-based Monte Carlo permutation. This involves randomly assigning data sets to conditions multiple times and, for each permutation, calculating cluster-based, summed t-values in the same way as was done for the original data. The distributions of maximum and minimum t-values from each permutation then serve as the probability distributions against which the observed summed t-values are tested. This distribution approximates the probability distribution of the largest t-values that can be expected under the null hypothesis that the ERP data from both conditions come from the same probability distribution and therefore do not differ. In our analyses, we included clusters with t-values that had at most a 5% probability of being observed under the null hypothesis (i.e., α = 0.05). For stimulus presentation ERP data, we performed a cluster-based permutation test that compared the congruent and incongruent conditions (reported in the Supplementary Materials). For auditory target ERPs, we first compared the congruent and incongruent conditions across modalities and then compared modality differences within the congruent and the incongruent conditions separately.
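A simplified, time-only sketch of this permutation scheme is given below (hypothetical Python; it collapses positive and negative clusters via absolute t-values and omits the spatial neighbor structure, so it is a conceptual illustration rather than the FieldTrip-style implementation used in the study):

```python
import numpy as np

def cluster_permutation_1d(a, b, n_perm=1000, t_crit=2.0, seed=0):
    """Cluster-based permutation test on the paired condition difference.
    a, b: (n_subjects, n_times) per-subject condition averages."""
    rng = np.random.default_rng(seed)

    def tvals(x):                          # one-sample t over subjects
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(x.shape[0]))

    def max_cluster(t):                    # largest summed supra-threshold run
        best, run = 0.0, 0.0
        for v in np.abs(t):
            run = run + v if v > t_crit else 0.0
            best = max(best, run)
        return best

    diff = a - b
    observed = max_cluster(tvals(diff))
    null = np.empty(n_perm)
    for i in range(n_perm):                # randomly flip condition labels
        signs = rng.choice([-1.0, 1.0], size=diff.shape[0])[:, None]
        null[i] = max_cluster(tvals(diff * signs))
    p = (null >= observed).mean()          # Monte Carlo cluster p-value
    return observed, p
```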

We also performed analyses of single-trial (rather than subject-average) mean ERP amplitudes across time windows and electrode groups (similar to Frömer et al. 2018), using Bayesian and frequentist linear mixed-effect models. These analyses were conducted on ERPs time-locked to the stimulus presentation (reported in the Supplementary Materials), on the one hand, and to the auditory target, on the other. As motivated by the results of our cluster-based permutation analyses (see below), we performed analyses on mean amplitudes in the P300 time window, ranging from 320 to 580 ms, across the centro-frontal (CF) scalp region, which consisted of electrodes AF3, AFz, AF4, F1, Fz, F2, FC1, FCz, and FC2. We also conducted three separate analyses in the late 600–700-, 700–800-, and 800–900-ms time windows across the centro-occipital (CO) region, consisting of P4, P2, Pz, P1, P3, PO8, PO4, POz, PO3, PO7, O2, Oz, and O1 (see Supplementary Fig. 8 in Supplementary Materials), where a positive slow wave (PSW) effect was observed.

As a final step, we also investigated the relationship between RTs and single-trial mean ERP amplitudes in the P300 and the late PSW time windows, again using both Bayesian and frequentist linear mixed-effect models. In addition to the fixed effects for the confounders listed above, these models contained dummy-coded predictors for each condition (olfactory-congruent, visual-congruent, olfactory-incongruent, and visual-incongruent) and corresponding RT interaction terms (e.g., RT × olfactory-congruent). Intercept terms were excluded. Each interaction term therefore tests whether the RT slope within each condition differs from zero, that is, whether within-condition ERP amplitudes are negatively or positively correlated with RTs.


CCN Forum - Development Talks

Abstract:
Audiovisual integration plays a vital role in speech perception, especially during face-to-face communication. Crossmodal activation of auditory processes by visual stimuli is an important aspect of natural speech perception. It has previously been shown that lip reading activates areas in the primary auditory cortex (PAC), including the superior temporal gyrus (STG). Though visual stimuli have been shown to influence neural representations in auditory cortex, it has not been conclusively shown whether auditory and visual stimuli activate the same population of neurons in the PAC. Here, we examine the spatial distribution of silent lip-reading signals in the PAC in a large cohort of patients to study whether this is indeed the case. We recorded electrocorticographic (ECoG) activity from macroscopic depth electrodes implanted within the STG of 13 patients with epilepsy. On each trial, patients were presented with one of three types of stimuli: (1) single phonemes, (2) videos showing the lip movements articulating each phoneme (visemes), or (3) videos showing audiovisual speech movements. Group-level analyses using parametric statistics showed that visual lip reading generates neural responses broadly along the PAC, spatially overlapping with the distribution of phoneme responses. Furthermore, we also investigated whether the identity of these phonemes and visemes could be discriminated from neural responses in auditory areas. Several electrodes across patients reliably discriminated between specific instances of the phonemes or visemes. However, preliminary analyses indicate that auditory and visual speech information are encoded at distinct areas of the STG. These results demonstrate that observing silent visual speech crossmodally activates speech-processing areas in a content-specific manner in the PAC. We also show that maximum information for phoneme discrimination in the PAC is carried in the 4–8-Hz frequency band.

Title:
Probing cortical inhibition in visual cortex with transcranial magnetic stimulation

Abstract:
Transcranial magnetic stimulation (TMS) is a non-invasive method for stimulating localized brain regions. Despite widespread use in motor cortex, TMS is seldom applied in sensory areas because its outcome measures there are variable and qualitative. Our objective was to assess the reliability and validity of tracing TMS-induced phosphenes (short-lived artificial percepts), and to investigate the stimulation parameters necessary to elicit decreased visual cortex excitability with paired-pulse TMS at short inter-stimulus intervals.





Contextual Influences on Visual Processing

Abstract

The visual image formed on the retina represents an amalgam of visual scene properties, including the reflectances of surfaces, their relative positions, and the type of illumination. The challenge facing the visual system is to extract the “meaning” of the image by decomposing it into its environmental causes. For each local region of the image, that extraction of meaning is only possible if information from other regions is taken into account. Of particular importance is a set of image cues revealing surface occlusion and/or lighting conditions. These information-rich cues direct the perceptual interpretation of other more ambiguous image regions. This context-dependent transformation from image to perception has profound—but frequently under-appreciated—implications for neurophysiological studies of visual processing: To demonstrate that neuronal responses are correlated with perception of visual scene properties, rather than visual image features, neuronal sensitivity must be assessed in varied contexts that differentially influence perceptual interpretation. We review a number of recent studies that have used this context-based approach to explore the neuronal bases of visual scene perception.


AP Psychology Study Resource: About Somatosensory Cortex


Have you ever stopped to think about how we all feel or experience certain things in the same way as others?

How do you know the color you perceive as being “red” is the same “red” as the person next to you?

What if their red is your green?

While we can’t answer these mind-boggling questions completely, we can explore the brain’s role in processing external stimuli, like colors, textures, sounds, and so on.

This is where your somatosensory cortex comes into play.

Responsible for processing external stimuli (or sensations), it plays an integral role in our day-to-day lives.

Below, we will explore this cortex in more detail, including how it works and what role it potentially plays in prosocial behavior.

