Publications

These articles are not displayed in chronological order but are arranged by subject. For a chronological list, see the CV page.

Attention and Consciousness

Introduction to research topic: attention and consciousness in different senses
Tsuchiya, N., van Boxtel, J.J.A.
Frontiers in Psychology 4:204 pp. 1-4 (2013)
Consciousness and Attention: On sufficiency and necessity
van Boxtel, J.J.A.*, Tsuchiya, N.*, & Koch, C.  (* = equal contribution)
Frontiers in Psychology 1:217 pp. 1-13 (2010)
Is recurrent processing necessary and/or sufficient for consciousness? [Commentary]
Tsuchiya, N., & van Boxtel, J.J.A.
Cognitive Neuroscience 1(3):230-31 (2010)
Opposing effects of attention and consciousness on afterimages
van Boxtel, J.J.A., Tsuchiya, N., & Koch, C.
Proc. Natl. Acad. Sci. USA 107(19):8883-8 (2010)
A Dissociation of Attention and Awareness in Phase-sensitive but Not Phase-insensitive Visual Channels
Brascamp, J.W., van Boxtel, J.J.A., Knapen, T., & Blake, R.
Journal of Cognitive Neuroscience 22(10):2326-2344 (2010)
Multisensory congruency as a mechanism for attentional control over perceptual selection
van Ee, R., van Boxtel, J.J.A., Parker, A.L., & Alais, D.
Journal of Neuroscience 29(37):11641-11649 (2009)

Dichoptic Masking and Visual Rivalry

Visual Rivalry without spatial conflict
van Boxtel, J.J.A., & Koch, C.
Psychological Science 23(4):410-418 (2012)
Attending to auditory signals slows visual alternations in binocular rivalry
Alais, D., van Boxtel, J.J.A., Parker, A., & van Ee, R.
Vision Research 50(10):929-935 (2010)
Removal of monocular interactions equates rivalry behavior for monocular, binocular, and stimulus rivalries
van Boxtel, J.J.A., Knapen, T., Erkelens, C.J., & van Ee, R.
Journal of Vision 8(15):13, pp. 1-17 (2008)
Retinotopic and non-retinotopic stimulus encoding in binocular rivalry and the involvement of feedback
van Boxtel, J.J.A., Alais, D., & van Ee, R.
Journal of Vision 8(5):17, 1-10 (2008)
The role of temporally coarse form processing during binocular rivalry
van Boxtel, J.J.A., Alais, D., Erkelens, C.J., & van Ee, R.
PLoS ONE 3(1):e1429 (2008)
Dichoptic masking and Binocular rivalry share common perceptual dynamics
van Boxtel, J.J.A., van Ee, R., & Erkelens, C.J.
Journal of Vision 7(14):3, 1-11 (2007)
Distance in feature space determines exclusivity in visual rivalry
Knapen, T.H.J., Kanai, R., Brascamp, J.W., van Boxtel, J.J.A., & van Ee, R.
Vision Research 47(26):3269-3275 (2007)

Motion and Biological Motion Perception

Intact recognition, but attenuated adaptation, for biological motion in youth with autism spectrum disorder
van Boxtel, J.J.A., Dapretto, M., & Lu, H.
Autism Research (2015)
Joints and their relations as critical features in action discrimination: evidence from a classification image method
van Boxtel, J.J.A., & Lu, H.
Journal of Vision 15(1):20 (2015)
A biological motion toolbox for reading, displaying, and manipulating motion capture data in research settings
van Boxtel, J.J.A., & Lu, H.
Journal of Vision 13(12): 7, 1-16 (2013)
Impaired global, and compensatory local, biological motion processing in people with high levels of autistic traits
van Boxtel, J.J.A., & Lu, H.
Frontiers in Psychology 4:209 pp. 1-10 (2013)
A predictive coding perspective on autism spectrum disorders
van Boxtel, J.J.A., & Lu, H.
Frontiers in Psychology 4:19, pp. 1-3 (2013)
Signature movements lead to efficient search for threatening actions
van Boxtel, J.J.A., & Lu, H.
PLoS ONE 7(5):e37085 (2012)
Visual search by action category
van Boxtel, J.J.A., & Lu, H.
Journal of Vision 11(7):19 (2011)
A single motion system suffices for global-motion perception
van Boxtel, J.J.A., & Erkelens, C.J.
Vision Research 46(28):4634-4645 (2006)
A single system explains human speed perception
van Boxtel, J.J.A., van Ee, R., & Erkelens, C.J.
Journal of Cognitive Neuroscience 18(11):1808-1819 (2006)

Action-Perception Cycle | Action for Perception

Parallel programming of saccades during natural scene viewing: Evidence from eye movement positions
Wu, E.X., Gilani, S.O., van Boxtel, J.J.A., Amihai, I., Chua, F.K., & Yen, S.C.
Journal of Vision, 13(12):17, 1-14 (2013)
Depth perception by the active observer
Wexler, M., & van Boxtel, J.J.A.
Trends in Cognitive Sciences 9(9):431-438 (2005)
Perception of plane orientation from self-generated and passively observed optic flow
van Boxtel, J.J.A., Wexler, M., & Droulez, J.
Journal of Vision 3(5):318-32 (2003)
Abstracts
Introduction to research topic: attention and consciousness in different senses

Frontiers in Psychology 4:204 pp. 1-4 (2013)

The question of the origin of consciousness has engaged scientists and philosophers for centuries. Early scholars relied on introspection, leading some to conclude that attention is necessary for consciousness, in some cases equating attention and consciousness. Such a tight relationship between attention and consciousness has also been proposed by many modern theorists (Posner 1994; Merikle and Joordens 1997; Mack and Rock 1998; Chun and Wolfe 2000; O'Regan and Noe 2001; Mole 2008; De Brigard and Prinz 2010; Prinz 2011; Cohen, Cavanagh et al. 2012). With the development of neuroscientific methods, the relationship between attention and consciousness has come under increasing scrutiny. These studies often operationally defined the effects of attention (e.g., reduced reaction time, improved performance, etc.) and consciousness (e.g., objective invisibility, subjective visibility and confidence report (Seth, Dienes et al. 2008; Sandberg, Bibby et al. 2011)) using a variety of methods to manipulate attention (e.g., cueing, divided attention, etc.) and consciousness (e.g., masking, crowding, and binocular rivalry (Kim and Blake 2005)). These empirical studies have culminated in recent proposals that attention and consciousness are supported by different neuronal processes and need not be tightly correlated at all times (Iwasaki 1993; Baars 1997; Hardcastle 1997; Kentridge, Heywood et al. 1999; Naccache, Blandin et al. 2002; Lamme 2003; Woodman and Luck 2003; Bachmann 2006; Koch and Tsuchiya 2007; van Boxtel, Tsuchiya et al. 2010). Our original motivation to edit this Special Issue was threefold: (1) to gather current, diverse views on the relationship between consciousness and attention, (2) to invite reviews on consciousness and attention in non-visual modalities, and (3) to invite empirical studies of consciousness, noting their implications from the viewpoint of attention, or vice versa. As summarized below, these goals were largely achieved thanks to the 17 contributions to this issue.

Consciousness and Attention: On sufficiency and necessity

Frontiers in Psychology 1:217 pp. 1-13 (2010)

Recent research has slowly corroded a belief that selective attention and consciousness are so tightly entangled that they cannot be individually examined. In this review, we summarize psychophysical and neurophysiological evidence for a dissociation between top-down attention and consciousness. The evidence includes recent findings that show subjects can attend to perceptually invisible objects. More contentious is the finding that subjects can become conscious of an isolated object, or the gist of the scene in the near absence of top-down attention; we critically re-examine the possibility of ‘complete’ absence of top-down attention. We also cover the recent flurry of studies that utilized independent manipulation of attention and consciousness. These studies have shown paradoxical effects of attention, including examples where top-down attention and consciousness have opposing effects, leading us to strengthen and revise our previous views. Neuroimaging studies with EEG, MEG and fMRI are uncovering the distinct neuronal correlates of selective attention and consciousness in dissociative paradigms. These findings point to a functional dissociation: attention as analyzer and consciousness as synthesizer. Separating the effects of selective visual attention from those of visual consciousness is of paramount importance to untangle the neural substrates of consciousness from those for attention.

Is recurrent processing necessary and/or sufficient for consciousness? [Commentary]

Cognitive Neuroscience 1(3):230-31 (2010)

While we agree with Lamme’s general framework, we are not so convinced by his mapping between psychological concepts and their underlying neuronal mechanisms. Specifically, we doubt that recurrent processing is either necessary or sufficient for consciousness. The gist of a scene may be consciously perceived through purely feedforward processing, without recurrence. Neurophysiological studies of perceptual suppression show recurrent processing in visual cortex for consciously invisible objects. While the neuronal correlates of attention and consciousness remain to be clarified, we agree with Lamme that these two processes are independent, as evinced by our recent demonstration of opposing effects of attention and consciousness. [Commentary on Lamme, V., How neuroscience will change our view on consciousness. Cognitive Neuroscience 1(3)]

Opposing effects of attention and consciousness on afterimages

Proc. Natl. Acad. Sci. USA 107(19):8883-8 (2010)

The brain’s ability to handle sensory information is influenced by both selective attention and consciousness. There is no consensus on the exact relationship between these two processes and whether or not they are distinct. So far, no experiment had manipulated both simultaneously. We carried out the first full factorial 2x2 study of the simultaneous influences of attention and consciousness (as assayed by visibility) on perception, correcting for possible concurrent changes in attention and consciousness. We investigated the duration of afterimages for all four combinations of high versus low attention and visible versus invisible grating. We demonstrate that selective attention and visual consciousness have opposite effects: paying attention to the grating decreases the duration of its afterimage, while consciously seeing the grating increases the afterimage duration. These data provide clear evidence for distinctive influences of selective attention and consciousness on visual perception.

A Dissociation of Attention and Awareness in Phase-sensitive but Not Phase-insensitive Visual Channels

Journal of Cognitive Neuroscience 22(10):2326-2344 (2010)

The elements most vivid in our conscious awareness are the ones to which we direct our attention. Scientific study confirms the impression of a close bond between selective attention and visual awareness, yet the nature of this association remains elusive. Using visual afterimages as an index, we investigate neural processing of stimuli as they enter awareness and as they become the object of attention. We find evidence of response enhancement accompanying both attention and awareness, both in the phase-sensitive neural channels characteristic of early processing stages and in the phase-insensitive channels typical of higher cortical areas. The effects of attention and awareness on phase-insensitive responses are positively correlated, but in the same experiments, we observe no correlation between the effects on phase-sensitive responses. This indicates independent signatures of attention and awareness in early visual areas yet a convergence of their effects at more advanced processing stages.

Multisensory congruency as a mechanism for attentional control over perceptual selection

Journal of Neuroscience 29(37):11641-11649 (2009)

The neural mechanisms underlying attentional selection of competing neural signals for awareness remain an unresolved issue. We studied attentional selection, employing perceptually ambiguous stimuli in a novel multisensory paradigm that combined competing auditory and competing visual stimuli. We demonstrate that the ability to select, and attentively hold, one of the competing alternatives in either sensory modality is greatly enhanced when there is a matching crossmodal stimulus. Intriguingly, this multimodal enhancement of attentional selection seems to require a conscious act of attention, as passively experiencing the multisensory stimuli did not enhance control over the stimulus. We also demonstrate that congruent auditory or tactile information, and combined auditory-tactile information, aids attentional control over competing visual stimuli, and vice versa. Our data suggest a functional role for recently found neurons that combine voluntarily initiated attentional functions across sensory modalities. We argue that these units provide a mechanism for structuring multisensory inputs that are then used to selectively modulate early (unimodal) cortical processing, boosting the gain of task-relevant features for willful control over perceptual awareness.

Visual Rivalry without spatial conflict

Psychological Science 23(4):410-418 (2012)

Visual rivalry has been characterized extensively. It has been shown to require spatial conflict, even in studies that show nonspatial (i.e., non-retinal) influences on rivalry. Unexpectedly, we identified visual rivalry formation in the complete absence of spatial conflict. Visual rivalry ensued when we placed a non-ambiguous motion quartet in a non-spatial (in our case object-based) reference frame. Moreover, a motion quartet that is displaced within a non-spatial reference frame does not induce rivalry despite the presence of spatial conflict. This shows that non-spatial, object-based processing can overrule retinotopic processing and prevent rivalry from occurring when the stimulus is unambiguous in an object-based reference frame. Together, our results identify a potent high-level conflict resolution stage independent of spatial low-level visual conflict. This independence of spatial overlap provides an advantage to the visual system, allowing conflict processing when an object is non-stationary on the retina, e.g., during frequently-occurring eye movements.

Attending to auditory signals slows visual alternations in binocular rivalry

Vision Research 50(10):929-935 (2010)

A previous study has shown that diverting attention from binocular rivalry to a visual distractor task results in a slowing of rivalry alternation rate between simple orthogonal orientations. Here, we investigate whether the slowing of visual perceptual alternations will occur when attention is diverted to an auditory distractor task, and we extend the investigation by testing this for two kinds of binocular rivalry stimuli and for the Necker cube. Our results show that doing the auditory attention task does indeed slow visual perceptual alternations, that the slowing effect is a graded function of attentional load, and that the attentional slowing effect is less pronounced for grating rivalry than for house/face rivalry and for the Necker cube. These results are explained in terms of supramodal attentional resources modulating a high-level interpretative process in perceptual ambiguity, together with a role for feedback to early visual processes in the case of binocular rivalry.

Removal of monocular interactions equates rivalry behavior for monocular, binocular, and stimulus rivalries

Journal of Vision 8(15):13, pp. 1-17 (2008)

When the two eyes are presented with conflicting stimuli, perception starts to fluctuate over time (i.e., binocular rivalry). A similar fluctuation occurs when two patterns are presented to a single eye (i.e., monocular rivalry), or when they are swapped rapidly and repeatedly between the eyes (i.e., stimulus rivalry). Although all these cases lead to rivalry, in quantitative terms these modes of rivalry are generally found to differ significantly. We studied these different modes of rivalry with identical intermittently shown stimuli while varying the temporal layout of stimulation. We show that the quantitative differences between the modes of rivalry are caused by the presence of monocular interactions between the rivaling patterns; the introduction of a blank period just before a stimulus swap changed the number of rivalry reports to the extent that monocular and stimulus rivalries were inducible over ranges of spatial frequency content and contrast values that were nearly identical to binocular rivalry. Moreover when monocular interactions did not occur the perceptual dynamics of monocular, binocular, and stimulus rivalries were statistically indistinguishable. This range of identical behavior exhibited a monocular (∼50 ms) and a binocular (∼350 ms) limit. We argue that a common binocular, or pattern-based, mechanism determines the temporal constraints for these modes of rivalry.

Retinotopic and non-retinotopic stimulus encoding in binocular rivalry and the involvement of feedback

Journal of Vision 8(5):17, 1-10 (2008)

Adaptation is one of the key constituents of the perceptual alternation process during binocular rivalry, as it has been shown that preadapting one of the rivaling pairs before rivalry onset biases perception away from the adapted stimulus during rivalry. We investigated the influence of retinotopic and spatiotopic preadaptation on binocular rivalry. We show that for grating stimuli, preadaptation only influences rivalry when adaptation and rivalry locations are retinotopically matched. With more complex house and face stimuli, effects of preadaptation are found for both retinotopic and spatiotopic preadaptation, showing the importance of spatiotopic encoding in binocular rivalry. We show, furthermore, that adaptation to phase-scrambled faces results in retinotopic effects only, demonstrating the importance of form content for spatiotopic adaptation effects, as opposed to spatial frequency content. Are the spatiotopic adaptation influences on rivalry caused by direct spatiotopic stimulus interactions, or instead are they due to altered feedback from the adapted spatiotopic representations to the retinotopic representations that are involved in rivalry? By using rivaling face and grating stimuli that minimize rivalry between spatiotopic representations while still engaging these representations in stimulus encoding, we show that at least part of the preadaptation effects with face stimuli depend on feedback information.

The role of temporally coarse form processing during binocular rivalry

PLoS ONE 3(1):e1429 (2008)

Presenting the eyes with spatially mismatched images causes a phenomenon known as binocular rivalry—a fluctuation of awareness whereby each eye's image alternately determines perception. Binocular rivalry is used to study interocular conflict resolution and the formation of conscious awareness from retinal images. Although the spatial determinants of rivalry have been well-characterized, the temporal determinants are still largely unstudied. We confirm a previous observation that conflicting images do not need to be presented continuously or simultaneously to elicit binocular rivalry. This process has a temporal limit of about 350 ms, which is an order of magnitude larger than the visual system's temporal resolution. We characterize this temporal limit of binocular rivalry by showing that it is independent of low-level information such as interocular timing differences, contrast-reversals, stimulus energy, and eye-of-origin information. This suggests the temporal factors maintaining rivalry relate more to higher-level form information, than to low-level visual information. Systematically comparing the role of form and motion—the processing of which may be assigned to ventral and dorsal visual pathways, respectively—reveals that this temporal limit is determined by form conflict rather than motion conflict. Together, our findings demonstrate that binocular conflict resolution depends on temporally coarse form-based processing, possibly originating in the ventral visual pathway.

Dichoptic masking and Binocular rivalry share common perceptual dynamics

Journal of Vision 7(14):3, 1-11 (2007)

Two of the strongest tools to manipulate visual awareness of potentially salient stimuli are binocular rivalry and dichoptic masking. Binocular rivalry is induced by presenting incompatible images to the two eyes over prolonged periods of time, leading to an alternating perception of the two images. Dichoptic masking is induced when two images are presented once in rapid succession, leading to the perception of just one of the images. Although these phenomena share some key characteristics, most notably the ability to erase from awareness potentially very salient stimuli, their relationship is poorly understood. We investigated the perceptual dynamics during long-lasting dynamic stimulation leading to binocular rivalry or dichoptic masking. We show that the perceptual dynamics under dichoptic masking conditions meet the criteria used to classify a process as binocular rivalry; that is, (1) Levelt's 2nd proposition is obeyed; (2) perceptual dominance durations follow a gamma distribution; and (3) dominance durations are sequentially independent. We suggest that binocular rivalry and dichoptic masking may be mediated by the same inhibitory mechanisms.
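
Two of the three criteria above are straightforward to check numerically on a recorded sequence of dominance durations. The sketch below is a generic illustration (assuming NumPy and SciPy; it is not the analysis code used in the paper): it fits a gamma distribution to the durations and tests sequential independence via the lag-1 correlation. Testing Levelt's 2nd proposition additionally requires comparing conditions of different stimulus strength and is omitted here.

```python
# Minimal sketch (not the paper's analysis code): check two rivalry criteria
# on a sequence of perceptual dominance durations, assuming NumPy and SciPy.
import numpy as np
from scipy import stats

def rivalry_criteria(durations):
    """durations: 1-D array of dominance durations (seconds), in temporal order."""
    durations = np.asarray(durations, dtype=float)
    # Criterion (2): dominance durations are well described by a gamma distribution.
    shape, loc, scale = stats.gamma.fit(durations, floc=0)
    # Criterion (3): successive durations are sequentially independent (lag-1 r near 0).
    r, p = stats.pearsonr(durations[:-1], durations[1:])
    return {"gamma_shape": shape, "gamma_scale": scale, "lag1_r": r, "lag1_p": p}

# Example with simulated, gamma-distributed durations.
rng = np.random.default_rng(0)
print(rivalry_criteria(rng.gamma(shape=3.0, scale=0.8, size=200)))
```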

Distance in feature space determines exclusivity in visual rivalry

Vision Research 47(26):3269-3275 (2007)

Visual rivalry is thought to be a distributed process that simultaneously takes place at multiple levels in the visual processing hierarchy. Also, the different types of rivalry, such as binocular and monocular rivalry, are thought to engage shared underlying mechanisms. We hypothesized that the amount of perceptual suppression during rivalry as measured by the total duration of fully exclusive perceptual dominance is determined by a distance in a neurally represented feature space. This hypothesis can be contrasted with the possibility that the brain constructs an internal model of the outside world using full-fledged object representations, and that perceptual suppression is due to an appraisal of the likelihood of the particular stimulus configuration at a high, object-based level. We applied color and stereo-depth differences between monocular rivalry stimulus gratings, and manipulated color and eye-of-origin information in binocular rivalry using the flicker & switch presentation paradigm. Our data show that exclusivity in visual rivalry increases with increased difference in feature space without regard for real-world constraints, and that eye-of-origin information may be regarded as a segregating feature that functions in a manner similar to color and stereo-depth information. Moreover, distances defined in multiple feature dimensions additively and independently increase the amount of perceptual exclusivity and coherence in both monocular and binocular rivalry. We conclude that exclusivity in visual rivalry is determined by a distance in feature space that is subtended by multiple stimulus features.

Intact recognition, but attenuated adaptation, for biological motion in youth with autism spectrum disorder

Autism Research (2015)

Given the ecological importance of biological motion and its relevance to social cognition, considerable effort has been devoted over the past decade to studying biological motion perception in autism. However, previous studies have asked observers to detect or recognize briefly presented human actions placed in isolation, without spatial or temporal context. Research on typical populations has shown the influence of temporal context in biological motion perception: prolonged exposure to one action gives rise to an aftereffect that biases perception of a subsequently displayed action. Whether people with autism spectrum disorders (ASD) show such adaptation effects for biological motion stimuli remains unknown. To address this question, the present study examined how well youth with ASD recognize ambiguous actions and adapt to recently-observed actions. Compared to typically-developing (TD) controls, youth with ASD showed no differences in perceptual boundaries between action categories, indicating an intact ability to recognize actions. However, children with ASD showed weakened adaptation to biological motion. It is unlikely that the reduced action adaptability in autism was due to a delayed developmental trajectory, since older children with ASD showed weaker adaptation to actions than younger children with ASD. Our results further suggest that high-level (i.e., action) processing weakens with age for children with ASD, but this change may be accompanied by a potentially compensatory mechanism based on more involvement of low-level (i.e., motion) processing.

Joints and their relations as critical features in action discrimination: evidence from a classification image method

Journal of Vision 15(1):20 (2015)

Classifying an action as a runner or a walker is a seemingly effortless process. However, it is difficult to determine with hypothesis-driven research which features are used, because biological motion stimuli generally consist of about a dozen joints, yielding an enormous number of potential relationships among them. Here, we develop a hypothesis-free approach based on a classification image method, using experimental data from relatively few trials (∼1,000 trials per subject). Employing ambiguous actions morphed between a walker and a runner, we identified three types of features that play important roles in discriminating bipedal locomotion presented in a side view: (a) critical joint features, supported by the finding that the similarity of the movements of the feet and wrists to prototypical movements of these joints was most reliably used across all participants; (b) structural features, indicated by contributions from almost all other joints, potentially through a form-based analysis; and (c) relational features, revealed by statistical correlations between joint contributions, specifically relations between the two feet, and relations between the wrists/elbow and the hips. When the actions were inverted, only critical joint features continued to significantly influence discrimination responses. When actions were presented with continuous depth rotation, critical joint features and relational features were strongly associated with responses. Using a double-pass paradigm, we estimated that the internal noise is about twice as large as the external noise, consistent with previous findings. Overall, our novel design revealed a rich set of critical features that are used in action discrimination. The visual system flexibly selects a subset of features depending on viewing conditions.
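
As a rough illustration of the core logic of a classification image analysis (the textbook reverse-correlation recipe, not this paper's specific action-morphing pipeline), the Python sketch below computes a classification image as the difference between the mean trial-wise noise preceding one response and the other; the data layout, names, and simulated data are hypothetical.

```python
# Generic classification-image sketch (illustrative only; hypothetical data layout).
import numpy as np

def classification_image(perturbations, responses):
    """perturbations: (n_trials, n_features) trial-wise noise added to the stimulus.
    responses: (n_trials,) binary choices, e.g. 1 = "runner", 0 = "walker".
    Returns, per feature, the mean noise on "runner" trials minus "walker" trials."""
    perturbations = np.asarray(perturbations, dtype=float)
    responses = np.asarray(responses).astype(bool)
    return perturbations[responses].mean(axis=0) - perturbations[~responses].mean(axis=0)

# Example with simulated data in which feature 3 drives the "runner" response.
rng = np.random.default_rng(1)
noise = rng.normal(size=(1000, 13))                      # ~1,000 trials, 13 "joints"
resp = (noise[:, 3] + 0.5 * rng.normal(size=1000)) > 0   # noisy simulated observer
ci = classification_image(noise, resp)                   # peaks at feature 3
print(ci.round(2))
```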

A biological motion toolbox for reading, displaying, and manipulating motion capture data in research settings

Journal of Vision 13(12): 7, 1-16 (2013)

Biological motion research is an increasingly active field, with a great potential to contribute to a wide range of applications, such as behavioral monitoring/motion detection in surveillance situations, intention inference in social interactions, and diagnostic tools in autism research. In recent years, a large amount of motion capture data has become freely available online, potentially providing rich stimulus sets for biological motion research. However, there currently does not exist an easy-to-use tool to extract, present and manipulate motion capture data in the MATLAB environment, which many researchers use to program their experiments. We have developed the Biomotion Toolbox, which allows researchers to import motion capture data in a variety of formats, to display actions using Psychtoolbox 3, and to manipulate action displays in specific ways (e.g., inversion, three-dimensional rotation, spatial scrambling, phase-scrambling, and limited lifetime). The toolbox was designed to allow researchers with a minimal level of MATLAB programming skills to code experiments using biological motion stimuli.
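
The toolbox itself is written for MATLAB and Psychtoolbox 3. Purely as an illustration of the kind of manipulations described above, here is a minimal NumPy sketch of two of them (inversion and spatial scrambling) applied to point-light data stored as a frames x joints x 2 array; the function names and data layout are assumptions, not the toolbox's API.

```python
# Illustrative sketch only; the published toolbox is MATLAB/Psychtoolbox 3 based.
import numpy as np

def invert(action):
    """Turn an action upside down by mirroring and re-centering the vertical axis."""
    flipped = action.copy()
    flipped[..., 1] = -flipped[..., 1]
    flipped[..., 1] -= flipped[..., 1].mean()
    return flipped

def spatial_scramble(action, spread=1.0, rng=None):
    """Offset each joint's whole trajectory by a random amount, destroying the
    global body configuration while preserving each joint's local motion."""
    rng = np.random.default_rng(rng)
    offsets = rng.uniform(-spread, spread, size=(1, action.shape[1], 2))
    return action + offsets

# Example on fake data: 100 frames, 13 joints, (x, y) coordinates.
action = np.random.rand(100, 13, 2)
upside_down = invert(action)
scrambled = spatial_scramble(action, spread=0.5, rng=0)
```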

Impaired global, and compensatory local, biological motion processing in people with high levels of autistic traits

Frontiers in Psychology 4:209 pp. 1-10 (2013)

People with Autism Spectrum Disorder (ASD) are hypothesized to have poor high-level processing but superior low-level processing, causing impaired social recognition, and a focus on non-social stimulus contingencies. Biological motion perception provides an ideal domain to investigate exactly how ASD modulates the interaction between low and high-level processing, because it involves multiple processing stages, and carries many important social cues. We investigated individual differences among typically developing observers in biological motion processing, and whether such individual differences associate with the number of autistic traits. In Experiment 1, we found that individuals with fewer autistic traits were automatically and involuntarily attracted to global biological motion information, whereas individuals with more autistic traits did not show this pre-attentional distraction. We employed an action adaptation paradigm in the second study to show that individuals with more autistic traits were able to compensate for deficits in global processing with an increased involvement in local processing. Our findings can be interpreted within a predictive coding framework, which characterizes the functional relationship between local and global processing stages, and explains how these stages contribute to the perceptual difficulties associated with ASD.

A predictive coding perspective on autism spectrum disorders

Frontiers in Psychology 4:19, pp. 1-3 (2013)

In a recent article entitled “When the world becomes ‘too real’: Bayesian explanation of autistic perception”, Elizabeth Pellicano and David Burr (Pellicano & Burr, 2012b) introduce an intriguing new hypothesis, a Bayesian account, concerning the possible origins of perceptual deficits in Autism Spectrum Disorder (ASD). This Bayesian account explains why ASD impacts perception in systematic ways, but it does not clearly explain how. Most prominently, the Bayesian account lacks connections to the neural computation performed by the brain, and does not provide mechanistic explanations for ASD (Colombo & Series, 2012; Rust & Stocker, 2010). Nor does the Bayesian account explain what the biological origin is of the ‘prior’—the essential addition of the Bayesian models. In Marr’s terminology (Marr, 1982), the Pellicano and Burr paper proposes a computational-level explanation for ASD, but not an account for the other two levels, representation and implementation. We propose that a predictive coding framework (schematized in Figure 1) may fill the gap and generate a testable framework open to further experimental investigations. [This article is a peer-reviewed comment on “When the world becomes ‘too real’: Bayesian explanation of autistic perception”, Elizabeth Pellicano and David Burr, in Trends in Cognitive Sciences.]

Signature movements lead to efficient search for threatening actions

PLoS One. 2012;7(5):e37085 (2012)

The ability to find and evade fighting persons in a crowd is potentially life-saving. To investigate how the visual system processes threatening actions, we employed a visual search paradigm with threatening boxer targets among emotionally-neutral walker distractors, and vice versa. We found that a boxer popped out for both intact and scrambled actions, whereas walkers did not. A reverse correlation analysis revealed that observers’ responses clustered around the time of the “punch”, a signature movement of boxing actions, but not around specific movements of the walker. These findings support the existence of a detector for signature movements in action perception. This detector helps in rapidly detecting aggressive behavior in a crowd, potentially through an expedited (sub)cortical threat-detection mechanism.

Visual search by action category

Journal of Vision 11(7):19 (2011)

Humans are sensitive to different categories of actions due to their importance in social interactions. However, biological motion research has been heavily tilted toward the use of walking figures. Employing point-light animations (PLAs) derived from motion capture data, we investigated how different activities (boxing, dancing, running, and walking) related to each other during action perception, using a visual search task. We found that differentiating between actions requires attention in general. However, a search asymmetry was revealed between boxers and walkers, i.e., searching for a boxer among walkers is more efficient than searching for a walker among boxers, suggesting the existence of a critical feature for categorizing these two actions. The similarities among the various actions were derived from hierarchical clustering of search slopes. Walking and running proved to be most related, followed by dancing and then boxing. Signal detection theory was used to conduct a non-parametric ROC analysis, revealing that human performance in visual search is not fully explained by low-level motion information.

A single motion system suffices for global-motion perception

Vision Research 46(28):4634-4645 (2006)

Global-motion perception is the perception of coherent motion in a noisy motion stimulus. Thresholds for coherent motion perception were measured for different combinations of signal and noise speeds. Previous research showed that thresholds were elevated when signal and noise speeds were similar, but not when they were different. The regions of increased threshold values for low and high signal speeds showed little overlap. On the basis of this evidence, two independent speed-tuned systems were proposed: one for slow and one for fast motion. However, in those studies only two signal speeds were used. We expanded the results by measuring threshold curves for four different signal speeds. Considerable overlap of the threshold curves was found between conditions. These results speak against a bipartite global-motion system. Model simulations indicate that present and previous experimental results can be produced by a single motion system, provided that the mechanisms within it are speed-tuned.

A single system explains human speed perception

Journal of Cognitive Neuroscience 18(11):1808-1819 (2006)

Motion is fully described by a direction and a speed. The processing of direction information by the visual system has been extensively studied; much less is known, however, about the processing of speed. Although it is generally accepted that the direction of motion is processed by a single motion system, no such consensus exists for speed. Psychophysical data from humans suggest two separate systems processing luminance-based fast and slow speeds, whereas neurophysiological recordings in monkeys generally show continuous speed representation, hinting at a single system. Although the neurophysiological findings hint at a single system, they remain inconclusive, as only a limited number of cells can be measured per study and, possibly, the putative different motion systems are anatomically separate. In three psychophysical motion adaptation experiments, we show that predictions on the basis of the two-motion-system hypothesis are not met. Instead, concurrent modeling showed that both the data presented here and previous data are consistent with a single system subserving human speed perception. These findings have important implications for computational models of motion processing and the low-level organization of the process.

Parallel programming of saccades during natural scene viewing: Evidence from eye movement positions.

Journal of Vision, 13(12):17, 1-14 (2013)

Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.

Depth perception by the active observer

Trends in Cognitive Sciences 9(9):431-438 (2005)

The connection between perception and action has classically been studied in one direction only: the effect of perception on subsequent action. Although our actions can modify our perceptions externally, by modifying the world or our view of it, it has recently become clear that even without this external feedback the preparation and execution of a variety of motor actions can have an effect on three-dimensional perceptual processes. Here, we review the ways in which an observer's motor actions--locomotion, head and eye movements, and object manipulation--affect his or her perception and representation of three-dimensional objects and space. Allowing observers to act can drastically change the way they perceive the third dimension, as well as how scientists view depth perception.

Perception of plane orientation from self-generated and passively observed optic flow

Journal of Vision 3(5):318-32 (2003)

We investigated the perception of three-dimensional plane orientation--focusing on the perception of tilt--from optic flow generated by the observer's active movement around a simulated stationary object, and compared the performance to that of an immobile observer receiving a replay of the same optic flow. We found that perception of plane orientation is more precise in the active than in the immobile case. In particular, in the case of the immobile observer, the presence of shear in optic flow drastically diminishes the precision of tilt perception, whereas in the active observer, this decrease in performance is greatly reduced. The difference between active and immobile observers appears to be due to random rather than systematic errors. Furthermore, perceived slant is better correlated with simulated slant in the active observer. We conclude with a discussion of various theoretical explanations for our results.