Letters to the editors

Vol. 7, No. 3 / October 2022

To the editors:

The temporal aspects of consciousness are a book with seven seals. Often, scientific findings clash with our intuitions. We usually believe, for example, that consciousness is continuous. We perceive the trajectory of a diver on her way down in the ocean seemingly at each moment of time. However, simple considerations and experiments show that perception is discrete rather than continuous—that is, consciousness occurs only at certain moments of time.1

How can discrete perception be explained? The classic idea is that there are moments of time during which perception is constant, similar to the frames in a movie, where each frame shows only a snapshot of the world. In this scenario, motion is detected by comparing the differences between frames. There is no motion, no change, no processing within a single frame. It is often proposed that perception is not only discrete, but also rhythmic. Perception occurs with a fixed sampling rate of, for example, 10Hz, meaning that there is a new frame every 100ms. Staying with the movie metaphor: think of a surveillance camera that takes snapshots at a fixed rate. What happens in between the snapshots is lost, except for parts that may be carried over as artifacts on the camera’s sensors—so-called neural persistence—and bleed into the next frame. Rufin VanRullen has championed these ideas over the past two decades with a series of elegant experiments.

In contrast to this view, we have recently shown that discrete perception may be updated in the range of 400ms, much slower than every 100ms. This provokes deep questions, as VanRullen points out in his review. How should we perceive a movie, for example, if 16 frames fall into a single 400ms discrete period? We would only perceive the frame occurring at the moment of the conscious update, and the other frames would simply be lost. Additionally, how can there be free will if we are conscious only two or three times per second? It would seem that conscious decisions do not determine our actions, but that consciousness happens after we have acted, merely an afterthought.

If one tries to reconcile our findings with a classic discrete “snapshot” framework, as championed by VanRullen, these questions become even more challenging. But the two-stage model we propose is quite different from the classic ideas of discrete perception. We propose that only consciousness is discrete. Unconscious processing, by contrast, is continuous, sophisticated, and has high spatiotemporal resolution. This is very different from simplistic neural persistence—which would add up all information during a perceptual moment into a useless superimposed image, similar to when the shutter of a camera stays open for too long. In our model, motion is processed unconsciously by motion detectors, of which only the output is consciously perceived at the end of an unconscious processing period. Thus, we do not perceive motion while it happens in the world, nor while it is computed by motion detectors. Instead, we perceive the output of the motion detectors much later. Returning to the diver example, we do not perceive the motion exactly when it happens in the world, which is obvious, since neural processing takes time. Nor do we perceive motion exactly when neurons are representing the diver’s position. Rather, information about the diver’s trajectory is integrated over a substantial period of time, and we perceive the resulting motion after these computations are completed. The same happens in the movie example given by VanRullen: information from all the different frames is processed unconsciously for a while, and we perceive the outcome of the unconscious processing, which “summarizes” what happened in all the frames. Hence, the problems raised by VanRullen dissipate.

We propose that entire events—e.g., a diver who jumped from a high cliff following a parabolic trajectory in the blue sky, or a gull in the background flying towards the left—are processed unconsciously, and rendered conscious at one discrete moment of time. Thus, there is no comparison across frames. In addition, unconscious processing does not stop at the end of a moment; it goes on continuously without breaks. Only the conscious “readouts” occur at certain moments of time. Importantly, our model is not rhythmic. We have shown that the duration of a conscious percept depends on the processing load: the higher the load, the longer the percept.2

Our model has clear methodological implications for how we measure the duration of discrete percepts. Classically, the duration of a moment is estimated by psychophysical experiments. For example, two bars are presented one after the other. For short delays, the two are perceived as simultaneous, but for delays larger than 40ms they are perceived as non-simultaneous. Based on this timescale, it is often argued that a perceptual moment lasts 40ms. Instead, we suggest that the 40ms reflects the temporal resolution of an unconscious motion or simultaneity detector. While the experiment does not tell us about the duration of conscious percepts, it does provide a lower bound for the time of the conscious readout. The readout cannot happen before 40ms in this experiment, because otherwise we would consciously perceive first one bar and then the other, rather than perceiving them simultaneously.

As VanRullen correctly points out, our model implies that most actions are executed unconsciously. We do not see any problem here concerning free will, which we argue operates on much longer time scales. For example, we want to engage in a soccer game. This is free will. We choose to start running upfield. This, too, is free will. In contrast, fast reactive actions during the game are usually executed unconsciously—in accordance with our will. If an unconsciously triggered action goes against our will, we consciously perceive the mistaken action, and that it was against our will, at the next conscious update. In general, we propose that actions become part of the following conscious percept. Actions are part of the event that we perceive. As another example, when we catch a falling plate before it hits the ground, and in other situations where we make ultra-quick decisions, we usually cannot tell why, when, or how the action was triggered. This is because the processing was unconscious.

To reconcile our findings of long discrete unconscious periods with the shorter periods he and others have found, VanRullen proposes a three-stage model: stage 1 corresponds to unconscious processing, stage 2 to classic discrete snapshots of 100ms corresponding to short unstable conscious percepts, and stage 3 is a consolidated percept that comes several hundred milliseconds later. The unstable percept of stage 2 can be overwritten and may be lost in the consolidated memory of stage 3. VanRullen identifies stage 2 with phenomenal consciousness, and stage 3 with access consciousness. Although we cannot rule out such a model, and are even partly sympathetic to it, we propose that there are no snapshots, but ongoing unconscious processing instead. In addition, the fleeting, overwritable consciousness of stage 2 is not only hard to test empirically, it also remains conceptually vague, with all the pitfalls of inaccessible phenomenal consciousness. Why is it conscious at all, and not merely a form of unconscious processing? Phenomenal consciousness, for example, may simply reflect the unconscious processing of objects, which are not (yet) conscious, but can strongly influence actions. Whatever the final answer is, a conceptual distinction between short-term processing in the range of 100ms and longer discrete updates occurring every few hundred milliseconds seems necessary to account for current empirical evidence regarding the temporal structure of consciousness.


  1. Michael Herzog, Leila Drissi-Daoudi, and Adrien Doerig, “All in Good Time: Long-Lasting Postdictive Effects Reveal Discrete Perception,” Trends in Cognitive Sciences 24, no. 10 (2020): 826–37, doi:10.1016/j.tics.2020.07.001. 
  2. Lukas Vogelsang, Leila Drissi-Daoudi, and Michael Herzog, “What Determines the Temporal Extent of Unconscious Feature Integration?,” Journal of Vision 21, no. 9 (2021), doi:10.1167/jov.21.9.2323. 

Michael Herzog is Professor of Psychophysics at the Brain Mind Institute at the EPFL in Lausanne.

Adrien Doerig is a postdoctoral researcher in Cognitive Neuroscience and Machine Learning at the Institute of Cognitive Science, University of Osnabrück.

