It is not just that the neural correlates of consciousness may never be found, because they do not exist; it is that seeking them would continue to set back consciousness science research.
Consciousness is supposed to be different from functions, which should have ruled out the existence of correlates in the first place. Neuroscience has identified several clusters of neurons responsible for functions, like place cells, grid cells, CA1, CA2, HONs, medial/lateral divisions, and so forth. These clusters are described as areas of activity during their respective functions.
But if there were neural correlates of consciousness, would that not mean consciousness is confined to particular locations? If the argument is that there are general and specific neural correlates, why should consciousness be said to exist in some areas of the brain and not in others, such as the cerebellum, where local correlates should also be present?
Also, if general and specific correlates exist in some areas only, what guarantees that whatever produces specific correlates in one place cannot produce them anywhere else, or why should it be absent elsewhere? Is there no consciousness for the functions of the cerebellum? If the answer is that the consciousness for the cerebellum exists elsewhere, what is the name of the transport or relay that carries consciousness alone? Is it the same as, or different from, the relay between, say, one memory and another, or one emotion and the next?
Even with neurons that specify functions, or the so-called neural representations, can neurons actually set up any function? If a neuron is active, what exactly is active about the neuron that configures how any function is specified?
Activity around a neural cluster, or activity by some neurons during a function, does not indicate that the neurons are by themselves doing what it takes to structure that function. For example, what distinguishes the architecture of the neurons for one emotion from the architecture for another emotion, or for an emotion from a feeling?
Whenever neurons are active, several of their components may be active, including genes. However, for functions specifically, the two most [directly] deterministic, conceptually, are electrical and chemical signals.
Simply, no neuron is ever directly active without electrical and chemical signals. This means that while genes and other components can be active, there is no gene that represents a window, a chair, or a house, so to speak. When genes play a role, the signals are the wheels: functions are directly mechanized, conceptually, by electrical and chemical signals.
So, to move consciousness research forward, would it not be better to seek out how electrical and chemical signals might be responsible? Pursuing this would have implications for mental health, addiction care, degenerative diseases, and much more.
The insistence on neural correlates of consciousness has resulted in several stray takes based on neurons, without regard for the minimum possible responsibility factor for the functions of neurons: the signals.
Microtubules, quantum entanglement, quantum superposition, qubits, workspace, integration, prediction, and several others are probably anything but how consciousness works. Even if there were grounds to entertain these takes, nothing indicates that they could ever be useful in solving mental disorders, which occur in the same mind whose workings these takes are supposed to define.
Electrical signals are responsible for relays or distributions across clusters of neurons. Electrical signals have a direct sequential relationship with chemical signals. Electrical signals do not have a shape-based or, say, structural relationship with neurons. Ion channels on axons, whether voltage-gated or leak channels, let the ions that carry these signals in and out. Electrical signals do not take on the structure of the whole neuron at any point and pass it along.
This refutes the claim that neural representations mechanize functions. Simply, electrical signals do not copy or pass the structure of any neuron at any point, so there cannot be a neural representation of functions, because such representations would define nothing and could not be passed along.
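To make the point concrete, here is a minimal sketch, assuming a standard leaky integrate-and-fire description of membrane potential [all parameter values below are illustrative placeholders, not measurements]: the only quantity that evolves and is relayed is a voltage driven by ionic currents through channels; no description of the neuron's structure appears anywhere in the signal.

```python
# Minimal leaky integrate-and-fire sketch (illustrative only).
# The state that evolves and gets "relayed" is a membrane potential (mV),
# i.e., the net effect of ions moving through channels. No description of
# the neuron's structure (its shape, dendritic tree, etc.) is part of the
# signal that is passed along.

V_REST = -70.0      # resting potential (mV)
V_THRESH = -55.0    # spike threshold (mV)
V_RESET = -75.0     # post-spike reset (mV)
TAU_M = 20.0        # membrane time constant (ms), set by leak channels
R_M = 10.0          # membrane resistance (megaohms)
DT = 1.0            # time step (ms)

def simulate(input_current_na, steps=200):
    """Integrate the membrane potential and return spike times (ms)."""
    v = V_REST
    spikes = []
    for t in range(steps):
        # Leak term pulls V back to rest; the input current (nA) pushes it up.
        dv = (-(v - V_REST) + R_M * input_current_na) / TAU_M
        v += dv * DT
        if v >= V_THRESH:          # threshold crossing -> spike
            spikes.append(t * DT)  # the "electrical signal" sent onward
            v = V_RESET            # reset below rest (crude refractory stand-in)
    return spikes

if __name__ == "__main__":
    print(simulate(input_current_na=2.0))
```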
Electrical signals, conceptually, pass along what they obtain from chemical signals. They do not do this alone but in sets or loops. This is theorized to be a reason that neurons exist in clusters.
So, if electrical signals, in sets, pass what they obtained from chemical signals, also in sets, then the basis of functions can be postulated to be underpinned by the configurations of electrical and chemical signals in those sets.
This basis strips neurons of any direct mechanistic power, which then refutes neural representations or neural correlates.
Simply, even with the many nuclei and ganglia [neural clusters in the CNS and PNS], their functions are possible not because the neurons are there but because of the sets of electrical and chemical signals. So, every cluster of neurons associated with a function or activity directly means that the function or activity is carried out by the signals while the neurons are hosts.
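As a purely illustrative sketch of this postulate [the class names, attributes, and values below are hypothetical placeholders, not measured quantities], the idea can be expressed as configurations whose attributes, rather than the identity of the hosting cluster, determine the function:

```python
from dataclasses import dataclass

# Illustrative toy model of the postulate that functions are configured by
# sets of electrical and chemical signals, with neurons as hosts. All names
# and values here are hypothetical placeholders, not empirical parameters.

@dataclass(frozen=True)
class SignalSet:
    host_cluster: str           # where the set is hosted (e.g., a nucleus)
    electrical_rate_hz: float   # coarse stand-in for the electrical configuration
    chemical_mix: tuple         # coarse stand-in for the chemical configuration

def configured_function(s: SignalSet) -> str:
    """Map a signal configuration to a function label.

    The mapping deliberately ignores host_cluster: in this sketch, the
    configuration of the signals, not the identity of the hosting neurons,
    is what specifies the function.
    """
    if "dopamine" in s.chemical_mix and s.electrical_rate_hz > 20:
        return "reward-related function"
    if "acetylcholine" in s.chemical_mix:
        return "attention-related function"
    return "baseline function"

if __name__ == "__main__":
    a = SignalSet("cluster A", 35.0, ("dopamine", "glutamate"))
    b = SignalSet("cluster B", 35.0, ("dopamine", "glutamate"))
    # Same configuration, different hosts -> same function label.
    print(configured_function(a), "|", configured_function(b))
```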
Some stray takes on consciousness also suggest that computers cannot be conscious, while there are suggestions that organoids, neuromorphic computers, or quantum computers might be. If quantum computers can be conscious because of entanglement or superposition, then AI can have sentience. If organoids can be conscious, it is because of the sets of electrical and chemical signals they bear.
Consciousness is often defined as subjectivity, but there is no subjectivity for any function that is not in attention or at least awareness. Simply, consciousness is at least a subjective experience in attention or a subjective experience in awareness. There may also be intent. Attention, awareness, intent, and subjectivity are qualifiers or graders of functions.
This means consciousness can be redefined, or better described, as the degree to which qualifiers apply to a function in any instance. Movement, balance, digestion, respiration, sensations, perceptions, and so on are functions that are graded or qualified by the collection that makes up consciousness.
It is postulated that the graders or qualifiers of functions are found in the same sets of [electrical and chemical] signals that make up functions. Simply, a function is not obtained in a set of signals somewhere [in a cluster of neurons] while the graders that make that function conscious are obtained elsewhere. They are all mechanized in the same set.
This makes it possible that there is hidden or covert consciousness in some states, since some graders may still apply to some functions. It also makes it possible for any function anywhere, including in the cerebellum, to have consciousness. If consciousness is lost, then the function is lost. There is no function somewhere, working, while its consciousness collective sits elsewhere, unavailable.
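As another illustrative sketch [the qualifier names come from the text above; the numeric degrees and the covert-consciousness condition are hypothetical], describing consciousness as the degree to which qualifiers apply to a function in an instance could look like this:

```python
# Illustrative sketch: consciousness described as the degree to which
# qualifiers (attention, awareness, intent, subjectivity) apply to a
# function in an instance. The degrees and the "covert" condition are
# hypothetical placeholders, not measured values.

QUALIFIERS = ("attention", "awareness", "intent", "subjectivity")

def grade(function_name: str, qualifiers: dict) -> dict:
    """Sum the degrees (0.0 to 1.0) to which each qualifier applies."""
    degrees = {q: float(qualifiers.get(q, 0.0)) for q in QUALIFIERS}
    total = sum(degrees.values())
    return {
        "function": function_name,
        "degrees": degrees,
        "total": total,
        # Some graders still apply even without overt responsiveness:
        # a toy stand-in for hidden or covert consciousness.
        "covert": 0.0 < total < 1.0,
    }

if __name__ == "__main__":
    print(grade("respiration", {"awareness": 0.2}))
    print(grade("perception", {"attention": 0.9, "awareness": 0.8,
                               "subjectivity": 0.7}))
```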
There are other qualifiers of functions aside from those four, including those that define relays or distributions from one set to another. Even if neural representations were right and clusters were labeled, what are the names of the specific pathways and distribution mechanisms from one part of the brain to another?
If consciousness science research is to advance, it will not be through those seeking neural correlates or those answering with neurons.
There is a recent paper in The New England Journal of Medicine, Cognitive Motor Dissociation in Disorders of Consciousness, stating that, "Patients with brain injury who are unresponsive to commands may perform cognitive tasks that are detected on functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). This phenomenon, known as cognitive motor dissociation, has not been systematically studied in a large cohort of persons with disorders of consciousness. Data from fMRI only or EEG only were available for 65% of the participants, and data from both fMRI and EEG were available for 35%. The median age of the participants was 37.9 years, the median time between brain injury and assessment with the CRS-R was 7.9 months (25% of the participants were assessed with the CRS-R within 28 days after injury), and brain trauma was an etiologic factor in 50%. We detected cognitive motor dissociation in 60 of the 241 participants (25%) without an observable response to commands, of whom 11 had been assessed with the use of fMRI only, 13 with the use of EEG only, and 36 with the use of both techniques. Approximately one in four participants without an observable response to commands performed a cognitive task on fMRI or EEG as compared with one in three participants with an observable response to commands."
There is a recent feature in Noema Magazine, Who Knows What Consciousness Is?, stating that, "The amount of variance in this neural circuitry is very large. Certain circuits get selected over others because they fit better with whatever is being presented by the environment. It is the activity of this vast web of networks that entails consciousness by means of what we call “reentrant interactions” that help to organize “reality” into patterns. Because each loop reaches closure by completing its circuit through the varying paths from the thalamus to the cortex and back, the brain can “fill in” and provide knowledge beyond that which you immediately hear, see or smell. The resulting discriminations are known in philosophy as qualia. These discriminations account for the intangible awareness of mood, and they define the greenness of green and the warmness of warmth. Together, qualia make up what we call consciousness."
There is a recent preprint on arXiv, Neuromorphic Correlates of Artificial Consciousness, stating that, "The concept of neural correlates of consciousness (NCC), which suggests that specific neural activities are linked to conscious experiences, has gained widespread acceptance. This acceptance is based on a wealth of evidence from experimental studies, brain imaging techniques such as fMRI and EEG, and theoretical frameworks like integrated information theory (IIT) within neuroscience and the philosophy of mind. This paper explores the potential for artificial consciousness by merging neuromorphic design and architecture with brain simulations. It proposes the Neuromorphic Correlates of Artificial Consciousness (NCAC) as a theoretical framework. While the debate on artificial consciousness remains contentious due to our incomplete grasp of consciousness, this work may raise eyebrows and invite criticism. Nevertheless, this optimistic and forward-thinking approach is fueled by insights from the Human Brain Project, advancements in brain imaging like EEG and fMRI, and recent strides in AI and computing, including quantum and neuromorphic designs. Additionally, this paper outlines how machine learning can play a role in crafting artificial consciousness, aiming to realise machine consciousness and awareness in the future."
There is a recent paper in Nature, Orexin neurons mediate temptation-resistant voluntary exercise, stating that, "Causal manipulations and correlative analyses of appetitive and consummatory processes revealed this preference for wheel running to be instantiated by hypothalamic hypocretin/orexin neurons (HONs). The effect of HON manipulations on wheel running and eating was strongly context-dependent, being the largest in the scenario where both options were available. Overall, these data suggest that HON activity enables an eat–run arbitration that results in choosing exercise over food."