On Rhythms of the Brain: Jhanas, Local Field Potentials, and Electromagnetic Theories of Consciousness

Of potential interest to readers: here’s part of an email exchange I recently had with Scott Alexander about Rhythms of the Brain by György Buzsáki, a book I recommended he read to learn more about the neuroscience of brainwaves. This is an essay he published about it; I had a chance to read a pre-publication draft to check whether he was describing the science and my positions accurately. This is part of my feedback on the draft (lightly edited for clarity and consistent formatting):


Andrés – Oct/13/2022

First of all, thank you again for writing a review of Rhythms Of The Brain. As I mentioned, I think your review is spot-on. It’s already really great as it is. But I think the following pieces of information might help you answer some of the questions you pose and enrich the mental model you have about brainwaves. I should also mention that I’m still learning a lot on the topic from a number of angles and my model still has quite a few moving parts.

Without further ado, here are 5 key points I’d like to share:

(1) I think that Susan Pockett’s Consciousness Is a Thing, Not a Process (link to PDF) is very relevant here. She argues, based on neurophysiological and behavioral evidence, that conscious perception only happens when Local Field Potentials (LFPs) are generated. The timing, functional correlates, and location of events of conscious perception of sensory stimuli seem to agree with this (pp. 4-5):

Here’s how I think about this: 

Have you wondered why brainwaves track levels of wakefulness? See, in principle you can have a great deal of neural activity without any brainwaves. Raster plots of spiking neural networks could in principle look like white noise… which in turn would generate no brainwaves at all because the oscillations in the electric field would cancel each other out at the macroscopic level. Recall that perfectly compressed information is indistinguishable from noise. So, in principle, an optimal use of the state-space of neural activity would look just like white noise and produce no brainwaves.
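
To make the cancellation point concrete, here is a minimal toy sketch (Python, made-up numbers) of what happens when you sum many oscillators with random versus shared phases: the incoherent population produces essentially no macroscopic field, while the coherent one produces a large “brainwave”.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)            # one second sampled at ~1 kHz
n_units, f = 10_000, 10                # 10 Hz oscillation in every unit

# Incoherent population: every unit oscillates at 10 Hz with a random phase.
phases = rng.uniform(0, 2 * np.pi, n_units)
incoherent = np.sin(2 * np.pi * f * t[:, None] + phases).mean(axis=1)

# Coherent population: every unit shares the same phase.
coherent = np.sin(2 * np.pi * f * t)

print(f"macroscopic amplitude, random phases: {incoherent.std():.4f}")  # ~1/sqrt(2N), near zero
print(f"macroscopic amplitude, shared phase:  {coherent.std():.4f}")    # ~0.71, a large wave
```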

Susan Pockett would say that the non-conscious parts of neural activity can be like this… greatly optimized in a certain sense. But they will lack consciousness. The advantage of the coherence (which comes at the cost of greatly reduced information content) is distributed representations. In turn, this may solve the binding problem.

(2) Johnjoe McFadden’s Conscious Electromagnetic Information (CEMI) field theory is worth digging into.

The “LFPs as mediators of consciousness” story has a lot going for it. In particular, it is quite elegant in how it can help us make sense of our phenomenological relation to our brain and nervous system. Brainwaves and LFPs are highly correlated. Coherent neural activity causes LFPs, which in turn mediate/bias activity in neurons, with a causal structure like this:

If “we are” the patchwork of interlaced LFPs the brain is generating, in some sense we could say that we “have a brain” rather than that we “are the brain” (loosely speaking). Without putting any strong metaphysical import on the concept of free will, the phenomenology of it seems to me at least to make more sense when you identify with the field rather than the neurons per se (see clues 1 and 3 in his paper). In this view, we are like the “ghost in the machine”, capable of biasing neural activity here and there. But at the same time, we need the coherent neural activity to be booted up. So we are sort of “riding the brain” while the brain is giving us our foundation. Perhaps this gives us another angle to think about the “elephant taming” metaphor for the progression of the meditative path:

(3) The work of Stephen Grossberg (Adaptive Resonance Theory, and more recently his book Conscious Mind, Resonant Brain), as well as that of his student Steven Lehar, has macroscopic resonance as a key computational step. Arguably this is something you can simulate with classical neural networks. But using the EM field would potentially produce a significant computational speedup. When I talked to Lehar, he used an interesting analogy in which he described “neurons spiking as a kind of sand blasting of the electric field” in order to activate internal representations. Recent research seems to confirm that the information content of internal representations is better captured by the structure of the electric field than by the neurons that sustain it (“Neurons are fickle. Electric fields are more reliable for information.”).

NOTE: One of the contributions to the conversation that QRI is aiming to make (essentially by publishing in academia what’s already discussed on our website) is that while these field theories of consciousness do address the binding problem, they now have to contend with the boundary problem. Our solution is “topological segmentation”, which itself comes with empirically testable predictions. Topological pockets allow for holistic field behavior *and* for solving the boundary problem at the same time, finally rendering bound consciousness both causally efficacious and objectively bounded. [In your essay] you could point out that I claim that resonance is necessary but not sufficient to solve the phenomenal binding problem. So even if AIs were using brainwaves, that might not be enough for them to be conscious, though it would go in the right direction. More on this on our website soonish.

(4) I think that we can use the Symmetry Theory of Valence (STV) to explain the hedonic properties of different network topologies. This would be responsible for the “intrinsic valence” of a given brain region. You write:

> Why this combination of tasks? Rhythms sort of suggests that brain areas are less about specific tasks than about specific graph-theoretic arrangements, which are convenient for specific algorithms, which are convenient for specific tasks.

Yes! This is a great way of putting it. I think that having diverse network topologies available is one of the key ingredients of a general intelligence like ours. A learning algorithm that patches together the right sections to produce the right kind of structure for internal representations with holistic properties seems like a natural way to construct a mind. What’s more, some of these patches will cause dysphoric waves and others euphoric waves. The dysphoric parts of the brain, if STV is in the right direction, would have a network topology that works as a sort of frustration generator. The waves generated by these parts sort of “hate themselves”: activating them causes internal dissonance and stress that is then radiated out to the rest of the brain as waves with unfriendly ADSR (attack-decay-sustain-release) envelopes. In contrast, the euphoric parts would produce highly aligned waves with soft ADSR envelopes and the right level of impedance matching to harmonize with other wave generators.
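
As a toy illustration of what “clashing” versus “aligned” waves could mean quantitatively, here is the standard Plomp-Levelt/Sethares sensory-dissonance curve from psychoacoustics applied to pairs of pure tones. To be clear, this is only an analogy for the kind of pairwise measure one could apply to interacting wave generators, not a model of valence itself, and the specific frequencies below are made up:

```python
import numpy as np

def pair_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Sethares' fit of the Plomp-Levelt sensory-dissonance curve for two pure
    tones (one common parameterization; constants from Sethares' published fit)."""
    b1, b2, d_star, s1, s2 = 3.5, 5.75, 0.24, 0.021, 19.0
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = d_star / (s1 * f_lo + s2)        # critical-bandwidth scaling
    x = s * (f_hi - f_lo)
    return a1 * a2 * (np.exp(-b1 * x) - np.exp(-b2 * x))

print(pair_dissonance(40.0, 41.0))   # nearly identical frequencies: little roughness
print(pair_dissonance(40.0, 58.0))   # clashing within one critical band: high roughness
print(pair_dissonance(40.0, 400.0))  # far apart: negligible interaction
```

The point of the curve is simply that roughness peaks when two frequencies fall within the same critical band but fail to align, and vanishes when they are either nearly identical or far apart.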

(5) Merging with God as a kind of global coherence:

> Andres suggests all of this is a good match for oscillatory coupling between brain regions.

Perhaps add something akin to “which according to him ‘dissolves internal boundaries'”

> Andres thinks this is part of what’s behind “spiritual” or “mystical” experiences, where you suddenly feel like you’ve lost the boundaries of yourself and are at one with God and Nature and Everything.

My strongest phenomenological evidence here is the difference between DMT and 5-MeO-DMT (video): competing clusters of coherence feel like “a lot of entities in an ecosystem of patterns” whereas global coherence feels like “union with God, Everything, and Everyone”. Hence the terms “spirit molecule” for DMT and “God molecule” for 5-MeO-DMT. The effect size of this difference is extremely large and reliable. I’ve yet to find someone who has experience with both substances who doesn’t immediately agree with this characterization. [This can be empirically tested] by blinding whether one takes DMT or 5-MeO-DMT and then reporting on the valence characteristics, “competing vs. global” coherence characteristics, and on whether one gets a patchwork of entities or one feels like one is merging with the universe.

With classic psychedelics, which stand somewhere between DMT and 5-MeO-DMT in their level of global coherence, you always go through an annealing process before finally “snapping” into global coherence and “becoming one with God”. That coherence is the signature of these mystical experiences becomes rather self-evident once you pay attention to annealing signatures (i.e. noticing how incompatible metronomes slowly start synchronizing and forming larger and larger structures until one megastructure swallows it all and dissolves the self-other boundary in the process of doing so).

You will not find academic publications describing this process (because their psychological scales are not detailed enough, aren’t focused on structure, and aren’t informed by actual practice). Nor will you find psychonauts talking much about this, because they tend to focus on the semantic content of the experience rather than on the phenomenal texture [see our guide]. Naturally, one is typically socially rewarded for providing an entertaining story about one’s trip… not a detailed *technical* report of phenomenal texture. Therefore, right now you’ll only find QRI content explaining all of this. But I’m fairly confident about this after talking to very experienced psychonauts. So I think this will significantly shape the conversation in a couple of years once we start getting some consensus on it.

I could share much more, but I have to restrain myself (taming the elephant!). Let me know if you need anything else.

Thank you!!

Infinite bliss!


Scott – Oct/13/2022

Thanks. […] two questions:

> Susan Pockett would say that the non-conscious parts of neural activity can be like this… greatly optimized in a certain sense. But they will lack consciousness. The advantage of the coherence (which comes at the cost of greatly reduced information content) is distributed representations. In turn, this may solve the binding problem.

Not sure I understand this. Aren’t there clear examples of unconscious brain waves (eg delta waves during sleep)? Can you explain more about what you mean by distributed representations and why they’re linked to consciousness?

> If “we are” the patchwork of interlaced LFPs the brain is generating, in some sense we could say that we “have a brain” rather than that we “are the brain” (loosely speaking). Without putting any strong metaphysical import on the concept of free will, the phenomenology of it seems to me at least to make more sense when you identify with the field rather than the neurons per se (see clues 1 and 3 in his paper). In this view, we are like the “ghost in the machine”, capable of biasing neural activity here and there.

Confused by this too. My model for thinking about brain waves has been cellular automata – in this case, there would be no difference between the pattern and the machinery, and it wouldn’t make sense to say that the pattern is able to bias the activity here or there. Is this a bad model? Can you explain more what you mean by “us” (by which I’m assuming you mean consciousness) “biasing” activity (by which I assume you mean causing brain activity different from what you would expect by lower-level laws)?


Andrés – Oct/15/2022

Hey Scott!

> Thanks […] two questions:

(I’ll answer your questions in a different order than you asked them, because my answer to the first one is much weirder and less credible… In other words, I’m answering more or less in order of increasing weirdness so that you are not put off by my first answers. This way you can choose when to stop reading without missing anything useful for your essay):

> My model for thinking about brain waves has been cellular automata – in this case, there would be no difference between the pattern and the machinery, and it wouldn’t make sense to say that the pattern is able to bias the activity here or there. Is this a bad model? 

I think that “brainwaves can be explained as emergent patterns of a cellular automaton” is a very good starting model, and it has a lot of explanatory power. But there are empirical and experiential facts that count against it as a complete explanation. And perhaps it misses the most important hint for a theory of consciousness that meets all of the criteria I think such a theory must satisfy. Namely, that binding has non-trivial computational effects: at some level, patterns of organization exert “weak downward causation” on the substrate that gave rise to them. This does not mean there is “strong emergence” or that we’re going against the laws of physics. On the contrary, a key guiding principle for QRI is to be strict physicalists. The laws of physics are causally closed and complete (or at least as good as it gets; the Standard Model can be taken at face value for the time being, until something better comes along). Without violating physicalism, we nonetheless still see instances of weak downward causation in the physical world.

As an intuition, consider the fact that something like TMS can change neural activity. In fact, TMS, and especially rTMS, can cause seizures. This suggests that at a sufficiently high dose, EM oscillations can exert top-down influence on neuronal firing thresholds and phase coherence, and more so when they come in repetitive waves rather than pulses. In the case of LFPs, which are far more localized and less energetic, the influence isn’t huge. But it is there. As far as I understand the neuroscience literature on LFPs (and ephaptic coupling more generally), the fact that LFPs change firing thresholds is uncontroversial. The question is “by how much”. Most studies find small effects (off the top of my head, between 1% and 20% of the variance, but I can look up more precise and recent figures – e.g. see: Ephaptic coupling of cortical neurons).
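
To make “small but nonzero” concrete, here is a minimal leaky integrate-and-fire sketch (all parameters are illustrative, not drawn from any particular paper) in which adding a weak 1 mV oscillating “field” term to the neuron’s input shifts the firing rate only slightly:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-4, 10.0                      # 0.1 ms steps, 10 s of simulated time
steps = int(T / dt)
tau = 0.02                              # 20 ms membrane time constant
v_rest, v_thresh, v_reset = -70e-3, -50e-3, -70e-3

def firing_rate(field_amplitude_volts):
    """Leaky integrate-and-fire neuron with a noisy depolarizing drive plus a
    weak 10 Hz 'field' term added to its input (all values are illustrative)."""
    t = np.arange(steps) * dt
    field = field_amplitude_volts * np.sin(2 * np.pi * 10 * t)
    drive = 30e-3 + 2e-3 * rng.standard_normal(steps)   # ~30 mV effective drive
    v, n_spikes = v_rest, 0
    for i in range(steps):
        v += (-(v - v_rest) + drive[i] + field[i]) / tau * dt
        if v >= v_thresh:
            n_spikes += 1
            v = v_reset
    return n_spikes / T

print(f"firing rate with no field:     {firing_rate(0.0):.1f} Hz")
print(f"firing rate with a 1 mV field: {firing_rate(1e-3):.1f} Hz")  # only a small shift
```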

The more interesting and perhaps more significant effect that LFPs have is to change the degree of coherence between neurons. In other words, they may not change a neuron’s probability of firing very much, but they do change a lot its probability of firing in phase. You can see how this would lead to interesting self-reinforcing effects. Namely, if neural coherence causes LFPs, and LFPs increase neural coherence, there might be attractors of hypercoherent neural firing coupled with strong and very orderly LFPs. I believe this explains the Jhanas.
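
A minimal way to see the attractor logic is a Kuramoto-style sketch in which the coupling between oscillators grows with the amplitude of the summed “field” they generate. This is a toy model with made-up parameters, not a simulation of actual LFP feedback: a brief burst of coherence dies out on its own when the coupling is fixed and weak, but locks into a persistent hypercoherent state once the field is allowed to feed back on the coupling.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 500, 0.01, 5000
omega = rng.normal(0.0, 1.0, n)              # natural frequencies of the "metronomes"

def final_coherence(feedback_gain, k0=1.0):
    """Mean-field Kuramoto model in which the coupling strength grows with the
    amplitude r of the summed 'field' when feedback_gain > 0. Toy parameters."""
    theta = rng.normal(0.0, 0.3, n)          # start from a brief coherent event
    r_history = []
    for _ in range(steps):
        z = np.exp(1j * theta).mean()        # complex order parameter ("the field")
        r, psi = np.abs(z), np.angle(z)
        k_eff = k0 + feedback_gain * r       # coherence -> field -> more coupling
        theta += dt * (omega + k_eff * r * np.sin(psi - theta))
        r_history.append(r)
    return np.mean(r_history[-1000:])

print(f"coherence without field feedback: {final_coherence(0.0):.2f}")  # decays toward noise level
print(f"coherence with field feedback:    {final_coherence(3.0):.2f}")  # locks into a coherent attractor
```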

Now, can’t you just expand your cellular automaton to include LFPs and call it a day? Well, yes, in a theoretical but rather impractical sense. Building a cellular automaton that simulates a simple neural network is easy. Building one that simulates water is trickier. By the time you are constructing cellular automata to simulate EM fields you get into trouble. It’s possible, but you need all sorts of tricks, shortcuts, and ways of handling complex edge cases (e.g. topological segmentation!). Can you construct a cellular automaton that simulates physics? Quantum mechanics proper? Yes… if you are Wolfram. But recall that his explorations invoke cellular automata with unusual mathematical primitives. We are no longer in the territory of simple grid-like graphs. We are in Ruliad-space, with hypergraphs and exotic rulesets. Quantum coherent states behave in a very holistic fashion (where the “next step” is the result of solving Schrödinger’s equation in configuration space). So while it’s possible to use cellular automata to think of physics at this level, it isn’t a very natural choice. Rather, I posit that thinking of it in terms of universal principles like energy minimization, extrema, and the preservation of zero information is what takes us closer to the phenomenon at hand. These principles are, by their very nature, holistic. An electron, as Feynman would put it, can sort of “smell its surroundings” to decide where to go. It somehow explores all possibilities at once and “chooses” the one that balances the minimization of energy and the maximization of entropy. A truly holistic sort of phenomenon.

Source: A Class of Models with the Potential to Represent Fundamental Physics by Stephen Wolfram

I think that if at that point one uses a cellular automaton to represent this, one has actually reintroduced the very thing the cellular automata conceptual framework was trying to avoid. And that is the computational power of holism. This is because even though the Ruliad that simulates physics is in some way a cellular automaton, the ruleset itself requires a kind of God-like capacity to integrate pieces of information, “see all at once” entire regions of the (hyper)graph, and decide what to do next. My claim is that at this point one has “pushed” the undesired holism into the ruleset in order to avoid seeing it directly. It’s a reductionist sleight of hand.

Now, I’m not saying consciousness is quantum mechanical. What I’m pointing out is that EM waves sit somewhere on the spectrum between simple cellular automata and QM, where the waves interacting with one another have all kinds of peculiar holistic effects. Binding, if it involves EM waves, turns out to be computationally non-trivial.

In this model, the brain is physically providing a soil that can instantiate EM waves with many different kinds of properties. Some behave linearly, some non-linearly. And together, they give rise to the vast zoo of possible internal representations, many kinds of binding, topologies, and dynamics we experience (such as the strangeness of “fire meditation“).

> Can you explain more what you mean by “us” (by which I’m assuming you mean consciousness) “biasing” activity (by which I assume you mean causing brain activity different from what you would expect by lower-level laws)?

You can’t voluntarily shut down your brain with conscious control. At least not immediately. But you can direct your attention to two parts of your experience at once, and the resonances in those two regions will slowly but surely begin to synchronize. In other words, from an electrical engineering point of view, spreading your attention over a given region of your experience increases the impedance matching between the metronomes in those regions. This, I think, is the influence of LFPs (or something similar) on neural activity. It may be subtle, but over enough time and neural rewiring, the process can lead to very interesting effects. Hyperconcentrated states of consciousness, from access concentration all the way to single-pointed attention and ultimately the formless Jhanas, are obtained through mental moves that slowly but surely “unify the mind” (i.e., bring coherence between disparate metronomes in the nervous system). This is “us” learning to influence “our brain”.

> Not sure I understand this. Aren’t there clear examples of unconscious brain waves (eg delta waves during sleep)?

Two quick things here. The first is that we think brainwaves (macroscopic oscillations in the EM field more generally) are necessary but not sufficient for consciousness. They still need to form a topological pocket, or they will remain unclosed eddies that cannot contain information nor maintain a boundary with their surroundings. The second is that brainwaves track the texture of different degrees of wakefulness. Moreover, it’s not just the spectral power distribution that matters, but also the patterns of spatiotemporal cross-frequency coherence. Thus, two states might look the same in terms of their spectrum, but carry significantly different internal textures because one of them has a high degree of, say, gamma coherence and the other doesn’t.
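
Here is a small synthetic example of that last distinction (toy signals, nothing fitted to real EEG): two pairs of simulated “regions” carry the same 40 Hz power, but only one pair is phase-locked, so their power spectra look essentially alike while their gamma-band coherence separates them cleanly.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs, dur = 1000, 60
t = np.arange(0, dur, 1 / fs)

def drifting_phase(scale):
    # A slowly drifting phase, so the 40 Hz rhythm is not perfectly periodic.
    return np.cumsum(rng.normal(0, scale, t.size))

def background():
    return 0.5 * rng.standard_normal(t.size)

# State A: two "regions" share the same 40 Hz phase drift (phase-locked gamma).
common = drifting_phase(0.05)
a1 = np.sin(2 * np.pi * 40 * t + common) + background()
a2 = np.sin(2 * np.pi * 40 * t + common) + background()

# State B: same 40 Hz power in each region, but the phases drift independently.
b1 = np.sin(2 * np.pi * 40 * t + drifting_phase(0.05)) + background()
b2 = np.sin(2 * np.pi * 40 * t + drifting_phase(0.05)) + background()

f, coh_a = coherence(a1, a2, fs=fs, nperseg=2048)
_, coh_b = coherence(b1, b2, fs=fs, nperseg=2048)
band = (f >= 35) & (f <= 45)
print(f"peak gamma coherence, locked regions:      {coh_a[band].max():.2f}")  # close to 1
print(f"peak gamma coherence, independent regions: {coh_b[band].max():.2f}")  # near chance level
```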

> Can you explain more about what you mean by distributed representations and why they’re linked to consciousness?

One of the key insights from Steven Lehar is that using a dynamic, smooth, spatial medium of representation allows us to run spatial algorithms on our representations. One example is the incredibly general reverse grassfire and reverse shock scaffold algorithms that explain a wide range of visual illusions (discussed in The Constructive Aspect of Visual Perception as well as in his magnum opus video Harmonic Gestalt). Based on the fact that these algorithms generalize to things like breakthrough-level DMT experiences, and that they apply to hyperdimensional phenomenal objects and their resonant modes, I’m fairly convinced that the local cellular automaton view doesn’t explain the facts. The structures that exist in those states follow law-like energy-minimization properties reminiscent of fluid dynamics in higher dimensions. To me they seem to necessitate something like Maxwell’s equations; a cellular automaton would need a lot of training and fine-tuning to get those dynamics right, instantly and seamlessly. Combine this with the (tentative, not fully verified) observation that DMT states are phenomenologically similar to those induced by high-dose Fire Kasina. I believe that the mechanism is actually fairly simple: both methods energize the visual field to the point where it transitions from a linear (or only partially nonlinear) state into a fully nonlinear regime. The phenomenon is better seen as what happens when you energize a non-linear optical computer than, say, the effect of changing the ruleset of a cellular automaton.
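
For readers who haven’t run into the term, here is a minimal sketch of the textbook grassfire/medial-axis idea that Lehar’s “reverse grassfire” builds on, applied to a toy binary image (this is just the classic picture, not his perceptual model): the forward pass records how long a fire lit at the boundary takes to reach each point, and the reverse pass regrows the shape from the ridges of that map.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

# A toy binary shape: a filled rectangle on an empty background.
shape = np.zeros((40, 60), dtype=bool)
shape[8:32, 10:50] = True

# Forward grassfire: each interior point stores how long a fire lit on the
# boundary takes to reach it, i.e. its distance to the nearest edge.
dist = distance_transform_edt(shape)

# Crude skeleton: the ridges (local maxima) of the grassfire map.
skeleton = shape & (dist == maximum_filter(dist, size=3)) & (dist > 0)

# Reverse grassfire: regrow the region by painting a disc of radius dist[y, x]
# around every skeleton point; their union approximately recovers the shape.
yy, xx = np.mgrid[0:shape.shape[0], 0:shape.shape[1]]
recon = np.zeros_like(shape)
for y, x in zip(*np.nonzero(skeleton)):
    recon |= (yy - y) ** 2 + (xx - x) ** 2 < dist[y, x] ** 2

print(f"fraction of the shape recovered:  {(recon & shape).sum() / shape.sum():.2f}")
print(f"pixels regrown outside the shape: {(recon & ~shape).sum()}")  # stays 0 by construction
```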

I know this lacks credibility for the time being […]. I aim to identify crisp and experimentally verifiable demonstrations of this that trained physicists and neuroscientists can both agree on.

In the long-term, I expect humans to figure out ways to use high-energy states of consciousness to tap into the EM field as a computational substrate. Not only will this entail a revolution in consciousness, but also, interestingly, in how we think of computation. The Turing Paradigm will turn out to be a tiny special case of… qualia computing.

Alright, I hope that wasn’t too much, haha.

Thank you again, and happy to answer more questions.

Infinite bliss!


