Research

This page contains a brief description of each of my research projects, listed in reverse chronological order.


Encoding in Grid Cells: Uniqueness, Redundancy and Synergy

An important contribution of my earlier work on information flow lies in recognizing that accounting for synergy is essential if we want a measure of information flow that can guarantee the ability to track a message as it flows through a computational system. I presented a poster at Cosyne 2020 that supports and extends this information flow result in several key directions:

  1. First, it demonstrates how ideas such as uniqueness, redundancy and synergy are highly relevant in neural systems, through a case study on entorhinal grid cells;
  2. Second, it shows how recent technical advances in information theory provide new ways to quantify uniqueness, redundancy and synergy in such systems, giving neuroscientists a new tool, and the ability to pose and answer experimental questions in a completely new way (e.g., how much of the information about an animal’s location is redundantly encoded between grid cells and place cells?);
  3. Third, it shows that synergy can arise in unexpected ways; indeed, synergy sometimes appears simply because we ask a different question. Each grid module encodes unique information about an animal’s precise location; yet when we ask how these same modules encode the animal’s location at a very coarse spatial scale, they encode that coarse location information synergistically.

This work thus highlights that synergy may be common, and that whether it appears depends on how one looks at the same information. I chose to highlight this work, even though it is only an abstract, because it fits well into my future vision: I seek to discover, through collaborations, more examples of neural systems that are ripe for information-theoretic exploration.
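The coarse-scale synergy has the same flavor as a modular arithmetic code, and a toy model makes it easy to see. The sketch below is purely illustrative (it is not the analysis from the poster): positions on a 15-step track are encoded by two hypothetical "modules" with periods 3 and 5, and we ask how much each module, alone and jointly, tells us about which third of the track the animal is in.

```python
import numpy as np

def mi_bits(joint):
    """Mutual information (in bits) from a 2-D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def joint_table(u, v):
    """Joint distribution of two discrete variables under a uniform position."""
    table = np.zeros((u.max() + 1, v.max() + 1))
    for ui, vi in zip(u, v):
        table[ui, vi] += 1.0
    return table / table.sum()

X = np.arange(15)     # 15 positions on a toy track, visited uniformly
A = X % 3             # hypothetical module with spatial period 3
B = X % 5             # hypothetical module with spatial period 5
C = X // 5            # coarse location: which third of the track

print(mi_bits(joint_table(C, A)))          # ~0.06 bits: nearly useless alone
print(mi_bits(joint_table(C, B)))          # 0 bits: useless alone
print(mi_bits(joint_table(C, A * 5 + B)))  # log2(3) ~ 1.58 bits: the pair
                                           # pinpoints the coarse location
```

Each module is (nearly) uninformative about the coarse question on its own, yet the pair answers it completely, even though both modules carry substantial information about the precise position individually ($I(X;A) = \log_2 3$ and $I(X;B) = \log_2 5$ bits here). This is exactly the kind of synergy that appears only when the question changes.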

 Cosyne 2020 Abstract  Cosyne 2020 Poster

Defining and Inferring Flows of Information in Neural Circuits

While studying existing methods for inferring information flow in neuroscience, we recognized that Granger causality does not always capture information flow about a specific message or stimulus. We were able to construct a counterexample wherein Granger causality inferred the incorrect flow of a message, even when all nodes were observed and there was no measurement noise. This counterexample also applied to generalizations of Granger causality, such as Transfer Entropy and Directed Information.

We believe that an important reason for the failure of Granger causality-based tools in this context was the lack of a formal definition of information flow pertaining to a specific message (which could be the stimulus or the response in a neuroscientific experiment). The development of such a definition, in turn, was impeded by the absence of a concrete theoretical framework linking information flow about a message to empirical measurements. We addressed this fundamental gap in our understanding by proposing a new computational model of the brain, and by providing a new definition of information flow about a message within this model.

A central contribution of our work lies in recognizing that tracking the flow of information about a specific message requires that we also account for synergistic flows of information. To understand why, note that a message $M$ may be represented, in combination with some independent “noise” variable $Z$, as $M{+}Z$ and $Z$ across two different brain areas. In this representation, if the noise $Z$ is large, neither brain area individually reveals much about $M$: the flow of information about $M$ becomes apparent only in the combination $[M{+}Z, Z]$, that is, only once we account for possible synergy between the two areas. Accounting for synergy, using conditional mutual information for instance, is thus crucial for tracking information flow in arbitrary computational systems (a toy numerical illustration follows below). Relatedly, I have also begun exploring how synergy, along with other partial information measures such as uniqueness and redundancy, arises and interacts in neural encoding, using entorhinal grid cells as a case study (Venkatesh and Grover, Cosyne, 2020).
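A small discrete sketch makes the point concrete. The Bernoulli message and uniform noise below are choices made purely for this illustration: individually, $M{+}Z$ and $Z$ carry little or no information about $M$, but jointly they reveal it exactly.

```python
import numpy as np
from itertools import product

def mi_bits(joint):
    """Mutual information (in bits) from a 2-D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

N = 8  # range of the noise Z; the individual terms shrink as N grows
# Enumerate the exact joint pmf with M ~ Bernoulli(1/2), Z ~ Uniform{0,...,N-1}
j_sum  = np.zeros((2, N + 1))        # joint of (M, M+Z)
j_z    = np.zeros((2, N))            # joint of (M, Z)
j_pair = np.zeros((2, (N + 1) * N))  # joint of (M, (M+Z, Z)), pair flattened
for m, z in product(range(2), range(N)):
    p = 0.5 / N
    j_sum[m, m + z] += p
    j_z[m, z] += p
    j_pair[m, (m + z) * N + z] += p

print(mi_bits(j_sum))   # I(M; M+Z)    = 0.125 bits for N=8: small
print(mi_bits(j_z))     # I(M; Z)      = 0 bits: Z alone says nothing
print(mi_bits(j_pair))  # I(M; M+Z, Z) = 1 bit: M recovered by subtraction
```

Equivalently, the conditional mutual information $I(M; M{+}Z \mid Z) = 1$ bit even though $I(M; M{+}Z)$ and $I(M; Z)$ both vanish as the noise grows; this is the sense in which conditioning captures the synergy.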

[Figure: Cannot track information flow without accounting for synergy.]

Our new measure of information flow avoids these pitfalls of Granger causality, while satisfying our intuition in several canonical examples of computational systems. The full impact of this work continues to be realized as we explore new measures in greater depth, demonstrate and validate these methods on artificial neural networks, and examine the causal implications of making interventions based on information flow inferences.

 Main paper in Transactions on Info Theory  2015 Granger Causality Counterexample paper  2020 ISIT paper on alternative definitions

Systematically questioning the self-fulfilling prophecy of EEG’s low resolution

Electroencephalography (EEG) has long been believed to be a low-resolution brain imaging modality. There is a pervasive view in the community of EEG users that increasing the number of EEG electrodes will not improve imaging resolution. We have worked towards systematically understanding the reasons this view exists, and towards changing perceptions about EEG.

The heart of our argument is explained in our recent paper, “An Information-theoretic View of EEG Sensing”. We examined previous estimates of the number of sensors needed to recover the EEG signal, which relied on computing the “spatial Nyquist rate” of scalp EEG, and found that they underestimate the number of sensors required to reliably recover the continuous scalp potential. We also showed, through simulations and intuitive arguments, that the number of EEG sensors required to recover the signal within the brain can be even higher than the number prescribed by the spatial Nyquist rate.
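To convey the flavor of the spatial Nyquist argument, here is a back-of-the-envelope sketch. Every number in it (the idealized hemispherical scalp, the head radius, and the assumed shortest spatial wavelength) is a placeholder for illustration, not a value from our paper:

```python
import numpy as np

# Nyquist-style sensor count for a spatially bandlimited scalp potential.
# All numbers below are illustrative placeholders.
head_radius_cm = 9.0                               # idealized spherical head
scalp_area_cm2 = 2 * np.pi * head_radius_cm ** 2   # upper hemisphere only
lambda_min_cm = 4.0                                # assumed shortest spatial wavelength

spacing_cm = lambda_min_cm / 2    # Nyquist: sample at half the shortest wavelength
n_sensors = scalp_area_cm2 / spacing_cm ** 2
print(round(n_sensors))           # ~127 sensors under these assumptions
```

The point of the paper is that counts obtained this way can still be too optimistic: recovering the sources inside the brain, rather than the potential on the scalp, can demand even denser sampling.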

Naturally, the follow-up question is: “What are the fundamental theoretical limits on the imaging resolution achievable by EEG, and how does this resolution improve as the number of sensors increases?” While this paper takes a first pass at understanding these limits, we followed up with a paper that gave the first analytical results showing that EEG’s imaging resolution can improve with an increasing number of EEG sensors.

We also developed information-theoretic techniques to help save power in a future implementation of an ultra-high-density EEG system. This last point, which concerns the practical aspects of instrumenting such a high-density system, was first discussed at Allerton 2015, where we introduced a “hierarchical referencing mechanism” to save power and circuit area.

To show experimentally and practically that high-density EEG, and specifically super-Nyquist sampling, is more informative than conventional EEG, we recently conducted experiments in collaboration with Amanda Robinson, Marlene Behrmann and Mike Tarr at CMU’s Psychology department. These experiments show that high-density EEG achieves better accuracy than low-density EEG in classifying visual stimuli of different spatial frequencies.

 Link to paper  Link to poster

Directions of information flow and Granger Causality

Granger causality is an established measure of the “causal influence” that one statistical process has on another. It has been used extensively in neuroscience to infer statistical causal influences. Recently, however, many works in the neuroscience literature have begun to compare Granger causal influences along forward and reverse links of a feedback network in order to determine the direction of information flow in this network.

[Figure: Greater Granger causal influence can be opposite to the direction of information flow.]

We asked whether comparing Granger causal influences correctly captures the direction of information flow in a simple feedback network. We discovered, through simple theoretical examples, that comparing Granger causal influences can, in fact, yield an answer that is opposite to the true direction of information flow.
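For readers unfamiliar with how these influences are computed, below is a minimal sketch of the standard (bivariate, one-lag) Granger causality estimate on a generic simulated feedback network. The network and its coefficients are arbitrary choices for illustration; this is not the counterexample from our paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy feedback network: X drives Y strongly, Y feeds back into X weakly.
T = 20000
X, Y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    X[t] = 0.6 * X[t-1] + 0.1 * Y[t-1] + rng.standard_normal()
    Y[t] = 0.4 * Y[t-1] + 0.7 * X[t-1] + rng.standard_normal()

def residual_var(A, y):
    """Variance of least-squares residuals when regressing y on columns of A."""
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.var(y - A @ beta)

def granger(src, dst, lag=1):
    """One-lag Granger causal influence src -> dst: log ratio of residual
    variance using dst's own past vs. dst's own past plus src's past."""
    d, dpast, spast = dst[lag:], dst[:-lag], src[:-lag]
    ones = np.ones_like(dpast)
    restricted = residual_var(np.column_stack([ones, dpast]), d)
    full = residual_var(np.column_stack([ones, dpast, spast]), d)
    return np.log(restricted / full)

print(granger(X, Y))  # influence X -> Y
print(granger(Y, X))  # influence Y -> X; comparing these two numbers is
                      # exactly the practice questioned above
```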

 Link to paper  Link to poster