Europe’s beleaguered billion-dollar brain project just handed in its term paper for the jealously circling minions of neuroscience have-nots to review. While a few snippets of information have dribbled out over the years, Blue Brain has now fully codified exactly what it always said it would do.
Publishing their results in the journal Cell, Henry Markram and company describe their model of a tiny 0.3 mm^3 smidgen of rodent somatosensory cortex. While many folks might recognize the enormity of the task, we need to take a deeper look inside to fully appreciate the massive responsibility shouldered in handcrafting from scratch what must now assume the role of the de facto description of wetware.
Blue Brain is more than a connectome-style reconstruction of a few dendrites and their local coterie of synapses. That’s not to take anything away from the fantastic anatomical reproductions of projects like Eyewire and Jeff Lichtman’s Brainbow. In fact we might be remiss not to at least link to Lichtman’s recent paper (also in Cell) and show for comparison’s sake an image of what is possible by 3D rendering slices of tissue. The little crumb of tissue above, excised from the full 40 µm electron-micrographic Lichtman cube, stunningly reveals the distributions of individual synaptic vesicles and sub-cellular organelles like mitochondria.
Although Markram’s model captures the performance of molecular-scale ion pumps and channels — proteins even below the detail we see in Lichtman’s imagery — these features get algorithmically lumped together by dynamic equations. Similarly, the connection details in the Blue Brain model are also generated more-or-less automatically from raw data.
But what exactly was the initial form of this data, and how did they first generate it? For that, the team used patch clamp electrodes to record the activity of over 14,000 individual cells in slices of cortex. The cells were also stained so that their actual shape and structure could be seen, and then classified accordingly into 207 unique ‘morpho-electrical’ cell types.
The main things you want to know about a cell type are whether it is excitatory or inhibitory, how many synapses it makes, how influential those synapses are, and finally how they adapt over time. Although many higher-level observables, like average and maximum firing rate, may be intrinsically controlled by individual cell properties, the network modes that those cells participate in are constrained principally by the ratio of excitation to inhibition at different spatial scales.
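To get a feel for why that ratio matters so much, here is a minimal sketch — a toy one-population rate model, entirely of our own construction and not anything from the paper — showing how the net balance of excitatory and inhibitory weight pushes a network toward saturation, silence, or a sensible intermediate rate:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def steady_rate(w_exc, w_inh, drive=0.3, steps=400, damping=0.2):
    """Relax a toy one-population rate model toward its fixed point.
    All parameters here are illustrative, not fitted to any data."""
    r = 0.1  # initial firing rate (arbitrary units in [0, 1])
    for _ in range(steps):
        # Net recurrent input plus external drive, squashed through a sigmoid.
        target = sigmoid(4.0 * ((w_exc - w_inh) * r + drive - 0.5))
        r += damping * (target - r)  # damped update so the map converges
    return r

print(steady_rate(w_exc=6.0, w_inh=1.0))  # runaway excitation: saturates near 1
print(steady_rate(w_exc=1.0, w_inh=6.0))  # over-inhibited: nearly silent
print(steady_rate(w_exc=3.0, w_inh=3.0))  # balanced: moderate, stable activity
```

The same qualitative story — too much excitation saturates, too much inhibition silences — is what a handcrafted model has to get right at every spatial scale simultaneously.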
As an example, imagine patching together a fresh network from scratch. If it doesn’t fire after you hit go, or alternatively, if all the cells always fire in exact synchrony, you must have done something wrong in how you linked them up. To these points, perhaps the most memorable terminology the paper reintroduces is the idea of soloists and choristers. While the soloist neurons tend to march to the beat of their own drum no matter what the larger network is doing, the choristers show spiking activity that is correlated with the population average.
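One simple way to operationalize that distinction — a sketch of the general idea, not necessarily the paper’s exact metric — is to correlate each cell’s binned spike counts with the average of all the other cells, on synthetic data cooked up for the purpose:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 neurons x 200 time bins of spike counts. The first ten cells
# follow a shared population drive ("choristers"); the last ten fire
# independently ("soloists"). Purely illustrative, not from the paper.
n_bins = 200
drive = rng.poisson(3.0, size=n_bins)             # shared population rhythm
choristers = rng.poisson(drive, size=(10, n_bins))
soloists = rng.poisson(3.0, size=(10, n_bins))    # ignore the drive entirely
spikes = np.vstack([choristers, soloists])

def population_coupling(spikes):
    """Correlate each cell with the mean activity of all *other* cells."""
    coupling = np.empty(len(spikes))
    for i, cell in enumerate(spikes):
        others = np.delete(spikes, i, axis=0).mean(axis=0)
        coupling[i] = np.corrcoef(cell, others)[0, 1]
    return coupling

coupling = population_coupling(spikes)
print("chorister coupling:", coupling[:10].mean())  # strongly positive
print("soloist coupling:  ", coupling[10:].mean())  # near zero
```

Cells with coupling near zero are the soloists; those tracking the population average are the choristers.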
This brings us to the question of location. Namely, how should you distribute all this excitation and inhibition so as to get realistic performance, as opposed to some kind of epileptic fit? The researchers weren’t just going in blind here. They used some existing information about the densities of neurons in different layers of cortex, and their organization into structured microcolumns.
One convenient hallmark of the somatosensory cortex is that you can readily map which area of the body projects to it. In rodents, the microstructure of the cortex innervated by the whiskers has a unique format, where each whisker generates its own cortical ‘barrel’ of closely linked cells. Although more is probably known about the properties of the barrel region than for any other area of rodent cortex, it’s probably fair to say there is too much information about them for the purposes of the Blue Brain model.
The researchers therefore chose a hindpaw region, rather than the familiar whisker region, likely to eliminate any preconceived notions or bias in how a model of cortex should operate. That, and the additional fact that humans (bearded or not) don’t have any barrel cortex. The network volume was populated with 31,000 neurons making 37 million synapses with each other.
The devil in the details here, and something that neuroscience has only recently come to see as a feature rather than a bug, is that synapses cannot be said to be ‘fungible’ with connections in the same way that pork bellies are fungible with gold. In other words, they cannot be substituted directly for each other. The ‘connection’ of one neuron to another is not composed of just one synapse, but rather the entire locus of synapses that it makes on that neuron. When that fact is taken into account, the model actually only has 8 million connections.
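The accounting itself is straightforward — those 37 million synapses collapse to 8 million connections, or roughly 4.6 synapses per connection on average. A minimal sketch with a hypothetical, hand-made synapse list shows the grouping:

```python
from collections import defaultdict

# Hypothetical flat synapse list: one (pre_neuron, post_neuron) pair per
# synapse. A real reconstruction would hold ~37 million of these entries
# for ~31,000 cells; the names here are invented for illustration.
synapses = [
    ("A", "B"), ("A", "B"), ("A", "B"),  # one connection, three synapses
    ("A", "C"),
    ("B", "C"), ("B", "C"),
    ("C", "C"),                          # an autapse counts as well
]

# A "connection" is the whole locus of synapses one neuron makes on another,
# so we group the flat list by (pre, post) pair.
connections = defaultdict(int)
for pre, post in synapses:
    connections[(pre, post)] += 1

print(len(synapses), "synapses collapse into", len(connections), "connections")
print("mean synapses per connection:", len(synapses) / len(connections))
```

Here 7 synapses reduce to 4 connections; in the Blue Brain model the same reduction takes 37 million synapses down to 8 million multi-synapse contacts.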
What this all really means is that multi-synapse contacts (and for that matter the autapses that a neuron makes onto itself) are not mere side effects to be written off as unavoidable consequences of the imperfect cellular toolkit that nature has provided. Rather, this asymmetric multi-synaptic embrace must be recognized as the principal design point of any neural partnership.
The above applies not just to real neuron networks, but to any that would be said to operate on neural principles. Therefore it is safe to say that the artificial neural networks used in Siri or Google Translate are not actually neurally inspired in any true sense of the word. If we have learned anything from founding father Ramón y Cajal, or for that matter from the Eyewire-style refinements to his original hand-drawn retinal maps, it is that the message one neuron delivers to its target is not a simple analog value that can be captured along with a timestamp a few bits long. Instead, the message is a neighborhood-wide physical perturbation that permeates a sizable portion of a target’s dendritic tree and percolates down within it all the way to the atomic scale.
What the Blue Brain project now does better than anything else to date is to hammer into iconic stone tablet form what it is that we are really talking about when we say something is an ‘anatomically-inspired’ neural network. That is to say, you start with a finite module of some piece of brain that is a fully-constrained network unto itself, and canonically define input and output conditions at its boundaries such that it can readily be adapted to larger scales.
Here, the boundaries are essentially the connections that enter and leave the network. In the Blue Brain model, not only do they robustly quantify intrinsic versus extrinsic connections and synapses, but also the actual physical lengths of the axons and dendrites comprising those links. They also provision specific ‘thalamic’ channels to drive the network out of ‘spontaneous’ activity and into a more physiologically relevant ‘evoked’ activity as would be seen upon real sensory activation. By contrast, practical implementations of anatomically inspired networks bent to some specific computational end have more artificially defined input and output layers, which must eventually time out and converge on an answer. Real brain networks, as we are aware, are open-ended in nearly every regard.
Something the project might additionally come to offer is a consensus template for the organization of what has come to be known in the business as a Brain Activity Map. Talk of these ‘BAMs’ was central to many of the funding arguments waged by the winners and losers in the parallel BRAIN Initiative in the US. Although many of the technologies then on the table were way ahead of their time, the basic issue of the best way to capture, store, and later present the essential features of spikes in a large network has yet to be determined. Sure, you can track them all, but that could quickly get out of control. While a Blue Brain simulation might run a billion calculations every 25 µs, a more advanced human-level model could run a billion-fold more. That would be some serious supercomputing. In fact you need a fairly sophisticated rig just to try to run the Blue Brain model in its current form. The good news is that it is right there on the Swiss EPFL website if you want to try.
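The back-of-envelope arithmetic implied by those figures — our own extrapolation from the article’s numbers, not anything stated in the paper — is worth spelling out:

```python
# "a billion calculations every 25 us" implies a sustained throughput of:
calcs_per_step = 1e9        # calculations per simulation step
step_seconds = 25e-6        # 25 microseconds per step
rate = calcs_per_step / step_seconds   # ~4e13 operations per second

# "a billion-fold more" for a hypothetical human-level model:
human_scale = rate * 1e9    # ~4e22 operations per second

print(f"Blue Brain scale:   {rate:.1e} ops/s")
print(f"Human-level scale:  {human_scale:.1e} ops/s")
```

Forty trillion operations per second is already serious supercomputer territory; a billion-fold beyond that is far past anything on today’s TOP500 list.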
One thing the Blue Brain model doesn’t seem to have is any glial cells — and why would it, if it is really just an electrical model? But real brains are much more than electrical, and even their spikes alone are more than that. That is a shortcoming which may need to be addressed sooner or later in order to move significantly forward.
As an example here, to try to model the full suite of larger mechanical phenomena that are now known to be intimately associated with the generation and propagation of spikes would probably require all the mechanically-active elements in the brain to be included. Perhaps the saving grace in taking into consideration the whole brain, including things like the vasculature, is that you gain additional constraints that can be applied at the top level of the model. In other words, the limitations acquired by necessitating a blood supply for delivering things like oxygen or nutrients, and removing heat or waste, are also features: They impose an energy equation on the whole operation — arguably the most powerful of methods available to simplify complicated physics problems.
One reason we made the early segue above to mention Lichtman’s connectome work is that, as indicated, neuroscience is now at the level of what can only be called ‘mitochondrial accounting’. If you are looking for the most compact way to describe a given volume of neural tissue, and presumably predict its recent and future activity, consider two options: A) You could do exactly what is now being attempted in many projects around the world — namely, capture all the detailed membrane geometry of those impossibly contorted miles of neural wiring and their terminal synapses, or B) you could just capture the specifics of what really matters in neurons — namely the fuss and maneuver of their controlling mitochondrial endosymbionts.
The thing to realize here is that if you choose option B, you pretty much get most of option A for free. A sufficiently detailed time lapse of the highly motile mitochondria would very quickly trace out every neurite and synapse. (Only rarely do they seem to get expelled and taken up across cell boundaries.) Furthermore, such imaging would also profile the energetic status of the entire interior of the cell, which, as we mentioned, may be the important thing for constraining big models. If you could look at the brain in real time and zoom in anywhere, just like it was Google Earth, the main activity you would see would be these mitochondria. Their movements, and changes in things like membrane potential and calcium concentration, may not be fast enough to generate and respond directly to high-speed spikes. Nonetheless, they constitute the bread and butter of neural activity.
There was only one other real disappointment with the Blue Brain report. Although they used many cell types with unique and dignified names — cells of the ilk of deep-layer Martinotti cells, von Economo neurons, and the giant pyramidal cells of Betz, just to name a few — we did not see many of the famed ‘Markram cells’ deployed in the model. Perhaps that will be an essential feature to look forward to in a forthcoming subcortical component.
As far as any real world applications for Blue Brain, there may already be a few. If they were to deposit their model over at NIST, it would rapidly become the new benchmark for testing out supercomputers. And if it can solve any AI problems it might even capture the attention of Google — perhaps even helping them out a bit with that pesky little ‘latency’ problem they seem to be having with their self-driving cars.