If the SFN Neuroscience conference is a big city, the CNS conference is a small village where everybody knows everybody else. Although computational neuroscience is maturing as a field and becoming increasingly part of mainstream neuroscience, it still retains that small-scale feeling. Part of it may be due to the deliberate culture creation of the EU Advanced Neuroscience Course and the Woods Hole courses, part might be that the PIs know each other from when the field was even smaller, but to a large degree it is due to the CNS conferences. A sizeable percentage of the researchers in the field go there every year, and it is a good way of seeing what is going on.
Here are some of my notes about interesting talks and presentations at the conference.
Mary Kennedy opened with a talk about simulation of the biochemical signalling in synaptic spines. We are now close to characterising most of the major proteins involved in the signalling cascade in the spine that underlies the induction of synaptic change, and hence much of learning. The downside is that even in what look like fairly straightforward cascades many surprising interactions can occur. She brought up one of my favourites: calmodulin binds up to four calcium ions, but it seems to have different effects depending on the number of ions bound (1-2 or 3-4), since it will activate different parts of the cascade due to various subtle cooperativities and competitive processes. Even worse, all the usual assumptions behind Michaelis-Menten dynamics (everything is well mixed, the substrate molecules are numerous and far outnumber the enzymes) are likely violated in the spine. However, there was one poster by someone who had done Monte Carlo simulations of the spine, apparently reaching the conclusion that MM is still a good approximation. Let's hope that remains true. In any case, a good talk that gave me the feeling that we might be reaching the inflexion point of the knowledge sigmoid about the synaptic spine.
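To get a feel for why those assumptions are shaky, here is a back-of-the-envelope sketch (the spine volume and calcium concentrations are ballpark figures I am assuming here, not numbers from the talk):

```python
# Rough estimate of how many free calcium ions a dendritic spine holds.
# Volume and concentrations are ballpark assumptions, not data from the talk.
AVOGADRO = 6.022e23                              # molecules per mole

spine_volume_um3 = 0.1                           # a typical spine head, ~0.1 cubic micrometre
spine_volume_litres = spine_volume_um3 * 1e-15   # 1 um^3 = 1e-15 L

for conc_uM, label in [(0.1, "resting Ca2+ (~100 nM)"),
                       (10.0, "peak Ca2+ during a spike (~10 uM)")]:
    molecules = conc_uM * 1e-6 * AVOGADRO * spine_volume_litres
    print(f"{label}: ~{molecules:.0f} free ions in the spine")
```

With single-digit ion counts at rest, "concentration" becomes a lumpy, noisy quantity, which is exactly where deterministic rate laws like Michaelis-Menten start to look questionable.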
Beth L. Chen gave a wonderful talk titled "Why are most neurons in the head?". She had explored what happens if neurons are placed so as to minimise the cost of the axonal connections between them and the sensory organs and muscles they subserve. The sensors and actuators provide the boundary conditions, and in her model the cost was quadratic (essentially a spring layout algorithm, without any node repulsion). The model system was C. elegans, and her model did a surprisingly good job of predicting where neurons should go. She then went on to study the outliers, the neurons for which her model predicted a location different from the real one. It turned out that in every case there was a clear explanation: either they were pioneer neurons that helped others grow in the right direction during development, or command neurons for forward/backward movement. Very nice.
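To give a flavour of the kind of optimisation involved, here is a toy one-dimensional version of the idea (my own illustration, not Chen's actual model): with a quadratic wiring cost and the sensors and muscles pinned in place, the optimal positions of the free neurons are just the solution of a small linear system.

```python
import numpy as np

# Toy quadratic wire-cost minimisation: nodes 0 and 1 are a fixed "sensor"
# and "muscle", nodes 2-4 are neurons free to move along one axis.
# Cost = sum over connections of w_ij * (x_i - x_j)^2; setting the gradient
# to zero for the free nodes gives a weighted-Laplacian linear system.
edges = [(0, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0), (4, 1, 1.0), (2, 4, 0.5)]
fixed = {0: 0.0, 1: 1.0}              # sensor pinned at 0.0, muscle at 1.0
free = [2, 3, 4]

n = 5
L = np.zeros((n, n))                  # weighted graph Laplacian
for i, j, w in edges:
    L[i, i] += w; L[j, j] += w
    L[i, j] -= w; L[j, i] -= w

clamped = list(fixed)
x_clamped = np.array([fixed[i] for i in clamped])
L_ff = L[np.ix_(free, free)]
L_fc = L[np.ix_(free, clamped)]
x_free = np.linalg.solve(L_ff, -L_fc @ x_clamped)   # optimal neuron positions
print(dict(zip(free, np.round(x_free, 3))))
```

The real model was of course in three dimensions with the actual C. elegans connectivity, but the structure of the computation is the same: quadratic costs turn placement into linear algebra.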
Her advisor Dmitri B. Chklovskii gave another good talk, about synaptic connectivity and neuron morphology. He started out with the wiring problem: how to wire a fixed number of neurons to each other using as little wiring as possible. With straight point-to-point connectivity (one axon per pair) the total axon volume ended up 30,000 times larger than in reality. With branching axons the number was still about 100 times too large. With branching axons and dendrites it was just twice as large, and with synaptic spines the total volume ended up 60% lower, which also leaves room for glia and blood vessels. Spines appear to be rather dynamic compared to axons and dendrites, appearing and disappearing at their near-crossings.
Ruggero Scorcioni described reconstruction of axonal arbors from scans; most neurons reconstructed to date have consisted mainly of soma and dendrites. Adding the axons makes them far more impressive, and hints at tricky interconnections. Overall, the work in Giorgio Ascoli's group at the Krasnow Institute at GMU on reconstructing neurons and then processing their morphology is producing some interesting results. Alexei Samsonovitch demonstrated very plausible models of granule and pyramidal cells based on hidden Markov models. They could also show statistically that dendrites grow away as if repelled by the cell body. This is rather tricky to explain biologically (if it were a chemical repellent the dendrites would be affected by all other cells too; if it were cytoskeletal rigidity the dendrites would not be able to bend and then regain their original direction, as they do). They suggested that one way of achieving it could be that cell spiking produces external pH gradients that guide the growth, which would be a rather impressive phenomenon.
Yael Niv presented some simulations relevant to the debate about what the dopamine signal represents in the brain. According to the results of Schultz et al. the signal encodes the temporal difference between reward and expected reward, just as in TD-learning: an unexpected reward creates a rise, the absence of an expected reward a decrease. But more recent results (Schultz et al. 2003) have suggested that it could instead be a signal of uncertainty. Yael presented simulations showing how those results could be reproduced in the normal TD-learning case; it is all about distinguishing inter- and intra-trial phenomena.
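For those who want the TD picture spelled out, here is a minimal tabular TD(0) sketch of the classic cue-then-reward experiment (my own toy illustration, not Niv's actual simulations); the prediction error delta = r + gamma*V(next) - V(current) is the quantity the dopamine signal is supposed to track.

```python
import numpy as np

# Minimal TD(0) sketch of the dopamine-as-prediction-error story. States are
# "time since cue"; the pre-cue state is clamped at 0 because cue timing is
# taken to be unpredictable. Reward arrives a fixed delay after the cue.
n_states, reward_state = 6, 5      # cue enters state 1; reward on entering state 5
alpha, gamma = 0.2, 1.0
V = np.zeros(n_states + 1)         # V[0] = pre-cue state, kept at 0

def run_trial(reward_delivered=True):
    """One trial: step from the pre-cue state through the post-cue states."""
    deltas = []
    for s in range(n_states):      # transition s -> s+1
        r = 1.0 if (s + 1 == reward_state and reward_delivered) else 0.0
        delta = r + gamma * V[s + 1] - V[s]
        if s > 0:                  # never update the clamped pre-cue state
            V[s] += alpha * delta
        deltas.append(delta)
    return np.round(deltas, 2)

print("before learning:", run_trial())          # burst at the reward
for _ in range(200):
    run_trial()
print("after learning :", run_trial())          # burst has moved to the cue
print("omitted reward :", run_trial(False))     # dip at the expected reward time
```

Before learning the error (and hence the modelled dopamine burst) sits at the reward; after learning it moves to the cue; an omitted reward produces a dip at the time the reward was expected, just as in the Schultz recordings.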
Erik Fransén showed how co-release (some synapses release two neurotransmitters, and sometimes the postsynaptic neuron signals back!) and conditioning enable some synapses to switch from inhibitory to excitatory under certain conditions. A very odd phenomenon that could be a mere curiosity or something very deep and important.
Terry Sejnowski gave an overview of his greatest hits, or perhaps rather the greatest spike train hits. Neurons really seem to have a high degree of fidelity: give them the same input signals and they respond in the same way. Even when they miss certain spikes, they leave them out in neat patterns. The precise patterns seem to repeat even across individuals (Reinagel & Reid 2000)! As he remarked, this is not neural coding but anti-coding: different codes for the same stimulus, very similar but leaving out different parts. He suggested that what we are seeing are temporal attractors, trajectories that are fairly stable against disruptions. One function could be to handle noisy and uncertain situations, where the temporal structure matters, while clear-cut and strong stimuli are handled by simple rate codes.
Alexander Grushkin presented a model of the emergence of hemispheric asymmetry, in which a genetic algorithm evolved connection weights under different fitness functions. It appears that lateralisation occurs when one needs to make trade-offs between, for example, accuracy, response time and energy consumption.
Eugene Izhikevich gave a great talk comparing different neuron models. He plotted computational complexity against the number of features (such as the ability to burst, show calcium spikes, etc.) and did a kind of consumer report. He especially picked apart why integrate-and-fire neurons are bad, and then went on to show that in most neuron models the important thing is to get the dynamics right in the vicinity of the fixed points corresponding to the resting state and the spiking threshold. He then showed a general form that could reproduce all the ~30 features he had listed while keeping the computational demands extremely low. The audience could not tell his simulated neurons apart from real neurons. Matlab files can be found at www.izhikevich.com
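The "general form" is his two-variable simple model; a bare-bones forward-Euler simulation looks roughly like this (the Matlab files on his site are the authoritative version, this is just a sketch using the standard "regular spiking" parameter set):

```python
import numpy as np

# Sketch of Izhikevich's two-variable simple model, forward Euler integration.
# v is the membrane potential (mV), u a recovery variable; the parameters
# below are the standard "regular spiking" set. See www.izhikevich.com for
# the original Matlab code and the other parameter sets.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt = 0.25                                 # time step in ms
v, u = -65.0, b * -65.0
spikes = []

for step in range(int(1000 / dt)):        # 1 second of simulated time
    t = step * dt
    I = 10.0 if t > 100 else 0.0          # step current injected after 100 ms
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30:                           # spike cutoff and reset
        spikes.append(t)
        v, u = c, u + d

print(f"{len(spikes)} spikes, first few at (ms): {np.round(spikes[:5], 1)}")
```

Swapping in other (a, b, c, d) combinations gives bursting, chattering, fast spiking and so on, which is the point of the "consumer report": one cheap equation, many firing patterns.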
As a finale, Miguel Nicolelis presented his work on neural recording from monkeys and using it to control robot arms. Up to 300 simultaneous recordings are now feasible, and one can do fascinating data mining: this is really large-scale neuroscience. One application was tracking the appearance of Parkinson symptoms in DAT knockout (DAT-KO) mice over the span of a few hours, and during their recovery when given L-dopa – a very good look into a complex dynamical process. Another application is real-time neurophysiology, where the signals can be used as inputs to computational models whose output can then be compared with brain activity and behaviour. One result of his monkey work seems to be that the brain changes to incorporate any tool we use to manipulate the environment into our body representation (as he put it, to Pelé the football was an extension of the foot). Another wild and stimulating claim was that there is no neural code: we may be constantly changing our internal representations. During the workshop on internal representations this was debated further, both from a semantic perspective (what we call a neural 'code' is often just a correlation between a signal and a behaviour or stimulus) and from practical perspectives. Regardless of the representation issue, it is clear (as Krishna Shenoy showed in his lecture) that we are getting better and better at building decoders that translate neural data into real-world action that fits reasonably well with the desired result. Maybe we will figure out the representations long after neural prosthetics and implants have become commonplace.
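To give a flavour of what such decoders do at their simplest, here is a least-squares sketch on purely synthetic data (a generic toy example of my own; the actual decoders in this line of work are more elaborate, e.g. filters over multiple time lags): fit a linear map from binned spike counts to hand velocity on training data, then predict on held-out data.

```python
import numpy as np

# Toy linear decoder: map binned spike counts from N simulated "neurons" to
# 2D hand velocity by ordinary least squares. Entirely synthetic data.
rng = np.random.default_rng(0)
n_neurons, n_bins = 50, 2000

true_tuning = rng.normal(size=(n_neurons, 2))        # each cell's preferred direction
velocity = rng.normal(size=(n_bins, 2))              # the "behaviour" to reconstruct
rates = np.clip(velocity @ true_tuning.T + 5, 0, None)
spikes = rng.poisson(rates)                          # noisy spike counts per bin

train, test = slice(0, 1500), slice(1500, None)
X_train = np.column_stack([spikes[train], np.ones(1500)])   # add a bias column
W, *_ = np.linalg.lstsq(X_train, velocity[train], rcond=None)

X_test = np.column_stack([spikes[test], np.ones(n_bins - 1500)])
pred = X_test @ W
corr = [np.corrcoef(pred[:, i], velocity[test][:, i])[0, 1] for i in range(2)]
print("decoding correlation (x, y):", np.round(corr, 2))
```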
Among the posters I found a few titbits:
Andreas Knoblauch had done a bit of statistical analysis of the weight distribution in Hebbian learning, naming the particularly nasty non-monotonic weight distribution one gets in a relatively sparse network the 'Willshaw distribution'. Doing a bit of statistics with this enabled him to construct a very neat spiking Willshaw-like network with an almost suspiciously high capacity and noise tolerance. Perhaps not very biologically plausible, but good for hardware implementation.
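For the curious, the basic Willshaw associative memory is simple enough to sketch in a few lines (this is the textbook binary version, not Knoblauch's spiking variant): weights are binary, learning is a logical OR over pattern pairs, and retrieval is a threshold on the dendritic sums.

```python
import numpy as np

# Textbook binary Willshaw associative memory (not Knoblauch's spiking model).
# Sparse binary patterns, binary weights learned by OR-ing outer products,
# retrieval by thresholding at the number of active input units.
rng = np.random.default_rng(1)
n, k, n_patterns = 1000, 10, 500           # units, active bits per pattern, pairs stored

def sparse_pattern():
    p = np.zeros(n, dtype=np.uint8)
    p[rng.choice(n, k, replace=False)] = 1
    return p

pairs = [(sparse_pattern(), sparse_pattern()) for _ in range(n_patterns)]

W = np.zeros((n, n), dtype=np.uint8)
for x, y in pairs:
    W |= np.outer(y, x)                    # clipped Hebbian learning: OR of outer products

x0, y0 = pairs[0]
retrieved = (W.astype(int) @ x0 >= k).astype(np.uint8)   # threshold = active inputs
errors = int(np.sum(retrieved != y0))
print("bit errors on recall of the first stored pair:", errors)
```

With sparse patterns the binary weight matrix stays mostly empty, which is why the capacity can get so surprisingly high before spurious recall bits start appearing.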
A poster by Roberto Fernández Galán demonstrated the formation of what looked like Hebbian cell assemblies in the honeybee olfactory system: cells that became correlated by a smell remained correlated afterwards, just as cells that became anti-correlated stayed anti-correlated. Nice to know that the same dynamics seems to recur from insects to mammals, when one compares with the McNaughton and Buzsáki experiments in rats and monkeys.
I presented a poster about STDP together with Erik Fransén. Our main question was how to achieve the high temporal precision of the STDP curve (milliseconds) when the molecules involved in the process have reaction time constants on the order of hundreds of milliseconds. Our approach was to use autocatalytic amplification to scale up the minute differences caused by different calcium traces: after a few hundred milliseconds they were big enough to trigger irreversible synaptic changes, and yet those changes reliably reflected the very fast timing transitions.
This model remains rather conceptual, so it was fun to compare it with the talk given by Mary Kennedy and try to map the known chemistry onto our variables. In our case we get the prediction that there should be some autocatalytic enhancement of the calcineurin system. Richard Gerkin also presented a conceptual STDP model that did rather well; their addition of a "veto variable" to prevent overshoot fits with some of the ideas Erik and I have discussed. In our model we simply assume that the synapse knows how much calcium comes from the presynaptic spike and how much from the backpropagating action potential. While this could be done using proteins located near different ion channels, a more plausible approach is to have proteins with different affinities that activate the different autocatalytic chains. But then one needs to prevent erroneous LTD when the calcium concentration declines after the spikes, and here some form of deactivation is needed.
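For readers who have not come across STDP before, the curve all these models are trying to explain is usually summarised as a pair of exponentials in the pre/post spike time difference, with a window of only a few tens of milliseconds (this is the generic textbook form, not our biochemical model):

```python
import numpy as np

# The standard phenomenological STDP window (generic textbook form, not the
# biochemical model from the poster): weight change as a function of
# dt = t_post - t_pre. Pre-before-post (dt > 0) potentiates, post-before-pre
# depresses, with time constants of a few tens of milliseconds.
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0          # ms

def stdp(dt_ms):
    if dt_ms > 0:
        return A_plus * np.exp(-dt_ms / tau_plus)     # LTP branch
    return -A_minus * np.exp(dt_ms / tau_minus)       # LTD branch

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+4d} ms  ->  dw = {stdp(dt):+.4f}")
```

The puzzle our poster addressed is that this millisecond-scale asymmetry has to be read out by biochemistry whose own time constants are roughly a hundred times slower.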
Finally, Anatoli Gorchetnikov demonstrated that a simple rule of multiplying the membrane current and potential could produce workable STDP in a cell model. So regardless of what one thinks STDP is good for (there were lots of posters about that), it seems we are starting to figure out its implementation, from the phenomenology down to the biochemistry.
All in all, a very stimulating conference. The only letdown was the lack of the Rock’n-roll jam session from previous years. The field may mature, but it would be boring if the researchers did.
Posted by Anders2 at July 28, 2004 05:42 PM
A very nice report from the conference, Anders. Thank you for posting it. I hope you do the same for Transvision, esp. since I cannot attend this year.
For those readers who, like me, didn't know the acronym "STDP", it means Spike Timing Dependent Plasticity. You can read the Wikipedia entry here: http://en.wikipedia.org/wiki/Spike_timing_dependent_plasticity
Posted by: Jay Dugger at August 2, 2004 04:40 PM

Thanks! You all know you can comment if one of my explanations or terms doesn't make sense, and then I'll add a bit more explanation.
Posted by: Anders at August 3, 2004 03:25 AM