For many decades, neurons were considered to be the elementary computational units of the brain and were assumed to summate incoming signals and elicit action potentials only in response to suprathreshold stimuli. Although modelling studies predicted that the single neuron constitutes a much more powerful computational entity, able to perform an array of nonlinear calculations, this possibility was not explored experimentally until the discovery of active mechanisms in the dendrites of most neuron types. Here, we review several modelling studies that have addressed information processing in single neurons, starting with those characterizing the arithmetic of different dendritic components, to those tackling neuronal integration at the cell body and, finally, those analysing the computational abilities of the axon. We present modelling predictions along with supporting experimental data in an effort to highlight the significant contribution of modelling work to enhancing our understanding of single-neuron arithmetic.
Understanding how the brain works remains one of the most exciting and intricate challenges of modern biology. Despite the wealth of information that has accumulated over the past decades about the molecular and biophysical mechanisms that underlie neuronal activity, comparable advances have yet to be made in understanding the rules that govern information processing and the relationship between neuronal structure and function.
Computational models provide a theoretical framework together with a technological platform for enhancing our understanding of nervous system functions. Certain tools are suitable for efficiently analysing and interpreting complex data sets, such as multi‐channel recordings from hundreds of neurons, whereas others are used to simulate the activity of single cells, neural networks or systems of networks at various levels of abstraction. The development and application of such modelling tools enable researchers to quantitatively investigate several hypotheses by using interactive models of the systems under study. When used in conjunction with experimental techniques, these models facilitate hypothesis testing and help to identify key follow‐up experiments.
In this review, we discuss several computational studies in which realistic biophysical models have been used to elucidate the computational tasks performed by a neuron. We focus on single‐neuron models that incorporate a significant level of detail and compare modelling predictions with experimental findings. Although a great amount of work has also been devoted to modelling neural components as well as neuronal assemblies at a more abstract level, reviewing these studies is not the purpose of this article.
Whether as simple as bipolar cells in the retina or as complex as Purkinje cells in the cerebellum (Ramon y Cajal, 1933), most neurons are composed of three main structural units: the dendrites, the soma (cell body) and the axon. For the past few decades, axons and dendrites were considered to be simple transmitting devices that communicate signals to and from the soma, in which thresholded computations take place. As a result, neuronal cells were initially represented as spherical point neurons—consisting only of a cell body—and information transfer was thought to lie entirely in their average firing rates (McCulloch & Pitts, 1943). However, primarily computational, and more recently physiological, studies have shown that variations in the morphology and ionic conductance composition of different neurons provide the cell with enhanced computational abilities far exceeding those captured by a point neuron.
Computing with dendrites: new roles for old structures
The old view that dendrites are merely passive cables that relay incoming signals to the cell body no longer holds true. In the light of accumulating evidence highlighting the active role of dendrites in signal integration, these structures seem able to perform a variety of computational tasks, including temporal integration, signal amplification and attenuation, and detection of coincident incoming inputs (for a recent review, see London & Hausser, 2005). In this section, we further elucidate the role of dendrites in the information processing capacity of the neuron by focusing on insights gained primarily from modelling studies and by using a bottom‐up approach: starting from the smallest dendritic subunit—the spine—up to the effect of network activity on dendritic and, subsequently, neuronal function.
Computations carried out by excitable spines
Dendritic spines were anatomically identified by Ramon y Cajal in 1911, who referred to them as espinas owing to their resemblance to thorns on flower stems (for a review, see Segal, 2002). Theoretical findings first indicated that the anatomical characteristics of spines, as well as the possible presence of voltage-gated ion channels, allow for compartmentalized gain modulation of synaptic inputs in spine heads—that is, information can be combined nonlinearly from two or more sources (Segev & Rall, 1998). Models predicted that strong inputs are able to initiate local dendritic spikes (Baer & Rinzel, 1991), which in turn could activate additional nearby spines (Shepherd et al, 1985) and therefore locally amplify incoming inputs. As schematically depicted in Fig 1, individual spines can perform nonlinear integration of incoming coincident signals, and their interaction results in a spatially restricted enhancement of dendritic events. By contrast, if synaptic stimulation is sparse, the added resistance and capacitance load provided by the spine membrane, coupled with the narrow spine neck, would act as a local filter, promoting linear integration (Yuste & Urban, 2004). A recent compartmental modelling study suggested that spine geometry, together with a high density of sodium ion (Na+) channels on spines, might explain the efficacy with which somatic action potentials invade apical dendrites in CA1 pyramids (Tsay & Yuste, 2002, 2004). This implies a role for these structures in controlling back-propagating signals and perhaps in coincidence detection (Tsay & Yuste, 2002, 2004). Taken together, these studies reveal the exciting possibility that dendritic sections containing small clusters of spines act as individual computational units.
Interestingly, in a recent experimental study, short (around 40 μm) basal dendritic compartments in layer V pyramidal neurons were shown to summate local signals as individually thresholded sigmoidal units (Polsky et al, 2004), which supports this hypothesis.
If sparse and clustered synaptic inputs can be differentially sensed by a dendritic section, is it possible that different spatial arrangements of synaptic inputs are differentially perceived by the cell? Early neuron models incorporating passive dendritic properties and applying the cable theory—that is, how voltage changes are propagated along dendritic segments—indicated a possible linear summation of inputs arriving at separate parts of the dendritic tree (Rall, 1959). This implied that location is not important. By contrast, inputs that are close together were thought to combine sublinearly due to the activation of a shunting current (Rall et al, 1967). Despite the simplicity of the early models and the presence of various active membrane mechanisms that could in principle support supralinear dendritic integration, experimental evidence in various neuron types has mostly reinforced the linear or sublinear summation rule (Cash & Yuste, 1999; Magee & Cook, 2000; Tamas et al, 2002). However, at least two studies have shown the presence of powerful thresholding events isolated in the thin dendrites of neocortical (Schiller et al, 2000) and CA1 (Wei et al, 2001) pyramidal cells.
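The shunting sublinearity described by Rall and colleagues falls out of a one-line steady-state calculation for conductance-based synapses on a single passive compartment. A minimal sketch (the conductance and reversal-potential values below are illustrative assumptions, not measurements):

```python
def steady_depolarization(g_syns, g_leak=1.0, e_syn=60.0):
    """Steady-state depolarization (mV above rest) of one passive
    compartment receiving conductance-based synapses. Because each
    synapse adds conductance, co-located inputs reduce each other's
    driving force and therefore combine sublinearly."""
    g_tot = sum(g_syns)
    return g_tot * e_syn / (g_leak + g_tot)

one = steady_depolarization([0.2])          # a single synapse
both = steady_depolarization([0.2, 0.2])    # two synapses at the same site
print(one, both, 2 * one)  # both < 2*one: sublinear, shunted summation
```

The same two synapses placed on electrically remote branches would each see the full driving force, which is why distributed inputs can summate close to linearly while clustered ones do not.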
This idea of spatial compartmentalization in the neuron and its role in information processing was explored further with the use of a detailed CA1 pyramidal neuron model (Poirazi et al, 2003a,b). According to the model, each apical oblique dendrite—or part of it—acts as an independent computational unit that summates inputs using a sigmoidal activation function. Different branch outputs are then linearly combined at the cell body. Both of these predictions were verified experimentally in a layer V pyramidal neuron (Polsky et al, 2004), and a recent study confirmed that radial oblique dendrites of CA1 pyramidal neurons function as single integrative compartments (Losonczy & Magee, 2006). As shown in Fig 2, layer V neocortical neurons linearly summate between‐branch excitatory postsynaptic potentials (EPSPs), but implement a sigmoidal activation function for within‐branch EPSPs (Fig 2B), as predicted by models of CA1 neurons (Fig 2A). In other words, thin dendritic branches seem to be able to combine incoming signals according to a thresholding nonlinearity, in a similar way to a typical point neuron. Interestingly, when synaptic inputs vary in both their temporal and spatial distribution, the distal apical trunk of a CA1 pyramidal cell operates in two fundamentally distinct integration forms (Gasparini & Magee, 2006). Asynchronous or spatially distributed synaptic inputs—similar to those occurring during theta oscillations—summate linearly. Synchronous and clustered inputs—similar to those occurring during sharp waves—summate according to a steep sigmoidal nonlinearity. This bimodal dendritic integration code allows a single cell to perform two different state‐dependent computations: input strength encoding during theta states and feature detection during sharp waves (Gasparini & Magee, 2006).
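The two summation modes reported by Polsky et al (2004) can be sketched with a toy model in which each branch applies a sigmoid to its summed EPSPs and the soma adds branch outputs linearly; the sigmoid parameters here are arbitrary illustrative choices, not fitted values:

```python
import math

def branch_output(epsp_sum, theta=4.0, slope=1.5, peak=8.0):
    """Thresholded sigmoid applied to the summed within-branch EPSPs
    (mV). theta, slope and peak are illustrative assumptions."""
    return peak / (1.0 + math.exp(-(epsp_sum - theta) / slope))

def soma_response(branches):
    """Each inner list holds the EPSPs landing on one branch: branches
    are sigmoided individually, then summed linearly at the soma."""
    return sum(branch_output(sum(b)) for b in branches)

# two 2-mV inputs on the SAME branch engage the sigmoid together...
within = soma_response([[2.0, 2.0]])
# ...whereas the same two inputs on DIFFERENT branches are sigmoided
# separately and then combined linearly
between = soma_response([[2.0], [2.0]])
print(within, between)  # the clustered arrangement yields the larger response
```

The point of the sketch is that total synaptic drive is identical in both cases; only the spatial arrangement differs, yet the output does too.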
Computing with regenerative dendritic events
What would be the benefit of a cell consisting of dendrites that are able to isolate complex events? Early physiological studies identified the presence of active dendritic events in neurons (Kandel & Spencer, 1961); however, these were not investigated further experimentally at that time. Modelling studies predicted a role for Na+-mediated dendritic spikes—initially modelled with Hodgkin–Huxley conductances—in allowing the back-propagation of action potentials into the dendritic tree (Rall & Shepherd, 1968), as well as in enabling coincidence detection between proximal and distal inputs (Softky & Koch, 1993).
A closer evaluation of dendritic events revealed that these spikes can be mediated by Na+ (Golding & Spruston, 1998) or calcium ion (Ca2+; Yuste et al, 1994) channels in the distal dendrites, as well as by N‐methyl D‐aspartate (NMDA) channels (Schiller et al, 2000) or a combination of Na+ and NMDA channels (Ariav et al, 2003) in the basal dendrites of pyramidal neurons. The quest is now to reveal their role in neuronal function. Although these spikes can be confined to their dendritic site of origin (Wei et al, 2001; Schiller et al, 2000; Zhu, 2000), physiologically relevant situations such as large, suprathreshold synaptic stimuli in the distal dendrites or the activation of several branches together allow these spikes to act globally and modulate neuronal output (Zhu, 2000; Larkum et al, 2001). In a compartmental CA1 neuron model, dendritic spike initiation in response to perforant path (PP) stimulation, which is confined in the distal tuft, could be transferred to the soma by activation of the Schaffer collateral (SC) pathway (Jarsky et al, 2005). Similarly, dendritic Ca2+ conductances in the distal tuft of a layer V cortical neuron model, which are unable to initiate an action potential at the soma, could be amplified by coincident back‐propagating action potentials (Larkum et al, 2004); in addition, dendritic Ca2+ spikes have been suggested to convert single spikes into bursts in CA3 pyramidal neuron models (Traub et al, 1991), leading to an enhanced neuronal response.
Although the above studies indicate that dendritic events might amplify neuronal gain and facilitate coincidence detection of inputs, simulated experiments in a detailed CA1 pyramidal neuron model indicate that distal dendritic activation could bidirectionally gate suprathreshold SC input (E.K.P. and P.P., unpublished data). In particular, PP theta-burst stimulation coincident with, or slightly preceding, regular SC input initiates Ca2+ spikes in the distal dendrites and transforms a previously regular firing response into bursting (Fig 3A,B). By contrast, when subthreshold PP stimulation precedes the SC input by a few hundred milliseconds, the distal dendrites are hyperpolarized owing to enhanced γ-aminobutyric acid (GABA) transmission. This results in a reduced gain of neuronal output, as seen by the blocking of the spikes induced by SC input alone (Fig 3A,C). This PP-induced 'spike blocking' was previously reported experimentally (Dvorak-Carbone & Schuman, 1999). Collectively, dendritic regenerative events provide a means by which several localized events are combined to modulate neuronal output, thus expanding the response flexibility of single neurons.
Normalizing effects on synaptic integration
Whereas dendritic regenerative events allow gain modulation of neuronal output, passive properties and potassium ion (K+) conductances in the dendrites act to spatially normalize and temporally integrate inputs, which provides the neuron with a different mode of information processing.
These passive properties (such as membrane time constant, input resistance and dendritic length) allow for differential integration of synaptic inputs that arrive at distal or proximal parts of the dendritic tree owing to the 'large voltage attenuation and significant temporal delay' of propagated signals (Koch & Segev, 2000). At the same time, incorporation of the hyperpolarization-activated cation conductance (Ih) in a CA1 compartmental model was shown to account for the experimentally observed normalization of EPSPs that originate from different parts of the dendritic tree, so that all inputs induce similar depolarizations at the cell body (Golding et al, 2005; Magee & Cook, 2000). Additional modelling experiments indicated that Ih might be involved in setting the temporal window for input summation around subthreshold levels, thus enabling coincidence detection and minimizing the effectiveness of non-synchronized inputs (Migliore et al, 2004; Migliore, 2003). Dendritic K+ conductances could either support or counteract the effect of Ih on temporal summation (Day et al, 2005).
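The intuition that a shorter effective membrane time constant narrows the window for temporal summation can be illustrated with two alpha-function EPSPs on a passive membrane. This sketch does not model Ih kinetics themselves, only their net effect of shortening the time constant; all numbers are illustrative:

```python
import math

def peak_depolarization(dt_between, tau):
    """Peak compound depolarization (in units of a single EPSP's peak)
    of two identical alpha-function EPSPs separated by dt_between ms,
    on a passive membrane with time constant tau (ms)."""
    def alpha(t):
        # unit-peak alpha function: peaks at 1.0 when t == tau
        return (t / tau) * math.exp(1.0 - t / tau) if t > 0 else 0.0
    # sample the compound response finely and take its maximum
    return max(alpha(t) + alpha(t - dt_between)
               for t in (i * 0.1 for i in range(1, 2000)))

# With a long time constant, inputs 10 ms apart still summate strongly;
# with a short one (as when a shunting conductance is active) they do not.
wide = peak_depolarization(10.0, tau=20.0)
narrow = peak_depolarization(10.0, tau=5.0)
print(wide, narrow)
```

With the shorter time constant, only near-synchronous inputs reach the compound peaks needed to cross threshold, which is one way to read the coincidence-detection role attributed to Ih above.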
Finally, the long‐standing but quite neglected effect of background noise due to network activity on neuronal information processing should be considered. According to the work of Bernander and colleagues (1991), background activity in a passive neuron model dampens the effectiveness of asynchronous—but not synchronous—inputs in generating a somatic action potential. This in turn facilitates the distinction between ‘unimportant’ and ‘meaningful’ signals, respectively. In a more detailed cortical neuron model that incorporates active dendritic conductances, intense synaptic network activity was shown to increase the membrane conductance, promote the location‐independent effect of inputs arriving onto different dendritic regions (but see London & Segev, 2001) and increase the probability for dendritic spike initiation and its forward propagation to the axon (Rudolph & Destexhe, 2003). Thus, incoming background noise from network activity could greatly influence the integrative properties of a neuron—for example, by modulating the spatiotemporal window for dendritic nonlinearities (Azouz, 2005).
In conclusion, the neuron does not behave as a single point neuron; rather, it might consist of many point-neuron-like subunits, such as a cluster of spines or a stretch of dendrite, each with its own integration rules determined by its spatial location and by the temporal architecture of its incoming inputs. When locally induced signals manage to escape their subunit, they are shaped by global cellular parameters and by the overall network activity that the cell receives, which promotes a quasi-linear interaction mode.
The overall picture emerging from this analysis is that a single neuron could be decomposed into a multi-layer neural network, able to perform all sorts of nonlinear computations (London & Hausser, 2005). Interestingly, the average firing rate of a detailed CA1 model in response to hundreds of different input patterns was accurately predicted by a two-layer neural network abstraction (Fig 2), in which individual oblique dendrites provided the first layer and the soma acted as the output layer (Poirazi et al, 2003b). This implies a much larger storage capacity than originally assumed for single neurons. According to another computational study, the pattern discrimination capacity of such a cell exceeds that of a point neuron by at least one order of magnitude (Poirazi & Mel, 2001). An even more complex single-neuron unit proposed by Hausser & Mel (2003) entails a two-compartment model in which the distal tuft acts as one compartment and the thin perisomatic branches act as the other. Both compartments act as two-layer networks whose outputs combine at the cell body, giving rise to an even more powerful, three-layer computing unit.
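A minimal numerical sketch of the two-layer abstraction follows; all parameter values are illustrative assumptions, not those fitted by Poirazi and colleagues:

```python
import math

def sigmoid(x, theta, slope):
    return 1.0 / (1.0 + math.exp(-(x - theta) / slope))

def two_layer_rate(branch_drive, branch_theta=2.0, branch_slope=0.5,
                   soma_theta=4.0, soma_slope=1.0, max_rate=100.0):
    """Two-layer abstraction: each dendritic subunit applies its own
    sigmoid to the drive it receives (layer 1), and the soma applies
    an output nonlinearity to the summed subunit outputs (layer 2)."""
    subunit_out = [sigmoid(d, branch_theta, branch_slope)
                   for d in branch_drive]
    return max_rate * sigmoid(sum(subunit_out), soma_theta, soma_slope)

# ten subunits; the same total drive, arranged two ways
clustered = [5.0, 5.0] + [0.0] * 8   # all drive on two branches
dispersed = [1.0] * 10               # spread over all ten branches
rate_clustered = two_layer_rate(clustered)
rate_dispersed = two_layer_rate(dispersed)
print(rate_clustered, rate_dispersed)
```

Because equal total drive yields different output rates depending on how it is partitioned across subunits, such a unit can discriminate input patterns that a single-sigmoid point neuron would treat as identical, which is the intuition behind the increased capacity estimates cited above.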
Information processing at the cell body
Dendrites contribute to nonlinear summation of inputs, whereas the soma might support a different kind of information processing—that of enabling a persistent firing mode in the absence of stimulation. Recent experimental and modelling studies have highlighted the importance of somatic intrinsic membrane mechanisms in generating and maintaining persistent activity, in addition to the traditional network mechanisms (for a review, see Major & Tank, 2004). In vitro work in the entorhinal cortex (Egorov et al, 2002; Tahvildari & Alonso, 2005) showed that a single neuron is able to generate graded persistent activity under pharmacological stimulation of muscarinic acetylcholine receptors in response to somatic or synaptic stimulation, owing to activation of a slow Ca2+‐dependent mixed ionic (CAN) conductance. Modelling work has emphasized a possible involvement of the slow temporal decay of the EPSP (Lisman et al, 1998), the CAN conductance (Tegner et al, 2002) or the Ca2+‐induced Ca2+ release mechanism (Loewenstein & Sompolinsky, 2003) in the maintenance of a stable persistent state at low physiological frequencies (10–50 Hz).
Persistent activity in vivo has also been observed in the hippocampus (Wirth et al, 2003), although the underlying mechanisms are unclear. Ongoing work in our laboratory shows that persistent activity in a detailed CA1 pyramidal neuron model can be induced in response to theta‐burst synaptic stimulation as well as in response to somatic stimulation under the influence of cholinergic modulation. The model supports a role for a slow, Ca2+‐dependent tail current in maintaining sustained activity in hippocampal neurons (Poirazi, 2005). The ability of single cells to be persistently active further enhances their computational power by adding another mode to their repertoire of complex functions.
Computing with axons
Axons provide a medium through which information, in the form of action potentials, flows across neuronal assemblies. Several computational studies have provided insights into the ionic mechanism of action-potential generation, particularly the seminal work of Hodgkin and Huxley, as well as the action-potential initiation site (reviewed in Stuart et al, 1997). More recent studies focusing on the reliability and accuracy of action-potential generation and propagation indicate that these properties, which are crucial for normal information processing, could be modified by changes in axonal geometry and ionic conductance composition (Segev & Schneidman, 1999; Debanne, 2004). The work of Goldstein & Rall (1974) in simulated axons with changing diameter and different branching patterns first implicated these structural characteristics in modifying the action-potential waveform and propagation speed. After quantifying the effect of such morphological changes on action-potential propagation velocity, another computational study suggested that synaptic boutons in different terminals are activated asynchronously in a reconstructed axon of a layer V neuron in the somatosensory cortex (Manor et al, 1991). The density and type of ionic channels along the axon and at branch points have also been suggested to gate action-potential propagation. For example, activation of just a few clusters of A-type K+ channels has been shown to gate axonal propagation of action potentials in CA3 neurons in both simulations and experiments (Debanne et al, 1997; Kopysova & Debanne, 1998). When combined, these findings imply that axons are much more than simple conducting devices for action potentials. Enriched with a variety of computational abilities, axons seem to be complex transmitting devices whose role in neuronal information processing is worth investigating thoroughly.
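For reference, the action-potential mechanism of Hodgkin and Huxley can be reproduced in a few dozen lines with the standard 1952 squid-axon parameters (written in the modern voltage convention; forward-Euler integration is used for brevity):

```python
import math

def hh_spike_count(i_amp, t_end=50.0, dt=0.01):
    """Single Hodgkin-Huxley patch driven by a constant current step
    i_amp (uA/cm^2); returns the number of action potentials fired,
    counted as upward crossings of 0 mV."""
    c_m = 1.0                            # uF/cm^2
    g_na, g_k, g_l = 120.0, 36.0, 0.3    # mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.4  # mV
    v, m, h, n = -65.0, 0.05, 0.6, 0.32  # near-rest initial state
    spikes, above, t = 0, False, 0.0
    while t < t_end:
        # standard rate constants (mV, ms)
        a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt / c_m * (i_amp - i_ion)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        if v > 0.0 and not above:
            spikes += 1
        above = v > 0.0
        t += dt
    return spikes

print(hh_spike_count(0.0), hh_spike_count(15.0))  # no input vs suprathreshold step
```

The all-or-none spikes this system produces are the currency that the axonal geometry and channel-distribution effects discussed above act upon, by shaping when and where each spike propagates.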
For several years, neuroscientists believed that the brain's transistor or fundamental processing unit was the neuron itself, which collects and processes incoming signals from neighbouring cells. In this review, we suggest that the morphological and ionic properties of the dendrites, the soma and the axon provide these structures with an array of computational abilities that might enable them to contribute differentially to neuronal function.
Dendrites seem to be key players in functions such as binocular disparity (Archie & Mel, 2000) and directional selectivity in the visual system of various species (Single & Borst, 1998; Euler et al, 2002; Vaney & Taylor, 2002), as well as in improving sound localization (Agmon‐Snir et al, 1998) and in supporting the transition between encoding and retrieval modes of associative memory systems (Hasselmo et al, 1996; Dvorak‐Carbone & Schuman, 1999). Conversely, persistent activity maintained by somatic mechanisms has been suggested to represent a cellular correlate of working memory functions (Goldman‐Rakic, 1995). Finally, propagation delays of the action potential along the axon have been attributed a role in precise temporal coding in the auditory system of the barn owl (Carr et al, 2001), whereas axonal Na+ channels have been suggested to act as a memory reservoir for previous activity levels (Segev & Schneidman, 1999).
Linking computational properties to behaviour is the ultimate challenge for both modelling and experimental studies of the future. Recent papers applying modelling, physiological, molecular, genetic and behavioural techniques in Drosophila and mice have shown the contribution of different voltage‐dependent K+ conductances in light processing by photoreceptors and in reversing age‐induced impairments in learning and memory, respectively (Vahasoyrinki et al, 2006; Murphy et al, 2004). Such multidisciplinary approaches—in which models are used to formulate experimentally testable predictions and experiments are used to verify the predictions and refine the models—will enable a more thorough investigation of how different neuronal components and the cell as a whole contribute to information processing capacity and behaviour.
The following open questions could provide fertile ground for collaborations among molecular biologists, geneticists, physiologists, modellers and behaviourists for further explorations of the mysteries of the brain. Do specific behaviours require certain neuronal computational tasks? Which parts of the neural circuit or the neuron itself are responsible for these tasks? What are the underlying molecular mechanisms for the distinct operating modes of neuronal integration? Such holistic approaches should lend support to the growing idea reinforced by this review: that something smaller than the cell lies at the heart of neural computation.
This work was supported by the Alexander S. Onassis Public Benefit Foundation (K.S.), the General Secretariat of Research and Technology, ΠENEΔ 01EΔ311 (E.K.P.), and the EMBO Young Investigator Programme.
- Copyright © 2006 European Molecular Biology Organization
Eleftheria Kyriaki Pissadaki, Panayiota Poirazi (who is an EMBO Young Investigator) & Kyriaki Sidiropoulou