When Neurons Forget: The Ornstein-Uhlenbeck Process in Neuroscience

The brain is noisy. Individual neurons fire irregularly, synaptic transmission is unreliable, and the electrical potential across a cell membrane fluctuates continuously even in the absence of any external stimulus. For decades, this noise was treated as a nuisance — something to be averaged out to reveal the true signal underneath. It turned out to be a signal in its own right.

The mathematical tool that changed how neuroscientists think about this noise is a stochastic differential equation originally derived in 1930 to describe the motion of a Brownian particle in a viscous fluid. The **Ornstein-Uhlenbeck process** now sits at the heart of the leaky integrate-and-fire model, the most widely used model of single neuron dynamics. The connection is not an analogy. It is exact.


The Leaky Integrate-and-Fire Neuron

A neuron integrates incoming synaptic currents and fires an action potential when its membrane potential $V(t)$ crosses a threshold $V_{\text{th}}$. Between firings, the subthreshold dynamics are well approximated by a resistor-capacitor circuit:

$$\tau_m \frac{dV}{dt} = -(V - V_{\text{rest}}) + R\,I(t) \tag{1}$$

where $\tau_m = RC$ is the membrane time constant (typically 10–30 ms in cortical neurons), $V_{\text{rest}}$ is the resting potential, and $I(t)$ is the total synaptic input current. The term $-(V - V_{\text{rest}})$ is a leak: the membrane continuously bleeds charge, pulling the potential back toward rest. Without input, the neuron forgets any perturbation on a timescale $\tau_m$.
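
To see the forgetting timescale concretely: with the input switched off, equation (1) is solved by a pure exponential relaxation to rest. A minimal sketch, with illustrative parameter values (millivolts and seconds, not fitted to any cell):

```python
import numpy as np

# Equation (1) with I(t) = 0: a perturbation decays back to rest with
# time constant tau_m. All values are illustrative (mV, seconds).
tau_m = 0.02                 # 20 ms membrane time constant
v_rest, dv0 = -70.0, 10.0    # resting potential and initial perturbation

t = np.arange(0.0, 0.1, 1e-4)
v = v_rest + dv0 * np.exp(-t / tau_m)   # exact solution without input

# after one time constant, the perturbation has shrunk to 1/e of its size
remaining = (v[200] - v_rest) / dv0     # t[200] = tau_m = 20 ms
print(round(remaining, 3))              # 0.368
```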

Now replace $I(t)$ with its stochastic approximation. A cortical neuron receives thousands of synaptic inputs per second. By a diffusion approximation (the central limit theorem applied to Poisson spike trains), the aggregate input current is well approximated by a constant mean plus Gaussian white noise. Absorbing the factor of $R$ into the input's mean $\mu_{\text{syn}}$ and amplitude $\sigma_{\text{syn}}$, equation (1) becomes:

$$dV_t = \frac{1}{\tau_m}\left(V_{\text{rest}} + \mu_{\text{syn}} - V_t\right)dt + \frac{\sigma_{\text{syn}}}{\tau_m}\,dW_t \tag{2}$$

This is the Ornstein-Uhlenbeck SDE with long-run mean $\mu = V_{\text{rest}} + \mu_{\text{syn}}$, speed of reversion $\theta = 1/\tau_m$, and noise amplitude $\sigma = \sigma_{\text{syn}}/\tau_m$. The leaky integrate-and-fire neuron, between spikes, is an OU process.
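
A minimal Euler-Maruyama discretisation of equation (2) makes this concrete. The sketch below ignores the spike threshold, and every parameter value (millivolts and seconds) is illustrative rather than fitted to any cell:

```python
import numpy as np

def simulate_ou_membrane(v_rest=-70.0, mu_syn=5.0, sigma_syn=0.5,
                         tau_m=0.02, dt=1e-4, t_max=5.0, seed=0):
    """Euler-Maruyama integration of equation (2), spike threshold ignored."""
    rng = np.random.default_rng(seed)
    n = int(round(t_max / dt))
    v = np.empty(n)
    v[0] = v_rest
    mu = v_rest + mu_syn                 # long-run mean: V_rest + mu_syn
    for k in range(n - 1):
        drift = (mu - v[k]) / tau_m * dt                       # leak toward mu
        noise = (sigma_syn / tau_m) * np.sqrt(dt) * rng.standard_normal()
        v[k + 1] = v[k] + drift + noise
    return v

v = simulate_ou_membrane()
print(v.mean())   # hovers near mu = -65 mV
```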


What the Stationary Distribution Tells You

Below threshold, and ignoring the reset mechanism for a moment, the OU process has a stationary distribution:

$$V_\infty \sim \mathcal{N}\!\left(\mu,\; \frac{\sigma_{\text{syn}}^2}{2\tau_m}\right) \tag{3}$$

This is not merely a mathematical convenience. It predicts the distribution of membrane potential fluctuations that can be measured directly in whole-cell patch clamp recordings. The ratio $\sigma_{\text{syn}}^2 / (2\tau_m)$ has a concrete biological interpretation: it is the balance between how hard the noise drives the membrane and how fast the leak pulls it back. In this parametrisation, with $\sigma_{\text{syn}}$ held fixed, a longer membrane time constant means a narrower distribution: the membrane acts as a low-pass filter on its input, and a slower membrane averages away more of the noise.
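
Equation (3) can be checked against simulation. The sketch below uses the exact one-step transition of the OU process (so there is no discretisation error) and compares the empirical variance across many independent trajectories with the predicted $\sigma_{\text{syn}}^2/(2\tau_m)$; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
tau_m, sigma_syn = 0.02, 0.5             # s, mV*sqrt(s); illustrative values
mu = -65.0                               # V_rest + mu_syn, in mV
dt, n_steps, n_cells = 1e-3, 2000, 2000  # 2 s of simulated time, 2000 neurons

var_pred = sigma_syn**2 / (2 * tau_m)    # equation (3): stationary variance
decay = np.exp(-dt / tau_m)
v = np.full(n_cells, mu)                 # start every trajectory at the mean
for _ in range(n_steps):
    # exact OU transition over a step of length dt: decayed displacement plus
    # Gaussian noise with exactly the right conditional variance
    v = mu + (v - mu) * decay \
        + np.sqrt(var_pred * (1.0 - decay**2)) * rng.standard_normal(n_cells)

print(var_pred)    # 6.25 mV^2
print(v.var())     # empirical variance, close to the prediction
```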

Equation (3) makes a sharp, testable prediction. In a landmark series of experiments, Destexhe and collaborators recorded intracellular membrane potential fluctuations in cortical neurons in vivo and showed that the distribution is well described by (3), with estimated parameters consistent with known biophysical values. The diffusion approximation works.


Firing Rate as a First Passage Problem

The neuron fires when $V_t$ hits $V_{\text{th}}$, so the firing rate is set by the first passage of an OU process to a fixed barrier. The mean interspike interval is given by the **Siegert formula** (1951):

$$\langle T \rangle = \tau_m \sqrt{\pi} \int_{y_r}^{y_{\text{th}}} e^{u^2}\!\left(1 + \operatorname{erf}(u)\right) du \tag{4}$$

where $y_{\text{th}} = (V_{\text{th}} - \mu)\sqrt{\tau_m}/\sigma_{\text{syn}}$ is the threshold measured in units of $\sqrt{2}$ times the stationary standard deviation of equation (3), and $y_r$ is the reset potential normalised the same way. This integral has no closed form but is easily evaluated numerically. The key observation is that the firing rate depends on $\mu_{\text{syn}}$ and $\sigma_{\text{syn}}$ separately: the noise level alone can control how often a neuron fires, even when the mean input is held fixed.
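
Equation (4) is straightforward to evaluate with standard numerical tools. The sketch below uses the trapezoidal rule and the standard library's `math.erf`; the parameter values are illustrative, and the normalisation $y = (V - \mu)\sqrt{\tau_m}/\sigma_{\text{syn}}$ is the one implied by the OU parameters of equation (2):

```python
import numpy as np
from math import erf, pi, sqrt

def mean_isi(v_th, v_reset, mu, sigma_syn, tau_m, n=4001):
    """Mean interspike interval from equation (4) via the trapezoidal rule.

    Uses the normalisation y = (V - mu) * sqrt(tau_m) / sigma_syn consistent
    with the OU parameters of equation (2).
    """
    y_th = (v_th - mu) * sqrt(tau_m) / sigma_syn
    y_r = (v_reset - mu) * sqrt(tau_m) / sigma_syn
    u = np.linspace(y_r, y_th, n)
    f = np.exp(u**2) * (1.0 + np.array([erf(x) for x in u]))
    du = u[1] - u[0]
    return tau_m * sqrt(pi) * (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * du

# Illustrative values: threshold 15 mV above the mean drive, reset at the mean.
t_low = mean_isi(v_th=-50.0, v_reset=-65.0, mu=-65.0, sigma_syn=0.5, tau_m=0.02)
t_high = mean_isi(v_th=-50.0, v_reset=-65.0, mu=-65.0, sigma_syn=1.0, tau_m=0.02)
print(t_low > t_high)   # more noise, shorter mean interval, higher firing rate
```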


Noise as a Computational Resource

This last point is the genuinely surprising one. In a deterministic integrate-and-fire neuron, a mean input below the threshold simply never produces a spike. In the stochastic (OU) version, fluctuations can push the membrane potential over threshold even when the mean drive is subthreshold. The neuron fires at a rate that depends continuously on $\sigma_{\text{syn}}$.
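
A short simulation illustrates the contrast. The sketch below adds the threshold-and-reset rule to the OU dynamics of equation (2); with the illustrative parameters chosen here the mean drive settles at -60 mV, 5 mV below threshold, so the noiseless neuron is silent while the noisy one fires:

```python
import numpy as np

def lif_spike_count(sigma_syn, mu_syn=10.0, v_rest=-70.0, v_th=-55.0,
                    v_reset=-70.0, tau_m=0.02, dt=1e-4, t_max=5.0, seed=2):
    """Leaky integrate-and-fire neuron driven by equation (2), with reset.

    The mean drive V_rest + mu_syn = -60 mV stays below the -55 mV threshold,
    so any spiking is noise-driven. Illustrative parameters throughout.
    """
    rng = np.random.default_rng(seed)
    mu = v_rest + mu_syn
    v = v_rest
    spikes = 0
    for _ in range(int(round(t_max / dt))):
        v += (mu - v) / tau_m * dt \
             + (sigma_syn / tau_m) * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            spikes += 1
            v = v_reset          # fire and reset
    return spikes

print(lif_spike_count(sigma_syn=0.0))       # 0: subthreshold mean, no spikes
print(lif_spike_count(sigma_syn=1.0) > 0)   # fluctuation-driven spikes
```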

This turns noise from a nuisance into a mechanism. **Stochastic resonance** in neural systems — the phenomenon whereby adding noise to a subthreshold signal can increase detection reliability — is a direct consequence of this first-passage structure. The OU process makes the mechanism precise: the noise amplitude $\sigma_{\text{syn}}$ shifts the operating point of equation (4), trading off mean firing rate against variability in a way that can be optimised for signal transmission.

The brain, in this picture, does not merely tolerate noise. It uses it.


Takeaway

The Ornstein-Uhlenbeck process crossed from physics into neuroscience not because neuroscientists were looking for stochastic processes, but because the biophysics of membrane leak and synaptic bombardment independently produce exactly the SDE that Uhlenbeck and Ornstein derived in a fluid mechanics context. The mathematical structure is the same because the physical structure — a restoring force plus uncorrelated noise — is the same. That coincidence turned out to be one of the most productive in computational neuroscience.


Interested in applying these ideas to your work? Get in touch.