
In computational neuroscience, a threshold-linear network (TLN) is a recurrent neural network model used to describe the dynamics of populations of neurons. The dynamics are given by

$$ \frac{dx}{dt} = -x + [Wx + b]_+, $$

where $$x \in \mathbb{R}^n$$ is the vector of neural activities (firing rates), $$W$$ is an $$n \times n$$ matrix of real coefficients representing synaptic strengths, $$b \in \mathbb{R}^n$$ is a vector of external inputs, and $$[\cdot]_+ = \max\{0, \cdot\}$$ is the threshold nonlinearity, also called a rectified linear unit. The basic idea is that the activity of neuron $$i$$ relaxes toward its rectified net input $$[\textstyle\sum_j W_{ij} x_j + b_i]_+$$: if the net input is positive, $$x_i$$ is driven toward that value, and otherwise $$x_i$$ decays to zero. Such networks have been well studied in computational and mathematical neuroscience; notable works include inhibition-stabilized networks by Tsodyks et al., balanced networks by van Vreeswijk and Sompolinsky, and the analysis of fixed points in symmetric threshold-linear networks by Hahnloser and Seung.
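The dynamics above can be simulated directly by forward Euler integration. The following is a minimal sketch (the function name, step size, and example network are illustrative choices, not from any standard library): two neurons with mutual inhibition of strength 0.5 and constant external drive settle into a stable symmetric fixed point satisfying $$x = [Wx + b]_+$$.

```python
import numpy as np

def simulate_tln(W, b, x0, T=10.0, dt=0.01):
    """Forward Euler integration of dx/dt = -x + [W x + b]_+."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        # Rectified net input [W x + b]_+, then relax x toward it
        x = x + dt * (-x + np.maximum(0.0, W @ x + b))
        traj.append(x.copy())
    return np.array(traj)

# Example: two mutually inhibiting neurons with constant drive b
W = np.array([[0.0, -0.5],
              [-0.5, 0.0]])
b = np.array([1.0, 1.0])
traj = simulate_tln(W, b, x0=[0.5, 0.0])
```

In this example the fixed-point conditions $$x_1 = 1 - 0.5\,x_2$$ and $$x_2 = 1 - 0.5\,x_1$$ give $$x_1 = x_2 = 2/3$$, and the trajectory converges there from the asymmetric initial condition.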