# Liquid state machine

A **liquid state machine** (**LSM**) is a particular kind of spiking neural network. An LSM consists of a large collection of units (called *nodes*, or *neurons*). Each node receives time-varying input from external sources (the **inputs**) as well as from other nodes. Nodes are randomly connected to each other. The recurrent nature of the connections turns the time-varying input into a spatio-temporal pattern of activations across the network's nodes. These spatio-temporal patterns of activation are read out by linear discriminant units.

The soup of recurrently connected nodes ends up computing a large variety of nonlinear functions of the input. Given a sufficiently large variety of such nonlinear functions, linear combinations of them (formed by the readout units) can, in principle, perform whatever mathematical operation a given task requires, such as speech recognition or computer vision.
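The architecture described above can be sketched in a few lines of numpy. This is a minimal toy illustration, not a reference implementation: the leaky integrate-and-fire dynamics, the network size, the sparsity, and all constants are illustrative assumptions. It drives a random recurrent spiking "liquid" with two different time-varying inputs and trains a linear readout (plain least squares) on the resulting liquid states.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100          # number of liquid nodes (illustrative size)
T = 200          # number of time steps
THRESH = 1.0     # spiking threshold
LEAK = 0.9       # fraction of membrane potential kept per step

# Random sparse recurrent weights (the "liquid") and random input weights.
W = rng.normal(0.0, 0.1, (N, N)) * (rng.random((N, N)) < 0.1)
w_in = rng.normal(0.0, 0.5, N)

def run_liquid(u):
    """Drive the liquid with a time-varying input u (length T) and
    return each node's mean spike rate as the liquid state, shape (N,)."""
    v = np.zeros(N)              # membrane potentials
    last_spikes = np.zeros(N)    # spikes emitted on the previous step
    rates = np.zeros(N)
    for t in range(T):
        v = LEAK * v + w_in * u[t] + W @ last_spikes
        last_spikes = (v >= THRESH).astype(float)
        v[last_spikes == 1.0] = 0.0   # reset nodes that fired
        rates += last_spikes
    return rates / T

# Two different time-varying inputs: a sine wave and a square wave.
steps = np.linspace(0.0, 8.0 * np.pi, T)
state_a = run_liquid(np.sin(steps))
state_b = run_liquid(np.sign(np.sin(steps)))

# A linear readout (least squares) maps liquid states to labels +1 / -1.
X = np.stack([state_a, state_b])
y = np.array([1.0, -1.0])
w_out, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.sign(X @ w_out))   # the readout separates the two inputs
```

Note that only the readout weights `w_out` are trained; the liquid itself stays random and fixed, which is the defining design choice of reservoir approaches like the LSM.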

The word liquid in the name comes from the analogy drawn to dropping a stone into a still body of water or other liquid. The falling stone will generate ripples in the liquid. The input (motion of the falling stone) has been converted into a spatio-temporal pattern of liquid displacement (ripples).

LSMs have been put forward as a way to explain the operation of brains. LSMs are argued to be an improvement over the theory of artificial neural networks because:

- Circuits are not hard coded to perform a specific task.
- Continuous time inputs are handled "naturally".
- Computations on various time scales can be done using the same network.
- The same network can perform multiple computations.

Criticisms of LSMs as used in computational neuroscience include the following:

- LSMs don't actually explain how the brain functions. At best they can replicate some parts of brain functionality.
- There is no guaranteed way to dissect a working network and figure out how or what computations are being performed.
- They offer very little control over the process.

## Universal function approximation

If a reservoir has **fading memory** and **input separability**, then, with the help of a readout,
it can be proven via the Stone–Weierstrass theorem that the liquid state machine is a universal function approximator.^{[1]}
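Stated loosely (this is a paraphrase of the standard formulation, not a verbatim theorem), the approximation property says that the target filter can be matched to any desired accuracy by a suitable choice of liquid and readout:

```latex
% Informal statement: for every time-invariant filter F with fading
% memory on a suitable class of inputs U, and every accuracy
% eps > 0, there exist a liquid L and a memoryless readout f whose
% composition tracks F to within eps at all times.
\forall \varepsilon > 0 \;\, \exists\, L, f \;\; \text{such that} \;\;
\bigl| F(u)(t) - f\bigl( L(u)(t) \bigr) \bigr| < \varepsilon
\quad \text{for all } u \in U,\; t \in \mathbb{R}.
```

Fading memory makes the liquid's state depend mostly on the recent input, and input separability guarantees that distinct input histories drive the liquid into distinguishable states, which is what the readout needs.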

## See also

- Echo state network: a similar concept in recurrent neural networks.
- Reservoir computing: the conceptual framework.
- Self-organizing map

## Libraries

- LiquidC#: an implementation of a topologically robust liquid state machine with a neuronal network detector^{[2]}

## References

1. **^** Maass, Wolfgang; Markram, Henry (2004), "On the Computational Power of Recurrent Circuits of Spiking Neurons", *Journal of Computer and System Sciences*, **69** (4): 593–616, doi:10.1016/j.jcss.2004.04.001
2. **^** Hazan, Hananel; Manevitz, Larry M. (2012), "Topological constraints and robustness in liquid state machines", *Expert Systems with Applications*, **39** (2): 1597–1606, doi:10.1016/j.eswa.2011.06.052

- Maass, Wolfgang; Natschläger, Thomas; Markram, Henry (November 2002), "Real-time computing without stable states: a new framework for neural computation based on perturbations", *Neural Computation*, **14** (11): 2531–2560, CiteSeerX 10.1.1.183.2874, doi:10.1162/089976602760407955, PMID 12433288
- Maass, Wolfgang; Natschläger, Thomas; Markram, Henry (2004), "Computational Models for Generic Cortical Microcircuits", in *Computational Neuroscience: A Comprehensive Approach*, Ch. 18, pp. 575–605