Reservoir computing
Reservoir computing is a framework for computation that may be viewed as an extension of neural networks.[1] Typically an input signal is fed into a fixed (random) dynamical system called a reservoir, and the dynamics of the reservoir map the input to a higher-dimensional space. A simple readout mechanism is then trained to read the state of the reservoir and map it to the desired output. The main benefit is that training is performed only at the readout stage, while the reservoir remains fixed. Liquid-state machines[2] and echo state networks[3] are two major types of reservoir computing.[4]
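In a common formulation (the notation here is illustrative rather than taken from a specific reference), the reservoir state x(t) and the output y(t) can be written as

```latex
x(t) = f\big(W_{\mathrm{in}}\, u(t) + W\, x(t-1)\big), \qquad y(t) = W_{\mathrm{out}}\, x(t)
```

where u(t) is the input, W_in and W are fixed random matrices, f is a non-linear activation function, and only the readout matrix W_out is trained.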
Reservoir
The reservoir consists of a collection of recurrently connected units. The connectivity structure is usually random, and the units are usually non-linear. The overall dynamics of the reservoir are driven by the input and are also affected by the reservoir's own history. The rich collection of dynamical input-output mappings that results is a crucial advantage over time delay neural networks.
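As an illustration, here is a minimal sketch of such a driven reservoir in NumPy. The sizes, the tanh non-linearity, and the spectral-radius rescaling heuristic are assumptions made for the example, not a specific reference design.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_reservoir = 1, 100  # illustrative sizes
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))       # fixed random input weights
W_res = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))   # fixed random recurrent weights

# Rescale the recurrent weights to a chosen spectral radius, a common heuristic
# for keeping the reservoir dynamics in a useful regime.
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        # Non-linear state update: the new state depends on the input and the past state.
        x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
        states.append(x.copy())
    return np.array(states)
```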
Readout
The readout is a linear transformation of the reservoir state. This transformation is adapted to the task of interest, typically by fitting its weights with linear regression or ridge regression against a teaching signal.
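Continuing the sketch above (and again purely illustrative), the readout can be fitted by ridge regression on the collected reservoir states:

```python
import numpy as np  # uses run_reservoir from the reservoir sketch above

def train_readout(states, targets, ridge=1e-6):
    """Fit linear readout weights W_out by ridge regression on reservoir states."""
    # Solve (S^T S + ridge * I) W_out = S^T Y in closed form.
    n = states.shape[1]
    return np.linalg.solve(states.T @ states + ridge * np.eye(n), states.T @ targets)

# Example: train the readout for one-step-ahead prediction of a toy signal.
u = np.sin(np.arange(500) * 0.1)   # illustrative teaching signal
S = run_reservoir(u[:-1])          # reservoir states
W_out = train_readout(S, u[1:])    # the linear readout is the only trained part
prediction = S @ W_out
```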
Types
Context reverberation network
An early example of reservoir computing was the context reverberation network.[5] In this architecture, an input layer feeds into a high dimensional dynamical system which is read out by a trainable single-layer perceptron. Two kinds of dynamical system were described: a recurrent neural network with fixed random weights, and a continuous reaction-diffusion system inspired by Alan Turing’s model of morphogenesis. At the trainable layer, the perceptron associates current inputs with the signals that reverberate in the dynamical system; the latter were said to provide a dynamic "context" for the inputs. In the language of later work, the reaction-diffusion system served as the reservoir.
Echo state network
Backpropagation-decorrelation
Backpropagation-Decorrelation (BPDC)
Liquid-state machine
Nonlinear Transient Computation
Nonlinear Transient Computation (NTC)[6] models the reservoir as a chaotic dynamical system.
Reservoir computing for structured data
The Tree Echo State Network (TreeESN)[7] model generalizes the reservoir computing framework to tree-structured data.
Deep reservoir computing
The extension of the reservoir computing framework towards deep learning, with the introduction of deep reservoir computing and of the Deep Echo State Network (DeepESN) model,[8][9][10] makes it possible to develop efficiently trained models for hierarchical processing of temporal data, while also enabling investigation of the inherent role of layered composition in recurrent neural networks.
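As a rough illustration of the layered idea only (not the reference DeepESN design), each layer can be a fixed random reservoir driven by the state sequence of the layer below, with the readout trained on the states of all layers; the helper names and sizes below are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_reservoir(n_in, n_units, spectral_radius=0.9):
    """Create one fixed random reservoir layer as a (W_in, W_res) pair."""
    W_in = rng.uniform(-0.5, 0.5, (n_units, n_in))
    W_res = rng.uniform(-0.5, 0.5, (n_units, n_units))
    W_res *= spectral_radius / max(abs(np.linalg.eigvals(W_res)))
    return W_in, W_res

def run_deep_reservoir(inputs, layers):
    """Feed the input through a stack of reservoirs; layer i is driven by layer i-1."""
    signal = np.atleast_2d(inputs).T if np.ndim(inputs) == 1 else inputs
    all_states = []
    for W_in, W_res in layers:
        x = np.zeros(W_res.shape[0])
        states = []
        for u in signal:
            x = np.tanh(W_in @ u + W_res @ x)
            states.append(x.copy())
        signal = np.array(states)          # this layer's states drive the next layer
        all_states.append(signal)
    # A readout would typically be trained on the concatenated states of all layers.
    return np.concatenate(all_states, axis=1)

layers = [make_reservoir(1, 50), make_reservoir(50, 50), make_reservoir(50, 50)]
states = run_deep_reservoir(np.sin(np.arange(200) * 0.1), layers)
```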
Current state of development
In late 2017, a research team from the University of Michigan implemented reservoir computing principles on a chip and demonstrated its performance in a speech prediction task.[11][12]
Research initiatives
IEEE Task Force on Reservoir Computing
In 2018, the IEEE Task Force on Reservoir Computing was established to promote and stimulate the development of reservoir computing research from both theoretical and application perspectives.
References
- ^ Schrauwen, Benjamin, David Verstraeten, and Jan Van Campenhout. "An overview of reservoir computing: theory, applications, and implementations." Proceedings of the European Symposium on Artificial Neural Networks ESANN 2007, pp. 471-482.
- ^ Maass, Wolfgang, Thomas Natschläger, and Henry Markram. "Real-time computing without stable states: A new framework for neural computation based on perturbations." Neural Computation 14(11): 2531–2560 (2002).
- ^ Jaeger, Herbert, "The echo state approach to analyzing and training recurrent neural networks." Technical Report 154 (2001), German National Research Center for Information Technology.
- ^ Echo state network, Scholarpedia
- ^ Kirby, Kevin. "Context dynamics in neural sequential learning." Proceedings of the Florida Artificial Intelligence Research Symposium FLAIRS (1991), 66-70.
- ^ Crook, Nigel (2007). "Nonlinear Transient Computation". Neurocomputing. 70 (7–9): 1167–1176. doi:10.1016/j.neucom.2006.10.148.
- ^ Gallicchio, Claudio; Micheli, Alessio (2013). "Tree Echo State Networks". Neurocomputing. 101: 319–337. doi:10.1016/j.neucom.2012.08.017.
- ^ Gallicchio, Claudio; Micheli, Alessio; Pedrelli, Luca (2017-12-13). "Deep reservoir computing: A critical experimental analysis". Neurocomputing. 268: 87–99. doi:10.1016/j.neucom.2016.12.089.
- ^ Gallicchio, Claudio; Micheli, Alessio (2017-05-05). "Echo State Property of Deep Reservoir Computing Networks". Cognitive Computation. 9 (3): 337–350. doi:10.1007/s12559-017-9461-9. ISSN 1866-9956.
- ^ Gallicchio, Claudio; Micheli, Alessio; Pedrelli, Luca (December 2018). "Design of deep echo state networks". Neural Networks. 108: 33–47. doi:10.1016/j.neunet.2018.08.002. ISSN 0893-6080.
- ^ Du, Chao; Cai, Fuxi; Zidan, Mohammed A.; Ma, Wen; Lee, Seung Hwan; Lu, Wei D. (2017-12-19). "Reservoir computing using dynamic memristors for temporal information processing". Nature Communications. 8 (1). doi:10.1038/s41467-017-02337-y. ISSN 2041-1723. PMID 29259188.
- ^ "Memristors power quick-learning neural network". ScienceDaily. Retrieved 2018-01-10.
Further reading
- Reservoir Computing using delay systems, Nature Communications 2011
- Optoelectronic Reservoir Computing, Scientific Reports February 2012
- Optoelectronic Reservoir Computing, Optics Express 2012
- All-optical Reservoir Computing, Nature Communications 2013
- Memristor Models for Machine Learning, Neural Computation 2014, arXiv