Recurrent neural network
A recurrent neural network (RNN) is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. This makes them applicable to tasks such as unsegmented connected handwriting recognition[1] or speech recognition.[2]
Architectures
Fully recurrent network
This is the basic architecture developed in the 1980s: a network of neuron-like units, each with a directed connection to every other unit. Each unit has a time-varying real-valued activation. Each connection has a modifiable real-valued weight. Some of the nodes are called input nodes, some output nodes, the rest hidden nodes. Most architectures below are special cases.
For supervised learning in discrete time settings, training sequences of real-valued input vectors become sequences of activations of the input nodes, one input vector at a time. At any given time step, each non-input unit computes its current activation as a nonlinear function of the weighted sum of the activations of all units from which it receives connections. There may be teacher-given target activations for some of the output units at certain time steps. For example, if the input sequence is a speech signal corresponding to a spoken digit, the final target output at the end of the sequence may be a label classifying the digit. For each sequence, its error is the sum of the deviations of all target signals from the corresponding activations computed by the network. For a training set of numerous sequences, the total error is the sum of the errors of all individual sequences. Algorithms for minimizing this error are mentioned in the section on training algorithms below.
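As a concrete illustration, the sketch below (in NumPy; the tanh nonlinearity, the squared-error measure and all names are illustrative assumptions, not taken from the references) runs one training sequence through a fully recurrent network and accumulates the error at the time steps that have targets:

```python
import numpy as np

def run_fully_recurrent(inputs, targets, W, n_input, n_output):
    """Sketch of the forward pass described above.

    inputs  : list of real-valued input vectors, one per time step
    targets : dict mapping a time step to a target vector for the output units
    W       : (n_units, n_units) matrix; W[i, j] is the weight from unit j to unit i
    """
    n_units = W.shape[0]
    act = np.zeros(n_units)                      # time-varying activations of all units
    total_error = 0.0
    for t, x in enumerate(inputs):
        act[:n_input] = x                        # input nodes take the current input vector
        net = W @ act                            # weighted sum of incoming activations
        act[n_input:] = np.tanh(net[n_input:])   # non-input units apply a nonlinear function
        if t in targets:                         # teacher-given targets at certain time steps
            deviation = targets[t] - act[-n_output:]    # last n_output units act as outputs
            total_error += np.sum(deviation ** 2)       # squared deviation (one possible choice)
    return total_error
```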
In reinforcement learning settings, no teacher provides target signals for the RNN; instead, a fitness function or reward function is occasionally used to evaluate the RNN's performance, which influences its input stream through output units connected to actuators that affect the environment. Again, compare the section on training algorithms below.
Recursive neural networks
A recursive neural network[3] is created by applying the same set of weights recursively over a differentiable graph-like structure, by traversing the structure in topological order. Such networks are typically also trained by the reverse mode of automatic differentiation.[4][5] They were introduced to learn distributed representations of structure, such as logical terms. A special case of recursive neural networks is the RNN itself whose structure corresponds to a linear chain. Recursive neural networks have been applied to natural language processing.[6] The Recursive Neural Tensor Network uses a tensor-based composition function for all nodes in the tree.[7]
Hopfield network
The Hopfield network is of historic interest although it is not a general RNN, as it is not designed to process sequences of patterns; instead, it requires stationary inputs. It is an RNN in which all connections are symmetric. Invented by John Hopfield in 1982, it guarantees that its dynamics will converge. If the connections are trained using Hebbian learning, then the Hopfield network can perform as a robust content-addressable memory, resistant to connection alteration.
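A minimal sketch of Hebbian storage and content-addressable recall (bipolar ±1 patterns and asynchronous updates are the usual choices assumed here):

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian learning: symmetric weights, no self-connections (patterns are ±1 vectors)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

def hopfield_recall(W, probe, sweeps=10, seed=0):
    """Asynchronous updates, which are guaranteed to converge for symmetric weights."""
    rng = np.random.default_rng(seed)
    s = probe.astype(float)
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0   # update unit i toward lower energy
    return s
```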
A variation on the Hopfield network is the bidirectional associative memory (BAM). The BAM has two layers, either of which can be driven as an input, to recall an association and produce an output on the other layer.[8]
Elman networks and Jordan networks
The following special case of the basic architecture above was employed by Jeff Elman. A three-layer network is used (arranged vertically as x, y, and z in the illustration), with the addition of a set of "context units" (u in the illustration). There are connections from the middle (hidden) layer to these context units fixed with a weight of one.[9] At each time step, the input is propagated in a standard feed-forward fashion, and then a learning rule is applied. The fixed back connections result in the context units always maintaining a copy of the previous values of the hidden units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform such tasks as sequence-prediction that are beyond the power of a standard multilayer perceptron.
Jordan networks, due to Michael I. Jordan, are similar to Elman networks. The context units are however fed from the output layer instead of the hidden layer. The context units in a Jordan network are also referred to as the state layer, and have a recurrent connection to themselves with no other nodes on this connection.[9]
Elman and Jordan networks are also known as "simple recurrent networks" (SRN).
Variables and functions
- $x_t$: input vector
- $h_t$: hidden layer vector
- $y_t$: output vector
- $W$, $U$ and $b$: parameter matrices and vector
- $\sigma_h$ and $\sigma_y$: activation functions
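In the commonly used formulation, and with the variables listed above, the updates of the two networks are:

```latex
% Elman network
h_t = \sigma_h(W_h x_t + U_h h_{t-1} + b_h), \qquad
y_t = \sigma_y(W_y h_t + b_y)

% Jordan network (context fed back from the output layer)
h_t = \sigma_h(W_h x_t + U_h y_{t-1} + b_h), \qquad
y_t = \sigma_y(W_y h_t + b_y)
```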
Echo state network
The echo state network (ESN) is a recurrent neural network with a sparsely connected random hidden layer. The weights of the output neurons are the only part of the network that can change and be trained. ESNs are good at reproducing certain time series.[12] A variant for spiking neurons is known as the liquid state machine.[13]
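A minimal ESN sketch (the reservoir density, spectral-radius rescaling and ridge regression for the output weights are common practice; the specific values here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_res, density=0.1, spectral_radius=0.9):
    """Sparsely connected random hidden layer, rescaled to a target spectral radius."""
    W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < density)
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

def esn_fit(inputs, targets, n_res=200, ridge=1e-6):
    """Only the output weights are trained (here by ridge regression)."""
    n_in = inputs.shape[1]
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
    W = make_reservoir(n_res)
    x = np.zeros(n_res)
    states = []
    for u in inputs:                               # drive the fixed random reservoir
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    X = np.array(states)
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ targets)
    return W_in, W, W_out
```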
Neural history compressor
The vanishing gradient problem[14] of automatic differentiation or backpropagation in neural networks was partially overcome in 1992 by an early generative model called the neural history compressor, implemented as an unsupervised stack of recurrent neural networks (RNNs).[15] The RNN at the input level learns to predict its next input from the previous input history. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher level RNN which therefore recomputes its internal state only rarely. Each higher level RNN thus learns a compressed representation of the information in the RNN below. This is done such that the input sequence can be precisely reconstructed from the sequence representation at the highest level. The system effectively minimises the description length or the negative logarithm of the probability of the data.[16] If there is a lot of learnable predictability in the incoming data sequence, then the highest level RNN can use supervised learning to easily classify even deep sequences with very long time intervals between important events. In 1993, such a system already solved a "Very Deep Learning" task that requires more than 1000 subsequent layers in an RNN unfolded in time.[17]
It is also possible to distill the entire RNN hierarchy into only two RNNs called the "conscious" chunker (higher level) and the "subconscious" automatizer (lower level).[15] Once the chunker has learned to predict and compress inputs that are still unpredictable by the automatizer, then the automatizer can be forced in the next learning phase to predict or imitate through special additional units the hidden units of the more slowly changing chunker. This makes it easy for the automatizer to learn appropriate, rarely changing memories across very long time intervals. This in turn helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining still unpredictable events, to compress the data even further.[15]
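The level-to-level filtering of the history compressor can be sketched as follows; the `predictor` argument is a stand-in for the lower-level RNN trained to predict its next input, which the sketch does not implement:

```python
def compress_one_level(sequence, predictor):
    """Pass only the inputs the lower level fails to predict up to the next higher level.

    sequence  : iterable of discrete symbols (e.g. characters)
    predictor : stand-in for an RNN's prediction of the next input given the history so far
    The unpredictable inputs (with their positions) form the much shorter sequence
    that the next higher-level RNN has to process.
    """
    higher_level_inputs = []
    history = []
    for t, x in enumerate(sequence):
        if predictor(history) != x:              # prediction failed: pass the surprise upward
            higher_level_inputs.append((t, x))
        history.append(x)
    return higher_level_inputs
```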
Long short-term memory
Numerous researchers now use a deep learning RNN called the long short-term memory (LSTM) network, published by Hochreiter & Schmidhuber in 1997.[18] It is a deep learning system that, unlike traditional RNNs, does not have the vanishing gradient problem (compare the section on training algorithms below). LSTM is normally augmented by recurrent gates called forget gates.[19] LSTM RNNs prevent backpropagated errors from vanishing or exploding.[14] Instead, errors can flow backwards through unlimited numbers of virtual layers in LSTM RNNs unfolded in space. That is, LSTM can learn "Very Deep Learning" tasks[20] that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved.[21] LSTM works even with long delays, and it can handle signals that mix low- and high-frequency components.
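A single forward step of an LSTM cell with input, forget and output gates can be sketched as follows (a standard formulation; the stacked weight layout and shapes are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One time step of an LSTM cell.

    W: (4*n_hidden, n_input), U: (4*n_hidden, n_hidden), b: (4*n_hidden,)
    The cell state c carries information across long time lags almost unchanged,
    which is what keeps backpropagated errors from vanishing.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0*n:1*n])          # input gate
    f = sigmoid(z[1*n:2*n])          # forget gate
    o = sigmoid(z[2*n:3*n])          # output gate
    g = np.tanh(z[3*n:4*n])          # candidate cell update
    c = f * c_prev + i * g           # new cell state
    h = o * np.tanh(c)               # new hidden state / output
    return h, c
```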
Today, many applications use stacks of LSTM RNNs[22] and train them by Connectionist Temporal Classification (CTC)[23] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition. Around 2007, LSTM started to revolutionise speech recognition, outperforming traditional models in certain speech applications.[24] In 2009, CTC-trained LSTM was the first RNN to win pattern recognition contests, when it won several competitions in connected handwriting recognition.[20][25] In 2014, the Chinese search giant Baidu used CTC-trained RNNs to break the Switchboard Hub5'00 speech recognition benchmark, without using any traditional speech processing methods.[26] LSTM also improved large-vocabulary speech recognition,[27][28] text-to-speech synthesis,[29] also for Google Android,[20][30] and photo-real talking heads.[29] In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49%[citation needed] through CTC-trained LSTM, which is now available through Google voice search to all smartphone users.[31]
LSTM has also become very popular in the field of natural language processing. Unlike previous models based on HMMs and similar concepts, LSTM can learn to recognise context-sensitive languages.[32] LSTM improved machine translation,[33] language modeling[34] and multilingual language processing.[35] LSTM combined with convolutional neural networks (CNNs) also improved automatic image captioning[36] and many other applications.
Bi-directional RNN
Invented by Schuster & Paliwal in 1997,[37] bi-directional RNNs (BRNNs) use a finite sequence to predict or label each element of the sequence based on both the past and the future context of the element. This is done by concatenating the outputs of two RNNs, one processing the sequence from left to right, the other from right to left. The combined outputs are the predictions of the teacher-given target signals. This technique has proved especially useful when combined with LSTM RNNs.[38]
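The mechanism can be sketched as two passes over the sequence whose per-step outputs are concatenated; `step_fwd` and `step_bwd` are placeholders for any recurrent cell, e.g. an LSTM step:

```python
import numpy as np

def birnn(inputs, step_fwd, step_bwd, h0_fwd, h0_bwd):
    """Run one RNN left-to-right and another right-to-left, then concatenate the outputs.

    step_fwd, step_bwd: functions (x, h) -> h_new, one per direction.
    Each element of the sequence gets a representation that depends on both
    its past and its future context.
    """
    h = h0_fwd
    forward = []
    for x in inputs:                        # left-to-right pass
        h = step_fwd(x, h)
        forward.append(h)
    h = h0_bwd
    backward = []
    for x in reversed(inputs):              # right-to-left pass
        h = step_bwd(x, h)
        backward.append(h)
    backward.reverse()                      # align with the forward pass
    return [np.concatenate([f, b]) for f, b in zip(forward, backward)]
```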
Continuous-time RNN
A continuous time recurrent neural network (CTRNN) is a dynamical systems model of biological neural networks. A CTRNN uses a system of ordinary differential equations to model the effects on a neuron of the incoming spike train.
For a neuron $i$ in the network with action potential $y_i$, the rate of change of activation is given by:

$$\tau_i \dot{y}_i = -y_i + \sum_{j=1}^{n} w_{ji}\,\sigma(y_j - \Theta_j) + I_i(t)$$

Where:

- $\tau_i$: time constant of postsynaptic node
- $y_i$: activation of postsynaptic node
- $\dot{y}_i$: rate of change of activation of postsynaptic node
- $w_{ji}$: weight of connection from presynaptic to postsynaptic node
- $\sigma(x)$: sigmoid of $x$, e.g. $\sigma(x) = 1/(1+e^{-x})$
- $y_j$: activation of presynaptic node
- $\Theta_j$: bias of presynaptic node
- $I_i(t)$: input (if any) to node
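A forward-Euler integration of this equation, with names mirroring the symbols above (the step size is an illustrative choice):

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, tau, W, theta, I, dt=0.01):
    """One Euler step of  tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j - theta_j) + I_i(t).

    y, tau, theta, I : vectors over nodes; W[i, j] = weight from presynaptic j to postsynaptic i.
    """
    dydt = (-y + W @ sigma(y - theta) + I) / tau
    return y + dt * dydt
```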
CTRNNs have frequently been applied in the field of evolutionary robotics, where they have been used to address, for example, vision,[39] co-operation[40] and minimally cognitive behaviour.[41]
Note that, by the Shannon sampling theorem, discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks in which the differential equations have been transformed into equivalent difference equations after the postsynaptic node activation functions have been low-pass filtered prior to sampling.
Hierarchical RNN
There are many instances of hierarchical RNN whose elements are connected in various ways to decompose hierarchical behavior into useful subprograms.[15][42]
Recurrent multilayer perceptron
Generally, a Recurrent Multi-Layer Perceptron (RMLP) consists of a series of cascaded subnetworks, each of which consists of multiple layers of nodes. Each of these subnetworks is entirely feed-forward except for the last layer, which can have feedback connections within itself. The subnetworks are connected to each other only by feed-forward connections.[43]
Second order RNN
Second-order RNNs use higher-order weights $w_{ijk}$ instead of the standard $w_{ij}$ weights, so that the next state is driven by products of inputs and states. This allows a direct mapping to a finite state machine in training, stability, and representation.[44][45] Long short-term memory is an example of this but has no such formal mappings or proof of stability.
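A minimal sketch of such a second-order transition, in which the next state is computed from products of the current state and input through a third-order weight tensor (the shapes and the sigmoid output are illustrative assumptions):

```python
import numpy as np

def second_order_step(state, x, W):
    """W has shape (n_state, n_state, n_input): W[i, j, k] couples state j and input k to state i."""
    net = np.einsum('ijk,j,k->i', W, state, x)   # products of states and inputs, not sums alone
    return 1.0 / (1.0 + np.exp(-net))            # sigmoid keeps the state in (0, 1)
```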
Multiple timescales recurrent neural network (MTRNN) model
MTRNN is a neural-based computational model that can imitate, to some extent, the activity of the brain.[46][47] It has the ability to simulate the functional hierarchy of the brain through self-organization that depends not only on the spatial connections between neurons but also on distinct types of neuron activities, each with distinct time properties. With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. The biological plausibility of such a hierarchy was discussed in the memory-prediction theory of brain function by Jeff Hawkins in his book On Intelligence.
Pollack's sequential cascaded networks
Neural Turing Machines
NTMs are a method of extending the capabilities of recurrent neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.[48]
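The attentional read can be sketched as content-based addressing over an external memory matrix; this is only one piece of the full architecture (the controller and write operations are omitted), and the sharpness parameter `beta` is an illustrative choice:

```python
import numpy as np

def content_read(memory, key, beta=5.0):
    """Soft read from external memory by content-based attention.

    memory : (n_slots, width) matrix of memory rows
    key    : (width,) query emitted by the controller
    Returns a softmax-weighted sum of memory rows, so the whole operation
    stays differentiable end-to-end.
    """
    sim = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sim)
    w /= w.sum()                       # attention weights over memory slots
    return w @ memory
```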
Neural network pushdown automata
NNPDAs are similar to NTMs, but tapes are replaced by analogue stacks that are differentiable and that the network is trained to control. In this way, they are similar in complexity to recognizers of context-free grammars (CFGs).[49]
Bidirectional associative memory
First introduced by Kosko,[50] BAM neural networks store associative data as a vector. The bi-directionality comes from passing information through a matrix and its transpose. Typically, bipolar encoding is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models using Markov stepping were optimized for increased network stability and relevance to real-world applications.[51]
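A minimal BAM sketch with bipolar encoding: associations are stored as a sum of outer products, and recall passes information through the matrix and its transpose:

```python
import numpy as np

def bam_store(pairs):
    """pairs: list of (x, y) bipolar (+1/-1) vectors; associations are summed outer products."""
    return sum(np.outer(x, y) for x, y in pairs)

def bam_recall_from_x(M, x, steps=5):
    """Drive the x layer, pass through M and its transpose until the pair stabilises."""
    y = np.sign(x @ M)
    for _ in range(steps):
        x = np.sign(M @ y)             # recall on the x layer
        y = np.sign(x @ M)             # recall on the y layer
    return x, y
```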
Training
Gradient descent
To minimize total error, gradient descent can be used to change each weight in proportion to the derivative of the error with respect to that weight, provided the non-linear activation functions are differentiable. Various methods for doing so were developed in the 1980s and early 1990s by Paul Werbos, Ronald J. Williams, Tony Robinson, Jürgen Schmidhuber, Sepp Hochreiter, Barak Pearlmutter, and others.
The standard method is called "backpropagation through time" or BPTT, and is a generalization of back-propagation for feed-forward networks,[52][53] and like that method, is an instance of automatic differentiation in the reverse accumulation mode or Pontryagin's minimum principle. A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL,[54][55] which is an instance of Automatic differentiation in the forward accumulation mode with stacked tangent vectors. Unlike BPTT this algorithm is local in time but not local in space.[56][57]
There also is an online hybrid between BPTT and RTRL with intermediate complexity,[58][59] and there are variants for continuous time.[60] A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events.[14][61] The long short-term memory architecture together with a BPTT/RTRL hybrid learning method was introduced in an attempt to overcome these problems.[18]
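A compact BPTT sketch for a simple recurrent network with a linear readout and squared error at each step (the architecture and loss are illustrative; the vanishing-gradient issue discussed above applies to exactly this kind of backward pass):

```python
import numpy as np

def bptt(inputs, targets, W_xh, W_hh, W_hy):
    """Backpropagation through time for h_t = tanh(W_xh x_t + W_hh h_{t-1}), y_t = W_hy h_t."""
    T = len(inputs)
    n_h = W_hh.shape[0]
    hs = [np.zeros(n_h)]
    ys = []
    for x in inputs:                                   # forward pass, unrolled in time
        hs.append(np.tanh(W_xh @ x + W_hh @ hs[-1]))
        ys.append(W_hy @ hs[-1])
    dW_xh, dW_hh, dW_hy = (np.zeros_like(W) for W in (W_xh, W_hh, W_hy))
    dh_next = np.zeros(n_h)
    for t in reversed(range(T)):                       # backward pass through time
        dy = ys[t] - targets[t]                        # gradient of 0.5 * squared error
        dW_hy += np.outer(dy, hs[t + 1])
        dh = W_hy.T @ dy + dh_next
        dz = (1.0 - hs[t + 1] ** 2) * dh               # backprop through tanh
        dW_xh += np.outer(dz, inputs[t])
        dW_hh += np.outer(dz, hs[t])
        dh_next = W_hh.T @ dz                          # error flowing one step further back
    return dW_xh, dW_hh, dW_hy
```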
Global optimization methods
Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence. Typically, the sum-squared-difference between the predictions and the target values specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global optimization techniques may then be used to minimize this target function.
The most common global optimization method for training RNNs is the genetic algorithm, especially in unstructured networks.[62][63][64]
Initially, the genetic algorithm encodes the neural network weights in a predefined manner, where one gene in the chromosome represents one weight link; hence, the whole network is represented as a single chromosome. The fitness function is evaluated as follows: 1) each weight encoded in the chromosome is assigned to the respective weight link of the network; 2) the training set of examples is then presented to the network, which propagates the input signals forward; 3) the mean-squared error is returned to the fitness function; 4) this function then drives the genetic selection process.
Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied. A common stopping scheme is: 1) when the neural network has learnt a certain percentage of the training data, or 2) when the minimum value of the mean-squared error is satisfied, or 3) when the maximum number of training generations has been reached. The stopping criterion is evaluated by the fitness function as it gets the reciprocal of the mean-squared error from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function and hence reduce the mean-squared error.
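A sketch of the scheme described above (the `forward` propagation function, population size, mutation scale and stopping threshold are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(chromosome, forward, train_x, train_y):
    """Steps 1-3 above: assign the weights, propagate the training set, return 1 / MSE."""
    predictions = forward(chromosome, train_x)          # chromosome encodes every weight link
    mse = np.mean((predictions - train_y) ** 2)
    return 1.0 / (mse + 1e-12)

def evolve(n_weights, forward, train_x, train_y, pop=50, generations=100, target_fitness=1e3):
    population = rng.normal(size=(pop, n_weights))      # one chromosome = one whole network
    for _ in range(generations):
        scores = np.array([fitness(c, forward, train_x, train_y) for c in population])
        if scores.max() >= target_fitness:              # stopping criterion (low enough MSE)
            break
        parents = population[np.argsort(scores)[-pop // 2:]]            # genetic selection
        children = parents + rng.normal(scale=0.1, size=parents.shape)  # mutated offspring
        population = np.vstack([parents, children])
    scores = np.array([fitness(c, forward, train_x, train_y) for c in population])
    return population[np.argmax(scores)]
```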
Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights such as simulated annealing or particle swarm optimization.
Related fields and models
RNNs may behave chaotically. In such cases, dynamical systems theory may be used for analysis.
Recurrent neural networks are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.
In particular, recurrent neural networks can appear as nonlinear versions of finite impulse response and infinite impulse response filters and also as a nonlinear autoregressive exogenous model (NARX).[65]
Common RNN libraries
- Apache Singa
- Caffe: Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU and GPU. Developed in C++, and has Python and MATLAB wrappers.
- Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark. A general-purpose deep learning library for the JVM production stack running on a C++ scientific computing engine. Allows the creation of custom layers. Integrates with Hadoop and Kafka.
- Keras
- Microsoft Cognitive Toolkit
- TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU, Google's proprietary TPU,[66] and mobile devices
- Theano: The reference deep-learning library for Python with an API largely compatible with the popular NumPy library. Allows user to write symbolic mathematical expressions, then automatically generates their derivatives, saving the user from having to code gradients or backpropagation. These symbolic expressions are automatically compiled to CUDA code for a fast, on-the-GPU implementation.
- Torch (www.torch.ch): A scientific computing framework with wide support for machine learning algorithms, written in C and Lua. The main author is Ronan Collobert, and it is now used at Facebook AI Research and Twitter.
References
- ^ A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, J. Schmidhuber. A Novel Connectionist System for Improved Unconstrained Handwriting Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, 2009.
- ^ H. Sak and A. W. Senior and F. Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. Proc. Interspeech, pp. 338–342, Singapore, Sept. 2014.
- ^ Goller, C.; Küchler, A. "Learning task-dependent distributed representations by backpropagation through structure". Neural Networks, 1996., IEEE. doi:10.1109/ICNN.1996.548916.
- ^ Seppo Linnainmaa (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, 6-7.
- ^ Griewank, Andreas and Walther, A.. Principles and Techniques of Algorithmic Differentiation, Second Edition. SIAM, 2008.
- ^ Socher, Richard; Lin, Cliff; Ng, Andrew Y.; Manning, Christopher D. "Parsing Natural Scenes and Natural Language with Recursive Neural Networks". The 28th International Conference on Machine Learning (ICML 2011).
- ^ Socher, Richard; Perelygin, Alex; Y. Wu, Jean; Chuang, Jason; D. Manning, Christopher; Y. Ng, Andrew; Potts, Christopher. "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank" (PDF). EMNLP 2013.
- ^ Raúl Rojas (1996). Neural networks: a systematic introduction. Springer. p. 336. ISBN 978-3-540-60505-8.
- ^ a b Cruse, Holk; Neural Networks as Cybernetic Systems, 2nd and revised edition
- ^ Elman, Jeffrey L. (1990). "Finding Structure in Time". Cognitive Science. 14 (2): 179–211. doi:10.1016/0364-0213(90)90002-E.
- ^ Jordan, Michael I. (1986). "Serial Order: A Parallel Distributed Processing Approach".
- ^ H. Jaeger. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science, 304:78–80, 2004.
- ^ W. Maass, T. Natschläger, and H. Markram. A fresh look at real-time computation in generic recurrent neural circuits. Technical report, Institute for Theoretical Computer Science, TU Graz, 2002.
- ^ a b c Sepp Hochreiter (1991), Untersuchungen zu dynamischen neuronalen Netzen, Diploma thesis. Institut f. Informatik, Technische Univ. Munich. Advisor: J. Schmidhuber.
- ^ a b c d Jürgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234–242. Online
- ^ Jürgen Schmidhuber (2015). Deep Learning. Scholarpedia, 10(11):32832. Section on Unsupervised Pre-Training of RNNs and FNNs
- ^ Jürgen Schmidhuber (1993). Habilitation thesis, TUM, 1993. Page 150 ff demonstrates credit assignment across the equivalent of 1,200 layers in an unfolded RNN. Online
- ^ a b Hochreiter, Sepp; and Schmidhuber, Jürgen; Long Short-Term Memory, Neural Computation, 9(8):1735–1780, 1997
- ^ Felix Gers, Nicholas Schraudolph, and Jürgen Schmidhuber (2002). Learning precise timing with LSTM recurrent networks. Journal of Machine Learning Research 3:115–143.
- ^ a b c Jürgen Schmidhuber (2015). Deep learning in neural networks: An overview. Neural Networks 61 (2015): 85-117. ArXiv
- ^ Justin Bayer, Daan Wierstra, Julian Togelius, and Jürgen Schmidhuber (2009). Evolving memory cell structures for sequence learning. Proceedings of ICANN (2), pp. 755–764.
- ^ Santiago Fernandez, Alex Graves, and Jürgen Schmidhuber (2007). Sequence labelling in structured domains with hierarchical recurrent neural networks. Proceedings of IJCAI.
- ^ Alex Graves, Santiago Fernandez, Faustino Gomez, and Jürgen Schmidhuber (2006). Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural nets. Proceedings of ICML’06, pp. 369–376.
- ^ Santiago Fernandez, Alex Graves, and Jürgen Schmidhuber (2007). An application of recurrent neural networks to discriminative keyword spotting. Proceedings of ICANN (2), pp. 220–229.
- ^ Graves, Alex; and Schmidhuber, Jürgen; Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks, in Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; and Culotta, Aron (eds.), Advances in Neural Information Processing Systems 22 (NIPS'22), December 7th–10th, 2009, Vancouver, BC, Neural Information Processing Systems (NIPS) Foundation, 2009, pp. 545–552
- ^ Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, Andrew Ng (2014). Deep Speech: Scaling up end-to-end speech recognition. arXiv:1412.5567
- ^ Hasim Sak and Andrew Senior and Francoise Beaufays (2014). Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling. Proceedings of Interspeech 2014.
- ^ Xiangang Li, Xihong Wu (2015). Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition arXiv:1410.4281
- ^ a b Bo Fan, Lijuan Wang, Frank K. Soong, and Lei Xie (2015). Photo-Real Talking Head with Deep Bidirectional LSTM. In Proceedings of ICASSP 2015.
- ^ Heiga Zen and Hasim Sak (2015). Unidirectional Long Short-Term Memory Recurrent Neural Network with Recurrent Output Layer for Low-Latency Speech Synthesis. In Proceedings of ICASSP, pp. 4470-4474.
- ^ Haşim Sak, Andrew Senior, Kanishka Rao, Françoise Beaufays and Johan Schalkwyk (September 2015): Google voice search: faster and more accurate.
- ^ Felix A. Gers and Jürgen Schmidhuber. LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages. IEEE TNN 12(6):1333–1340, 2001.
- ^ Ilya Sutskever, Oriol Vinyals, and Quoc V. Le (2014). Sequence to Sequence Learning with Neural Networks. arXiv
- ^ Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu (2016). Exploring the Limits of Language Modeling. arXiv
- ^ Dan Gillick, Cliff Brunk, Oriol Vinyals, Amarnag Subramanya (2015). Multilingual Language Processing From Bytes. arXiv
- ^ Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan (2015). Show and Tell: A Neural Image Caption Generator. arXiv
- ^ M. Schuster and K. K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45:2673–81, November 1997.
- ^ A. Graves and J. Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18:602–610, 2005.
- ^ Harvey, Inman; Husbands, P.; Cliff, D. (1994). "Seeing the light: Artificial evolution, real vision". Proceedings of the third international conference on Simulation of adaptive behavior: from animals to animats 3: 392–401.
- ^ Quinn, Matthew (2001). "Evolving communication without dedicated communication channels". Advances in Artificial Life. Lecture Notes in Computer Science. 2159: 357–366. doi:10.1007/3-540-44811-X_38. ISBN 978-3-540-42567-0.
- ^ Beer, R.D. (1997). "The dynamics of adaptive behavior: A research program". Robotics and Autonomous Systems. 20 (2–4): 257–289. doi:10.1016/S0921-8890(96)00063-2.
- ^ R.W. Paine, J. Tani, "How hierarchical control self-organizes in artificial adaptive systems," Adaptive Behavior, 13(3), 211-225, 2005.
- ^ "CiteSeerX — Recurrent Multilayer Perceptrons for Identification and Control: The Road to Applications". Citeseerx.ist.psu.edu. Retrieved 2014-01-03.
- ^ C.L. Giles, C.B. Miller, D. Chen, H.H. Chen, G.Z. Sun, Y.C. Lee, "Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks," Neural Computation, 4(3), p. 393, 1992.
- ^ C.W. Omlin, C.L. Giles, "Constructing Deterministic Finite-State Automata in Recurrent Neural Networks," Journal of the ACM, 45(6), 937-972, 1996.
- ^ Y. Yamashita, J. Tani, "Emergence of functional hierarchy in a multiple timescale neural network model: a humanoid robot experiment," PLoS Computational Biology, 4(11), e1000220, 2008. http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000220
- ^ Alnajjar F, Yamashita Y, Tani J (2013) The Hierarchical and Functional Connectivity of Higher-order Cognitive Mechanisms: Neurorobotic Model to Investigate the Stability and Flexibility of Working Memory, Frontiers in Neurorobotics, 7:2. doi: 10.3389/fnbot.2013.00002 PMID 3575058/
- ^ A. Graves, G. Wayne, I. Danihelka, "Neural Turing Machines," 2014. http://arxiv.org/pdf/1410.5401v2.pdf
- ^ Guo-Zheng Sun, C. Lee Giles, Hsing-Hen Chen, "The Neural Network Pushdown Automaton: Architecture, Dynamics and Training," Adaptive Processing of Sequences and Data Structures, Lecture Notes in Computer Science, Volume 1387, 296-345, 1998.
- ^ Kosko, B. (1988). "Bidirectional associative memories". IEEE Transactions on Systems, Man, and Cybernetics. 18 (1): 49–60. doi:10.1109/21.87054.
- ^ Rakkiyappan, R.; Chandrasekar, A.; Lakshmanan, S.; Park, Ju H. (2 January 2015). "Exponential stability for markovian jumping stochastic BAM neural networks with mode-dependent probabilistic time-varying delays and impulse control". Complexity. 20 (3): 39–65. doi:10.1002/cplx.21503.
- ^ P. J. Werbos. Generalization of backpropagation with application to a recurrent gas market model. Neural Networks, 1, 1988.
- ^ David E. Rumelhart; Geoffrey E. Hinton; Ronald J. Williams. Learning Internal Representations by Error Propagation.
- ^ A. J. Robinson and F. Fallside. The utility driven dynamic error propagation network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department, 1987.
- ^ R. J. Williams and D. Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. In Back-propagation: Theory, Architectures and Applications. Hillsdale, NJ: Erlbaum, 1994.
- ^ J. Schmidhuber. A local learning algorithm for dynamic feedforward and recurrent networks. Connection Science, 1(4):403–412, 1989.
- ^ Neural and Adaptive Systems: Fundamentals through Simulation. J.C. Principe, N.R. Euliano, W.C. Lefebvre
- ^ J. Schmidhuber. A fixed size storage O(n3) time complexity learning algorithm for fully recurrent continually running networks. Neural Computation, 4(2):243–248, 1992.
- ^ R. J. Williams. Complexity of exact gradient computation algorithms for recurrent neural networks. Technical Report Technical Report NU-CCS-89-27, Boston: Northeastern University, College of Computer Science, 1989.
- ^ B. A. Pearlmutter. Learning state space trajectories in recurrent neural networks. Neural Computation, 1(2):263–269, 1989.
- ^ S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In S. C. Kremer and J. F. Kolen, editors, A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press, 2001.
- ^ F. J. Gomez and R. Miikkulainen. Solving non-Markovian control tasks with neuroevolution. Proc. IJCAI 99, Denver, CO, 1999. Morgan Kaufmann.
- ^ Applying Genetic Algorithms to Recurrent Neural Networks for Learning Network Parameters and Architecture. O. Syed, Y. Takefuji
- ^ F. Gomez, J. Schmidhuber, R. Miikkulainen. Accelerated Neural Evolution through Cooperatively Coevolved Synapses. Journal of Machine Learning Research (JMLR), 9:937-965, 2008.
- ^ Hava T. Siegelmann, Bill G. Horne, C. Lee Giles, "Computational capabilities of recurrent NARX neural networks," IEEE Transactions on Systems, Man, and Cybernetics, Part B 27(2): 208-215 (1997).
- ^ Cade Metz (May 18, 2016). "Google Built Its Very Own Chips to Power Its AI Bots". Wired.
- Mandic, D.; Chambers, J. (2001). Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. Wiley. ISBN 0-471-49517-4.
External links
- RNNSharp CRFs based on recurrent neural networks (C#, .NET)
- Recurrent Neural Networks with over 60 RNN papers by Jürgen Schmidhuber's group at Dalle Molle Institute for Artificial Intelligence Research
- Elman Neural Network implementation for WEKA
- Recurrent Neural Nets & LSTMs in Java