Time delay neural network

[Figure: TDNN Diagram]
Time delay neural network (TDNN) [1] is a multilayer artificial neural network architecture whose purpose is to 1) classify patterns with shift-invariance, and 2) model context at each layer of the network.
Shift-invariant classification means that the classifier does not require explicit segmentation prior to classification. For the classification of a temporal pattern (such as speech), the TDNN thus avoids having to determine the beginning and end points of sounds before classifying them.
For contextual modelling in a TDNN, each neural unit at each layer receives input not only from activations/features at the layer below, but from a pattern of unit outputs and their context. For time signals, each unit receives as input the activation patterns over time from units below. Applied to two-dimensional classification (images, time-frequency patterns), the TDNN can be trained with shift-invariance in the coordinate space, avoiding precise segmentation in that space.
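To make the contextual input concrete, here is a minimal sketch in Python (the frame and window sizes are illustrative assumptions, not values from the article): each unit receives not a single frame but a sliding window of frames from the layer below.

```python
import numpy as np

frames = np.random.randn(100, 16)   # stand-in input: 100 frames of 16 spectral coefficients
window = 3                          # assumed context: current frame plus two delayed frames

# Each TDNN unit sees the current frame together with its delayed predecessors.
contexts = np.stack([frames[t - window + 1 : t + 1].ravel()
                     for t in range(window - 1, len(frames))])
print(contexts.shape)               # (98, 48): one flattened 3-frame context per time step
```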
History
The TDNN was first proposed to classify phonemes in speech signals for automatic speech recognition, where the automatic determination of precise segments or feature boundaries is difficult or impossible. Because the TDNN recognizes phonemes and their underlying acoustic/phonetic features independently of position in time, it improved performance over static classification.[1][2] It was also applied to two-dimensional signals (time-frequency patterns in speech[3] and coordinate-space patterns in OCR[4]).
Overview
The Time Delay Neural Network, like other neural networks, operates with multiple interconnected layers of perceptrons, and is implemented as a feedforward neural network. All neurons (at each layer) of a TDNN receive inputs from the outputs of neurons at the layer below but with two differences:
- Unlike regular multi-layer perceptrons, all units in a TDNN, at each layer, obtain inputs from a contextual window of outputs from the layer below. For time-varying signals (e.g. speech), each unit has connections to the outputs from units below but also to the time-delayed (past) outputs from these same units. This models the units' temporal pattern/trajectory. For two-dimensional signals (e.g. time-frequency patterns or images), a 2-D context window is observed at each layer. Higher layers take inputs from wider context windows than lower layers and thus generally model coarser levels of abstraction.
- Shift-invariance is achieved by explicitly removing position dependence during backpropagation training. This is done by making time-shifted copies of a network across the dimension of invariance (here: time). The error gradient is then computed by backpropagation through all these networks from an overall target vector, but before performing the weight update, the error gradients associated with shifted copies are averaged and thus shared and constrained to be equal. Thus, all position dependence from backpropagation training through the shifted copies is removed, and the copied networks learn the most salient hidden features shift-invariantly, i.e. independently of their precise position in the input data. Shift-invariance is also readily extended to multiple dimensions by imposing similar weight-sharing across copies that are shifted along multiple dimensions.[3][4] (A minimal sketch of this weight sharing follows this list.)
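The two properties above map directly onto a one-dimensional convolution, as the Common Libraries section notes below. A minimal sketch in Python/PyTorch, with illustrative layer sizes that are assumptions rather than values from the cited papers:

```python
import torch
import torch.nn as nn

# One TDNN hidden layer: 8 units, each reading a 3-frame context window,
# with the same weights applied at every time shift (weight sharing).
tdnn_layer = nn.Conv1d(in_channels=16, out_channels=8, kernel_size=3)

x = torch.randn(1, 16, 100)          # (batch, spectral coefficients, time)
h = torch.tanh(tdnn_layer(x))        # (1, 8, 98): identical weights at all 98 shifts
```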
Example
In the case of a speech signal, inputs are spectral coefficients over time.
In order to learn critical acoustic-phonetic features (for example formant transitions, bursts, frication, etc.) without first requiring precise localization, the TDNN is trained time-shift-invariantly. Time-shift invariance is achieved through weight sharing across time during training: time-shifted copies of the TDNN are made over the input range (from left to right in the TDNN diagram). Backpropagation is then performed from an overall classification target vector (in the TDNN diagram, three phoneme class targets, /b/, /d/ and /g/, are shown in the output layer), resulting in gradients that will generally vary for each of the time-shifted network copies. Since such time-shifted networks are only copies, however, the position dependence is removed by weight sharing. In this example, this is done by averaging the gradients from each time-shifted copy before performing the weight update. In speech, time-shift-invariant training was shown to learn weight matrices that are independent of the precise positioning of the input. The weight matrices were also shown to detect acoustic-phonetic features known to be important for human speech perception, such as formant transitions and bursts.[1] TDNNs could also be combined or grown by way of pre-training.[5]
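The following toy training step (Python/PyTorch; the layer sizes and the pooling-based output integration are assumptions for illustration, not the paper's exact topology) shows how weight sharing realizes the gradient averaging described above: each shared weight receives a single accumulated gradient over all of its time-shifted applications before the update.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(16, 8, kernel_size=3), nn.Tanh(),   # hidden TDNN layer over 3-frame windows
    nn.Conv1d(8, 3, kernel_size=5), nn.Tanh(),    # three phoneme units (/b/, /d/, /g/) over time
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),        # integrate the evidence over all time shifts
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(4, 16, 30)       # four training tokens, 30 frames of 16 coefficients each
y = torch.tensor([0, 1, 2, 0])   # phoneme class targets

opt.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                  # gradients of all time-shifted copies accumulate per shared weight
opt.step()                       # one position-independent update
```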
Implementation
The precise architecture of TDNNs (time delays, number of layers) is mostly determined by the designer depending on the classification problem and the most useful context sizes. The delays or context windows are chosen specifically for each application. Work has also been done to create adaptable time-delay TDNNs,[6] where this manual tuning is eliminated.
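As a sketch of such a manual design (the values here are assumptions in the spirit of the original phoneme TDNN, not prescribed ones), the delays and layer count can be written down as a small specification and turned into a network:

```python
import torch.nn as nn

# Designer-chosen context sizes per layer, picked per application.
layer_spec = [
    {"in": 16, "out": 8, "context": 3},   # layer 1: 3-frame windows
    {"in": 8,  "out": 3, "context": 5},   # layer 2: 5-frame windows
]

layers = []
for spec in layer_spec:
    layers += [nn.Conv1d(spec["in"], spec["out"], spec["context"]), nn.Tanh()]
tdnn = nn.Sequential(*layers)
```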
State of the Art
TDNN based phoneme recognizers compared favourably in early comparisons with HMM-based phone models.[1][5] Modern deep TDNN architectures include many more hidden layers and sub-sample or pool connections over broader contexts at higher layers. They achieve up to 50% word error reduction over GMM-based acoustic models.[7][8] While the different layers of TDNNs are intended to learn features of increasing context width, they nevertheless model only local contexts. When longer-distance relationships and pattern sequences have to be processed, learning states and state sequences is important, and TDNNs can be combined with other modelling techniques.[9][3][4]
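A rough sketch of such a deep TDNN stack (Python/PyTorch; the feature and layer dimensions are illustrative assumptions, and dilation is used here as one way to realize sub-sampled connections over broader contexts at higher layers):

```python
import torch.nn as nn

deep_tdnn = nn.Sequential(
    nn.Conv1d(40, 512, kernel_size=5, dilation=1), nn.ReLU(),  # dense local context
    nn.Conv1d(512, 512, kernel_size=3, dilation=2), nn.ReLU(), # skips every other frame
    nn.Conv1d(512, 512, kernel_size=3, dilation=4), nn.ReLU(), # still broader context
)
```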
Applications
Speech Recognition
TDNNs used to solve problems in speech recognition were introduced in 1987[2] and were initially focused on shift-invariant phoneme recognition. Speech lends itself nicely to TDNNs as spoken sounds are rarely of uniform length and precise segmentation is difficult or impossible. By scanning a sound over past and future, the TDNN is able to construct a model for the key elements of that sound in a time-shift-invariant manner. This is particularly useful as sounds are smeared out through reverberation.[7][8] Large phonetic TDNNs can be constructed modularly through pre-training and combining smaller networks.[5]
Large Vocabulary Speech Recognition
Large vocabulary speech recognition requires recognizing sequences of phonemes that make up words subject to the constraints of a large pronunciation vocabulary. Integration of TDNNs into large vocabulary speech recognizers is possible by introducing state transitions and search between the phonemes that make up a word. The resulting Multi-State Time-Delay Neural Network (MS-TDNN) can be trained discriminatively at the word level, thereby optimizing the entire arrangement toward word recognition instead of phoneme classification.[9][10][4]
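A toy sketch of the word-level scoring idea (Python; this simple dynamic-program alignment only illustrates introducing states and search over phoneme scores, and is not the cited papers' exact formulation):

```python
import numpy as np

def word_score(phoneme_scores, word_phonemes):
    """Align per-frame phoneme scores (T, P) to a word's phoneme sequence."""
    T, S = len(phoneme_scores), len(word_phonemes)
    best = np.full((T, S), -np.inf)
    best[0, 0] = phoneme_scores[0, word_phonemes[0]]
    for t in range(1, T):
        for s in range(S):
            stay = best[t - 1, s]                               # remain in the same phoneme state
            advance = best[t - 1, s - 1] if s > 0 else -np.inf  # transition to the next state
            best[t, s] = max(stay, advance) + phoneme_scores[t, word_phonemes[s]]
    return best[-1, -1]   # best alignment ending in the word's final phoneme

scores = np.log(np.random.dirichlet(np.ones(3), size=20))  # 20 frames, 3 phoneme classes
print(word_score(scores, [0, 1, 2]))                       # score for a 3-phoneme word
```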
Speaker Independence
Two-dimensional variants of the TDNN were proposed for speaker independence.[3] Here, shift-invariance is applied to the time as well as to the frequency axis in order to learn hidden features that are independent of precise location in time and in frequency (the latter being due to speaker variability).
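A brief sketch of the two-dimensional case (Python/PyTorch; the window and spectrogram sizes are assumptions): shift-invariance along both axes falls out of a 2-D convolution over the time-frequency plane.

```python
import torch
import torch.nn as nn

layer = nn.Conv2d(1, 8, kernel_size=(5, 3))   # assumed 5-bin frequency by 3-frame time window
spectrogram = torch.randn(1, 1, 64, 100)      # (batch, channel, frequency bins, frames)
features = torch.tanh(layer(spectrogram))     # same weights at every time and frequency shift
```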
Reverberation
One of the persistent problems in speech recognition is recognizing speech when it is corrupted by echo and reverberation (as is the case in large rooms and with distant microphones). Reverberation can be viewed as corrupting speech with delayed versions of itself. It is difficult, however, to de-reverberate a signal in general, as the impulse response function (and thus the convolutional noise experienced by the signal) is not known for an arbitrary space. The TDNN was shown to recognize speech robustly despite different levels of reverberation.[7][8]
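The "delayed versions of itself" view can be illustrated with a toy convolution (Python/NumPy; the impulse response here is a made-up example, since the real one is unknown for an arbitrary room):

```python
import numpy as np

clean = np.random.randn(1000)                      # stand-in for a clean speech signal
impulse_response = np.zeros(200)
impulse_response[[0, 60, 150]] = [1.0, 0.5, 0.25]  # direct path plus two attenuated echoes
reverberant = np.convolve(clean, impulse_response) # what a distant microphone would record
```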
Lip-reading – Audio-Visual Speech
TDNNs were also successfully used in early demonstrations of audio-visual speech, where the sounds of speech are complemented by visually reading lip movement.[10] Here, TDNN based recognizers used visual and acoustic features jointly to achieve improved recognition accuracy, particularly in the presence of noise, where complementary information from an alternate modality could be fused nicely in a neural net.
Handwriting Recognition
TDNNs have been used effectively in compact and high-performance handwriting recognition systems. Shift-invariance was also adapted to spatial patterns (x/y-axes) in image offline handwriting recognition.[4]
Video Analysis
Video has a temporal dimension that makes a TDNN an ideal solution for analysing motion patterns. An example of this analysis is combining vehicle detection with pedestrian recognition.[11] When examining videos, subsequent images are fed into the TDNN as input, where each image is the next frame in the video. The strength of the TDNN comes from its ability to examine objects shifted in time, forward and backward, and to detect an object regardless of when it appears. If an object can be recognized in this manner, an application can anticipate finding that object in the future and take an optimal action.
Image Recognition
Two-dimensional TDNNs were later applied to other image recognition tasks under the name of “Convolutional Neural Networks”, where shift-invariant training is applied to the x/y axes of an image.
Common Libraries
- TDNNs can be implemented in virtually all machine learning frameworks using one-dimensional convolutional neural networks, due to the equivalence of the methods (see the check after this list).
- Matlab: The neural network toolbox has explicit functionality designed to produce a time delay neural network given the step size of time delays and an optional training function. The default training algorithm is a supervised back-propagation algorithm that updates filter weights based on Levenberg-Marquardt optimization. The function is timedelaynet(delays, hidden_layers, train_fnc) and returns a time-delay neural network architecture that a user can train and provide inputs to.[12]
- The Kaldi ASR Toolkit has an implementation of TDNNs with several optimizations for speech recognition.[13]
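A quick check of the equivalence claimed in the first bullet (Python/PyTorch; the sizes are arbitrary): the response of a shared-weight TDNN unit at one time shift equals the 1-D convolution output at that position.

```python
import torch
import torch.nn as nn

conv = nn.Conv1d(16, 1, kernel_size=3, bias=False)  # one TDNN unit over a 3-frame window
x = torch.randn(1, 16, 10)

manual = (conv.weight * x[:, :, 4:7]).sum()         # shared weights applied by hand at shift 4
assert torch.allclose(conv(x)[0, 0, 4], manual, atol=1e-6)
```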
See also
- Convolutional neural network - a convolutional neural net where the convolution is performed along the time axis of the data is very similar to a TDNN.
- Recurrent neural networks - a recurrent neural network also handles temporal data, albeit in a different manner. Instead of a time-varied input, RNNs maintain an internal hidden state to keep track of past (and, in the case of bi-directional RNNs, future) inputs.
References
1. Alexander Waibel, Toshiyuki Hanazawa, Geoffrey Hinton, Kiyohiro Shikano, Kevin J. Lang, Phoneme Recognition Using Time-Delay Neural Networks, IEEE Transactions on Acoustics, Speech, and Signal Processing, Volume 37, No. 3, pp. 328-339, March 1989.
2. Alexander Waibel, Phoneme Recognition Using Time-Delay Neural Networks, SP87-100, Meeting of the Institute of Electrical, Information and Communication Engineers (IEICE), December 1987, Tokyo, Japan.
3. John B. Hampshire and Alexander Waibel, Connectionist Architectures for Multi-Speaker Phoneme Recognition, Advances in Neural Information Processing Systems, 1990, Morgan Kaufmann.
4. Stefan Jaeger, Stefan Manke, Juergen Reichert, Alexander Waibel, Online handwriting recognition: the NPen++ recognizer, International Journal on Document Analysis and Recognition, Vol. 3, Issue 3, March 2001.
5. Alexander Waibel, Hidefumi Sawai, Kiyohiro Shikano, Modularity and Scaling in Large Phonemic Neural Networks, IEEE Transactions on Acoustics, Speech, and Signal Processing, December 1989.
6. Christian Koehler and Joachim K. Anlauf, An adaptable time-delay neural-network algorithm for image sequence analysis, IEEE Transactions on Neural Networks 10.6 (1999): 1531-1536.
7. Vijayaditya Peddinti, Daniel Povey, Sanjeev Khudanpur, A time delay neural network architecture for efficient modeling of long temporal contexts, Proceedings of Interspeech 2015.
8. David Snyder, Daniel Garcia-Romero, Daniel Povey, Time-Delay Deep Neural Network-Based Universal Background Models for Speaker Recognition, Proceedings of ASRU 2015.
9. Patrick Haffner, Alexander Waibel, Multi-State Time Delay Neural Networks for Continuous Speech Recognition, Advances in Neural Information Processing Systems, 1992, Morgan Kaufmann.
10. Christoph Bregler, Hermann Hild, Stefan Manke, Alexander Waibel, Improving Connected Letter Recognition by Lipreading, IEEE Proceedings International Conference on Acoustics, Speech, and Signal Processing, Minneapolis, 1993.
11. Christian Woehler and Joachim K. Anlauf, Real-time object recognition on image sequences with the adaptable time delay neural network algorithm - applications for autonomous vehicles, Image and Vision Computing 19.9 (2001): 593-618.
12. "Time Series and Dynamic Systems - MATLAB & Simulink". mathworks.com. Retrieved 21 June 2016.
13. Vijayaditya Peddinti, Guoguo Chen, Vimal Manohar, Tom Ko, Daniel Povey, Sanjeev Khudanpur, JHU ASpIRE system: Robust LVCSR with TDNNs, i-vector Adaptation and RNN-LMs, Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop, 2015.