Supervised Sequence Labelling with Recurrent Neural Networks
Recurrent neural networks are powerful sequence learners. They are able to incorporate context information in a flexible way, and are robust to localised distortions of the input data. These properties make them well suited to sequence labelling, where input sequences are transcribed with streams of labels. The aim of this thesis is to advance the state of the art in supervised sequence labelling with recurrent networks. Its two main contributions are (1) a new type of output layer that allows recurrent networks to be trained directly for sequence labelling tasks where the alignment between the inputs and the labels is unknown, and (2) an extension of the long short-term memory network architecture to multidimensional data, such as images and video sequences.