
Publications

2015
T. Merritt, Latorre, J., and King, S., Attributing modelling errors in HMM synthesis by stepping gradually from natural to modelled speech, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brisbane, 2015.
Z. Wu, Valentini-Botinhao, C., Watts, O., and King, S., Deep neural networks employing multi-task learning and stacked bottleneck features for speech synthesis, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2015.
P. Swietojanski and Renals, S., Differentiable Pooling for Unsupervised Speaker Adaptation, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2015.
Y. Liu, Karanasou, P., and Hain, T., An Investigation Into Speaker Informed DNN Front-end for LVCSR, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2015.
Q. Hu, Stylianou, Y., Maia, R., Richmond, K., and Yamagishi, J., Methods for applying dynamic sinusoidal models to statistical parametric speech synthesis, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2015.
B. Uria, Murray, I., Renals, S., and Valentini-Botinhao, C., Modelling acoustic feature dependencies with artificial neural networks: Trajectory-RNADE, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2015.
Z. Wu, Khodabakhsh, A., Demiroglu, C., Yamagishi, J., Saito, D., Toda, T., and King, S., SAS: A speaker verification spoofing database containing diverse attacks, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2015.
2014
P. Karanasou, Wang, Y., Gales, M., and Woodland, P., Adaptation of Deep Neural Network Acoustic Models Using Factorised I-vectors, in Proceedings of Interspeech’14, 2014.
I. Casanueva, Christensen, H., Hain, T., and Green, P., Adaptive speech recognition and dialogue management for users with speech disorders, in Proceedings of Interspeech'14, 2014.
H. Christensen, Casanueva, I., Cunningham, S., Green, P., and Hain, T., Automatic Selection of Speakers for Improved Acoustic Modelling: Recognition of Disordered Speech with Sparse Data, in Spoken Language Technology Workshop, SLT'14, Lake Tahoe, 2014.
P. Swietojanski, Ghoshal, A., and Renals, S., Convolutional Neural Networks for Distant Speech Recognition, IEEE Signal Processing Letters, vol. 21, pp. 1120-1124, 2014.
L. Lu, Ghoshal, A., and Renals, S., Cross-lingual subspace Gaussian mixture model for low-resource speech recognition, IEEE Transactions on Audio, Speech and Language Processing, 2014.
R. Dall, Wester, M., and Corley, M., The Effect of Filled Pauses and Speaking Rate on Speech Comprehension in Natural, Vocoded and Synthetic Speech, in Proceedings of Interspeech, 2014.
X. Liu, Wang, Y., Chen, X., Gales, M., and Woodland, P., Efficient lattice rescoring using recurrent neural network language models, in IEEE ICASSP 2014, Florence, Italy, 2014.
R. Dall, Tomalin, M., Wester, M., Byrne, W., and King, S., Investigating Automatic & Human Filled Pause Insertion for Speech Synthesis, in Proceedings of Interspeech, 2014.
T. Merritt, Raitio, T., and King, S., Investigating source and filter contributions, and their interaction, to statistical parametric speech synthesis, in Proc. Interspeech, Singapore, 2014, pp. 1509–1513.
P. Swietojanski and Renals, S., Learning Hidden Unit Contributions for Unsupervised Speaker Adaptation of Neural Network Acoustic Models, in Proc. IEEE Workshop on Spoken Language Technology, Lake Tahoe, USA, 2014.
G. E. Henter, Merritt, T., Shannon, M., Mayo, C., and King, S., Measuring the perceptual effects of modelling assumptions in speech synthesis using stimuli constructed from repeated natural speech, in Proceedings of Interspeech, Singapore, 2014.
P. Lanchantin, Gales, M. J. F., King, S., and Yamagishi, J., Multiple-Average-Voice-based Speech Synthesis, in Proc. ICASSP, 2014.
S. Renals and Swietojanski, P., Neural Networks for Distant Speech Recognition, in The 4th Joint Workshop on Hands-free Speech Communication and Microphone Arrays (HSCMA), 2014.
X. Liu, Gales, M., and Woodland, P., Paraphrastic neural network language models, in IEEE ICASSP 2014, Florence, Italy, 2014.
L. Lu and Renals, S., Probabilistic Linear Discriminant Analysis for Acoustic Modelling, IEEE Signal Processing Letters, vol. 21, pp. 702-706, 2014.
P. Zhang, Liu, Y., and Hain, T., Semi-Supervised DNN Training in Meeting Recognition, presented at the IEEE Spoken Language Technology Workshop (SLT'14), South Lake Tahoe, USA, December 2014.
C. Zhang and Woodland, P. C., Standalone training of context-dependent deep neural network acoustic models, in IEEE ICASSP 2014, Florence, Italy, 2014.
O. Saz and Hain, T., Using Contextual Information in Joint Factor Eigenspace MLLR for Speech Recognition in Diverse Scenarios, in Proceedings of the 2014 ICASSP, Florence, Italy, 2014.
