Publication Details

Deep Auto-encoder Based Multi-task Learning Using Probabilistic Transcriptions

DAS Amit, HASEGAWA-JOHNSON Mark and VESELÝ Karel. Deep Auto-encoder Based Multi-task Learning Using Probabilistic Transcriptions. In: Proceedings of Interspeech 2017. Stockholm: International Speech Communication Association, 2017, pp. 2073-2077. ISSN 1990-9772. Available from: http://www.isca-speech.org/archive/Interspeech_2017/pdfs/0582.PDF
Czech title
Multi-task trénování s pravděpodobnostními přepisy založené na hlubokém autoenkodéru
Type
conference paper
Language
english
Authors
Das Amit (UILLINOIS)
Hasegawa-Johnson Mark (UILLINOIS)
Veselý Karel, Ing., Ph.D. (DCGM FIT BUT)
URL
Keywords

cross-lingual speech recognition, probabilistic transcription, deep neural networks, multi-task learning

Abstract

This paper presents a deep auto-encoder based multi-task learning method for training cross-lingual acoustic models from probabilistic transcriptions produced by non-native crowd workers, supplemented by untranscribed target-language data.

Annotation

We examine a scenario where we have no access to native transcribers in the target language. This is typical of under-resourced language communities. However, turkers (online crowd workers) available in online marketplaces can serve as valuable alternative resources for providing transcripts in the target language. We assume that the turkers neither speak nor have any familiarity with the target language. Thus, they are unable to distinguish all phone pairs in the target language; their transcripts therefore specify, at best, a probability distribution called a probabilistic transcript (PT). Standard deep neural network (DNN) training using PTs does not necessarily improve error rates. Previously reported results have demonstrated some success by adopting the multi-task learning (MTL) approach. In this study, we report further improvements by introducing a deep auto-encoder based MTL. This method leverages large amounts of untranscribed data in the target language in addition to the PTs obtained from turkers. Furthermore, to encourage transfer learning in the feature space, we also examine the effect of using monophones from transcripts in well-resourced languages. We report consistent improvements in phone error rate (PER) for Swahili, Amharic, Dinka, and Mandarin.
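The two-task objective described in the annotation can be sketched as follows: one head is trained against the turker-derived phone distributions (soft-label cross-entropy on PT-labeled frames), while an auto-encoder head reconstructs untranscribed target-language frames through the same shared encoder. This is a minimal numpy sketch, not the paper's implementation; the layer sizes, tanh activation, and the loss weight `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical dimensions: 40-dim acoustic features, 64 shared hidden
# units, 30 target-language phones (all illustrative, not from the paper).
D, H, P = 40, 64, 30
W_shared = rng.normal(scale=0.1, size=(D, H))  # shared encoder (feature transfer)
W_phone  = rng.normal(scale=0.1, size=(H, P))  # task 1: phone classifier head
W_recon  = rng.normal(scale=0.1, size=(H, D))  # task 2: auto-encoder decoder head

def mtl_loss(x_pt, pt_targets, x_untrans, lam=0.5):
    """Combined MTL loss: soft-label cross-entropy on PT-labeled frames
    plus reconstruction error on untranscribed frames."""
    # Task 1: classify PT-labeled frames against the PT distribution.
    h1 = np.tanh(x_pt @ W_shared)
    probs = softmax(h1 @ W_phone)
    ce = -np.mean(np.sum(pt_targets * np.log(probs + 1e-12), axis=1))
    # Task 2: reconstruct untranscribed frames through the shared encoder.
    h2 = np.tanh(x_untrans @ W_shared)
    recon = h2 @ W_recon
    mse = np.mean((recon - x_untrans) ** 2)
    return ce + lam * mse

# A PT assigns each frame a distribution over phones, not a single label.
x_pt = rng.normal(size=(8, D))
pt = softmax(rng.normal(size=(8, P)))
x_un = rng.normal(size=(32, D))      # untranscribed target-language frames
loss = mtl_loss(x_pt, pt, x_un)
print(loss)
```

Because both heads share `W_shared`, gradients from the reconstruction task regularize the features used by the phone classifier, which is how the untranscribed data contributes.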

Published
2017
Pages
2073-2077
Journal
Proceedings of Interspeech - on-line, vol. 2017, no. 8, ISSN 1990-9772
Proceedings
Proceedings of Interspeech 2017
Conference
Interspeech Conference, Stockholm, SE
Publisher
International Speech Communication Association
Place
Stockholm, SE
DOI
10.21437/Interspeech.2017-582
UT WoS
000457505000434
EID Scopus
BibTeX
@INPROCEEDINGS{FITPUB11585,
   author = "Amit Das and Mark Hasegawa-Johnson and Karel Vesel\'{y}",
   title = "Deep Auto-encoder Based Multi-task Learning Using Probabilistic Transcriptions",
   pages = "2073--2077",
   booktitle = "Proceedings of Interspeech 2017",
   journal = "Proceedings of Interspeech - on-line",
   volume = 2017,
   number = 8,
   year = 2017,
   location = "Stockholm, SE",
   publisher = "International Speech Communication Association",
   ISSN = "1990-9772",
   doi = "10.21437/Interspeech.2017-582",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/11585"
}