Spoken dialogue managers have benefited from using stochastic planners such as Markov Decision Processes (MDPs). However, so far, MDPs have not handled noisy and ambiguous speech utterances well. We use a Partially Observable Markov Decision Process (POMDP)-style approach to generate dialogue strategies by inverting the notion of dialogue state: the state represents the user's intentions rather than the system state. We demonstrate that, under the same noisy conditions, a POMDP dialogue manager makes fewer mistakes than an MDP dialogue manager. Furthermore, as the quality of speech recognition degrades, the POMDP dialogue manager automatically adjusts its policy.
@INPROCEEDINGS{Roy00b,
  AUTHOR    = {Roy, N. and Pineau, J. and Thrun, S.},
  TITLE     = {Spoken Dialogue Management Using Probabilistic Reasoning},
  YEAR      = {2000},
  BOOKTITLE = {Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL-2000)},
  ADDRESS   = {Hong Kong}
}
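To illustrate the core idea described in the abstract, the sketch below shows a generic POMDP-style belief update over user intentions driven by noisy speech observations. It is a minimal, hypothetical example of a Bayes-filter belief update, not the authors' actual system; the intention set, transition probabilities, and observation model are invented for illustration.

```python
# Minimal, hypothetical sketch of maintaining a belief over user intentions
# (the POMDP-style dialogue state) under noisy speech observations.
# All names and probabilities below are illustrative assumptions.

INTENTIONS = ["request_weather", "request_time", "say_goodbye"]

def transition_prob(next_intent, prev_intent, system_action):
    """Toy P(s' | s, a): user intentions are assumed mostly persistent
    between dialogue turns, regardless of the system action."""
    if next_intent == prev_intent:
        return 0.8
    return 0.2 / (len(INTENTIONS) - 1)

# Toy P(o | s): probability that the noisy speech recognizer outputs a given
# keyword when the user's true intention is s.
OBSERVATION_MODEL = {
    "request_weather": {"weather": 0.7, "time": 0.1, "bye": 0.1, "<noise>": 0.1},
    "request_time":    {"weather": 0.1, "time": 0.7, "bye": 0.1, "<noise>": 0.1},
    "say_goodbye":     {"weather": 0.1, "time": 0.1, "bye": 0.7, "<noise>": 0.1},
}

def belief_update(belief, system_action, observation):
    """Bayes-filter update: b'(s') proportional to P(o|s') * sum_s P(s'|s,a) b(s)."""
    new_belief = {}
    for s_next in INTENTIONS:
        predicted = sum(
            transition_prob(s_next, s_prev, system_action) * belief[s_prev]
            for s_prev in INTENTIONS
        )
        new_belief[s_next] = OBSERVATION_MODEL[s_next].get(observation, 0.01) * predicted
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()}

if __name__ == "__main__":
    # Start from a uniform belief over user intentions.
    belief = {s: 1.0 / len(INTENTIONS) for s in INTENTIONS}
    # The recognizer hears "weather"; the belief shifts toward that intention
    # without committing to a single hypothesis, which is what lets a
    # POMDP-style manager stay cautious as recognition quality degrades.
    belief = belief_update(belief, system_action="greet", observation="weather")
    print(belief)
```

In contrast to an MDP manager, which would commit to the single most likely recognition result, a policy computed over such a belief distribution can ask clarifying questions or defer actions when the probability mass is spread across several intentions.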