Speaker: Christopher Amato
Speaker Affiliation: Postdoctoral Associate; MIT Computer Science & Artificial Intelligence Laboratory (CSAIL)
Date: Friday, July 27, at 11:00 AM in 5-314
Abstract: Decision-making under uncertainty is a thriving research area in artificial intelligence. It is increasingly common that agents must deal with uncertain action outcomes and uncertain information about the state of the system. To model these problems, the partially observable Markov decision process (POMDP) is a popular and powerful choice. When multiple decision makers are present, collaboration under uncertainty is another common and important problem; such problems can be modeled as decentralized POMDPs (Dec-POMDPs). Both models present difficult research challenges that must be overcome before large, realistic problems can be solved effectively. In this talk, I will discuss some of our recent work on increasing the scalability of solution methods for these models, including exploiting problem structure and domain assumptions as well as improved search and sampling methods for general problems.
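For readers unfamiliar with the POMDP model mentioned above, the core operation is maintaining a belief (a probability distribution over hidden states) via a Bayes-filter update. The sketch below is the standard textbook formulation, not material from the talk; the array names `T` and `O` and the dictionary layout are illustrative assumptions.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Standard POMDP belief update (illustrative, not from the talk).

    b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s).

    b : belief over states, shape (S,)
    a : action taken
    o : observation received
    T : T[a][s, s'] = P(s' | s, a), transition model per action
    O : O[a][s', o] = P(o | s', a), observation model per action
    """
    predicted = b @ T[a]            # predict next-state distribution
    unnorm = O[a][:, o] * predicted  # weight by observation likelihood
    return unnorm / unnorm.sum()     # renormalize to a distribution

# Tiny two-state example with hypothetical numbers:
b = np.array([0.5, 0.5])
T = {0: np.array([[0.9, 0.1], [0.2, 0.8]])}
O = {0: np.array([[0.7, 0.3], [0.4, 0.6]])}
b_new = belief_update(b, a=0, o=0, T=T, O=O)
```

In the Dec-POMDP setting, each agent cannot run this update directly because it sees only its own local observations, which is one source of the scalability challenges the talk addresses.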