Learn how researchers are using new adaptive sampling schemes for stochastic recursions at this seminar in the IE Decision Systems Engineering Fall ’16 Seminar Series.
Adaptive Sampling Recursions for Simulation Optimization
Raghu Pasupathy, Associate Professor, Purdue University, Indiana
Friday, September 16, 2016
Brickyard Engineering (BYENG) 210, Tempe campus
For roughly six decades since the seminal paper of Robbins and Monro (1951), Stochastic Approximation has dominated the landscape of algorithms for solving optimization problems with Monte Carlo observable functions. Recently, however, inspired by the rise in parallel computing and advances in nonlinear programming methods, there has been increasing interest in alternative sampling-based frameworks. Such frameworks are convenient in that they (could) use an existing recursive method, e.g., line-search or trust-region, with embedded Monte Carlo estimators of objects appearing within the recursion. In this talk, after reviewing existing results on optimal sampling rates, we consider the question of how to adaptively sample within stochastic recursions. Specifically, we will demonstrate that a simple adaptive scheme that has deep connections to proportional-width sequential confidence intervals endows stochastic recursions with convergence rates that are arbitrarily close to being optimal, while remaining practical enough for good finite-time implementation. Two illustrative recursions that embed line-search and a fixed step size will be presented. The adaptive sampling schemes we advertise were independently discovered by Byrd, Chin and Nocedal, but from the viewpoint of the need to estimate descent directions within such algorithms. This is joint work with Peter Glynn (Stanford University), Soumyadip Ghosh (IBM TJ Watson Research), and Fatemeh Hashemi (Virginia Tech).
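The abstract's core idea — stopping a recursion's inner Monte Carlo sampling once a proportional-width (relative-precision) criterion is met — can be illustrated with a toy sketch. The problem below (minimizing f(x) = E[(x − W)²] with W ~ N(1, 1), whose minimizer is x* = 1), the stopping threshold `theta`, and all function names are illustrative assumptions, not the speaker's actual algorithm; the fixed-step recursion stands in for the fixed-step-size variant mentioned in the talk.

```python
import math
import random

def sample_gradient(x, rng):
    """One noisy observation of the gradient of f(x) = E[(x - W)^2],
    with W ~ N(1, 1); the unbiased gradient sample is 2*(x - W)."""
    w = rng.gauss(1.0, 1.0)
    return 2.0 * (x - w)

def adaptive_gradient_estimate(x, rng, theta=0.5, n0=8, n_max=10_000):
    """Proportional-width stopping rule (an assumed, simplified form):
    keep sampling until the standard error of the sample-mean gradient
    is at most theta times the magnitude of the estimate itself."""
    samples = [sample_gradient(x, rng) for _ in range(n0)]
    while len(samples) < n_max:
        n = len(samples)
        mean = sum(samples) / n
        var = sum((s - mean) ** 2 for s in samples) / (n - 1)
        std_err = math.sqrt(var / n)
        if std_err <= theta * abs(mean):
            break
        samples.append(sample_gradient(x, rng))
    n = len(samples)
    return sum(samples) / n, n

def adaptive_sampling_recursion(x0, step=0.1, iters=50, seed=0):
    """Fixed-step-size stochastic recursion whose gradient estimates
    use adaptively chosen sample sizes at each iterate."""
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        g, _ = adaptive_gradient_estimate(x, rng)
        x -= step * g
    return x

x_final = adaptive_sampling_recursion(5.0)  # should land near the minimizer x* = 1
```

Note the mechanism the abstract alludes to: near the solution the true gradient shrinks, so the relative-precision rule automatically demands larger samples, which is what drives the convergence-rate results discussed in the talk.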
Raghu Pasupathy is interested in questions related to Monte Carlo sampling and (statistical) efficiency within the context of stochastic simulation, optimization, and machine learning. A primary focus of his research has been developing methods for simulation optimization, that is, optimization contexts where the constituent functions can be observed only through a Monte Carlo oracle. Some of his recent work also includes “super-efficient” rare-event probability computation, constrained random vector generation, and function estimation under uncertainty. Raghu teaches Monte Carlo methods, probability, and optimization. Raghu is active on the editorial board of the Winter Simulation Conference. He is the Vice President/President Elect of the INFORMS Simulation Society, and also currently serves as an associate editor for Operations Research and INFORMS Journal on Computing, and as the Area Editor for the simulation desk at IIE Transactions.