The Center for Control, Dynamical Systems, and Computation, University of California at Santa Barbara
 

Summary: The Center for Control, Dynamical Systems, and Computation
University of California at Santa Barbara
Fall 2008 Seminar Series
Presents
Randomized optimization with an expected value criterion: Finite sample bounds and applications
John Lygeros
ETH Zurich
Tuesday, October 14, 2008, 4:00-5:00pm, ESB 2001
Abstract:
Simulated annealing, Markov Chain Monte Carlo, and genetic algorithms are all randomized methods
that can be used in practice to solve (albeit approximately) complex optimization problems. They rely on
constructing appropriate Markov chains, whose stationary distribution concentrates on "good" parts of the
parameter space (i.e. near the optimizers). Many of these methods come with asymptotic convergence
guarantees that establish conditions under which the Markov chain converges to a globally optimal
solution in an appropriate probabilistic sense. An interesting question that is usually not covered by
asymptotic convergence results is the rate of convergence: How long should the randomized algorithm
be executed to obtain a near optimal solution with high probability? Answering this question allows
one to determine a level of accuracy and confidence with which approximate optimality claims can
be made, as a function of the amount of time available for computation. In this talk we present some

  

Source: Akhmedov, Azer - Department of Mathematics, University of California at Santa Barbara

 

Collections: Mathematics