Reinforcement Learning

This module explores reinforcement learning, a type of machine learning in which agents learn to make decisions by interacting with an environment to maximize cumulative reward. It covers key concepts such as Markov decision processes, policy optimization, and value-based methods, along with applications in areas like gaming, robotics, and autonomous systems.

Curriculum Builder

Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. Second edition. Adaptive Computation and Machine Learning Series. Cambridge, Massachusetts: The MIT Press, 2018.

Kochenderfer, Mykel J., Tim A. Wheeler, and Kyle H. Wray. Algorithms for Decision Making. Cambridge, Massachusetts: The MIT Press, 2022.

Agarwal, Alekh, Nan Jiang, and Sham M. Kakade. “Reinforcement Learning: Theory and Algorithms,” 2019. https://www.semanticscholar.org/paper/Reinforcement-Learning%3A-Theory-and-Algorithms-Agarwal-Jiang/8ef87e938b53c7f3ffdf47dfc317aa9b82848535

Bertsekas, Dimitri P. Reinforcement Learning and Optimal Control. 2nd printing (includes editorial revisions). Belmont, Massachusetts: Athena Scientific, 2019.
