Reinforcement Learning: An Introduction, Second Edition

This textbook provides a clear and simple account of the key ideas and algorithms of reinforcement learning that is accessible to readers in all the related disciplines. Familiarity with elementary concepts of probability is required.

Tag(s): Machine Learning

Publication date: 03 Apr 2018

ISBN-10: n/a

ISBN-13: n/a

Paperback: 548 pages

Views: 31,750

Type: Textbook

Publisher: The MIT Press

License: Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic

Post time: 09 Jan 2017 11:00:00

Summary/excerpts of (and not a substitute for) the Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic license:
You are free to:

Share — copy and redistribute the material in any medium or format

The licensor cannot revoke these freedoms as long as you follow the license terms.

The full license is available at https://creativecommons.org/licenses/by-nc-nd/2.0/.
Admin's note:

This is a draft of the second edition and a work in progress. Once the book is completed, this draft may no longer be publicly and freely accessible.

From the Preface to the First Edition:
Our goal in writing this book was to provide a clear and simple account of the key ideas and algorithms of reinforcement learning. We wanted our treatment to be accessible to readers in all of the related disciplines, but we could not cover all of these perspectives in detail. For the most part, our treatment takes the point of view of artificial intelligence and engineering. Coverage of connections to other fields we leave to others or to another time. We also chose not to produce a rigorous formal treatment of reinforcement learning. We did not reach for the highest possible level of mathematical abstraction and did not rely on a theorem–proof format. We tried to choose a level of mathematical detail that points the mathematically inclined in the right directions without distracting from the simplicity and potential generality of the underlying ideas.

The book is largely self-contained. The only mathematical background assumed is familiarity with elementary concepts of probability, such as expectations of random variables. Chapter 9 is substantially easier to digest if the reader has some knowledge of artificial neural networks or some other kind of supervised learning method, but it can be read without prior background. We strongly recommend working the exercises provided throughout the book. Solution manuals are available to instructors. This and other related and timely material is available via the Internet.

From the Preface to the Second Edition:
The nearly twenty years since the publication of the first edition of this book have seen tremendous progress in artificial intelligence, propelled in large part by advances in machine learning, including advances in reinforcement learning. Although the impressive computational power that became available is responsible for some of these advances, new developments in theory and algorithms have been driving forces as well. In the face of this progress, we decided that a second edition of our 1998 book was long overdue, and we finally began the project in 2013. Our goal for the second edition was the same as our goal for the first: to provide a clear and simple account of the key ideas and algorithms of reinforcement learning that is accessible to readers in all the related disciplines. The edition remains an introduction, and we retain a focus on core, on-line learning algorithms. This edition includes some new topics that rose to importance over the intervening years, and we expanded coverage of topics that we now understand better. But we made no attempt to provide comprehensive coverage of the field, which has exploded in many different directions with outstanding contributions by many active researchers. We apologize for having to leave out all but a handful of these contributions.

More Resources:
- Code solutions are available on GitHub
- The book's official webpage

Updates:
- 2018-04-07: Draft of April 3, 2018 is now available. The download link has been updated.
- 2017-10-08: Draft of June 19, 2017 is now available. The download link has been updated.
 




About The Author(s)


Andrew G. Barto

Andrew Barto is Professor Emeritus in the College of Information and Computer Sciences at the University of Massachusetts Amherst. He is a co-director of the Autonomous Learning Laboratory. His research interests include the theory and application of methods for learning and planning in stochastic sequential decision problems; algebraic approaches to abstraction; the psychology, neuroscience, and computational theory of motivation, reward, and addiction; and computational models of learning and adaptation in animal motor control systems.


Richard S. Sutton

Richard S. Sutton is Professor and iCORE Chair in the Department of Computing Science at the University of Alberta. Dr. Sutton is considered one of the founding fathers of modern computational reinforcement learning, having made several significant contributions to the field, including temporal-difference learning, policy gradient methods, and the Dyna architecture.

