The machine learning consultancy: https://truetheta.io
Join my email list to get educational and useful articles (and nothing else!): https://mailchi.mp/truetheta/true-the...
Want to work together? See here: https://truetheta.io/about/#want-to-w...

Here, we learn about Function Approximation. This is a broad class of methods for learning within state spaces that are far too large for our previous methods to work. This is part five of a six-part series on Reinforcement Learning.

SOCIAL MEDIA
LinkedIn: /dj-rich-90b91753
Twitter: /duanejrich
Github: https://github.com/Duane321

Enjoy learning this way? Want me to make more videos? Consider supporting me on Patreon: /mutualinformation

SOURCES
[1] R. Sutton and A. Barto. Reinforcement Learning: An Introduction (2nd Ed). MIT Press, 2018.
[2] H. van Hasselt et al. RL Lecture Series, DeepMind and UCL, 2021. DeepMind x UCL | Deep Learning Lecture Series.

SOURCE NOTES
This video covers topics from chapters 9, 10, and 11 of [1], with only a light covering of chapter 11. [2] includes a lecture on Function Approximation, which was a helpful secondary source.

TIMESTAMPS
0:00 Intro
0:25 Large State Spaces and Generalization
1:55 On-Policy Evaluation
4:31 How do we select w?
6:46 How do we choose our target U?
9:27 A Linear Value Function
10:34 1000-State Random Walk
12:51 On-Policy Control with FA
14:26 The Mountain Car Task
19:30 Off-Policy Methods with FA

LINKS
1000-State Random Walk Problem: https://github.com/Duane321/mutual_in...
Mountain Car Task: https://github.com/Duane321/mutual_in...

NOTES
[1] In the Mountain Car Task, I left out a hyperparameter to tune: lambda. It controls how close the evenly spaced proto-points are considered to be to any given evaluation point. If lambda is very high, the proto-points are treated as very close to every evaluation point, so they won't do a good job of discriminating between different values over the state space. But if lambda is too low, the proto-points won't share any information beyond a tiny region surrounding each point. A rough sketch of this trade-off is given below.
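To make the lambda trade-off concrete, here is a minimal Python sketch of proto-point features over Mountain Car's 2D state space. It is not the video's code: the Gaussian kernel form, the 8x8 grid, and names like protos and features are assumptions for illustration only.

import numpy as np

# Evenly spaced proto-points over Mountain Car's state space
# (position in [-1.2, 0.6], velocity in [-0.07, 0.07]).
# Assumed setup; a real implementation would also rescale position and
# velocity to comparable ranges before measuring distance.
positions = np.linspace(-1.2, 0.6, 8)
velocities = np.linspace(-0.07, 0.07, 8)
protos = np.array([[p, v] for p in positions for v in velocities])  # (64, 2)

def features(state, lam):
    """Kernel similarity of `state` to each proto-point.

    High lam: every proto-point looks close to `state`, so the features
    barely discriminate between different states. Low lam: each
    proto-point only responds to states in a tiny neighborhood, so
    little information is generalized across the state space.
    """
    sq_dists = np.sum((protos - state) ** 2, axis=1)
    return np.exp(-sq_dists / lam)  # assumed Gaussian kernel with length scale lam

# The approximate value is linear in these features, v(s) = w . x(s),
# with w updated by semi-gradient TD as in the video.
state = np.array([-0.5, 0.0])  # a typical Mountain Car start position
x = features(state, lam=0.05)
w = np.zeros(len(protos))
value_estimate = w @ x

Sweeping lam while watching evaluation performance is the tuning step the note says was left out.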