Abstract
This is a presentation I gave to the Journal Club of the Nakamura Laboratory while interning there, before my graduate studies. We discussed the paper Continuous Inverse Optimal Control with Locally Optimal Examples by Levine and Koltun. It is a work in inverse optimal control, also known as inverse reinforcement learning: the problem of recovering an unknown reward function from demonstrations of an optimal policy in a Markov decision process.
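The slides themselves are not reproduced here, but to make the problem concrete, below is a minimal NumPy sketch of the maximum-entropy IRL formulation (the Ziebart et al. paper in the references, which is the discrete precursor to Levine and Koltun's continuous method). A linear reward r = Φθ is fit by gradient ascent on the demonstration log-likelihood, whose gradient is the difference between empirical and expected feature counts. All function and variable names are illustrative, not taken from either paper.

```python
import numpy as np

def soft_value_iteration(P, r, gamma=0.95, iters=200):
    """Soft (maximum-entropy) optimal policy for reward r.
    P: (S, A, S) transition probabilities; r: (S,) state rewards."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = r[:, None] + gamma * P @ V              # (S, A) soft Q-values
        m = Q.max(axis=1, keepdims=True)            # stabilized log-sum-exp
        V = (m + np.log(np.exp(Q - m).sum(axis=1, keepdims=True))).ravel()
    return np.exp(Q - V[:, None])                   # stochastic policy pi(a|s)

def expected_visitations(P, policy, p0, T=50):
    """Expected state visitation counts over horizon T, starting from p0."""
    d, counts = p0.copy(), np.zeros_like(p0)
    for _ in range(T):
        counts += d
        d = np.einsum('s,sa,sat->t', d, policy, P)  # next-state distribution
    return counts

def maxent_irl_step(phi, P, p0, empirical_feats, theta, lr=0.01):
    """One gradient-ascent step on the MaxEnt IRL log-likelihood.
    phi: (S, D) state features; empirical_feats: (D,) feature counts
    averaged over the demonstrated trajectories."""
    policy = soft_value_iteration(P, phi @ theta)   # linear reward r = phi @ theta
    grad = empirical_feats - expected_visitations(P, policy, p0) @ phi
    return theta + lr * grad
```

Roughly speaking, Levine and Koltun's contribution is to replace the exact soft value iteration in the inner loop, which is intractable in continuous state spaces, with a local (Laplace-style) approximation of the likelihood around each demonstration, which also lets the demonstrations be only locally optimal.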
References
- Ziebart, B. D., Maas, A., Bagnell, J. A., and Dey, A. K. Maximum Entropy Inverse Reinforcement Learning. AAAI 2008.
- Levine, S. and Koltun, V. Continuous Inverse Optimal Control with Locally Optimal Examples. ICML 2012.