Abstract¶
This is a presentation I gave to the Journal Club of the Nakamura Laboratory while interning there, before my graduate studies. We discussed the paper Continuous Inverse Optimal Control with Locally Optimal Examples by Levine and Koltun. It is a work in inverse optimal control, also known as inverse reinforcement learning: the problem of recovering an unknown reward function from demonstrations of an optimal policy in a Markov decision process.
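To make the problem concrete, here is a toy sketch of the inverse-RL setup in the discrete, tabular case: maximum-entropy IRL on a tiny chain MDP, in the spirit of Ziebart et al. (not the continuous, locally-optimal algorithm of the paper itself). All names and parameters here are illustrative. The gradient of the MaxEnt log-likelihood is the difference between the expert's empirical state-visitation counts and the visitation counts expected under the current reward.

```python
import math

# Toy maximum-entropy IRL on a 5-state chain MDP (illustrative only;
# this is the tabular MaxEnt setup, not the Levine-Koltun algorithm).

N_STATES = 5
ACTIONS = (-1, +1)   # move left / move right
HORIZON = 6

def step(s, a):
    """Deterministic transition: move along the chain, clamped at the ends."""
    return min(max(s + a, 0), N_STATES - 1)

def soft_value_iteration(theta):
    """Backward pass: soft (log-sum-exp) value iteration under reward theta.
    Returns a time-indexed stochastic policy pi[t][s][action_index]."""
    V = [0.0] * N_STATES
    policy = []
    for _ in range(HORIZON):
        Q = [[theta[s] + V[step(s, a)] for a in ACTIONS]
             for s in range(N_STATES)]
        V_new, pi_t = [], []
        for s in range(N_STATES):
            m = max(Q[s])
            z = sum(math.exp(q - m) for q in Q[s])
            V_new.append(m + math.log(z))
            pi_t.append([math.exp(q - m) / z for q in Q[s]])
        V = V_new
        policy.append(pi_t)
    policy.reverse()   # policy[t] is the policy used at time t
    return policy

def expected_visits(policy, s0=0):
    """Forward pass: expected state-visitation counts starting from s0."""
    D = [0.0] * N_STATES
    d = [0.0] * N_STATES
    d[s0] = 1.0
    for t in range(HORIZON):
        for s in range(N_STATES):
            D[s] += d[s]
        d_next = [0.0] * N_STATES
        for s in range(N_STATES):
            for ai, a in enumerate(ACTIONS):
                d_next[step(s, a)] += d[s] * policy[t][s][ai]
        d = d_next
    return D

# "Expert" demonstration: always move right, toward state 4.
demo = [0]
for _ in range(HORIZON - 1):
    demo.append(step(demo[-1], +1))
empirical = [demo.count(s) for s in range(N_STATES)]

# Gradient ascent on one reward weight per state:
# gradient = empirical visitation counts - expected visitation counts.
theta = [0.0] * N_STATES
for _ in range(200):
    pi = soft_value_iteration(theta)
    expected = expected_visits(pi)
    theta = [w + 0.1 * (e - x) for w, e, x in zip(theta, empirical, expected)]
```

After training, the learned reward is highest at the state the expert steers toward, which is exactly the sense in which the reward is "derived from demonstrations."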
References¶
Maximum Entropy Inverse Reinforcement Learning (Ziebart et al., AAAI 2008)
Continuous Inverse Optimal Control with Locally Optimal Examples (Levine and Koltun, ICML 2012)