Research
I am currently working on differentiable optimization and physics simulators. My past research has focused on bringing learning to the real world: developing more natural means of task specification for deep RL to avoid the burden of manually engineered reward functions, and developing data-efficient learning techniques that provide safety guarantees throughout the learning process.
Learning Parameter-Efficient Markovian Quadrotor Dynamics Models
Suvansh Sanjeev
CMU Masters Thesis, 2022
[Thesis]
Scalable Learning of Safety Guarantees for Autonomous Systems using Hamilton-Jacobi Reachability
Sylvia Herbert*, Jason J. Choi*, Suvansh Sanjeev, Marsalis Gibson, Koushil Sreenath, Claire J. Tomlin
Robotics: Science and Systems, 2021
[Paper]
PaVE the Way for NFL Passing Analytics: Passing Value in Expectation
NFL Big Data Bowl 2021
[Code]
Ecological Reinforcement Learning
John D. Co-Reyes*, Suvansh Sanjeev*, Glen Berseth, Abhishek Gupta, Sergey Levine
Deep RL Workshop at NeurIPS, 2019
[Paper]
Guiding Policies with Language via Meta-Learning
John D. Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, Jacob Andreas, John DeNero, Pieter Abbeel, Sergey Levine
International Conference on Learning Representations, 2019
Best Paper at Meta-Learning Workshop at NeurIPS, 2018
[Paper]
I received the 2020-2021 Outstanding Graduate Student Instructor Award at UC Berkeley, where I was fortunate to serve as head teaching assistant for the incredible Professors Gireeja Ranade, Alexandre Bayen, and Babak Ayazifar.
One of the three lectures I delivered during the Fall 2019 offering of EECS 127/227A can be found here.