Minwoo Cho

I am an intern at RLLAB at Yonsei University, specializing in Reinforcement Learning (RL) and Robotics. Under the supervision of Prof. Youngwoon Lee, I am currently working on learning egocentric visual policies for humanoid whole-body control. I received my M.S. in Computer Science from KAIST in 2025, advised by Prof. Daehyung Park in the RIRO (Robust Intelligence & RObotics) Lab, where I worked on learning constraints (e.g., safety conditions and task specifications) from demonstrations.

I received my B.S. from KAIST in 2023, where I majored in Mechanical Engineering and minored in Mathematical Sciences. I had the honor of being awarded the KAIST Presidential Fellowship. (Last updated 2025.9.)

Email  /  CV  /  Github

profile photo

Research Interest

My ultimate research goal is to build a robotic agent that can learn new skills quickly on its own, like humans!

  • RL with priors: RL becomes sample-inefficient when the agent’s degrees of freedom are high. How can we learn good priors that effectively reduce the exploration space, enabling the agent to learn challenging tasks with simple rewards?

  • Humanoids: Humanoid morphology offers human-level capabilities. Scaling humanoid policies with egocentric vision would enable robots to perceive and act autonomously, as humans do.

  • Reward learning: RL enables learning new skills, but we often need well-engineered rewards. Can we mitigate this by learning like humans—e.g., from videos or language feedback?

Publications

ILCL: Inverse Logic-Constraint Learning from Temporally Constrained Demonstrations
Minwoo Cho, Jaehwi Jang, Daehyung Park
Under review, 2025

In this work, we learn free-form temporal logic constraints from demonstrations using a tree-based genetic algorithm and constraint-regularized RL with logic cost redistribution.

Natural Language-Guided Semantic Navigation using Scene Graph
Dohyun Kim*, Jinwoo Kim*, Minwoo Cho, Daehyung Park
RiTA, 2022. Best Student Paper Award

In this work, we perform semantic navigation on the RBQ-3 quadruped robot using a scene-graph grounding network that predicts the target object from a natural language instruction.

Projects

Manipulation of daily objects with visual reinforcement learning
Project at RIRO Lab, 2024

This project aims to learn RL policies for manipulating everyday objects in diverse rigid and deformable scenarios—such as pick-and-place and entangling/disentangling—directly from visual input. We build on SERL as the code base.

Learning common constraints from multi-task demonstrations without rewards
Course project in CS672 (Reinforcement Learning), 2023

This project aims to infer constraints shared across tasks from multi-task demonstrations by decomposing out the task-agnostic component of the rewards inferred via Meta-IRL.

Autonomous navigation of quadruped robot through traversability estimation based on locomotion policy
Undergraduate research program in RIRO Lab, 2022

This project aims to generate navigation plans from a traversability prediction network. Traversability is estimated from terrain voxels, and the network is trained on the locomotion performance of RL policies across various terrains.

Self-driving hovercraft
Course project in ME400 (Capstone Design), 2022

This project aims to design and build a self-driving hovercraft that navigates to given goals using a 2D LiDAR and BLDC motors. We designed the hovercraft with two thrust propellers and controlled it using PD control.
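The heading control loop above can be sketched as a simple PD law. This is an illustrative sketch only; the function name, gains, and signature are hypothetical and not taken from the actual ME400 project code.

```python
def pd_control(error, prev_error, dt, kp=1.0, kd=0.1):
    """Return a control command from the heading error using PD control.

    The proportional term pushes toward the goal heading, while the
    derivative term damps the response as the error shrinks.
    """
    derivative = (error - prev_error) / dt
    return kp * error + kd * derivative

# Example: a shrinking error produces a command smaller than kp * error,
# since the negative derivative term damps it.
cmd = pd_control(error=0.5, prev_error=0.6, dt=0.05, kp=2.0, kd=0.2)
```

In practice the command would be split across the two thrust propellers (differential thrust), with the loop re-run at each LiDAR update.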

Goldberg machine simulation in Gazebo
Course project in ME454 (Dynamic System Programming), 2022

This project aims to creatively design a Goldberg machine in a Gazebo simulator. Enjoy the video!


Template from Jon Barron.