About

I am currently pursuing my Ph.D. in Robotics at the University of Michigan. My advisor is Prof. Maani Ghaffari, and I am a member of the Computational Autonomy and Robotics Laboratory (CURLY).

My research focuses on robot state estimation, perception, and localization and mapping.

Email: tzuyuan [at] umich [dot] edu

News

November 16, 2021
I will be the Graduate Student Instructor for ROB 530: Mobile Robotics in Winter 2022!
September 28, 2021
Check out our latest Mini Cheetah video.
September 13, 2021
Our paper "Legged Robot State Estimation using Invariant Kalman Filtering and Learned Contact Events" has been accepted to the 2021 Conference on Robot Learning (CoRL)!
May 30, 2021
Our paper "A New Framework for Registration of Semantic Point Clouds from Stereo and RGB-D Cameras" has been published at the 2021 IEEE International Conference on Robotics and Automation (ICRA)!

Research

Legged Robot State Estimation using Invariant Kalman Filtering and Learned Contact Events

We developed a learning-based contact estimator for legged robots that takes multi-modal proprioceptive sensory data as input and bypasses the need for physical contact sensors. The trained network estimates contact events on different terrains and is deployed alongside a contact-aided invariant extended Kalman filter (InEKF), allowing legged robot state estimators to benefit from foot contact information without a dedicated contact sensor. The proposed state estimation pipeline runs in real time on an NVIDIA Jetson AGX onboard an MIT Mini Cheetah robot. All code is open-sourced to the public.
[Paper]
[Deep-Contact-Estimator]
[Invariant-EKF for Mini Cheetah]
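
As a rough illustration of the pipeline (a minimal sketch, not the released Deep-Contact-Estimator code; the network shape, feature dimensions, window length, and function names below are placeholders), a small network maps a window of proprioceptive data to per-leg contact probabilities, and only legs estimated to be in contact contribute a contact measurement to the InEKF:

    # Hypothetical sketch: learned contact detection gating an InEKF contact update.
    import torch
    import torch.nn as nn

    NUM_LEGS = 4       # Mini Cheetah has four legs
    FEATURE_DIM = 54   # placeholder: joint angles/velocities, torques, IMU, etc.
    WINDOW = 150       # placeholder sliding window of proprioceptive samples

    class ContactEstimator(nn.Module):
        """Toy 1-D CNN that classifies per-leg foot contact from proprioception."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(FEATURE_DIM, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.head = nn.Linear(64, NUM_LEGS)

        def forward(self, x):                     # x: (batch, FEATURE_DIM, WINDOW)
            z = self.conv(x).squeeze(-1)          # (batch, 64)
            return torch.sigmoid(self.head(z))    # per-leg contact probabilities

    def legs_for_contact_update(probs, threshold=0.5):
        """Placeholder filter step: only legs believed to be in contact
        contribute a contact measurement to the InEKF."""
        return [leg for leg, p in enumerate(probs.tolist()) if p > threshold]

    if __name__ == "__main__":
        net = ContactEstimator()
        window = torch.randn(1, FEATURE_DIM, WINDOW)   # stand-in for real sensor data
        contact_probs = net(window)[0]
        print("legs used for contact update:", legs_for_contact_update(contact_probs))

Thresholding the probabilities is just one possible way to gate the filter; the key point is that the InEKF applies its contact measurement only for legs the network believes are on the ground.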

Continuous Visual Odometry

We developed a fundamentally novel formulation of the sensor registration problem, Continuous Sensor Registration, which is continuous and models the action of an arbitrary Lie group on any smooth manifold. The continuity is achieved by treating the output of a given sensor as a function living in a reproducing kernel Hilbert space (RKHS). The outputs of two sensors are registered by integrating the flow in the Lie algebra to minimize the norm of the difference between the two functions. Continuous Visual Odometry (CVO) is a special case of Continuous Sensor Registration that takes images from a commonly used camera, either stereo or RGB-D, as input and estimates the trajectory by accumulating the transformations found between consecutive frames. [Code]
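
In symbols (the notation below is mine, a schematic of the formulation rather than the papers' exact statement), each sensor output becomes a function in an RKHS and registration minimizes the distance between the two functions over the group action:

    % Schematic of the registration objective (my notation, not verbatim from the papers).
    \begin{aligned}
    f_X(\cdot) &= \sum_{x_i \in X} \ell_X(x_i)\, k(\cdot, x_i), \qquad
    f_Z(\cdot)  = \sum_{z_j \in Z} \ell_Z(z_j)\, k(\cdot, z_j), \\
    h^\ast &= \operatorname*{arg\,min}_{h \in G} \;
              \big\lVert\, f_X - h \cdot f_Z \,\big\rVert_{\mathcal{H}}^2, \qquad
    (h \cdot f_Z)(x) := f_Z(h^{-1} x),
    \end{aligned}

where k is the reproducing kernel of the RKHS, the l terms weight each point by its appearance information, and G is the Lie group acting on the manifold (for CVO, SE(3) acting on 3D space). Expanding the squared norm shows the pose-dependent term is the inner product between f_X and the transformed f_Z, and the minimization is carried out by integrating the gradient flow in the Lie algebra, as described above.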

Simulation of Bipedal Robot with Camera and Lidar Sensor

We built a simulation environment in Gazebo and MuJoCo that allows us to test control and perception systems for the bipedal robot Cassie Blue. This project aims to increase the variety and number of environments in which we can test Cassie's performance.

Automated Colonoscopy Assistance

In colonoscopy procedures, the examining device can accidentally pierce the colon; this is known as perforation and has an incidence rate of 0.2%. The goal of this project is to keep the tip of the colonoscope from contacting the intestinal wall and thereby reduce the perforation rate. Using the video feed from the colonoscope, we implemented a computer vision algorithm that tracks the center of the colon in real time and a PID controller that keeps the tip centered.
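
A hypothetical sketch of that control loop (the lumen-centering heuristic, gains, and names below are my own placeholders, not the project's actual algorithm): estimate the colon center in each frame, then drive the pixel error toward zero with a PID controller.

    # Toy sketch: lumen-centering PID loop on a grayscale colonoscopy frame.
    import numpy as np

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, err):
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    def lumen_center(gray):
        """Placeholder estimator: the lumen tends to appear dark, so take the
        centroid of the darkest pixels."""
        mask = gray < np.percentile(gray, 10)
        ys, xs = np.nonzero(mask)
        return xs.mean(), ys.mean()

    if __name__ == "__main__":
        frame = np.random.randint(0, 255, (480, 640)).astype(np.uint8)  # stand-in frame
        cx, cy = lumen_center(frame)
        h, w = frame.shape
        pid_x = PID(0.5, 0.0, 0.1, dt=1 / 30)
        pid_y = PID(0.5, 0.0, 0.1, dt=1 / 30)
        # Steering commands that would be sent to the scope's bending actuators.
        u_x = pid_x.step(cx - w / 2)
        u_y = pid_y.step(cy - h / 2)
        print(u_x, u_y)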

Publications

Conference Papers

  • Tzu-Yuan Lin, Ray Zhang, Justin Yu, and Maani Ghaffari. "Legged Robot State Estimation using Invariant Kalman Filtering and Learned Contact Events." In 5th Annual Conference on Robot Learning (CoRL), 2021.   [Paper][Code][Video]
  • Ray Zhang, Tzu-Yuan Lin, Chien Erh Lin, Steven A. Parkison, William Clark, Jessy W. Grizzle, Ryan M. Eustice, and Maani Ghaffari. "A New Framework for Registration of Semantic Point Clouds from Stereo and RGB-D Cameras." In 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 12214-12221.   [Paper][Code][Video]

Preprints

  • Xi Lin, Dingyi Sun, Tzu-Yuan Lin, Ryan M. Eustice, and Maani Ghaffari. "A Keyframe-based Continuous Visual SLAM for RGB-D Cameras via Nonparametric Joint Geometric and Appearance Representation." Submitted to the 2020 Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020. [arXiv]
  • Tzu-Yuan Lin, William Clark, Ryan M. Eustice, Jessy W. Grizzle, Anthony Bloch, and Maani Ghaffari. "Adaptive Continuous Visual Odometry from RGB-D Images." Submitted to the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020. [arXiv]

Projects

Image Caption Generator with Simple Semantic Segmentation

Utilized a CNN pre-trained on ImageNet as the encoder and a Long Short-Term Memory (LSTM) network with an attention module as the decoder, implemented in PyTorch, to automatically generate well-formed English sentences describing input images. Implemented a simple semantic segmentation by highlighting the attention maps used to generate each word of the sentence.
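
A minimal sketch of the idea (my own simplification; the backbone, vocabulary size, and dimensions are placeholders, and the actual encoder used ImageNet-pretrained weights): CNN features over a 7x7 grid feed an LSTM decoder, and the soft-attention weights computed for each word double as a coarse segmentation of the attended region.

    # Toy encoder-decoder caption model with soft attention over CNN features.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    VOCAB = 1000   # placeholder vocabulary size
    EMBED = 256
    HIDDEN = 512

    class CaptionModel(nn.Module):
        def __init__(self):
            super().__init__()
            backbone = models.resnet18()  # ImageNet-pretrained weights in practice
            self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # (B, 512, 7, 7)
            self.embed = nn.Embedding(VOCAB, EMBED)
            self.attn = nn.Linear(512 + HIDDEN, 1)
            self.lstm = nn.LSTMCell(EMBED + 512, HIDDEN)
            self.out = nn.Linear(HIDDEN, VOCAB)

        def forward(self, image, tokens):
            feats = self.encoder(image).flatten(2).transpose(1, 2)   # (B, 49, 512)
            B, N, _ = feats.shape
            h = torch.zeros(B, HIDDEN)
            c = torch.zeros(B, HIDDEN)
            logits, attn_maps = [], []
            for t in range(tokens.shape[1]):
                # Soft attention over the 7x7 feature grid.
                scores = self.attn(torch.cat(
                    [feats, h.unsqueeze(1).expand(B, N, HIDDEN)], dim=-1))
                alpha = torch.softmax(scores.squeeze(-1), dim=1)      # (B, 49)
                context = (alpha.unsqueeze(-1) * feats).sum(dim=1)    # (B, 512)
                h, c = self.lstm(
                    torch.cat([self.embed(tokens[:, t]), context], dim=-1), (h, c))
                logits.append(self.out(h))
                attn_maps.append(alpha.view(B, 7, 7))  # highlighted region per word
            return torch.stack(logits, dim=1), torch.stack(attn_maps, dim=1)

    if __name__ == "__main__":
        model = CaptionModel()
        words, attention = model(torch.randn(1, 3, 224, 224),
                                 torch.zeros(1, 5, dtype=torch.long))
        print(words.shape, attention.shape)   # (1, 5, VOCAB), (1, 5, 7, 7)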

Direct Visual Odometry with Pose Graph Optimization and Loop Closure

Implemented an offline direct visual odometry algorithm with pose-graph optimization and loop closure to track a robot's position using RGB-D camera images as input.
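
For flavor, a simplified sketch of the photometric (direct) residual such a front end minimizes; the intrinsics below are assumed, interpolation is nearest-neighbour, and the pose-graph and loop-closure back end is omitted.

    # Toy direct-VO photometric residual: warp reference pixels into the target
    # frame under a candidate pose and compare intensities.
    import numpy as np

    K = np.array([[525.0, 0.0, 319.5],
                  [0.0, 525.0, 239.5],
                  [0.0, 0.0, 1.0]])          # assumed pinhole intrinsics

    def photometric_residuals(I_ref, D_ref, I_tgt, T):
        """Residuals for pose T (4x4); direct VO minimizes these over T."""
        h, w = I_ref.shape
        v, u = np.mgrid[0:h, 0:w]
        z = D_ref.ravel()
        valid = z > 0
        pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])   # homogeneous pixels
        pts = np.linalg.inv(K) @ pix * z                          # back-project
        pts = T[:3, :3] @ pts + T[:3, 3:4]                        # transform
        proj = K @ pts
        uu = np.round(proj[0] / proj[2]).astype(int)
        vv = np.round(proj[1] / proj[2]).astype(int)
        valid &= (proj[2] > 0) & (uu >= 0) & (uu < w) & (vv >= 0) & (vv < h)
        return I_tgt[vv[valid], uu[valid]] - I_ref.ravel()[valid]

    if __name__ == "__main__":
        I = np.random.rand(480, 640)
        D = np.full((480, 640), 2.0)                 # stand-in depth map (metres)
        r = photometric_residuals(I, D, I, np.eye(4))
        print(float(np.mean(r ** 2)))                # ~0 for the identity pose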

Simultaneous Localization and Mapping (SLAM) Robot with Particle Filter and Path Planning

Implemented a particle-filter-based simultaneous localization and mapping (SLAM) system and an A* path-planning algorithm for a robot with a 2D LiDAR to explore and escape an arbitrarily configured maze.
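
A small sketch of the planning layer on its own (the particle-filter SLAM front end is omitted; the occupancy grid here is a toy stand-in for the map built from the 2D LiDAR): A* on a 4-connected grid with a Manhattan-distance heuristic.

    # Toy grid A* planner on an occupancy grid (1 = occupied, 0 = free).
    import heapq, itertools

    def astar(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        tie = itertools.count()                      # tiebreaker for the heap
        open_set = [(h(start), next(tie), 0, start, None)]
        came_from, g_cost = {}, {start: 0}
        while open_set:
            _, _, g, node, parent = heapq.heappop(open_set)
            if node in came_from:                    # already expanded
                continue
            came_from[node] = parent
            if node == goal:                         # reconstruct path
                path = []
                while node is not None:
                    path.append(node)
                    node = came_from[node]
                return path[::-1]
            r, c = node
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    ng = g + 1
                    if ng < g_cost.get(nxt, float("inf")):
                        g_cost[nxt] = ng
                        heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, node))
        return None

    if __name__ == "__main__":
        maze = [[0, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 0, 0],
                [0, 1, 1, 0]]
        print(astar(maze, (0, 0), (3, 3)))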

Vehicle Classification and Localization

Utilized OpenCV for image preprocessing and TensorFlow to train a DenseNet that classifies 22 different vehicle types from a video game and localizes them using point cloud data.

6 DOF Robot Arm with a 3D Block Detector and Color Segmentation

We developed a working system capable of autonomously identifying and manipulating colored blocks with a 3-link robotic manipulator. Tasks accomplished with the robot arm include stacking blocks in a specified color order, mirroring the positions of blocks from one side of the plane to the other, and building a pyramid with wooden blocks.

Teaching

ROB 530: Mobile Robotics -- Graduate Student Instructor

Robotics Institute, University of Michigan, Winter 2020 & Winter 2022

Theory and application of probabilistic techniques for autonomous mobile robotics. Topics include Bayesian filtering; stochastic representations of the environment; motion and sensor models for mobile robots; algorithms for mapping and localization; and applications to autonomous marine, ground, and air vehicles.

ME 2001: Engineering Mathematics -- Teaching Assistant

Mechanical Engineering, National Taiwan University, Fall 2017 & Spring 2017

Topics include linear algebra, differential equations, the Laplace transform, Fourier series, and real analysis.

ME 1003: Engineering Graphics -- Teaching Assistant

Mechanical Engineering, National Taiwan University, Spring 2017

This course covers concepts in engineering drawing and introduces students to a 3D CAD program, Autodesk Inventor.

ME 2004: Machine Design Theory -- Teaching Assistant

Mechanical Engineering, National Taiwan University, Fall 2017

ME 2005: Thermodynamics -- Teaching Assistant

Mechanical Engineering, National Taiwan University, Fall 2017

Outreach

UM Discover Engineering

Helped design and organize a robotics coding activity for a two-day camp for local high school students. Guided students in the workshop through line-following and grasping tasks with a robot. The camp focused on increasing students' interest in STEM and giving an overview of its academic and career paths.

Ann Arbor Summer Festival KidZone

Demonstrated current robotics sensors to local families and explained how they work and what they are used for.