Jiaying Fang

Electrical Engineering Master's Student

Stanford University

Biography

I am a master's student in Electrical Engineering at Stanford University. I have worked on robot learning at the Interactive Perception and Robot Learning Lab and on surgical robotics at the Collaborative Haptics and Robotics in Medicine Lab at Stanford. In summer 2024, I joined Intuitive Surgical as a Machine Learning Intern working on human-robot interaction. I received my BEng in Electronic and Information Engineering from The Hong Kong Polytechnic University (HK PolyU). My interests lie in surgical robotics, robot learning, and the intersection of computer vision and robotics.

Interests
  • Surgical Robotics
  • Robot Learning
  • Computer Vision
  • Deep Learning
Education
  • MSc in Electrical Engineering, 2023 – 2025

    Stanford University

  • BEng in Electronic and Information Engineering, 2019 – 2023

    The Hong Kong Polytechnic University

  • Exchange Student, 2022

    McGill University

Skills

Machine Learning
Computer Vision
Robotics
Software Engineering
Signal Processing
Electronics

Experience

Interactive Perception and Robot Learning, Stanford University
Research Assistant
Feb 2024 – Sep 2024 Stanford
  • Worked with PhD candidate Marion Lepert, under the supervision of Prof. Jeannette Bohg.
  • Robot Manipulation from Human Videos (work in progress): Extracted hand poses from human videos and converted them to robot actions; segmented out the human hand using SAM2; rendered hand masks from hand poses using the MANO hand model. To be published in early 2025. The next step is to zero-shot transfer a policy, trained on videos of humans performing a task with augmented frames, to a robot.
  • Haptic Data Analysis in a Large Robotics Dataset: Explored joint torque data in the large-scale DROID dataset and converted joint torques to external force estimates.
  • Robot Learning in Simulation: Evaluated reinforcement learning and imitation learning methods on robotics tasks in MuJoCo.
Intuitive Surgical
Machine Learning Intern
Jun 2024 – Sep 2024 Sunnyvale
  • Deep Learning-based Gaze Estimation: Designed and implemented an end-to-end deep learning-based 3D gaze estimation algorithm, robust to head motion and differences in subject appearance.
  • Performance Improvement: The developed algorithm improves gaze estimation performance by 84.5%.
  • Synthetic Data: Generated more than 100k synthetic images with suitable domain randomization in Blender for gaze estimation training.
  • Data Collection: Designed a real-world gaze estimation data collection pipeline, conducted the data collection, and performed detailed analysis and visualization of the dataset.
  • Semi-automatic Labeling: Implemented a semi-automatic labeling tool for pupil localization and segmentation using SAM2.
Collaborative Haptics and Robotics in Medicine Lab, Stanford University
Research Assistant
Sep 2023 – Dec 2023 Stanford
  • Worked with Dr. Alaa Eldin Abdelaal, under the supervision of Prof. Allison Okamura.
  • Force-Aware Autonomous Robotic Surgery: Assisted with data collection, model design, and model training for robot imitation learning using force and vision data in autonomous robotic surgery. The task completion rate of autonomous tissue retraction increased by 50% with haptic sensing.
  • dVRK System: Conducted experiments using the da Vinci Research Kit (dVRK).
China Telecom AI
Computer Vision Algorithm Intern
Jun 2023 – Aug 2023 Beijing
  • ICCV Challenge: As a core member of the team, participated in the ICCV'23 Open Fine-Grained Activity Detection (OpenFAD) challenge. The team ranks third on the activity recognition track and second on the activity detection track.
  • Foundation Models: Explored video foundation models such as VideoMAE and UniFormer.
  • Model Inference Optimization: Used TensorRT to speed up a YOLO-based detection model.
DECAR Lab, McGill University
Research Assistant
May 2022 – Aug 2022 Montreal
  • Research: Researched estimation and control for robotics.
  • Controller Design: Designed and implemented a robust LQR controller suitable for real-world applications.
  • Experiments: Validated the LQR controller in experiments on an unmanned ground vehicle.
HK PolyU Autonomous Systems Lab
Research Assistant
May 2021 – Jun 2023 Hong Kong
  • Visual Odometry Research: Researched visual odometry and simultaneous localization and mapping (SLAM), focusing on the integration of visual odometry with multi-object tracking.
  • Visual Odometry Implementation: Implemented a system integrating deep learning-based visual odometry with multi-object tracking, using deep optical flow estimation and a 3D object detection network.

Accomplishments

  • Outstanding Student Award - Faculty of Engineering
  • HKSAR Government Scholarship 2021/22 - HK$80,000
  • Wong Tit-shing Student Exchange Scholarship 2021/22 - HK$20,000
  • Best Academic Performance Award (2020/2021)
  • Best Academic Performance Award (2019/2020)
  • Best GPA Award (2020/2021)
  • Best GPA Award (2019/2020)
  • Professor Leung Tin-pui Memorial Scholarship 2020/21 - HK$20,000