I'm a Lead (Staff) Research Scientist at the Bosch AI Research Center in Silicon Valley, where I lead a team working on 3D vision and interactive spatial AI solutions.
At Bosch, my work focuses on pioneering computer vision and AI research aimed at enabling generalization across diverse hardware configurations and embodiments. My research has been integrated into multiple Bosch products and prototypes, including assisted driving technologies, industrial augmented reality (AR) systems, and AI-powered indoor robotics.
My research focuses on developing active 3D perception methods adaptable across diverse embodiments and automating the creation of interactive digital twins of real-world environments. The ultimate goal is to enable intelligent human-AI interactions and advance physical AI for greater autonomy and adaptability.
A fully online system that effectively integrates dense CLIP features with Gaussian Splatting. High-resolution dense CLIP embedding and online compressor-learning modules enable dense language mapping in real time (40+ FPS) while retaining open-vocabulary capability for flexible query-based human-machine interaction.
Depth Any Camera (DAC) is a powerful framework achieving superior zero-shot generalization in metric depth estimation for large-FoV cameras, including fisheye and 360° cameras. Tired of collecting new data for specific cameras? DAC maximizes the utility of all existing 3D data for training, regardless of the camera types used in new applications.
SMART augments online topology reasoning with robust map priors learned from scalable SD and satellite maps, substantially improving lane perception and topology reasoning.
A monocular object reconstruction framework that effectively integrates object pose estimation and NeRF-based reconstruction. A novel camera-invariant pose estimation module resolves depth-scale ambiguity and enhances cross-domain generalization.
An advanced Gaussian Splatting method that effectively fuses LiDAR and surround-view camera data for autonomous driving. The method uniquely leverages an intermediate occ-tree feature volume before splatting, so that the Gaussian parameters can be initialized more effectively from the 3D surface generated by the feature volume.
An effective framework leveraging lightweight and scalable priors, Standard Definition (SD) maps, in the estimation of online vectorized HD map representations.
A mathematical framework proving that the Dice loss yields superior noise robustness and model convergence for large objects compared with regression losses. A flexible monocular 3D detection pipeline integrated with bird's-eye-view segmentation.
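For readers unfamiliar with it, the soft Dice loss referenced above is commonly written as follows (standard formulation; the notation here is illustrative, not taken from the paper):

```latex
\mathcal{L}_{\mathrm{Dice}} = 1 - \frac{2\sum_{i} p_i\, g_i + \epsilon}{\sum_{i} p_i + \sum_{i} g_i + \epsilon}
```

where $p_i$ are predicted foreground probabilities, $g_i$ are ground-truth labels, and $\epsilon$ is a small smoothing constant. Because the loss normalizes by the total predicted and ground-truth mass, its gradient scale is less sensitive to object size than per-pixel regression losses, which is the intuition behind the robustness result for large objects.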
A neural reconstruction method enabling the completion of occluded surfaces in large-scale 3D scene reconstruction. A milestone in automating the creation of interactable digital twins from the real world.
The first vision transformer approach to handle 360° monocular depth estimation with spherical distortion. Novel designs include tangent-image coordinate embedding and geometry-aware feature fusion.
A real-time method to predict multi-person 3D poses from a depth image. A new part-level representation enables an explicit fusion of bottom-up part detection and global pose detection. A new 3D human posture dataset with challenging multi-person occlusions is also introduced.
A joint model of learned part-based appearance and parametric shape representation to precisely estimate the highly articulated poses of multiple laboratory animals.
One-shot learning gesture recognition on RGB-D data recorded with the Microsoft Kinect. A novel bag-of-manifold-words (BoMW) feature representation on symmetric positive definite (SPD) manifolds.
This study investigates the relative diagnosticity and the optimal combination of multiple cues (luminance, color, motion, and binocular disparity) for boundary detection in natural scenes. A multi-cue boundary dataset is introduced to facilitate the study.
A multi-stage approach to curve extraction in which the curve-fragment search space is iteratively reduced by removing unlikely candidates using geometric constraints, without affecting recall, to a point where the application of an objective functional becomes appropriate.
Selected Patents
Yuliang Guo, Xinyu Huang, Liu Ren, Systems and methods for providing product assembly step recognition using augmented reality, US Patent 11,715,300, 2023
Yuliang Guo, Xinyu Huang, Liu Ren, Semantic SLAM Framework for Improved Object Pose Estimation, US Patent App. 17/686,677, 2023
Yuliang Guo, Zhixin Yan, Yuyan Li, Xinyu Huang, Liu Ren, Method for fast domain adaptation from perspective projection image domain to omnidirectional image domain in machine perception tasks, US Patent App. 17/545,673, 2023
Yuliang Guo, Tae Eun Choe, KaWai Tsoi, Guang Chen, Weide Zhang, Determining vanishing points based on lane lines, US Patent 11,227,167, 2022
Tae Eun Choe, Yuliang Guo, Guang Chen, KaWai Tsoi, Weide Zhang, Sensor calibration system for autonomous driving vehicles, US Patent 10,891,747, 2021