Search Results for author: Xinghao Zhu

Found 8 papers, 0 papers with code

PhyGrasp: Generalizing Robotic Grasping with Physics-informed Large Multimodal Models

no code implementations26 Feb 2024 Dingkun Guo, Yuqi Xiang, Shuqi Zhao, Xinghao Zhu, Masayoshi Tomizuka, Mingyu Ding, Wei Zhan

With these two capabilities, PhyGrasp is able to accurately assess the physical properties of object parts and determine optimal grasping poses.

Object Physical Commonsense Reasoning +1

Multi-level Reasoning for Robotic Assembly: From Sequence Inference to Contact Selection

no code implementations17 Dec 2023 Xinghao Zhu, Devesh K. Jha, Diego Romeres, Lingfeng Sun, Masayoshi Tomizuka, Anoop Cherian

Automating the assembly of objects from their parts is a complex problem with innumerable applications in manufacturing, maintenance, and recycling.

Motion Planning

Diff-Transfer: Model-based Robotic Manipulation Skill Transfer via Differentiable Physics Simulation

no code implementations7 Oct 2023 Yuqi Xiang, Feitong Chen, Qinsi Wang, Yang Gang, Xiang Zhang, Xinghao Zhu, Xingyu Liu, Lin Shao

In this work, we introduce Diff-Transfer, a novel framework leveraging differentiable physics simulation to efficiently transfer robotic skills.

Q-Learning
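
The Diff-Transfer entry describes transferring skills by differentiating through a physics simulator. As a toy illustration of that general idea (not the authors' code; the 1-D point mass and the `simulate`/`transfer` helpers below are hypothetical), gradient descent through an analytically differentiable rollout can adapt an action sequence to a new goal:

```python
def simulate(forces, mass=1.0, dt=1.0):
    """Differentiable rollout: final position of a 1-D point mass
    pushed by a sequence of forces (unit timestep for simplicity)."""
    pos, vel = 0.0, 0.0
    for f in forces:
        vel += (f / mass) * dt
        pos += vel * dt
    return pos

def grad(forces, target, mass=1.0, dt=1.0):
    """Analytic gradient of the loss 0.5*(pos - target)**2 w.r.t. each
    force: force i affects velocity for the remaining (T - i) steps,
    so d(pos)/d(f_i) = (T - i) * dt**2 / mass."""
    T = len(forces)
    err = simulate(forces, mass, dt) - target
    return [err * (T - i) * dt * dt / mass for i in range(T)]

def transfer(forces, target, lr=0.01, iters=100):
    """Adapt a source skill (force sequence) to a new target task by
    gradient descent through the simulator."""
    f = list(forces)
    for _ in range(iters):
        f = [fi - lr * gi for fi, gi in zip(f, grad(f, target))]
    return f
```

The appeal over black-box re-learning is exactly this: a force sequence tuned for one target can be re-optimized for a nearby target in a handful of gradient steps, since the simulator itself supplies the search direction.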

Human-oriented Representation Learning for Robotic Manipulation

no code implementations4 Oct 2023 Mingxiao Huo, Mingyu Ding, Chenfeng Xu, Thomas Tian, Xinghao Zhu, Yao Mu, Lingfeng Sun, Masayoshi Tomizuka, Wei Zhan

We introduce the Task Fusion Decoder, a plug-and-play embedding translator that exploits the underlying relationships among these perceptual skills to guide representation learning toward encoding structure that matters for all of them, ultimately improving learning of downstream robotic manipulation tasks.

Hand Detection Representation Learning +1

Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints

no code implementations29 Mar 2022 Xinghao Zhu, Siddarth Jain, Masayoshi Tomizuka, Jeroen van Baar

Vision-based tactile sensors typically utilize a deformable elastomer and a camera mounted above to provide high-resolution image observations of contacts.

Image Augmentation Robotic Grasping

Optimization Model for Planning Precision Grasps with Multi-Fingered Hands

no code implementations15 Apr 2019 Yongxiang Fan, Xinghao Zhu, Masayoshi Tomizuka

Searching for precision grasps on an object represented by a point cloud is challenging due to complex object shapes, high dimensionality, collisions, and imperfections in sensing and positioning.

Robotics
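
The precision-grasp entry frames grasp synthesis as an optimization over a point cloud. A heavily simplified sketch of that flavor of search (hypothetical, not the paper's optimization model): enumerate candidate two-finger contact pairs and score each by an antipodal condition on the surface normals:

```python
import itertools
import math

def best_antipodal_pair(points, normals):
    """points/normals: parallel lists of 3-D tuples (outward unit normals).
    A good two-finger precision grasp has contact normals roughly opposing
    each other along the line connecting the two contacts."""
    best, best_score = None, -1.0
    for i, j in itertools.combinations(range(len(points)), 2):
        # unit vector along the grasp axis, from contact i to contact j
        axis = [b - a for a, b in zip(points[i], points[j])]
        norm = math.sqrt(sum(c * c for c in axis)) or 1e-9
        axis = [c / norm for c in axis]
        # alignment of each outward normal with the grasp axis:
        # normal_i should point along -axis, normal_j along +axis
        a1 = sum(-n * c for n, c in zip(normals[i], axis))
        a2 = sum(n * c for n, c in zip(normals[j], axis))
        score = min(a1, a2)  # worst-case contact decides grasp quality
        if score > best_score:
            best, best_score = (i, j), score
    return best, best_score
```

Real planners replace this exhaustive pairwise scan with gradient-based or sampling-based optimization and add collision and hand-kinematics constraints, but the antipodal scoring captures the basic geometric objective.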
