Search Results for author: Mike Lambeta

Found 6 papers, 4 papers with code

A Touch, Vision, and Language Dataset for Multimodal Alignment

1 code implementation • 20 Feb 2024 • Letian Fu, Gaurav Datta, Huang Huang, William Chung-Ho Panitch, Jaimyn Drake, Joseph Ortiz, Mustafa Mukadam, Mike Lambeta, Roberto Calandra, Ken Goldberg

Touch is an important sensing modality for humans, but it has not yet been incorporated into a multimodal generative language model. This is partially due to the difficulty of obtaining natural language labels for tactile data and the complexity of aligning tactile readings with both visual observations and language descriptions.

Language Modelling • Text Generation
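
The alignment challenge described above is typically attacked contrastively. Below is a minimal, hypothetical sketch of CLIP-style pairwise contrastive alignment of a trainable tactile encoder against frozen vision and language embeddings; all names, shapes, and the loss pairing are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss: matched rows of a and b are positives."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                    # (B, B) similarities
    targets = torch.arange(a.size(0), device=a.device)  # diagonal = positives
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Hypothetical embeddings: touch comes from a trainable tactile encoder,
# vision and text from frozen pretrained encoders (batch 32, dim 512).
touch_emb = torch.randn(32, 512, requires_grad=True)
vision_emb = torch.randn(32, 512)
text_emb = torch.randn(32, 512)

# Align touch with both modalities, mirroring the abstract's goal of tying
# tactile readings to visual observations and language descriptions.
loss = info_nce(touch_emb, vision_emb) + info_nce(touch_emb, text_emb)
loss.backward()
```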

General In-Hand Object Rotation with Vision and Touch

no code implementations • 18 Sep 2023 • Haozhi Qi, Brent Yi, Sudharshan Suresh, Mike Lambeta, Yi Ma, Roberto Calandra, Jitendra Malik

We introduce RotateIt, a system that enables fingertip-based object rotation along multiple axes by leveraging multimodal sensory inputs.

Object
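
As a rough illustration of what "leveraging multimodal sensory inputs" can mean in a control policy, here is a toy fusion sketch; the architecture, dimensions, and names are invented for illustration and are not the RotateIt implementation.

```python
import torch
import torch.nn as nn

class MultimodalPolicy(nn.Module):
    """Toy policy that concatenates proprioceptive, visual, and tactile features."""
    def __init__(self, proprio_dim=16, visual_dim=64, tactile_dim=32, action_dim=16):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(proprio_dim + visual_dim + tactile_dim, 128),
            nn.ReLU(),
            nn.Linear(128, action_dim),  # e.g. fingertip joint targets
        )

    def forward(self, proprio, visual_feat, tactile_feat):
        x = torch.cat([proprio, visual_feat, tactile_feat], dim=-1)
        return self.fuse(x)

policy = MultimodalPolicy()
action = policy(torch.randn(1, 16), torch.randn(1, 64), torch.randn(1, 32))
```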

PyTouch: A Machine Learning Library for Touch Processing

1 code implementation • 26 May 2021 • Mike Lambeta, Huazhe Xu, Jingwei Xu, Po-Wei Chou, Shaoxiong Wang, Trevor Darrell, Roberto Calandra

With the increased availability of rich tactile sensors, there is a correspondingly growing need for open-source, integrated software that can efficiently and effectively process raw touch measurements into high-level signals usable for control and decision-making.

BIG-bench Machine Learning • Decision Making • +1
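
A condensed usage sketch of the kind of high-level signal PyTouch provides. The class and task names (PyTouch, DigitSensor, TouchDetect, ImageHandler) follow my recollection of the public facebookresearch/PyTouch README and may differ across versions; treat this as an assumed sketch, not verified API.

```python
import pytouch
from pytouch.handlers import ImageHandler
from pytouch.sensors import DigitSensor
from pytouch.tasks import TouchDetect

# Load a raw tactile frame captured from a DIGIT sensor (path is illustrative).
sample = ImageHandler("path/to/digit_frame.png")

# Initialize PyTouch for the DIGIT sensor with the touch-detection task.
pt = pytouch.PyTouch(DigitSensor, tasks=[TouchDetect])

# High-level signal: is the sensor in contact, and with what certainty?
is_touching, certainty = pt.TouchDetect(sample.image)
print(is_touching, certainty)
```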

TACTO: A Fast, Flexible, and Open-source Simulator for High-Resolution Vision-based Tactile Sensors

1 code implementation • 15 Dec 2020 • Shaoxiong Wang, Mike Lambeta, Po-Wei Chou, Roberto Calandra

We believe that TACTO is a step towards the widespread adoption of touch sensing in robotic applications, and that it will enable machine learning practitioners interested in multi-modal learning and control.

Benchmarking
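
A condensed sketch of driving TACTO from PyBullet, based on my recollection of the facebookresearch/tacto README; the exact method names and signatures are assumptions and should be checked against the current repo.

```python
import pybullet as p
import tacto  # pip install tacto

# Simulated DIGIT-style sensor; width/height set the rendered tactile image size.
sensor = tacto.Sensor(width=120, height=160)

p.connect(p.DIRECT)
p.setGravity(0, 0, -9.8)

# Attach the sensor's virtual camera to a sensor body and register an object
# whose contacts should be rendered (URDF paths are illustrative).
digit_id = p.loadURDF("meshes/digit.urdf")
sensor.add_camera(digit_id, [-1])                       # method name as recalled
obj_id = p.loadURDF("objects/sphere_small.urdf")
sensor.add_object("objects/sphere_small.urdf", obj_id)  # assumed signature

# Render tactile color and depth images from the current contact geometry.
color, depth = sensor.render()
```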
