Search Results for author: Amr Gomaa

Found 13 papers, 6 papers with code

Looking for a better fit? An Incremental Learning Multimodal Object Referencing Framework adapting to Individual Drivers

1 code implementation · 29 Jan 2024 · Amr Gomaa, Guillermo Reyes, Michael Feld, Antonio Krüger

The rapid advancement of the automotive industry towards automated and semi-automated vehicles has rendered traditional methods of vehicle interaction, such as touch-based and voice command systems, inadequate for a widening range of non-driving-related tasks, such as referencing objects outside the vehicle.

Incremental Learning

Toward a Surgeon-in-the-Loop Ophthalmic Robotic Apprentice using Reinforcement and Imitation Learning

no code implementations · 29 Nov 2023 · Amr Gomaa, Bilal Mahdy, Niko Kleer, Antonio Krüger

Thus, we propose a simulation-based image-guided approach for surgeon-centered autonomous agents that can adapt to the individual surgeon's skill level and preferred surgical techniques during ophthalmic cataract surgery.

Imitation Learning

It's all about you: Personalized in-Vehicle Gesture Recognition with a Time-of-Flight Camera

no code implementations · 2 Oct 2023 · Amr Gomaa, Guillermo Reyes, Michael Feld

Despite significant advances in gesture recognition technology, recognizing gestures in a driving environment remains challenging due to the limited and costly data and the dynamic, ever-changing nature of that environment.

Data Augmentation, Hand Gesture Recognition +2

LLM-Deliberation: Evaluating LLMs with Interactive Multi-Agent Negotiation Games

2 code implementations · 29 Sep 2023 · Sahar Abdelnabi, Amr Gomaa, Sarath Sivaprasad, Lea Schönherr, Mario Fritz

There is a growing interest in using Large Language Models (LLMs) as agents to tackle real-world tasks that may require assessing complex situations.

Decision Making

Adaptive User-centered Neuro-symbolic Learning for Multimodal Interaction with Autonomous Systems

no code implementations · 11 Sep 2023 · Amr Gomaa, Michael Feld

Recent advances in machine learning, particularly deep learning, have enabled autonomous systems to perceive and comprehend objects and their environments in a perceptual subsymbolic manner.

Incremental Learning, Object Detection +1

SynthoGestures: A Novel Framework for Synthetic Dynamic Hand Gesture Generation for Driving Scenarios

1 code implementation · 8 Sep 2023 · Amr Gomaa, Robin Zitt, Guillermo Reyes, Antonio Krüger

Creating a diverse and comprehensive dataset of hand gestures for dynamic human-machine interfaces in the automotive domain can be challenging and time-consuming.

Gesture Generation, Gesture Recognition

Teach Me How to Learn: A Perspective Review towards User-centered Neuro-symbolic Learning for Robotic Surgical Systems

no code implementations · 7 Jul 2023 · Amr Gomaa, Bilal Mahdy, Niko Kleer, Michael Feld, Frank Kirchner, Antonio Krüger

Recent advances in machine learning models have allowed robots to identify objects on a perceptual, nonsymbolic level (e.g., through sensor fusion and natural language understanding).

Natural Language Understanding, Sensor Fusion

Adaptive User-Centered Multimodal Interaction towards Reliable and Trusted Automotive Interfaces

no code implementations · 7 Nov 2022 · Amr Gomaa

With the increasing capabilities of modern vehicles, novel approaches to interaction have emerged that go beyond traditional touch-based and voice command approaches.

What's on your mind? A Mental and Perceptual Load Estimation Framework towards Adaptive In-vehicle Interaction while Driving

1 code implementation · 10 Aug 2022 · Amr Gomaa, Alexandra Alles, Elena Meiser, Lydia Helene Rupp, Marco Molz, Guillermo Reyes

In this paper, we analyze the effects of mental workload and perceptual load on psychophysiological dimensions and provide a machine learning-based framework for mental and perceptual load estimation in a dual-task scenario for in-vehicle interaction (https://github.com/amrgomaaelhady/MWL-PL-estimator).

ML-PersRef: A Machine Learning-based Personalized Multimodal Fusion Approach for Referencing Outside Objects From a Moving Vehicle

1 code implementation · 3 Nov 2021 · Amr Gomaa, Guillermo Reyes, Michael Feld

This allows for novel approaches to interaction with the vehicle that go beyond traditional touch-based and voice command approaches, such as emotion recognition, head rotation, eye gaze, and pointing gestures.

Emotion Recognition
