Search Results for author: Yasin Almalioglu

Found 14 papers, 4 papers with code

Unsupervised Deep Persistent Monocular Visual Odometry and Depth Estimation in Extreme Environments

no code implementations · 31 Oct 2020 · Yasin Almalioglu, Angel Santamaria-Navarro, Benjamin Morrell, Ali-akbar Agha-mohammadi

In recent years, unsupervised deep learning approaches have received significant attention for estimating depth and visual odometry (VO) from unlabelled monocular image sequences.

Depth Estimation, Monocular Visual Odometry, +1

VR-Caps: A Virtual Environment for Capsule Endoscopy

2 code implementations · 29 Aug 2020 · Kagan Incetan, Ibrahim Omer Celik, Abdulhamid Obeid, Guliz Irem Gokceler, Kutsev Bengisu Ozyoruk, Yasin Almalioglu, Richard J. Chen, Faisal Mahmood, Hunter Gilbert, Nicholas J. Durr, Mehmet Turan

Current capsule endoscopes and next-generation robotic capsules for the diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate sophisticated software and hardware functions.

Depth Estimation, Visual Localization

EndoL2H: Deep Super-Resolution for Capsule Endoscopy

3 code implementations · 13 Feb 2020 · Yasin Almalioglu, Kutsev Bengisu Ozyoruk, Abdulkadir Gokce, Kagan Incetan, Guliz Irem Gokceler, Muhammed Ali Simsek, Kivanc Ararat, Richard J. Chen, Nicholas J. Durr, Faisal Mahmood, Mehmet Turan

Although wireless capsule endoscopy is the preferred modality for diagnosis and assessment of small bowel diseases, the poor camera resolution is a substantial limitation for both subjective and automated diagnostics.

Super-Resolution

SelfVIO: Self-Supervised Deep Monocular Visual-Inertial Odometry and Depth Estimation

no code implementations · 22 Nov 2019 · Yasin Almalioglu, Mehmet Turan, Alp Eren Sari, Muhamad Risqi U. Saputra, Pedro P. B. de Gusmão, Andrew Markham, Niki Trigoni

In the last decade, numerous supervised deep learning approaches requiring large amounts of labeled data have been proposed for visual-inertial odometry (VIO) and depth map estimation.

Depth Estimation, Pose Estimation, +3

Milli-RIO: Ego-Motion Estimation with Millimetre-Wave Radar and Inertial Measurement Unit Sensor

no code implementations · 12 Sep 2019 · Yasin Almalioglu, Mehmet Turan, Chris Xiaoxuan Lu, Niki Trigoni, Andrew Markham

With the fast-growing demand for location-based services in various indoor environments, robust indoor ego-motion estimation has attracted significant interest in recent decades.

Indoor Localization, Motion Estimation, +1

GANVO: Unsupervised Deep Monocular Visual Odometry and Depth Estimation with Generative Adversarial Networks

no code implementations · 16 Sep 2018 · Yasin Almalioglu, Muhamad Risqi U. Saputra, Pedro P. B. de Gusmao, Andrew Markham, Niki Trigoni

In the last decade, supervised deep learning approaches have been extensively employed in visual odometry (VO) applications, but such approaches are not feasible in environments where labelled data is scarce.

Depth Estimation, Monocular Visual Odometry, +1

Unsupervised Odometry and Depth Learning for Endoscopic Capsule Robots

1 code implementation · 2 Mar 2018 · Mehmet Turan, Evin Pinar Ornek, Nail Ibrahimli, Can Giracoglu, Yasin Almalioglu, Mehmet Fatih Yanik, Metin Sitti

In the last decade, many medical companies and research groups have tried to convert passive capsule endoscopes, an emerging and minimally invasive diagnostic technology, into actively steerable endoscopic capsule robots that will provide more intuitive disease detection, targeted drug delivery and biopsy-like operations in the gastrointestinal (GI) tract.

Robotics

Deep EndoVO: A Recurrent Convolutional Neural Network (RCNN) based Visual Odometry Approach for Endoscopic Capsule Robots

no code implementations · 22 Aug 2017 · Mehmet Turan, Yasin Almalioglu, Helder Araujo, Ender Konukoglu, Metin Sitti

Ingestible wireless capsule endoscopy is an emerging minimally invasive diagnostic technology for inspection of the GI tract and diagnosis of a wide range of diseases and pathologies.

Monocular Visual Odometry, Pose Estimation

Magnetic-Visual Sensor Fusion based Medical SLAM for Endoscopic Capsule Robot

no code implementations · 17 May 2017 · Mehmet Turan, Yasin Almalioglu, Hunter Gilbert, Helder Araujo, Ender Konukoglu, Metin Sitti

A reliable, real-time simultaneous localization and mapping (SLAM) method is crucial for the navigation of actively controlled capsule endoscopy robots.

Sensor Fusion, Simultaneous Localization and Mapping, +1

A Deep Learning Based 6 Degree-of-Freedom Localization Method for Endoscopic Capsule Robots

no code implementations · 15 May 2017 · Mehmet Turan, Yasin Almalioglu, Ender Konukoglu, Metin Sitti

We present a robust deep learning based 6 degrees-of-freedom (DoF) localization system for endoscopic capsule robots.

Translation

A Non-Rigid Map Fusion-Based RGB-Depth SLAM Method for Endoscopic Capsule Robots

no code implementations · 15 May 2017 · Mehmet Turan, Yasin Almalioglu, Helder Araujo, Ender Konukoglu, Metin Sitti

In this paper, we propose, to our knowledge for the first time in the literature, a visual simultaneous localization and mapping (SLAM) method specifically developed for endoscopic capsule robots.

Simultaneous Localization and Mapping
