1 code implementation • 27 Feb 2024 • Bing Xue, Charles Alba, Joanna Abraham, Thomas Kannampallil, Chenyang Lu
Adapting models through self-supervised finetuning further improved performance by 3.2% for AUROC and 1.5% for AUPRC. Incorporating labels into the finetuning procedure boosted performance further: compared to self-supervised finetuning, semi-supervised finetuning improved AUROC by 1.8% and AUPRC by 2%, while foundational modelling improved AUROC by 3.6% and AUPRC by 2.6%.
1 code implementation • 8 Oct 2023 • Ruiqi Wang, Hanyang Liu, Jiaming Qiu, Moran Xu, Roch Guerin, Chenyang Lu
It is, therefore, important to develop an adaptive approach that maximizes the inference performance of ML applications under timing constraints and the resource constraints of IoT devices.
1 code implementation • 19 Aug 2023 • Benjamin C. Warner, Ziqi Xu, Simon Haroutounian, Thomas Kannampallil, Chenyang Lu
A relatively unexplored source of information in the feature selection process is the usage of textual names of features, which may be semantically indicative of which features are relevant to a target outcome.
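The idea above — using the textual names of features as a relevance signal — can be illustrated with a deliberately crude sketch. This is not the paper's method; it simply pre-ranks features by token overlap between each feature's name and a description of the target outcome, and all feature/outcome names are hypothetical:

```python
def name_relevance(feature_names, outcome_description):
    """Crude proxy for name-based feature relevance: Jaccard overlap
    between a feature's tokenized name and the outcome description.
    Returns feature names sorted from most to least relevant."""
    outcome_tokens = set(outcome_description.lower().split())
    scores = {}
    for name in feature_names:
        tokens = set(name.lower().replace("_", " ").split())
        union = tokens | outcome_tokens
        scores[name] = len(tokens & outcome_tokens) / len(union) if union else 0.0
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical usage: pain-related names rank above an unrelated vital sign.
ranked = name_relevance(
    ["pain_score_baseline", "heart_rate", "surgical_pain_history"],
    "chronic post surgical pain",
)
```

A learned text embedding would replace the token overlap in practice; the sketch only shows where the name signal enters the selection pipeline.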
no code implementations • 6 Jul 2023 • Bing Xue, Ahmed Sameh Said, Ziqi Xu, Hanyang Liu, Neel Shah, Hanqing Yang, Philip Payne, Chenyang Lu
TVAE is specifically designed to address the modeling challenges posed by treatments like ECMO, which exhibit strong treatment selection bias and scarce treatment cases.
1 code implementation • CVPR 2023 • Chenyang Lu, Daan de Geus, Gijs Dubbelman
This paper introduces Content-aware Token Sharing (CTS), a token reduction approach that improves the computational efficiency of semantic segmentation networks that use Vision Transformers (ViTs).
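A minimal sketch of the token-sharing idea, not the CTS policy itself: nearly uniform groups of patches carry redundant information, so one shared token can stand in for the whole group before the ViT runs. The 2x2 grouping, cosine-similarity test, and threshold below are illustrative assumptions:

```python
import numpy as np

def share_tokens(patches, threshold=0.9):
    """Greedy token-sharing sketch. `patches` is an (H, W, D) grid of patch
    embeddings; homogeneous 2x2 blocks are replaced by a single shared token.
    Returns the reduced token list and a (H, W) map from patch to token."""
    H, W, D = patches.shape
    tokens, mapping = [], np.empty((H, W), dtype=int)
    for i in range(0, H, 2):
        for j in range(0, W, 2):
            block = patches[i:i + 2, j:j + 2].reshape(-1, D)
            mean = block.mean(axis=0)
            # cosine similarity of each patch to the block mean
            sims = block @ mean / (
                np.linalg.norm(block, axis=1) * np.linalg.norm(mean) + 1e-8)
            if sims.min() > threshold:      # homogeneous: share one token
                mapping[i:i + 2, j:j + 2] = len(tokens)
                tokens.append(mean)
            else:                           # heterogeneous: keep all four
                for di in range(2):
                    for dj in range(2):
                        mapping[i + di, j + dj] = len(tokens)
                        tokens.append(patches[i + di, j + dj])
    return np.stack(tokens), mapping
```

The mapping lets per-token predictions be broadcast back to every patch, which is what makes the reduction usable for dense tasks like semantic segmentation.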
1 code implementation • 10 Oct 2022 • Dingwen Li, Bing Xue, Christopher King, Bradley Fritz, Michael Avidan, Joanna Abraham, Chenyang Lu
Towards this end, we propose a hierarchical model combining the strength of both attention and recurrent models for intraoperative time series.
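The two-level structure described above can be sketched as follows. This is an illustrative toy, not the paper's architecture: a plain tanh RNN summarises each local window of the time series, and dot-product attention pools the window summaries into one case-level vector. Window size, hidden size, and the random (rather than learned) weights are all assumptions:

```python
import numpy as np

def rnn_encode(window, Wx, Wh):
    """Simple tanh RNN over one window; returns the final hidden state."""
    h = np.zeros(Wh.shape[0])
    for x in window:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def hierarchical_encode(series, window=16, hidden=32, seed=0):
    """Two-level sketch: a recurrent encoder per local window, then
    attention pooling over the window summaries. `series` is (T, D)."""
    rng = np.random.default_rng(seed)
    T, D = series.shape
    Wx = rng.standard_normal((hidden, D)) / np.sqrt(D)
    Wh = rng.standard_normal((hidden, hidden)) / np.sqrt(hidden)
    q = rng.standard_normal(hidden) / np.sqrt(hidden)  # query (learned in practice)
    summaries = np.stack([rnn_encode(series[t:t + window], Wx, Wh)
                          for t in range(0, T, window)])
    scores = summaries @ q
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax attention weights
    return weights @ summaries               # pooled summary of the whole case
```

The recurrent level captures short-range dynamics within a window, while the attention level lets the model weight clinically salient windows across a long intraoperative record.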
1 code implementation • 31 Jul 2022 • Jiaming Qiu, Ruiqi Wang, Ayan Chakrabarti, Roch Guerin, Chenyang Lu
Because of limited computing capacity, embedded devices rely on a parsimonious classification model with uneven accuracy.
1 code implementation • 24 May 2022 • Hanyang Liu, Sunny S. Lou, Benjamin C. Warner, Derek R. Harford, Thomas Kannampallil, Chenyang Lu
Burnout is a significant public health concern affecting nearly half of the healthcare workforce.
1 code implementation • 21 Mar 2022 • Chenyang Lu, Gijs Dubbelman
Aiming for higher-level scene understanding, this work presents a neural network approach that takes a road-layout map in bird's-eye-view as input, and predicts a human-interpretable graph that represents the road's topological layout.
no code implementations • 29 Sep 2021 • Bing Xue, York Jiao, Thomas Kannampallil, Joanna Abraham, Christopher Ryan King, Bradley A Fritz, Michael Avidan, Chenyang Lu
Given the risks and cost of surgeries, there has been significant interest in exploiting predictive models to improve perioperative care.
1 code implementation • CVPR 2021 • Daan de Geus, Panagiotis Meletis, Chenyang Lu, Xiaoxiao Wen, Gijs Dubbelman
In this work, we introduce the new scene understanding task of Part-aware Panoptic Segmentation (PPS), which aims to understand a scene at multiple levels of abstraction, and unifies the tasks of scene parsing and part parsing.
Ranked #2 on Image Segmentation on Pascal Panoptic Parts
no code implementations • 30 Apr 2021 • Hanyang Liu, Michael C. Montana, Dingwen Li, Chase Renfroe, Thomas Kannampallil, Chenyang Lu
We present an end-to-end model using streaming physiological time series to predict near-term risk for hypoxemia, a rare, but life-threatening condition known to cause serious patient harm during surgery.
no code implementations • 10 Dec 2020 • Chenyang Lu, Gijs Dubbelman
To overcome this, we are the first to present a self-supervised approach based on a fully-differentiable auto-encoder in which the bottleneck encodes the graph's nodes and edges.
1 code implementation • 26 Oct 2020 • Ayan Chakrabarti, Roch Guérin, Chenyang Lu, Jiangnan Liu
To deploy machine learning-based algorithms for real-time applications with strict latency constraints, we consider an edge-computing setting where a subset of inputs are offloaded to the edge for processing by an accurate but resource-intensive model, and the rest are processed only by a less-accurate model on the device itself.
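The offloading decision in such a setting can be sketched with a simple confidence gate, a common baseline rather than the scheme proposed in the paper; the threshold value is an illustrative assumption:

```python
import numpy as np

def offload_decision(probs, threshold=0.8):
    """Return True if the input should be offloaded to the edge model.
    `probs` is the on-device model's softmax output for one input; low
    top-class confidence means the cheap local model is likely wrong."""
    return bool(probs.max() < threshold)

# Hypothetical usage with two local predictions:
uncertain = np.array([0.55, 0.30, 0.15])   # low confidence -> offload
confident = np.array([0.95, 0.03, 0.02])   # high confidence -> answer locally
```

In practice the threshold trades accuracy against latency and bandwidth, which is exactly the tension between the resource-intensive edge model and the less-accurate on-device model described above.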
4 code implementations • 16 Apr 2020 • Panagiotis Meletis, Xiaoxiao Wen, Chenyang Lu, Daan de Geus, Gijs Dubbelman
In this technical report, we present two novel datasets for image scene understanding.
1 code implementation • 10 Sep 2019 • Chenyang Lu, Gijs Dubbelman
Our approach is inherently more efficient than the previous two-stage state-of-the-art method, and outperforms it by a margin of 3% IoU for the inpainted foreground regions on Cityscapes.
no code implementations • 23 Jul 2019 • Chenyang Lu, Gijs Dubbelman
We propose a novel single-step training strategy that allows convolutional encoder-decoder networks that use skip connections, to complete partially observed data by means of hallucination.
no code implementations • 6 Apr 2018 • Chenyang Lu, Marinus Jacobus Gerardus van de Molengraft, Gijs Dubbelman
In this work, we research and evaluate end-to-end learning of monocular semantic-metric occupancy grid mapping from weak binocular ground truth.
Ranked #2 on Bird's-Eye View Semantic Segmentation on nuScenes (IoU veh - 224x480 - No vis filter - 100x50 at 0.25 metric)