Search Results for author: Tianyu Han

Found 14 papers, 9 papers with code

LongHealth: A Question Answering Benchmark with Long Clinical Documents

1 code implementation • 25 Jan 2024 • Lisa Adams, Felix Busch, Tianyu Han, Jean-Baptiste Excoffier, Matthieu Ortala, Alexander Löser, Hugo J. W. L. Aerts, Jakob Nikolas Kather, Daniel Truhn, Keno Bressem

However, all models struggled significantly in tasks requiring the identification of missing information, highlighting a critical area for improvement in clinical data interpretation.

Information Retrieval • Multiple-choice • +2

From Text to Image: Exploring GPT-4Vision's Potential in Advanced Radiological Analysis across Subspecialties

no code implementations • 24 Nov 2023 • Felix Busch, Tianyu Han, Marcus Makowski, Daniel Truhn, Keno Bressem, Lisa Adams

The study evaluates and compares GPT-4 and GPT-4Vision for radiological tasks, suggesting GPT-4Vision may recognize radiological features from images, thereby enhancing its diagnostic potential over text-based descriptions.

Large Language Models Streamline Automated Machine Learning for Clinical Studies

1 code implementation • 27 Aug 2023 • Soroosh Tayebi Arasteh, Tianyu Han, Mahshad Lotfinia, Christiane Kuhl, Jakob Nikolas Kather, Daniel Truhn, Sven Nebelung

A knowledge gap persists between machine learning (ML) developers (e.g., data scientists) and practitioners (e.g., clinicians), hampering the full utilization of ML for clinical data analysis.

Transformers for CT Reconstruction From Monoplanar and Biplanar Radiographs

no code implementations • 11 May 2023 • Firas Khader, Gustav Müller-Franzes, Tianyu Han, Sven Nebelung, Christiane Kuhl, Johannes Stegmaier, Daniel Truhn

X-rays are widely available, and even if a CT reconstructed from such radiographs cannot replace a complete CT in the diagnostic setting, it might spare patients from radiation in cases where a CT is acquired only for rough measurements such as determining organ size (a minimal sketch of the general idea follows this entry).

Computed Tomography (CT)
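
This listing does not reproduce the paper's architecture. The following is a purely illustrative sketch of the general idea: patch-embed two radiograph views with a transformer encoder and let learned slice queries cross-attend to them to predict a coarse CT volume. All module names and sizes (Radiograph2CT, d_model, vol_size) are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: biplanar radiographs -> coarse CT volume via a
# transformer encoder/decoder. Shapes and module names are illustrative only.
import torch
import torch.nn as nn


class Radiograph2CT(nn.Module):
    def __init__(self, img_size=128, patch=16, d_model=256, vol_size=32):
        super().__init__()
        self.patch_embed = nn.Conv2d(1, d_model, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        # One positional embedding per patch per view (frontal + lateral).
        self.pos = nn.Parameter(torch.zeros(1, 2 * n_patches, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        # One learned query per axial slice of the output volume.
        self.slice_queries = nn.Parameter(torch.zeros(1, vol_size, d_model))
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.to_slice = nn.Linear(d_model, vol_size * vol_size)
        self.vol_size = vol_size

    def forward(self, frontal, lateral):
        # frontal, lateral: (B, 1, H, W) radiographs.
        tokens = torch.cat(
            [self.patch_embed(v).flatten(2).transpose(1, 2) for v in (frontal, lateral)],
            dim=1,
        )
        memory = self.encoder(tokens + self.pos)
        queries = self.slice_queries.expand(tokens.size(0), -1, -1)
        slices = self.to_slice(self.decoder(queries, memory))
        return slices.view(-1, 1, self.vol_size, self.vol_size, self.vol_size)


x = torch.randn(2, 1, 128, 128)
print(Radiograph2CT()(x, x).shape)  # torch.Size([2, 1, 32, 32, 32])
```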

Cascaded Cross-Attention Networks for Data-Efficient Whole-Slide Image Classification Using Transformers

no code implementations • 11 May 2023 • Firas Khader, Jakob Nikolas Kather, Tianyu Han, Sven Nebelung, Christiane Kuhl, Johannes Stegmaier, Daniel Truhn

However, while the conventional transformer allows simultaneous processing of a large set of input tokens, its computational demand scales quadratically with the number of input tokens and thus with the number of image patches (a toy illustration of this scaling follows this entry).

Image Classification • whole slide images
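
As a back-of-the-envelope illustration of that quadratic scaling, the generic self-attention sketch below (not the paper's cascaded cross-attention model) materializes the N x N score matrix and counts its entries:

```python
# Full self-attention over N patch tokens builds an N x N attention matrix,
# so memory and compute grow quadratically with the number of patches.
import torch

d = 64
for n_patches in (1_000, 2_000, 4_000):
    q = torch.randn(n_patches, d)
    k = torch.randn(n_patches, d)
    attn = q @ k.T / d**0.5          # shape (N, N): N**2 scores
    print(n_patches, attn.numel())   # 1e6 -> 4e6 -> 16e6: 2x tokens => 4x scores
```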

MedAlpaca -- An Open-Source Collection of Medical Conversational AI Models and Training Data

no code implementations • 14 Apr 2023 • Tianyu Han, Lisa C. Adams, Jens-Michalis Papaioannou, Paul Grundmann, Tom Oberhauser, Alexander Löser, Daniel Truhn, Keno K. Bressem

As large language models (LLMs) like OpenAI's GPT series continue to make strides, we witness the emergence of artificial intelligence applications in an ever-expanding range of fields.

Medical Diffusion: Denoising Diffusion Probabilistic Models for 3D Medical Image Generation

1 code implementation • 7 Nov 2022 • Firas Khader, Gustav Mueller-Franzes, Soroosh Tayebi Arasteh, Tianyu Han, Christoph Haarburger, Maximilian Schulze-Hagen, Philipp Schad, Sandy Engelhardt, Bettina Baessler, Sebastian Foersch, Johannes Stegmaier, Christiane Kuhl, Sven Nebelung, Jakob Nikolas Kather, Daniel Truhn

Furthermore, we demonstrate that synthetic images can be used in self-supervised pre-training and improve the performance of breast segmentation models when data is scarce (Dice score 0.91 vs. 0.95 without vs. with synthetic data; the Dice metric is sketched below this entry).

Computed Tomography (CT) • Denoising • +3
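
For reference, the Dice score reported above is the standard overlap metric between a predicted and a ground-truth segmentation mask; the minimal sketch below uses its generic definition and is not the paper's evaluation code:

```python
# Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks of the same shape.
import torch


def dice_score(pred, target, eps=1e-7):
    pred, target = pred.float().flatten(), target.float().flatten()
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)


a = torch.tensor([[1, 1, 0, 0]])
b = torch.tensor([[1, 0, 0, 0]])
print(dice_score(a, b))  # ~0.6667: 2*1 / (2 + 1)
```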

Novice Type Error Diagnosis with Natural Language Models

no code implementations • 7 Oct 2022 • Chuqin Geng, Haolin Ye, Yixuan Li, Tianyu Han, Brigitte Pientka, Xujie Si

Strong static type systems help programmers eliminate many errors without much burden of supplying type annotations.

Language Modelling • Vocal Bursts Type Prediction

Advancing diagnostic performance and clinical usability of neural networks via adversarial training and dual batch normalization

1 code implementation • 25 Nov 2020 • Tianyu Han, Sven Nebelung, Federico Pedersoli, Markus Zimmermann, Maximilian Schulze-Hagen, Michael Ho, Christoph Haarburger, Fabian Kiessling, Christiane Kuhl, Volkmar Schulz, Daniel Truhn

Contrary to previous research on adversarially trained models, we found that the accuracy of such models was equal to that of standard models when sufficiently large datasets and dual batch norm training were used (a minimal sketch of dual batch norm follows this entry).

Decision Making
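
The dual batch norm training mentioned above keeps separate normalization statistics for clean and adversarial examples. The sketch below shows only that routing idea; the module name (DualBatchNorm2d) and the toy perturbation are assumptions, not the authors' released implementation:

```python
# Separate BatchNorm statistics for clean and adversarial batches, routed at
# forward time; the surrounding convolutional weights would be shared.
import torch
import torch.nn as nn


class DualBatchNorm2d(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)
        self.bn_adv = nn.BatchNorm2d(num_features)

    def forward(self, x, adversarial=False):
        return self.bn_adv(x) if adversarial else self.bn_clean(x)


bn = DualBatchNorm2d(16)
clean = torch.randn(4, 16, 8, 8)
adv = clean + 0.01 * torch.randn_like(clean)  # stand-in for a PGD perturbation
out_clean, out_adv = bn(clean), bn(adv, adversarial=True)
```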
