no code implementations • 9 Jun 2021 • Kai-Chieh Liang, Lei Bi, Ashnil Kumar, Michael Fulham, Jinman Kim
Our ST-DSNN learns and accumulates image features from PET images acquired over time.
no code implementations • 23 Apr 2021 • Yige Peng, Lei Bi, Ashnil Kumar, Michael Fulham, Dagan Feng, Jinman Kim
Most CNNs are designed for single-modality imaging data (CT or PET alone) and do not exploit the information embedded in PET-CT, which combines an anatomical and a functional imaging modality.
no code implementations • 1 Apr 2021 • Xiaohang Fu, Lei Bi, Ashnil Kumar, Michael Fulham, Jinman Kim
Further, no existing method exploits the intercategory relationships in the 7PC.
no code implementations • 5 Mar 2021 • Xiaohang Fu, Lei Bi, Ashnil Kumar, Michael Fulham, Jinman Kim
Furthermore, lung nodules are often heterogeneous in the cross-sectional image slices of a 3D volume.
no code implementations • 29 Jul 2020 • Xiaohang Fu, Lei Bi, Ashnil Kumar, Michael Fulham, Jinman Kim
Our MSAM can be applied to common backbone architectures and trained end-to-end.
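The snippet does not detail how the MSAM works internally, so the following is only a minimal sketch of the general idea of an attention module gating backbone features: a spatial saliency map is derived from the feature maps and used to reweight them. All names here are hypothetical and the gating scheme is an assumption, not the paper's actual design.

```python
import numpy as np

def spatial_attention(feat):
    """Hypothetical spatial gate: average over channels, squash with a
    sigmoid, and reweight every channel by the resulting saliency map."""
    amap = feat.mean(axis=0)             # (H, W) channel-average saliency
    gate = 1.0 / (1.0 + np.exp(-amap))   # sigmoid: values in (0, 1)
    return feat * gate[None, :, :]       # broadcast gate over channels

# Toy backbone output: 3 channels of 4x4 spatial features.
feats = np.ones((3, 4, 4))
out = spatial_attention(feats)
```

Because such a module only multiplies features elementwise, it preserves the feature-map shape, which is what lets it be dropped into common backbone architectures and trained end-to-end.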
no code implementations • 22 Sep 2019 • Ha Tran Hong Phan, Ashnil Kumar, David Feng, Michael Fulham, Jinman Kim
Cell event detection in cell videos is essential for monitoring cellular behavior over extended time periods.
no code implementations • 7 Jun 2019 • Euijoon Ahn, Ashnil Kumar, Dagan Feng, Michael Fulham, Jinman Kim
Hence, we propose a new unsupervised feature learning method that learns feature representations to differentiate dissimilar medical images, using an ensemble of different convolutional neural networks (CNNs) and K-means clustering.
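The combination described above can be sketched in a few lines: features from several networks are concatenated per image and clustered with K-means to produce pseudo-labels. This is a minimal numpy illustration of that pipeline, not the authors' implementation; the random arrays stand in for real CNN feature vectors.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Plain Lloyd's k-means: returns a cluster assignment per row."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each feature vector to its nearest centroid.
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned vectors.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels

# Stand-ins for per-image feature vectors from two different CNNs.
rng = np.random.default_rng(1)
cnn_a = rng.normal(size=(10, 64))
cnn_b = rng.normal(size=(10, 32))
ensemble = np.concatenate([cnn_a, cnn_b], axis=1)  # ensemble representation
labels = kmeans(ensemble, k=2)                     # one pseudo-label per image
```

The resulting cluster assignments can then serve as supervision for downstream training, which is the usual role of such pseudo-labels in unsupervised feature learning.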
no code implementations • 15 Mar 2019 • Euijoon Ahn, Ashnil Kumar, Dagan Feng, Michael Fulham, Jinman Kim
The accuracy and robustness of image classification with supervised deep learning are dependent on the availability of large-scale, annotated training data.
1 code implementation • 5 Oct 2018 • Ashnil Kumar, Michael Fulham, Dagan Feng, Jinman Kim
Our aim is to improve the fusion of complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns how to fuse the modalities for multi-modality medical image analysis.
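One simple way to picture learned fusion of co-registered PET and CT features is a per-channel convex combination with trainable modality weights. The sketch below assumes that scheme purely for illustration; the paper's actual fusion operator is not specified in this snippet, and all names are hypothetical.

```python
import numpy as np

def fuse(ct_feat, pet_feat, w_ct, w_pet):
    """Per-channel convex combination of co-registered CT and PET feature
    maps; in a real network the weights w_ct/w_pet would be learned."""
    a = np.exp(w_ct) / (np.exp(w_ct) + np.exp(w_pet))  # softmax over the two modalities
    return a[:, None, None] * ct_feat + (1.0 - a)[:, None, None] * pet_feat

# Toy co-registered feature maps: 4 channels of 8x8 spatial features.
rng = np.random.default_rng(0)
ct = rng.normal(size=(4, 8, 8))
pet = rng.normal(size=(4, 8, 8))
fused = fuse(ct, pet, w_ct=np.zeros(4), w_pet=np.zeros(4))  # equal weights -> average
```

With equal weights the fused map is simply the modality average; training would shift the weights per channel toward whichever modality (anatomical CT or functional PET) carries more signal for the task.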
no code implementations • 16 Jul 2018 • Euijoon Ahn, Jinman Kim, Ashnil Kumar, Michael Fulham, Dagan Feng
The availability of large-scale annotated image datasets and recent advances in supervised deep learning methods enable the end-to-end derivation of representative image features that can impact a variety of image analysis problems.
no code implementations • 7 Sep 2017 • Ha Tran Hong Phan, Ashnil Kumar, David Feng, Michael Fulham, Jinman Kim
We compared our method to several published supervised methods evaluated on the same dataset and to a supervised LSTM method with a similar design and configuration to our unsupervised method.
2 code implementations • 31 Jul 2017 • Lei Bi, Jinman Kim, Ashnil Kumar, Dagan Feng, Michael Fulham
Positron emission tomography (PET) image synthesis plays an important role in boosting the training data for computer-aided diagnosis systems.
no code implementations • 10 Apr 2017 • Lei Bi, Jinman Kim, Ashnil Kumar, Dagan Feng
Recently, deep learning methods based on fully convolutional networks (FCNs) have been successful in many segmentation problems primarily because they leverage a large labelled dataset to hierarchically learn the features that best correspond to the shallow visual appearance as well as the deep semantics of the areas to be segmented.