Search Results for author: Xilin Yang

Found 13 papers, 0 papers with code

Automated HER2 Scoring in Breast Cancer Images Using Deep Learning and Pyramid Sampling

no code implementations • 1 Apr 2024 • Sahan Yoruc Selcuk, Xilin Yang, Bijie Bai, Yijie Zhang, Yuzhu Li, Musa Aydin, Aras Firat Unal, Aditya Gomatam, Zhen Guo, Darrow Morgan Angus, Goren Kolodney, Karine Atlan, Tal Keidar Haran, Nir Pillar, Aydogan Ozcan

Human epidermal growth factor receptor 2 (HER2) is a critical protein in cancer cell growth that signifies the aggressiveness of breast cancer (BC) and helps predict its prognosis.

Virtual birefringence imaging and histological staining of amyloid deposits in label-free tissue using autofluorescence microscopy and deep learning

no code implementations • 14 Mar 2024 • Xilin Yang, Bijie Bai, Yijie Zhang, Musa Aydin, Sahan Yoruc Selcuk, Zhen Guo, Gregory A. Fishbein, Karine Atlan, William Dean Wallace, Nir Pillar, Aydogan Ozcan

Systemic amyloidosis is a group of diseases characterized by the deposition of misfolded proteins in various organs and tissues, leading to progressive organ dysfunction and failure.

Multiplexed all-optical permutation operations using a reconfigurable diffractive optical network

no code implementations • 4 Feb 2024 • Guangdong Ma, Xilin Yang, Bijie Bai, Jingxi Li, Yuhang Li, Tianyi Gan, Che-Yung Shen, Yijie Zhang, Yuzhu Li, Mona Jarrahi, Aydogan Ozcan

We demonstrated the feasibility of this reconfigurable multiplexed diffractive design by approximating 256 randomly selected permutation matrices using K=4 rotatable diffractive layers.
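A short back-of-the-envelope sketch of the multiplexing count above, under the assumption (not stated explicitly in this snippet) that each of the K rotatable layers can be set to one of four rotation states, so K layers address 4^K distinct configurations; for K = 4 that is 256, matching the 256 target permutation matrices:

```python
# Hypothetical counting sketch for a reconfigurable multiplexed design:
# assumes each rotatable diffractive layer has 4 rotation states
# (0, 90, 180, 270 degrees), so K layers select among 4**K configurations.

def num_configurations(k_layers: int, states_per_layer: int = 4) -> int:
    """Number of distinct layer-rotation configurations available."""
    return states_per_layer ** k_layers

print(num_configurations(4))  # 256 configurations for K=4 layers
```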

Subwavelength Imaging using a Solid-Immersion Diffractive Optical Processor

no code implementations • 17 Jan 2024 • Jingtian Hu, Kun Liao, Niyazi Ulas Dinc, Carlo Gigli, Bijie Bai, Tianyi Gan, Xurong Li, Hanlong Chen, Xilin Yang, Yuhang Li, Cagatay Isil, Md Sadman Sakib Rahman, Jingxi Li, Xiaoyong Hu, Mona Jarrahi, Demetri Psaltis, Aydogan Ozcan

To resolve subwavelength features of an object, the diffractive imager uses a thin, high-index solid-immersion layer to transmit high-frequency information of the object to a spatially-optimized diffractive encoder, which converts/encodes high-frequency information of the input into low-frequency spatial modes for transmission through air.


Complex-valued universal linear transformations and image encryption using spatially incoherent diffractive networks

no code implementations • 5 Oct 2023 • Xilin Yang, Md Sadman Sakib Rahman, Bijie Bai, Jingxi Li, Aydogan Ozcan

Similarly, D2NNs can also perform arbitrary linear intensity transformations with spatially incoherent illumination; however, under spatially incoherent light, these transformations are non-negative, acting on diffraction-limited optical intensity patterns at the input field-of-view (FOV).
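The non-negativity mentioned above follows from standard incoherent imaging: intensities add, and the effective intensity kernel is the squared modulus of a complex field transmission, hence element-wise non-negative. A minimal toy sketch (not the paper's optical model; matrix sizes and the random field transform `T` are illustrative assumptions):

```python
import numpy as np

# Toy sketch of incoherent intensity transformations: the intensity
# kernel A is |T|^2 for some complex field transform T, so every
# entry of A, and hence every output intensity, is non-negative.
rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # complex field transform (illustrative)
A = np.abs(T) ** 2      # intensity transformation: non-negative by construction
x = rng.uniform(size=4)  # diffraction-limited input intensity pattern
y = A @ x                # output intensity at the output FOV

assert (A >= 0).all() and (y >= 0).all()
```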

Pyramid diffractive optical networks for unidirectional magnification and demagnification

no code implementations • 29 Aug 2023 • Bijie Bai, Xilin Yang, Tianyi Gan, Jingxi Li, Deniz Mengu, Mona Jarrahi, Aydogan Ozcan

Our analyses revealed the efficacy of this P-D2NN design in unidirectional image magnification and demagnification tasks, producing high-fidelity magnified or demagnified images in only one direction, while inhibiting image formation in the opposite direction, confirming the desired unidirectional imaging operation.

Universal Linear Intensity Transformations Using Spatially-Incoherent Diffractive Processors

no code implementations • 23 Mar 2023 • Md Sadman Sakib Rahman, Xilin Yang, Jingxi Li, Bijie Bai, Aydogan Ozcan

Under spatially-coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields-of-view (FOVs) if the total number (N) of optimizable phase-only diffractive features is greater than or equal to ~2 Ni x No, where Ni and No refer to the number of useful pixels at the input and the output FOVs, respectively.
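The degrees-of-freedom condition above (N ≥ ~2 Ni x No) is easy to check for a candidate design. The helper functions below are hypothetical, written only to make the counting argument concrete; the example pixel counts are illustrative assumptions:

```python
# Hypothetical sketch of the parameter-count condition for realizing
# an arbitrary complex-valued linear transformation with a
# spatially-coherent diffractive network: N >= ~2 * Ni * No, where
# Ni and No are the useful pixel counts at the input and output FOVs.

def min_diffractive_features(n_i: int, n_o: int) -> int:
    """Approximate minimum number N of optimizable phase-only features."""
    return 2 * n_i * n_o

def design_is_sufficient(n_features: int, n_i: int, n_o: int) -> bool:
    """Check whether a design's feature budget meets the bound."""
    return n_features >= min_diffractive_features(n_i, n_o)

# Illustrative case: 8x8 input FOV (Ni = 64) mapped to 8x8 output FOV (No = 64)
print(min_diffractive_features(64, 64))      # 8192
print(design_is_sufficient(10000, 64, 64))   # True
```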

Data class-specific all-optical transformations and encryption

no code implementations • 25 Dec 2022 • Bijie Bai, Heming Wei, Xilin Yang, Deniz Mengu, Aydogan Ozcan

We numerically demonstrated all-optical class-specific transformations covering A-->A, I-->I, and P-->I transformations using various image datasets.


Deep Learning-enabled Virtual Histological Staining of Biological Samples

no code implementations • 13 Nov 2022 • Bijie Bai, Xilin Yang, Yuzhu Li, Yijie Zhang, Nir Pillar, Aydogan Ozcan

Histological staining is the gold standard for tissue examination in clinical pathology and life-science research; it visualizes tissue and cellular structures using chromatic dyes or fluorescent labels to aid microscopic assessment.

Virtual stain transfer in histology via cascaded deep neural networks

no code implementations • 14 Jul 2022 • Xilin Yang, Bijie Bai, Yijie Zhang, Yuzhu Li, Kevin De Haan, Tairan Liu, Aydogan Ozcan

Unlike a single neural network structure which only takes one stain type as input to digitally output images of another stain type, C-DNN first uses virtual staining to transform autofluorescence microscopy images into H&E and then performs stain transfer from H&E to the domain of the other stain in a cascaded manner.

Few-shot Transfer Learning for Holographic Image Reconstruction using a Recurrent Neural Network

no code implementations • 27 Jan 2022 • Luzhe Huang, Xilin Yang, Tairan Liu, Aydogan Ozcan

Here, we demonstrate a few-shot transfer learning method that helps a holographic image reconstruction deep neural network rapidly generalize to new types of samples using small datasets.


Deep learning-based virtual refocusing of images using an engineered point-spread function

no code implementations • 22 Dec 2020 • Xilin Yang, Luzhe Huang, Yilin Luo, Yichen Wu, Hongda Wang, Yair Rivenson, Aydogan Ozcan

We present a virtual image refocusing method over an extended depth of field (DOF) enabled by cascaded neural networks and a double-helix point-spread function (DH-PSF).

