no code implementations • 10 Nov 2023 • Zhengliang Liu, Hanqi Jiang, Tianyang Zhong, Zihao Wu, Chong Ma, Yiwei Li, Xiaowei Yu, Yutong Zhang, Yi Pan, Peng Shu, Yanjun Lyu, Lu Zhang, Junjie Yao, Peixin Dong, Chao Cao, Zhenxiang Xiao, Jiaqi Wang, Huan Zhao, Shaochen Xu, Yaonai Wei, Jingyuan Chen, Haixing Dai, Peilong Wang, Hao He, Zewei Wang, Xinyu Wang, Xu Zhang, Lin Zhao, Yiheng Liu, Kai Zhang, Liheng Yan, Lichao Sun, Jun Liu, Ning Qiang, Bao Ge, Xiaoyan Cai, Shijie Zhao, Xintao Hu, Yixuan Yuan, Gang Li, Shu Zhang, Xin Zhang, Xi Jiang, Tuo Zhang, Dinggang Shen, Quanzheng Li, Wei Liu, Xiang Li, Dajiang Zhu, Tianming Liu
GPT-4V represents a breakthrough in artificial general intelligence (AGI) for computer vision, with applications in the biomedical domain.
no code implementations • 10 Jul 2023 • Haixing Dai, Lu Zhang, Lin Zhao, Zihao Wu, Zhengliang Liu, David Liu, Xiaowei Yu, Yanjun Lyu, Changying Li, Ninghao Liu, Tianming Liu, Dajiang Zhu
With the popularity of deep neural networks (DNNs), model interpretability is becoming a critical concern.
no code implementations • 27 Mar 2023 • Xiaowei Yu, Lu Zhang, Haixing Dai, Yanjun Lyu, Lin Zhao, Zihao Wu, David Liu, Tianming Liu, Dajiang Zhu
Designing more efficient, reliable, and explainable neural network architectures is critical to studies based on artificial intelligence (AI) techniques.
no code implementations • 31 Jan 2023 • Xiaowei Yu, Lu Zhang, Haixing Dai, Lin Zhao, Yanjun Lyu, Zihao Wu, Tianming Liu, Dajiang Zhu
To solve this fundamental problem, we design a novel Twin-Transformer framework to unveil the unique functional roles of gyri and sulci, as well as their relationship, in whole-brain function.
no code implementations • 26 May 2022 • Lu Zhang, Xiaowei Yu, Yanjun Lyu, Zhengwang Wu, Haixing Dai, Lin Zhao, Li Wang, Gang Li, Tianming Liu, Dajiang Zhu
Our experimental results show that: 1) the learned embedding vectors can quantitatively encode the commonality and individuality of cortical folding patterns; 2) with the embeddings, we can robustly infer the complicated many-to-many anatomical correspondences among different brains; and 3) our model can be successfully transferred to new populations with very limited training samples.
no code implementations • 20 Apr 2022 • Xiaowei Yu, Lu Zhang, Lin Zhao, Yanjun Lyu, Tianming Liu, Dajiang Zhu
In this work, we propose a novel Twin-Transformers framework to simultaneously infer common and individual functional networks in both the spatial and temporal domains, in a self-supervised manner.
no code implementations • 14 Jun 2021 • Lu Zhang, Xiaowei Yu, Yanjun Lyu, Li Wang, Dajiang Zhu
By mapping the learned clinical-group-related feature vectors to the original FC space, representative FCs were constructed for each group.