no code implementations • ACL 2022 • Chenguang Zhu, Yichong Xu, Xiang Ren, Bill Lin, Meng Jiang, Wenhao Yu
Knowledge in natural language processing (NLP) has been a rising trend, especially after the advent of large-scale pre-trained models.
no code implementations • 29 Feb 2024 • Chester Holtz, Yucheng Wang, Chung-Kuan Cheng, Bill Lin
Namely, we show that when a small number of cells (e.g., 1%-5% of cells) have their positions shifted such that a measure of global congestion is guaranteed to remain unaffected, the model's congestion predictions can change drastically (e.g., 1% of the design adversarially shifted by 0.001% of the layout space results in a predicted decrease in congestion of up to 90%, while no change in congestion is implied by the perturbation).
no code implementations • 13 Feb 2024 • Tzu-Chien Hsueh, Yeshaiahu Fainman, Bill Lin
A system-on-chip (SoC) photonic-electronic linear-algebra accelerator is proposed, featuring wavelength-division-multiplexing (WDM) based broadband photodetection and high-dimensional matrix-inversion operations, fabricated in an advanced monolithic silicon-photonics (M-SiPh) semiconductor process technology. Through holistic co-design methodologies, the accelerator achieves substantial leaps in computation density and energy efficiency, including realistic consideration of the energy/area overhead of electronic/photonic on-chip conversions, integrations, and calibrations. The design supports linear-detection based massive multiple-input multiple-output (MIMO) decoding, which requires the inversion of channel matrices, as well as other emergent applications limited by linear-algebra computation capacity.
no code implementations • 19 Nov 2023 • Tzu-Chien Hsueh, Yeshaiahu Fainman, Bill Lin
This paper proposes to adopt advanced monolithic silicon-photonics integrated-circuit manufacturing capabilities to achieve a system-on-chip photonic-electronic linear-algebra accelerator, featuring optical comb-based broadband incoherent photodetection and high-dimensional consecutive matrix-matrix multiplications, enabling substantial leaps in computation density and energy efficiency. Practical considerations of the power/area overhead of photonic-electronic on-chip conversions, integrations, and calibrations are addressed through holistic co-design approaches, targeting the attention-head mechanisms of deep-learning neural networks used in Large Language Models and other emergent applications.
1 code implementation • 28 Jul 2023 • Mahdi Morafah, Weijia Wang, Bill Lin
Many of these works use inconsistent experimental settings, and there are no comprehensive studies on how FL-specific experimental variables affect the results, nor practical insights toward a more comparable and consistent FL experimental setup.
no code implementations • ICCV 2023 • Saeed Vahidian, Sreevatsank Kadaveru, Woonjoon Baek, Weijia Wang, Vyacheslav Kungurtsev, Chen Chen, Mubarak Shah, Bill Lin
Specifically, we aim to investigate how ordered learning principles can contribute to alleviating the heterogeneity effects in FL.
1 code implementation • 14 Oct 2022 • Jicang Cai, Saeed Vahidian, Weijia Wang, Mohsen Joneidi, Bill Lin
Inspired by the widely recognized finding in neuroscience that distinct parts of the brain are highly specialized for different types of tasks, we aim to improve the model performance of current meta-learning algorithms by selectively using only parts of the model, conditioned on the input tasks.
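The selective-use idea can be sketched as a task-conditioned gate that routes each input through only a few "expert" sub-modules. Everything below (the dimensions, the linear experts, the softmax gate, the top-k routing rule) is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for the sketch.
D_TASK, D_IN, D_OUT, N_EXPERTS = 4, 8, 3, 5

# Each "expert" stands in for a specialized part of the model
# (here simply one linear map per expert).
experts = [rng.normal(size=(D_IN, D_OUT)) for _ in range(N_EXPERTS)]
gate_w = rng.normal(size=(D_TASK, N_EXPERTS))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(task_embedding, x, top_k=2):
    """Route the input through only the top-k experts for this task."""
    scores = softmax(task_embedding @ gate_w)
    top = np.argsort(scores)[-top_k:]           # experts selected for the task
    weights = scores[top] / scores[top].sum()   # renormalize over the selected
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

task = rng.normal(size=D_TASK)
x = rng.normal(size=D_IN)
y = forward(task, x)  # only 2 of the 5 experts contributed
```

The remaining experts are never evaluated, which is the source of the conditional-compute saving.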
1 code implementation • 30 Sep 2022 • Mahdi Morafah, Saeed Vahidian, Chen Chen, Mubarak Shah, Bill Lin
Though successful, federated learning presents new challenges for machine learning, especially when the issue of data heterogeneity, also known as Non-IID data, arises.
1 code implementation • 21 Sep 2022 • Saeed Vahidian, Mahdi Morafah, Weijia Wang, Vyacheslav Kungurtsev, Chen Chen, Mubarak Shah, Bill Lin
This small set of principal vectors is provided to the server so that the server can directly identify distribution similarities among the clients to form clusters.
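A minimal sketch of that exchange, assuming the principal vectors come from a truncated SVD of each client's data and the server compares subspaces via principal angles (both illustrative choices, not necessarily the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(1)

def principal_vectors(data, k=2):
    """Each client computes the top-k right singular vectors of its own
    (centered) data -- a tiny summary of its distribution."""
    _, _, vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)
    return vt[:k]  # shape (k, n_features)

def subspace_distance(u, v):
    """Chordal distance from the principal angles between two subspaces."""
    cosines = np.linalg.svd(u @ v.T, compute_uv=False)
    return np.sqrt(np.maximum(0.0, u.shape[0] - np.sum(cosines ** 2)))

# Two clients drawn from one latent distribution, a third from another.
mixing = rng.normal(size=(5, 20))
client_a = rng.normal(size=(100, 5)) @ mixing
client_b = rng.normal(size=(100, 5)) @ mixing
client_c = rng.normal(size=(100, 20))

# Only these small (k x n_features) matrices ever reach the server.
pa, pb, pc = (principal_vectors(c) for c in (client_a, client_b, client_c))
d_same = subspace_distance(pa, pb)  # similar distributions -> small distance
d_diff = subspace_distance(pa, pc)  # different distribution -> large distance
```

The server can then cluster clients by thresholding or agglomerating these pairwise distances, without ever seeing raw data.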
1 code implementation • 20 Aug 2022 • Mahdi Morafah, Saeed Vahidian, Weijia Wang, Bill Lin
Classical federated learning approaches yield significant performance degradation in the presence of Non-IID data distributions of participants.
no code implementations • 6 Aug 2022 • Dharanidhar Dang, Amitash Nanda, Bill Lin, Debashis Sahoo
Neuromorphic computing is one such promising approach, with its brain-inspired circuitry, use of emerging technologies, and low-power nature.
no code implementations • 28 Jun 2022 • Dharanidhar Dang, Bill Lin, Debashis Sahoo
This is due to the highly compute- and memory-intensive nature of the training phase.
1 code implementation • 2 May 2021 • Saeed Vahidian, Mahdi Morafah, Bill Lin
The traditional approach in FL tries to learn a single global model collaboratively with the help of many clients under the orchestration of a central server.
1 code implementation • 4 Mar 2021 • Litao Qiao, Weijia Wang, Bill Lin
Each neuron in the first layer directly maps to an interpretable if-then rule after training, and the output neuron in the second layer directly maps to a disjunction of the first-layer rules to form the decision rule set.
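This rule-extraction view can be illustrated with hand-set binarized weights (the weight values and the +1/-1/0 encoding below are assumptions made for the sketch; in the paper these weights are learned):

```python
import numpy as np

# Row r of W1 encodes rule r over boolean features:
#   +1 -> feature must be true, -1 -> must be false, 0 -> "don't care".
W1 = np.array([[ 1, -1,  0],   # rule 0: x0 AND NOT x1
               [ 0,  1,  1]])  # rule 1: x1 AND x2

def rule_layer(x, w1):
    """A first-layer neuron fires iff every literal in its rule holds."""
    literal_ok = np.where(w1 == 1, x, np.where(w1 == -1, ~x, True))
    return literal_ok.all(axis=1)

def predict(x, w1):
    """The output neuron is a plain disjunction (OR) of the rule neurons."""
    return bool(rule_layer(x, w1).any())

def to_dnf(w1, names):
    """Read the whole network back as a human-readable rule set."""
    clauses = []
    for row in w1:
        lits = [n if w == 1 else f"NOT {n}"
                for w, n in zip(row, names) if w != 0]
        clauses.append("(" + " AND ".join(lits) + ")")
    return " OR ".join(clauses)
```

For example, `to_dnf(W1, ["x0", "x1", "x2"])` renders the network as the decision rule set `(x0 AND NOT x1) OR (x1 AND x2)`.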
no code implementations • 1 Jan 2021 • Saeed Vahidian, Mohsen Joneidi, Ashkan Esmaeili, Siavash Khodadadeh, Sharare Zehtabian, Ladislau Boloni, Nazanin Rahnavard, Bill Lin, Mubarak Shah
The approach is based on the concept of {\em self-rank}, defined as the minimum number of samples needed to reconstruct all samples with an accuracy proportional to the rank-$K$ approximation.
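The self-rank notion can be approximated with a simple greedy column-selection sketch: keep adding samples until reconstructing all samples from the selected ones is nearly as accurate as the rank-$K$ SVD truncation. The greedy pivoting rule and the slack factor are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

def rank_k_error(X, k):
    """Frobenius-norm error of the best rank-k approximation (SVD tail)."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sqrt(np.sum(s[k:] ** 2)))

def reconstruction_error(X, idx):
    """Error when every sample is rebuilt from the selected samples alone."""
    if not idx:
        return float(np.linalg.norm(X))
    S = X[:, idx]
    coeffs, *_ = np.linalg.lstsq(S, X, rcond=None)
    return float(np.linalg.norm(X - S @ coeffs))

def self_rank(X, k, slack=1.2):
    """Greedily add samples until the reconstruction error is within a
    slack factor of the rank-k SVD error; the count is the 'self-rank'."""
    target = slack * rank_k_error(X, k)
    idx, residual = [], X.copy()
    while reconstruction_error(X, idx) > target and len(idx) < X.shape[1]:
        j = int(np.argmax(np.linalg.norm(residual, axis=0)))  # largest residual
        idx.append(j)
        q = residual[:, j] / np.linalg.norm(residual[:, j])
        residual = residual - np.outer(q, q @ residual)       # deflate its span
    return len(idx), idx

# Effectively rank-4 data plus a little noise: self-rank should be near 4.
X = (rng.normal(size=(30, 4)) @ rng.normal(size=(4, 50))
     + 0.01 * rng.normal(size=(30, 50)))
r, selected = self_rank(X, k=4)
```

With clean low-rank structure the greedy count lands close to $K$; heavier-tailed data needs more samples, which is what makes the count an informative complexity measure.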
no code implementations • ICLR 2021 • Siavash Khodadadeh, Sharare Zehtabian, Saeed Vahidian, Weijia Wang, Bill Lin, Ladislau Bölöni
Unsupervised meta-learning approaches rely on synthetic meta-tasks that are created using techniques such as random selection, clustering and/or augmentation.
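A minimal sketch of the clustering route to synthetic meta-tasks, in the spirit of the techniques listed above (the plain k-means, the blob data, and the task sizes are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def kmeans(X, k, iters=20):
    """Plain k-means; the resulting cluster ids act as pseudo-labels."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def sample_meta_task(X, labels, n_way=2, k_shot=3):
    """Build an N-way K-shot support set from pseudo-labelled clusters."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    return {int(c): X[labels == c][:k_shot] for c in classes}

# Unlabelled data: three well-separated blobs of 30 points each.
X = np.concatenate([rng.normal(loc=c, scale=0.3, size=(30, 2))
                    for c in ((0, 0), (5, 5), (-5, 5))])
labels = kmeans(X, k=3)
task = sample_meta_task(X, labels)  # a 2-way 3-shot task, no true labels used
```

The meta-learner then treats each such pseudo-labelled task exactly as it would a task built from ground-truth classes.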
1 code implementation • 16 Jun 2020 • Ishika Singh, Haoyi Zhou, Kunlin Yang, Meng Ding, Bill Lin, Pengtao Xie
To address this problem, we propose federated neural architecture search (FNAS), where different parties collectively search for a differentiable architecture by exchanging gradients of architecture variables without exposing their data to other parties.
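The gradient-exchange loop of such a federated architecture search can be sketched as follows. The toy operation set, the finite-difference gradients, and the learning rate are all illustrative assumptions, not FNAS's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Candidate operations of a toy differentiable search space.
ops = [lambda x: x, lambda x: np.maximum(x, 0.0), np.tanh]

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def local_loss(alpha, x, y):
    """Mixed-op output weighted by softmax(alpha), scored on local data."""
    w = softmax(alpha)
    pred = sum(wi * op(x) for wi, op in zip(w, ops))
    return np.mean((pred - y) ** 2)

def local_alpha_grad(alpha, x, y, eps=1e-5):
    """Each party shares ONLY d(loss)/d(alpha) with the others
    (finite differences here, for the sketch), never its raw data."""
    g = np.zeros_like(alpha)
    for i in range(alpha.size):
        d = np.zeros_like(alpha)
        d[i] = eps
        g[i] = (local_loss(alpha + d, x, y)
                - local_loss(alpha - d, x, y)) / (2 * eps)
    return g

# Three parties whose private data follows y = tanh(x).
datasets = [(x, np.tanh(x)) for x in (rng.normal(size=50) for _ in range(3))]

alpha = np.zeros(len(ops))
start = local_loss(alpha, *datasets[0])
for _ in range(200):
    grads = [local_alpha_grad(alpha, x, y) for x, y in datasets]
    alpha -= 0.2 * np.mean(grads, axis=0)  # aggregate alpha-gradients, update
```

After the loop, the architecture weight of the `tanh` operation has grown, since only the small architecture-gradient vectors crossed party boundaries.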