Search Results for author: Bill Lin

Found 17 papers, 8 papers with code

Knowledge-Augmented Methods for Natural Language Processing

no code implementations ACL 2022 Chenguang Zhu, Yichong Xu, Xiang Ren, Bill Lin, Meng Jiang, Wenhao Yu

Knowledge in natural language processing (NLP) has been a rising trend, especially after the advent of large-scale pre-trained models.

Text Generation

On Robustness and Generalization of ML-Based Congestion Predictors to Valid and Imperceptible Perturbations

no code implementations29 Feb 2024 Chester Holtz, Yucheng Wang, Chung-Kuan Cheng, Bill Lin

Namely, we show that when a small number of cells (e.g., 1%-5% of cells) have their positions shifted such that a measure of global congestion is guaranteed to remain unaffected, the predictions of ML-based congestion predictors can nonetheless change drastically (e.g., 1% of the design adversarially shifted by 0.001% of the layout space results in a predicted decrease in congestion of up to 90%, while no change in congestion is implied by the perturbation).
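The perturbation idea can be sketched in a few lines: define a global congestion proxy over a placement grid, then shift a handful of cells so slightly that each stays inside its original grid bin, leaving the measure unchanged by construction. This is a hypothetical toy model, not the paper's actual congestion metric or attack procedure:

```python
import numpy as np

GRID = 10  # 10x10 congestion grid over the unit-square layout

def grid_congestion(positions):
    """Global congestion proxy: maximum cell count over GRID x GRID bins."""
    b = np.clip((positions * GRID).astype(int), 0, GRID - 1)
    counts = np.zeros((GRID, GRID), dtype=int)
    np.add.at(counts, (b[:, 0], b[:, 1]), 1)
    return counts.max()

rng = np.random.default_rng(0)
pos = rng.random((1000, 2))  # 1000 cells placed in the unit layout

# Shift 1% of the cells by roughly 0.001% of the layout span, then clamp
# each shifted cell back into its original grid bin, so the global
# congestion measure is unchanged by construction.
idx = rng.choice(len(pos), size=10, replace=False)
orig_bin = np.clip((pos[idx] * GRID).astype(int), 0, GRID - 1)
perturbed = pos.copy()
perturbed[idx] += 1e-5 * rng.standard_normal((10, 2))
lo = orig_bin / GRID
perturbed[idx] = np.clip(perturbed[idx], lo + 1e-9, lo + 1.0 / GRID - 1e-9)

print("congestion unchanged:", grid_congestion(pos) == grid_congestion(perturbed))
```

A predictor that is robust in the paper's sense should return (near-)identical outputs for `pos` and `perturbed`; the brittleness result is that learned predictors often do not.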


Monolithic Silicon-Photonics Linear-Algebra Accelerators Enabling Next-Gen Massive MIMO

no code implementations13 Feb 2024 Tzu-Chien Hsueh, Yeshaiahu Fainman, Bill Lin

A system-on-chip (SoC) photonic-electronic linear-algebra accelerator, featuring wavelength-division-multiplexing (WDM) based broadband photodetection and high-dimensional matrix-inversion operations, is proposed in an advanced monolithic silicon-photonics (M-SiPh) semiconductor process to achieve substantial leaps in computation density and energy efficiency. The design realistically accounts for the energy/area overhead of electronic/photonic on-chip conversions, integrations, and calibrations through holistic co-design methodologies. It targets linear-detection based massive multiple-input multiple-output (MIMO) decoding, which requires the inversion of channel matrices, as well as other emergent applications limited by linear-algebra computation capacity.
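The linear-detection MIMO decoding mentioned above reduces to a channel-matrix inversion. A minimal zero-forcing detector in NumPy illustrates the computation the accelerator is built to speed up; the antenna counts and noise level are illustrative assumptions, not figures from the paper:

```python
import numpy as np

def zero_forcing_detect(H, y):
    """Linear MIMO detection: x_hat = (H^H H)^{-1} H^H y, i.e. the
    channel-matrix inversion the accelerator targets."""
    return np.linalg.solve(H.conj().T @ H, H.conj().T @ y)

rng = np.random.default_rng(1)
n_rx, n_tx = 64, 16  # massive-MIMO style: many more antennas than streams
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=n_tx) + 0j       # BPSK symbols
y = H @ x + 0.01 * rng.standard_normal(n_rx)      # noisy received vector

x_hat = zero_forcing_detect(H, y)
print("symbols recovered:", bool(np.all(np.sign(x_hat.real) == x.real)))
```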

ChatGPT at the Speed of Light: Optical Comb-Based Monolithic Photonic-Electronic Linear-Algebra Accelerators

no code implementations19 Nov 2023 Tzu-Chien Hsueh, Yeshaiahu Fainman, Bill Lin

This paper proposes to leverage advanced monolithic silicon-photonics integrated-circuit manufacturing to build a system-on-chip photonic-electronic linear-algebra accelerator, featuring optical comb-based broadband incoherent photodetection and high-dimensional consecutive matrix-matrix multiplications, to enable substantial leaps in computation density and energy efficiency. Practical power/area overheads of photonic-electronic on-chip conversions, integrations, and calibrations are addressed through holistic co-design, with the goal of supporting the attention-head mechanisms of deep-learning neural networks used in Large Language Models and other emergent applications.
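The consecutive matrix-matrix multiplications referred to above are exactly the core of scaled dot-product attention: one product to form the scores, one to apply the weights. A minimal single-head NumPy sketch of that workload, with illustrative shapes:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: two consecutive matrix-matrix
    multiplications (Q @ K^T, then softmax weights @ V)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 16)) for _ in range(3))  # 8 tokens, dim 16
out = attention(Q, K, V)
print(out.shape)  # (8, 16)
```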

A Practical Recipe for Federated Learning Under Statistical Heterogeneity Experimental Design

1 code implementation28 Jul 2023 Mahdi Morafah, Weijia Wang, Bill Lin

Many existing works use inconsistent experimental settings, and there are no comprehensive studies on the effect of FL-specific experimental variables on the results, nor practical insights toward a more comparable and consistent FL experimental setup.

Experimental Design, Federated Learning

When Do Curricula Work in Federated Learning?

no code implementations ICCV 2023 Saeed Vahidian, Sreevatsank Kadaveru, Woonjoon Baek, Weijia Wang, Vyacheslav Kungurtsev, Chen Chen, Mubarak Shah, Bill Lin

Specifically, we aim to investigate how ordered learning principles can contribute to alleviating the heterogeneity effects in FL.

Federated Learning
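Ordered learning in its simplest form scores examples by difficulty and paces training from easy to hard. The sketch below illustrates that generic principle with hypothetical per-example losses from a scoring model; it is not the paper's specific federated curriculum:

```python
import numpy as np

def curriculum_order(losses):
    """Order examples from easy (low loss) to hard (high loss)."""
    return np.argsort(losses)

def paced_subset(order, step, total_steps, min_frac=0.2):
    """Linear pacing: start on the easiest min_frac of data, grow to all of it."""
    frac = min_frac + (1.0 - min_frac) * step / total_steps
    k = max(1, int(frac * len(order)))
    return order[:k]

# Hypothetical per-example losses from a pretrained scoring model.
losses = np.array([2.1, 0.3, 1.4, 0.1, 0.9])
order = curriculum_order(losses)
print(order.tolist())                                          # easiest first
print(paced_subset(order, step=0, total_steps=10).tolist())    # tiny easy subset
print(paced_subset(order, step=10, total_steps=10).tolist())   # all examples
```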

Neural Routing in Meta Learning

1 code implementation14 Oct 2022 Jicang Cai, Saeed Vahidian, Weijia Wang, Mohsen Joneidi, Bill Lin

Inspired by the widely recognized finding in neuroscience that distinct parts of the brain are highly specialized for different types of tasks, we aim to improve the model performance of the current meta learning algorithms by selectively using only parts of the model conditioned on the input tasks.

Meta-Learning
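Conditioning computation on the input task can be sketched as a router that activates only the top-k "expert" modules per task embedding, so only part of the model runs for each task. The code below is a hypothetical stand-in for such routing, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d = 4, 8
experts = rng.standard_normal((n_experts, d, d))  # candidate linear modules
router_w = rng.standard_normal((n_experts, d))    # router scoring weights

def route(task_emb, x, k=2):
    """Score experts on the task embedding, run only the top-k on x."""
    scores = router_w @ task_emb
    top = np.argsort(scores)[-k:]                 # indices of the k best experts
    out = sum(experts[i] @ x for i in top) / k    # average the selected experts
    return out, sorted(top.tolist())

task_emb, x = rng.standard_normal(d), rng.standard_normal(d)
y, used = route(task_emb, x)
print("experts used:", used)
```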

Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks

1 code implementation30 Sep 2022 Mahdi Morafah, Saeed Vahidian, Chen Chen, Mubarak Shah, Bill Lin

Though successful, federated learning presents new challenges for machine learning, especially when the issue of data heterogeneity, also known as Non-IID data, arises.

Federated Learning

Efficient Distribution Similarity Identification in Clustered Federated Learning via Principal Angles Between Client Data Subspaces

1 code implementation21 Sep 2022 Saeed Vahidian, Mahdi Morafah, Weijia Wang, Vyacheslav Kungurtsev, Chen Chen, Mubarak Shah, Bill Lin

This small set of principal vectors is provided to the server so that the server can directly identify distribution similarities among the clients to form clusters.

Federated Learning
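The subspace comparison can be sketched directly: each client sends the top left singular vectors of its data, and the server measures similarity via principal angles, whose cosines are the singular values of U1ᵀU2. A minimal NumPy illustration on synthetic client data (the clustering threshold and dimensions are assumptions):

```python
import numpy as np

def principal_angles(U1, U2):
    """Principal angles between subspaces with orthonormal bases U1, U2."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def orthonormal_basis(X, p=2):
    """Top-p left singular vectors of a client's data matrix -- the small
    set of principal vectors a client would send to the server."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :p]

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
U_a = orthonormal_basis(A)
U_b = orthonormal_basis(A + 0.01 * rng.standard_normal((20, 50)))  # similar client
U_c = orthonormal_basis(rng.standard_normal((20, 50)))             # unrelated client

# Similar data distributions -> small principal angles -> same cluster.
print(principal_angles(U_a, U_b).max() < principal_angles(U_a, U_c).max())
```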

FLIS: Clustered Federated Learning via Inference Similarity for Non-IID Data Distribution

1 code implementation20 Aug 2022 Mahdi Morafah, Saeed Vahidian, Weijia Wang, Bill Lin

Classical federated learning approaches yield significant performance degradation in the presence of Non-IID data distributions of participants.

Personalized Federated Learning
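Clustering by inference similarity can be sketched as comparing clients' predictions on a small shared dataset and thresholding the similarity. The prediction matrices and threshold below are illustrative toys, not the paper's procedure:

```python
import numpy as np

def inference_similarity(P1, P2):
    """Cosine similarity between two clients' prediction matrices on the
    same small server-side dataset."""
    a, b = P1.ravel(), P2.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical softmax predictions of three clients on 5 shared samples.
p0 = np.array([[0.9, 0.1]] * 5)   # clients 0 and 1 behave alike
p1 = np.array([[0.8, 0.2]] * 5)
p2 = np.array([[0.1, 0.9]] * 5)   # client 2 behaves differently

sims = [inference_similarity(p0, p1), inference_similarity(p0, p2)]
clusters = [0 if s > 0.9 else 1 for s in sims]   # threshold tau = 0.9
print(clusters)  # client 1 joins client 0's cluster; client 2 does not
```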

NeuCASL: From Logic Design to System Simulation of Neuromorphic Engines

no code implementations6 Aug 2022 Dharanidhar Dang, Amitash Nanda, Bill Lin, Debashis Sahoo

Neuromorphic computing is one such promising approach, with its brain-inspired circuitry, use of emerging technologies, and low-power nature.

Personalized Federated Learning by Structured and Unstructured Pruning under Data Heterogeneity

1 code implementation2 May 2021 Saeed Vahidian, Mahdi Morafah, Bill Lin

The traditional approach in FL tries to learn a single global model collaboratively with the help of many clients under the orchestration of a central server.

Personalized Federated Learning
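The single-global-model baseline described above is classical FedAvg: the server aggregates a data-size-weighted average of the client models. A minimal sketch with toy weight vectors:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server step of FedAvg: weight each client's model by its share of
    the total training data and average."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

# Three clients with different amounts of local data.
w = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
global_w = fedavg(w, client_sizes=[10, 10, 20])
print(global_w)  # [0.75 0.75]
```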

Learning Accurate and Interpretable Decision Rule Sets from Neural Networks

1 code implementation4 Mar 2021 Litao Qiao, Weijia Wang, Bill Lin

Each neuron in the first layer directly maps to an interpretable if-then rule after training, and the output neuron in the second layer directly maps to a disjunction of the first-layer rules to form the decision rule set.

General Classification
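The mapping described above can be sketched for binarized weights: each first-layer neuron's positive entries name the conditions of one conjunctive rule, and the output neuron ORs the rules into a rule set. The feature names and weight matrix below are hypothetical:

```python
import numpy as np

FEATURES = ["age>30", "income>50k", "owns_home"]

# Hypothetical binarized first-layer weights after training: one row per
# neuron, positive entries mark the conditions of its if-then rule.
W1 = np.array([[1, 1, 0],    # rule 1: age>30 AND income>50k
               [0, 0, 1]])   # rule 2: owns_home

def to_rules(W1, features):
    """Read each first-layer neuron as a conjunction of its conditions."""
    return [" AND ".join(f for f, w in zip(features, row) if w > 0) for row in W1]

def predict(W1, x):
    """Output neuron: the disjunction (OR) of the first-layer rules."""
    fired = (W1 @ x == W1.sum(axis=1))  # a rule fires iff all its conditions hold
    return bool(fired.any())

print(to_rules(W1, FEATURES))
print(predict(W1, np.array([1, 1, 0])))  # rule 1 fires -> True
```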

Asymptotic Optimality of Self-Representative Low-Rank Approximation and Its Applications

no code implementations1 Jan 2021 Saeed Vahidian, Mohsen Joneidi, Ashkan Esmaeili, Siavash Khodadadeh, Sharare Zehtabian, Ladislau Boloni, Nazanin Rahnavard, Bill Lin, Mubarak Shah

The approach is based on the concept of "self-rank", defined as the minimum number of samples needed to reconstruct all samples with an accuracy proportional to the rank-K approximation.
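A greedy sketch of the self-rank idea: select samples (columns) one at a time until every column is reconstructed from the selected set by least squares. This greedy least-squares loop is an illustration of the concept, not the paper's algorithm:

```python
import numpy as np

def self_rank_greedy(X, tol=1e-8):
    """Greedily pick columns of X until all columns are reconstructed
    (to tolerance) from the selected ones; the count approximates the
    'self-rank' of the data."""
    selected = []
    for _ in range(X.shape[1]):
        if selected:
            S = X[:, selected]
            coeffs, *_ = np.linalg.lstsq(S, X, rcond=None)
            resid = np.linalg.norm(X - S @ coeffs, axis=0)
        else:
            resid = np.linalg.norm(X, axis=0)
        worst = int(resid.argmax())
        if resid[worst] <= tol:
            break                      # every column is already reconstructed
        selected.append(worst)
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3)) @ rng.standard_normal((3, 30))  # rank-3 data
print(len(self_rank_greedy(X)))  # 3 samples reconstruct all 30
```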

Differentially-private Federated Neural Architecture Search

1 code implementation16 Jun 2020 Ishika Singh, Haoyi Zhou, Kunlin Yang, Meng Ding, Bill Lin, Pengtao Xie

To address this problem, we propose federated neural architecture search (FNAS), where different parties collectively search for a differentiable architecture by exchanging gradients of architecture variables without exposing their data to other parties.

Neural Architecture Search
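The gradient-exchange step can be sketched as the server averaging the parties' gradients of the shared architecture variables and updating them, with no raw data leaving any party. The learning rate and gradient values below are illustrative, and the update stands in for a DARTS-style architecture step:

```python
import numpy as np

def fnas_round(alpha, party_grads, lr=0.1):
    """One FNAS-style round: parties send gradients of the shared
    architecture variables alpha; the server averages them and updates."""
    g = np.mean(party_grads, axis=0)
    return alpha - lr * g

alpha = np.zeros(4)  # mixing weights over 4 candidate operations
party_grads = [np.array([1.0, -1.0, 0.0, 0.0]),   # party A's gradient
               np.array([1.0,  1.0, 0.0, 0.0])]   # party B's gradient
alpha = fnas_round(alpha, party_grads)
print(alpha)  # [-0.1  0.   0.   0. ]
```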
