Search Results for author: Bohang Zhang

Found 10 papers, 7 papers with code

Do Efficient Transformers Really Save Computation?

no code implementations 21 Feb 2024 Kai Yang, Jan Ackermann, Zhenyu He, Guhao Feng, Bohang Zhang, Yunzhen Feng, Qiwei Ye, Di He, LiWei Wang

Our results show that while these models are expressive enough to solve general DP tasks, contrary to expectations, they require a model size that scales with the problem size.

Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness

1 code implementation 16 Jan 2024 Bohang Zhang, Jingchu Gai, Yiheng Du, Qiwei Ye, Di He, LiWei Wang

Specifically, we identify a fundamental expressivity measure termed homomorphism expressivity, which quantifies the ability of GNN models to count graphs under homomorphism.

Graph Learning, Subgraph Counting
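For intuition about the quantity this framework is built on: a homomorphism from a pattern graph F into a host graph G is a vertex map that sends every edge of F to an edge of G, and homomorphism expressivity asks which patterns a GNN can count. A minimal brute-force sketch (illustrative only, not the paper's released code; all names here are mine):

```python
from itertools import product

def count_homomorphisms(F_edges, F_nodes, G_edges, G_nodes):
    """Count homomorphisms from pattern graph F into host graph G.

    A homomorphism is a map phi: V(F) -> V(G) such that every edge
    (u, v) of F lands on an edge (phi(u), phi(v)) of G.
    Brute force over all |V(G)|^|V(F)| maps: fine only for tiny F.
    """
    G_edge_set = {frozenset(e) for e in G_edges}
    count = 0
    for phi in product(G_nodes, repeat=len(F_nodes)):
        mapping = dict(zip(F_nodes, phi))
        if all(frozenset((mapping[u], mapping[v])) in G_edge_set
               for u, v in F_edges):
            count += 1
    return count

# hom(triangle, K4): every injective map of 3 vertices into K4 works,
# so the count is 4 * 3 * 2 = 24.
tri_edges, tri_nodes = [(0, 1), (1, 2), (2, 0)], [0, 1, 2]
k4_nodes = [0, 1, 2, 3]
k4_edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(count_homomorphisms(tri_edges, tri_nodes, k4_edges, k4_nodes))  # 24
```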

Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective

no code implementations NeurIPS 2023 Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, LiWei Wang

By using circuit complexity theory, we first give impossibility results showing that bounded-depth Transformers are unable to directly produce correct answers for basic arithmetic/equation tasks unless the model size grows super-polynomially with respect to the input length.

Decision Making, Math

Rethinking the Expressive Power of GNNs via Graph Biconnectivity

1 code implementation 23 Jan 2023 Bohang Zhang, Shengjie Luo, LiWei Wang, Di He

In this paper, we take a fundamentally different perspective to study the expressive power of GNNs beyond the WL test.
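Biconnectivity is a natural yardstick here because cut vertices and bridges are classical graph properties computable in linear time, yet (as the paper shows) invisible to the standard WL test. A small sketch of the properties in question, using networkx (assumed available; the example graph is mine):

```python
import networkx as nx

# Two triangles sharing a single vertex: the graph is connected,
# but vertex 2 is a cut vertex, so it is not biconnected.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])

print(sorted(nx.articulation_points(G)))  # [2]
print(list(nx.bridges(G)))                # [] -- every edge lies on a cycle
print(nx.is_biconnected(G))               # False
```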

Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective

1 code implementation 4 Oct 2022 Bohang Zhang, Du Jiang, Di He, LiWei Wang

Designing neural networks with bounded Lipschitz constant is a promising way to obtain certifiably robust classifiers against adversarial examples.

Robust classification
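The standard route from a Lipschitz bound to a certificate goes through the output margin: if every logit is L-Lipschitz with respect to the ℓ∞ input norm, a perturbation of size ε moves each logit by at most Lε, so the top-1 prediction cannot flip while the margin exceeds 2Lε. A generic sketch of that certificate (my function name; not this paper's specific construction):

```python
import numpy as np

def certified_radius(logits: np.ndarray, lipschitz_const: float) -> float:
    """Certified l_inf radius from the output margin.

    If each logit is lipschitz_const-Lipschitz w.r.t. the l_inf input
    norm, an eps-perturbation changes every logit by at most
    lipschitz_const * eps, so the prediction is unchanged while
    margin > 2 * lipschitz_const * eps.
    """
    sorted_logits = np.sort(logits)
    margin = sorted_logits[-1] - sorted_logits[-2]
    return margin / (2.0 * lipschitz_const)

# Example: a margin of 1.2 with a 1-Lipschitz network certifies radius 0.6.
print(certified_radius(np.array([3.2, 2.0, -1.0]), lipschitz_const=1.0))
```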

Non-convex Distributionally Robust Optimization: Non-asymptotic Analysis

no code implementations NeurIPS 2021 Jikai Jin, Bohang Zhang, Haiyang Wang, LiWei Wang

Distributionally robust optimization (DRO) is a widely used approach to learning models that are robust against distribution shift.
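As one concrete instance of a DRO objective (an illustrative sketch, not necessarily the formulation analyzed in this paper), the CVaR form averages the loss over the hardest α-fraction of a batch, which equals the worst-case expected loss over all reweightings of the data whose density with respect to the empirical distribution is bounded by 1/α:

```python
import torch

def cvar_dro_loss(per_sample_losses: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """CVaR form of a DRO objective: mean loss over the worst
    alpha-fraction of the batch (the adversary can upweight any
    example by at most a factor of 1 / alpha).
    """
    k = max(1, int(alpha * per_sample_losses.numel()))
    worst, _ = torch.topk(per_sample_losses, k)
    return worst.mean()

# Usage inside a training step (model, optimizer, x, y are assumed):
# losses = torch.nn.functional.cross_entropy(model(x), y, reduction="none")
# loss = cvar_dro_loss(losses, alpha=0.1)
# loss.backward()
```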

Boosting the Certified Robustness of L-infinity Distance Nets

2 code implementations ICLR 2022 Bohang Zhang, Du Jiang, Di He, LiWei Wang

Recently, Zhang et al. (2021) developed a new neural network architecture based on $\ell_\infty$-distance functions, which naturally possesses certified $\ell_\infty$ robustness by its construction.
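The basic building block of that architecture can be sketched in a few lines of PyTorch (a minimal re-derivation from the formula $\|x - w\|_\infty + b$, not the authors' released code; the class name is mine):

```python
import torch
import torch.nn as nn

class LinfDistNeuronLayer(nn.Module):
    """A layer of l_inf-distance neurons (after Zhang et al., 2021):
    unit j outputs ||x - w_j||_inf + b_j. By the triangle inequality
    this map is 1-Lipschitz w.r.t. the l_inf norm, so a stack of such
    layers is 1-Lipschitz by construction.
    """
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> (batch, out_features)
        diff = x.unsqueeze(1) - self.weight.unsqueeze(0)
        return diff.abs().amax(dim=-1) + self.bias
```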

Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons

2 code implementations 10 Feb 2021 Bohang Zhang, Tianle Cai, Zhou Lu, Di He, LiWei Wang

This directly provides a rigorous guarantee of the certified robustness based on the margin of prediction outputs.
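That margin-based guarantee is easy to state in code: for a network that is 1-Lipschitz with respect to ℓ∞, a prediction is certified at radius ε whenever the margin exceeds 2ε. A batch-level sketch (function name mine, not the paper's evaluation code):

```python
import torch

def certified_at_eps(logits: torch.Tensor, labels: torch.Tensor,
                     eps: float) -> torch.Tensor:
    """For a network that is 1-Lipschitz w.r.t. l_inf (e.g. an
    l_inf-dist net), a prediction is certifiably robust at radius eps
    whenever f_y - max_{j != y} f_j > 2 * eps.
    Returns a boolean mask over the batch.
    """
    correct = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    runner_up = logits.scatter(1, labels.unsqueeze(1),
                               float("-inf")).amax(dim=1)
    return (correct - runner_up) > 2 * eps
```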

Improved Analysis of Clipping Algorithms for Non-convex Optimization

1 code implementation NeurIPS 2020 Bohang Zhang, Jikai Jin, Cong Fang, LiWei Wang

Gradient clipping is commonly used in training deep neural networks, partly because of its effectiveness in mitigating the exploding gradient problem.
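For completeness, the clipping operation itself: rescale all gradients by min(1, c / ||g||) so the global gradient norm never exceeds c. A from-scratch sketch (in practice torch.nn.utils.clip_grad_norm_ does the same job):

```python
import torch

def clip_grad_global_norm(parameters, max_norm: float) -> float:
    """Rescale all gradients in place by min(1, max_norm / ||g||),
    where ||g|| is the l2 norm of all gradients concatenated.
    Mirrors torch.nn.utils.clip_grad_norm_; returns the unclipped norm.
    """
    grads = [p.grad for p in parameters if p.grad is not None]
    if not grads:
        return 0.0
    total_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    scale = min(1.0, max_norm / (float(total_norm) + 1e-12))
    for g in grads:
        g.mul_(scale)
    return float(total_norm)
```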
