Search Results for author: Bo Xie

Found 11 papers, 3 papers with code

Effects of Different Prompts on the Quality of GPT-4 Responses to Dementia Care Questions

no code implementations • 5 Apr 2024 • Zhuochun Li, Bo Xie, Robin Hilsabeck, Alyssa Aguirre, Ning Zou, Zhimeng Luo, Daqing He

Evidence suggests that different prompts lead large language models (LLMs) to generate responses with varying quality.

Chiral Decomposition of Twisted Graphene Multilayers with Arbitrary Stacking

no code implementations • 22 Dec 2020 • ShengNan Zhang, Bo Xie, QuanSheng Wu, Jianpeng Liu, Oleg V. Yazyev

We formulate the chiral decomposition rules that govern the electronic structure of a broad family of twisted $N+M$ multilayer graphene configurations that combine arbitrary stacking order and a mutual twist.

Mesoscale and Nanoscale Physics · Materials Science · Strongly Correlated Electrons

On the Complexity of Learning Neural Networks

no code implementations • NeurIPS 2017 • Le Song, Santosh Vempala, John Wilmes, Bo Xie

Moreover, this hard family of functions is realizable with a small (sublinear in dimension) number of activation units in the single hidden layer.

Deep Semi-Random Features for Nonlinear Function Approximation

1 code implementation • 28 Feb 2017 • Kenji Kawaguchi, Bo Xie, Vikas Verma, Le Song

For deep models, with no unrealistic assumptions, we prove universal approximation ability, a lower bound on approximation error, a partial optimization guarantee, and a generalization bound.
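The core idea behind semi-random features can be sketched in a few lines. The following is a minimal illustration, not the paper's reference implementation: each hidden unit gates a trainable linear response by the sign pattern of a fixed random projection, so the activation pattern is random and untrained while the adjustable part stays linear in the weights. All variable names and the layer shape here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def semi_random_layer(X, R, W):
    # Each unit gates a trainable linear response (X @ W) by the
    # activation pattern of a fixed random projection (X @ R).
    gate = (X @ R > 0).astype(float)   # random, untrained switching
    return gate * (X @ W)              # output is linear in the trainable W

d, m, n = 5, 16, 100
X = rng.standard_normal((n, d))
R = rng.standard_normal((d, m))        # frozen random weights
W = rng.standard_normal((d, m)) * 0.1  # trainable weights
H = semi_random_layer(X, R, W)
print(H.shape)  # (100, 16)
```

Because the output is linear in `W` for a fixed gating pattern, training a single such layer with squared loss is a convex problem, which is what makes guarantees of this kind tractable.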

Diverse Neural Network Learns True Target Functions

no code implementations • 9 Nov 2016 • Bo Xie, Yingyu Liang, Le Song

In this paper, we answer these questions by analyzing one-hidden-layer neural networks with ReLU activation, and show that despite the non-convexity, neural networks with diverse units have no spurious local minima.
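The model class analyzed above is small enough to write down directly. This is a hedged sketch of a one-hidden-layer ReLU network in the realizable setting the paper studies (labels generated by a network of the same form), with all sizes and names chosen for illustration; the paper's "diversity" condition on the hidden units is not enforced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu_net(X, W, a):
    # One hidden layer: f(x) = sum_k a_k * max(0, w_k . x)
    return np.maximum(X @ W, 0.0) @ a

d, k, n = 4, 8, 200
W = rng.standard_normal((d, k))   # hidden-unit directions
a = rng.standard_normal(k)        # output weights
X = rng.standard_normal((n, d))
y = relu_net(X, W, a)             # realizable labels: generated by the network

# At the true parameters the squared loss is exactly zero -- the global
# minimum that, under the paper's diversity condition, gradient-based
# training is shown to reach despite non-convexity.
loss = np.mean((relu_net(X, W, a) - y) ** 2)
print(loss)  # 0.0
```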


Scale Up Nonlinear Component Analysis with Doubly Stochastic Gradients

no code implementations • NeurIPS 2015 • Bo Xie, Yingyu Liang, Le Song

We propose a simple, computationally efficient, and memory-friendly algorithm based on "doubly stochastic gradients" to scale up a range of kernel nonlinear component analysis methods, such as kernel PCA, CCA, and SVD.
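The "doubly stochastic" name refers to two sources of randomness: random data points and random features. The sketch below illustrates the kernel PCA case under simplifying assumptions of my own: it uses a fixed random Fourier feature map (the paper redraws features at every step and tracks coefficients functionally) and an Oja-style update on minibatches. All names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def rff(X, Omega, b):
    # Random Fourier features approximating an RBF kernel
    return np.sqrt(2.0 / Omega.shape[1]) * np.cos(X @ Omega + b)

d, D, k = 3, 64, 2          # input dim, num random features, num components
Omega = rng.standard_normal((d, D))
b = rng.uniform(0, 2 * np.pi, D)

V = np.linalg.qr(rng.standard_normal((D, k)))[0]  # current components
for t in range(200):
    batch = rng.standard_normal((32, d))          # randomness 1: data points
    Z = rff(batch, Omega, b)                      # randomness 2: random features
    Z = Z - Z.mean(axis=0)                        # center in feature space
    G = Z.T @ (Z @ V) / len(Z)                    # stochastic covariance action
    V, _ = np.linalg.qr(V + 0.1 * G)              # Oja update + re-orthonormalize

print(V.shape)  # (64, 2)
```

Note that no kernel matrix is ever formed: memory is O(D·k) regardless of how many points are streamed, which is the point of the approach.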

Communication Efficient Distributed Kernel Principal Component Analysis

no code implementations • 23 Mar 2015 • Maria-Florina Balcan, Yingyu Liang, Le Song, David Woodruff, Bo Xie

Can we perform kernel PCA on the entire dataset in a distributed and communication efficient fashion while maintaining provable and strong guarantees in solution quality?

Scalable Kernel Methods via Doubly Stochastic Gradients

1 code implementation • NeurIPS 2014 • Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina Balcan, Le Song

The general perception is that kernel methods are not scalable, and neural nets are the methods of choice for nonlinear learning problems.
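To see why random features make kernel methods scalable in the supervised setting as well, here is a toy regression sketch: SGD over random Fourier features approximates kernel ridge regression without ever building the n×n kernel matrix. This is my own minimal illustration, not the paper's algorithm (which additionally redraws the random features at each step); the target function, learning rate, and sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def rff(X, Omega, b):
    # Random Fourier features approximating an RBF kernel
    return np.sqrt(2.0 / Omega.shape[1]) * np.cos(X @ Omega + b)

# Toy 1-D regression problem
n, D = 500, 128
X = rng.uniform(-3, 3, (n, 1))
y = np.sin(X[:, 0])

Omega = rng.standard_normal((1, D))
b = rng.uniform(0, 2 * np.pi, D)
w = np.zeros(D)
for t in range(2000):
    i = rng.integers(n)                 # randomness 1: sample a data point
    z = rff(X[i:i+1], Omega, b)[0]      # randomness 2: random-feature embedding
    err = z @ w - y[i]
    w -= 0.05 * (err * z + 1e-4 * w)    # SGD on regularized squared loss

pred = rff(X, Omega, b) @ w
print(np.mean((pred - y) ** 2))         # training error shrinks toward zero
```

Per-step cost is O(D) and memory is O(D), independent of n, which is what lets kernel-style models compete with neural nets at scale.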

Nonparametric Estimation of Multi-View Latent Variable Models

no code implementations • 13 Nov 2013 • Le Song, Animashree Anandkumar, Bo Dai, Bo Xie

We establish that the sample complexity for the proposed method is quadratic in the number of latent components and is a low order polynomial in the other relevant parameters.
