Search Results for author: Thang Bui

Found 13 papers, 5 papers with code

Measuring Sharpness in Grokking

1 code implementation • 14 Feb 2024 • Jack Miller, Patrick Gleeson, Charles O'Neill, Thang Bui, Noam Levi

Neural networks sometimes exhibit grokking, a phenomenon where perfect or near-perfect performance is achieved on a validation set well after the same performance has been obtained on the corresponding training set.

Grokking Beyond Neural Networks: An Empirical Exploration with Model Complexity

1 code implementation • 26 Oct 2023 • Jack Miller, Charles O'Neill, Thang Bui

In some settings neural networks exhibit a phenomenon known as grokking, where they achieve perfect or near-perfect accuracy on the validation set long after the same performance has been achieved on the training set.

regression

Adversarial Fine-Tuning of Language Models: An Iterative Optimisation Approach for the Generation and Detection of Problematic Content

no code implementations • 26 Aug 2023 • Charles O'Neill, Jack Miller, Ioana Ciuca, Yuan-Sen Ting, Thang Bui

The performance of our approach is evaluated through classification accuracy on a dataset consisting of problematic prompts not detected by GPT-4, as well as a selection of contentious but unproblematic prompts.

Steering Language Generation: Harnessing Contrastive Expert Guidance and Negative Prompting for Coherent and Diverse Synthetic Data Generation

no code implementations • 15 Aug 2023 • Charles O'Neill, Yuan-Sen Ting, Ioana Ciuca, Jack Miller, Thang Bui

Large Language Models (LLMs) hold immense potential to generate synthetic data of high quality and utility, which has numerous applications from downstream model training to practical data utilisation.

Comment Generation • Synthetic Data Generation • +1

q-Paths: Generalizing the Geometric Annealing Path using Power Means

1 code implementation • 1 Jul 2021 • Vaden Masrani, Rob Brekelmans, Thang Bui, Frank Nielsen, Aram Galstyan, Greg Ver Steeg, Frank Wood

Many common machine learning methods involve the geometric annealing path, a sequence of intermediate densities between two distributions of interest constructed using the geometric average.

Bayesian Inference
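
The geometric path mentioned in the abstract above, and its power-mean (q-path) generalisation, have simple closed forms. The sketch below is an illustrative NumPy implementation based on the power-mean formulation, not the authors' released code; the function names and example densities are assumptions made here.

import numpy as np

def geometric_path(pi0, pi1, beta):
    # Geometric annealing path: pi_beta proportional to pi0^(1-beta) * pi1^beta.
    return pi0 ** (1.0 - beta) * pi1 ** beta

def q_path(pi0, pi1, beta, q):
    # Power-mean (q-)path: ((1-beta)*pi0^(1-q) + beta*pi1^(1-q))^(1/(1-q)),
    # which recovers the geometric path in the limit q -> 1.
    if np.isclose(q, 1.0):
        return geometric_path(pi0, pi1, beta)
    return ((1.0 - beta) * pi0 ** (1.0 - q) + beta * pi1 ** (1.0 - q)) ** (1.0 / (1.0 - q))

# Example: interpolate between two unnormalized Gaussian densities on a grid.
x = np.linspace(-5.0, 5.0, 200)
pi0 = np.exp(-0.5 * x ** 2)            # base: N(0, 1), unnormalized
pi1 = np.exp(-0.5 * (x - 3.0) ** 2)    # target: N(3, 1), unnormalized
mid_geometric = geometric_path(pi0, pi1, beta=0.5)
mid_q = q_path(pi0, pi1, beta=0.5, q=0.9)

In this parameterisation q = 0 gives the arithmetic mixture of the two endpoints, while q -> 1 recovers the usual geometric path.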

Annealed Importance Sampling with q-Paths

2 code implementations • NeurIPS Workshop DL-IG 2020 • Rob Brekelmans, Vaden Masrani, Thang Bui, Frank Wood, Aram Galstyan, Greg Ver Steeg, Frank Nielsen

Annealed importance sampling (AIS) is the gold standard for estimating partition functions or marginal likelihoods, corresponding to importance sampling over a path of distributions between a tractable base and an unnormalized target.
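
As a rough illustration of the procedure described above, the sketch below runs AIS along a plain geometric path with random-walk Metropolis transitions on a 1-D unnormalized target. It is a minimal, assumption-laden example (the target, step sizes, and function names are made up here), not the paper's q-path implementation.

import numpy as np

rng = np.random.default_rng(0)

def log_base(x):
    # Tractable base: standard normal, log density up to an additive constant.
    return -0.5 * x ** 2

def log_target(x):
    # Unnormalized target: a narrower Gaussian centred at 2 (illustrative choice).
    return -0.5 * ((x - 2.0) / 0.5) ** 2

def log_pi(x, beta):
    # Geometric path between base and target.
    return (1.0 - beta) * log_base(x) + beta * log_target(x)

def ais_log_weights(n_samples=1000, n_steps=50, step_size=0.5):
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.standard_normal(n_samples)      # exact samples from the base
    logw = np.zeros(n_samples)
    for k in range(1, n_steps + 1):
        # Accumulate the incremental importance weight for this temperature.
        logw += log_pi(x, betas[k]) - log_pi(x, betas[k - 1])
        # One random-walk Metropolis step targeting pi_{beta_k}.
        prop = x + step_size * rng.standard_normal(n_samples)
        accept = np.log(rng.uniform(size=n_samples)) < log_pi(prop, betas[k]) - log_pi(x, betas[k])
        x = np.where(accept, prop, x)
    return logw

logw = ais_log_weights()
# Estimate of log(Z_target / Z_base), i.e. the log ratio of normalizing constants.
log_ratio = np.logaddexp.reduce(logw) - np.log(len(logw))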

Learning Attribute-Based and Relationship-Based Access Control Policies with Unknown Values

no code implementations • 19 Aug 2020 • Thang Bui, Scott D. Stoller

Attribute-Based Access Control (ABAC) and Relationship-Based Access Control (ReBAC) provide a high level of expressiveness and flexibility that promotes security and information sharing, by allowing policies to be expressed in terms of attributes of, and chains of relationships between, entities.

Attribute

Gaussian Process Meta-Representations Of Neural Networks

no code implementations • 25 Sep 2019 • Theofanis Karaletsos, Thang Bui

Bayesian inference offers a theoretically grounded and general way to train neural networks and can potentially give calibrated uncertainty.

Active Learning • Bayesian Inference • +1

A Decision Tree Learning Approach for Mining Relationship-Based Access Control Policies

no code implementations • 24 Sep 2019 • Thang Bui, Scott D. Stoller

Relationship-based access control (ReBAC) provides a high level of expressiveness and flexibility that promotes security and information sharing, by allowing policies to be expressed in terms of chains of relationships between entities.

Cryptography and Security

Black-box α-divergence Minimization

3 code implementations • 10 Nov 2015 • José Miguel Hernández-Lobato, Yingzhen Li, Mark Rowland, Daniel Hernández-Lobato, Thang Bui, Richard E. Turner

Black-box alpha (BB-α) is a new approximate inference method based on the minimization of α-divergences.

General Classification • regression
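
For reference, Amari's α-divergence between a target p and an approximation q can be estimated by simple Monte Carlo. The sketch below illustrates only that quantity; it is not the BB-α algorithm itself, and the helper names are made up for this example.

import numpy as np

def gaussian_logpdf(x, mean, std):
    # Log density of N(mean, std^2).
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2.0 * np.pi))

def alpha_divergence_mc(logp, q_mean, q_std, alpha, n_samples=100_000, seed=0):
    # D_alpha(p || q) = (1 - E_q[(p(x)/q(x))^alpha]) / (alpha * (1 - alpha)),
    # estimated with samples drawn from the Gaussian approximation q.
    rng = np.random.default_rng(seed)
    x = rng.normal(q_mean, q_std, size=n_samples)
    log_ratio = logp(x) - gaussian_logpdf(x, q_mean, q_std)
    integral = np.mean(np.exp(alpha * log_ratio))
    return (1.0 - integral) / (alpha * (1.0 - alpha))

# Example: p = N(1, 1), q = N(0, 1); alpha = 0.5 is the symmetric (Hellinger-like) case.
d = alpha_divergence_mc(lambda x: gaussian_logpdf(x, 1.0, 1.0), 0.0, 1.0, alpha=0.5)

Varying α interpolates between behaviour resembling variational inference (α → 0) and expectation propagation (α = 1), which is the trade-off the BB-α energy exposes.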
