Search Results for author: Guillaume Rabusseau

Found 38 papers, 11 papers with code

Simulating Weighted Automata over Sequences and Trees with Transformers

no code implementations • 12 Mar 2024 • Michael Rizvi, Maude Lizaire, Clara Lacroce, Guillaume Rabusseau

Recent work has shown that transformers can compactly simulate the sequential reasoning abilities of deterministic finite automata (DFAs).
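
For context, a weighted finite automaton (WFA) computes a function over sequences as a product of transition matrices, f(w) = α⊤ A_{w_1} ⋯ A_{w_n} ω; this is the kind of sequential computation the paper asks transformers to simulate. A minimal sketch (the toy automaton below is an illustrative example, not taken from the paper):

```python
import numpy as np

def wfa_evaluate(alpha, transitions, omega, word):
    """Compute f(w) = alpha^T A_{w_1} ... A_{w_n} omega for a WFA."""
    state = alpha
    for symbol in word:
        state = state @ transitions[symbol]  # one sequential step
    return state @ omega

# Toy 2-state WFA over {'a', 'b'} that counts occurrences of 'a'.
alpha = np.array([1.0, 0.0])
omega = np.array([0.0, 1.0])
A = {
    'a': np.array([[1.0, 1.0], [0.0, 1.0]]),  # increment the counter
    'b': np.eye(2),                           # leave it unchanged
}
print(wfa_evaluate(alpha, A, omega, "abab"))  # 2.0
```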

Generative Learning of Continuous Data by Tensor Networks

no code implementations • 31 Oct 2023 • Alex Meiburg, Jing Chen, Jacob Miller, Raphaëlle Tihon, Guillaume Rabusseau, Alejandro Perdomo-Ortiz

Beyond their origin in modeling many-body quantum systems, tensor networks have emerged as a promising class of models for solving machine learning problems, notably in unsupervised generative learning.

Automated Theorem Proving • Tensor Networks

Temporal Graph Benchmark for Machine Learning on Temporal Graphs

2 code implementations • NeurIPS 2023 • Shenyang Huang, Farimah Poursafaei, Jacob Danovitch, Matthias Fey, Weihua Hu, Emanuele Rossi, Jure Leskovec, Michael Bronstein, Guillaume Rabusseau, Reihaneh Rabbany

We present the Temporal Graph Benchmark (TGB), a collection of challenging and diverse benchmark datasets for realistic, reproducible, and robust evaluation of machine learning models on temporal graphs.

Node Property Prediction • Property Prediction
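
TGB evaluates dynamic link prediction with ranking metrics such as the mean reciprocal rank (MRR) of each true destination against sampled negatives. A self-contained sketch of that metric (illustrative only, not the TGB evaluator API):

```python
import numpy as np

def mrr(pos_scores, neg_scores):
    """Mean reciprocal rank of each positive edge vs. its negative samples.

    pos_scores: (E,) model scores for the true edges
    neg_scores: (E, K) scores for K sampled negative destinations per edge
    """
    # rank = 1 + number of negatives scored at least as high as the positive
    ranks = 1 + (neg_scores >= pos_scores[:, None]).sum(axis=1)
    return float(np.mean(1.0 / ranks))

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=100)        # positives tend to score higher
neg = rng.normal(0.0, 1.0, size=(100, 20))
print(mrr(pos, neg))
```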

Fast and Attributed Change Detection on Dynamic Graphs with Density of States

2 code implementations • 15 May 2023 • Shenyang Huang, Jacob Danovitch, Guillaume Rabusseau, Reihaneh Rabbany

Current solutions do not scale well to large real-world graphs, lack robustness to large amounts of node additions/deletions, and overlook changes in node attributes.

Change Detection • Change Point Detection
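
The method tracks the density of states (spectral density) of each snapshot's graph. A simplified sketch of that idea, comparing eigenvalue histograms of normalized Laplacians across snapshots; note that the paper's contribution is a fast approximation of this density, whereas the sketch below computes exact eigenvalues and therefore does not scale:

```python
import numpy as np

def normalized_laplacian(adj):
    deg = adj.sum(axis=1)
    d = np.where(deg > 0, deg ** -0.5, 0.0)
    return np.eye(len(adj)) - d[:, None] * adj * d[None, :]

def spectral_density(adj, bins=50):
    """Histogram of normalized-Laplacian eigenvalues (all lie in [0, 2])."""
    eigs = np.linalg.eigvalsh(normalized_laplacian(adj))
    return np.histogram(eigs, bins=bins, range=(0.0, 2.0), density=True)[0]

def change_scores(snapshots):
    """L1 distance between consecutive densities; spikes suggest change points."""
    dens = [spectral_density(a) for a in snapshots]
    return [np.abs(dens[t] - dens[t - 1]).sum() for t in range(1, len(dens))]
```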

Spectral Regularization: an Inductive Bias for Sequence Modeling

1 code implementation • 4 Nov 2022 • Kaiwen Hou, Guillaume Rabusseau

Various forms of regularization in learning tasks strive for different notions of simplicity.

Inductive Bias
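
A function over strings is computable by a WFA with few states exactly when its Hankel matrix has low rank (Fliess' theorem), so one natural differentiable proxy for this notion of simplicity is the nuclear norm of a finite Hankel sub-block assembled from model outputs. A hedged sketch of such a penalty (one reading of the idea; the paper's exact regularizer may differ):

```python
import torch

def hankel_nuclear_norm(score, prefixes, suffixes):
    """Nuclear norm of a finite Hankel sub-block as a spectral penalty.

    score: callable mapping a string to a differentiable scalar
    prefixes, suffixes: lists of strings indexing the sub-block
    """
    H = torch.stack([torch.stack([score(p + s) for s in suffixes])
                     for p in prefixes])
    # A low nuclear norm pushes the model toward functions computable
    # by WFAs with few states (low Hankel rank).
    return torch.linalg.matrix_norm(H, ord='nuc')
```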

Sequential Density Estimation via Nonlinear Continuous Weighted Finite Automata

no code implementations • 8 Jun 2022 • Tianyu Li, Bogdan Mazoure, Guillaume Rabusseau

Although WFAs have been extended to deal with continuous input data, namely continuous WFAs (CWFAs), it is still unclear how to approximate density functions over sequences of continuous random variables using WFA-based models, due to limitations on both the expressiveness of CWFAs and the tractability of approximating density functions with them.

Density Estimation

High-Order Pooling for Graph Neural Networks with Tensor Decomposition

no code implementations • 24 May 2022 • Chenqing Hua, Guillaume Rabusseau, Jian Tang

Graph Neural Networks (GNNs) are attracting growing attention due to their effectiveness and flexibility in modeling a variety of graph-structured data.

Graph Classification • Node Classification • +2

Lower and Upper Bounds on the Pseudo-Dimension of Tensor Network Models

no code implementations • NeurIPS 2021 • Behnoush Khavari, Guillaume Rabusseau

These results are used to derive a generalization bound which can be applied to classification with low rank matrices as well as linear classifiers based on any of the commonly used tensor decomposition models.

Tensor Decomposition
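
For reference, a pseudo-dimension-based generalization bound typically takes the following generic shape, into which the paper's upper bounds on d can be plugged (the constant c depends on the loss range; this is the standard uniform-convergence form, not the paper's exact statement):

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size m,
% every hypothesis h in a class of pseudo-dimension d satisfies
R(h) \;\le\; \widehat{R}(h) + c \sqrt{\frac{d \log(m/d) + \log(1/\delta)}{m}}
```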

Rademacher Random Projections with Tensor Networks

no code implementations • 26 Oct 2021 • Beheshteh T. Rakhshan, Guillaume Rabusseau

Random projections (RP) have recently emerged as popular techniques in the machine learning community for their ability to reduce the dimension of very high-dimensional tensors.

Tensor Networks
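
The key point is that each row of the projection can be given a rank-one tensor structure with ±1 entries, so it never has to be materialized at full size. A minimal sketch for an order-3 tensor (the paper analyzes richer tensor-network-structured maps; the construction and scaling below are simplified):

```python
import numpy as np

def rademacher_rank_one_projection(T, m, rng=None):
    """Project an order-3 tensor to R^m with rank-one Rademacher rows.

    Output coordinate i is <T, a_i ⊗ b_i ⊗ c_i> / sqrt(m) with i.i.d. ±1
    vectors a_i, b_i, c_i: storing them needs d1 + d2 + d3 values per row
    instead of d1 * d2 * d3 for a dense projection row.
    """
    rng = rng or np.random.default_rng()
    d1, d2, d3 = T.shape
    out = np.empty(m)
    for i in range(m):
        a = rng.choice([-1.0, 1.0], size=d1)
        b = rng.choice([-1.0, 1.0], size=d2)
        c = rng.choice([-1.0, 1.0], size=d3)
        out[i] = np.einsum('ijk,i,j,k->', T, a, b, c)
    return out / np.sqrt(m)
```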

Lower and Upper Bounds on the VC-Dimension of Tensor Network Models

no code implementations • 22 Jun 2021 • Behnoush Khavari, Guillaume Rabusseau

These results are used to derive a generalization bound which can be applied to classification with low rank matrices as well as linear classifiers based on any of the commonly used tensor decomposition models.

Tensor Decomposition

Extracting Weighted Automata for Approximate Minimization in Language Modelling

no code implementations • 5 Jun 2021 • Clara Lacroce, Prakash Panangaden, Guillaume Rabusseau

The objective is to obtain a weighted finite automaton (WFA) that fits within a given size constraint and which mimics the behaviour of the original model while minimizing some notion of distance between the black box and the extracted WFA.

Language Modelling
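
The classical spectral learning recipe behind this line of work recovers a WFA from finite Hankel blocks via a truncated SVD. A sketch of that standard recipe (shown for grounding; not necessarily the paper's exact extraction procedure):

```python
import numpy as np

def spectral_wfa(H, H_sigma, h_prefix, h_suffix, rank):
    """Recover a rank-`rank` WFA from finite Hankel blocks.

    H:        (P, S) block with H[p, s] = f(ps)
    H_sigma:  dict of (P, S) shifted blocks, H_sigma[a][p, s] = f(p a s)
    h_prefix: (P,) values f(p) (empty-suffix column of H)
    h_suffix: (S,) values f(s) (empty-prefix row of H)
    """
    U, d, Vt = np.linalg.svd(H, full_matrices=False)
    U, d, V = U[:, :rank], d[:rank], Vt[:rank].T
    pinv = np.diag(1.0 / d) @ U.T               # left inverse of U diag(d)
    alpha = V.T @ h_suffix                      # initial weights
    omega = pinv @ h_prefix                     # final weights
    A = {a: pinv @ Ha @ V for a, Ha in H_sigma.items()}
    return alpha, A, omega   # f(x) = alpha @ A[x1] @ ... @ A[xn] @ omega
```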

Assessing the Impact: Does an Improvement to a Revenue Management System Lead to an Improved Revenue?

no code implementations • 13 Jan 2021 • Greta Laage, Emma Frejinger, Andrea Lodi, Guillaume Rabusseau

This is a challenging problem as it corresponds to the difference between the generated value and the value that would have been generated had the system been kept as before.

counterfactual • Management

Quantum Tensor Networks, Stochastic Processes, and Weighted Automata

no code implementations • 20 Oct 2020 • Siddarth Srinivasan, Sandesh Adhikary, Jacob Miller, Guillaume Rabusseau, Byron Boots

We address this gap by showing how stationary or uniform versions of popular quantum tensor network models have equivalent representations in the stochastic processes and weighted automata literature, in the limit of infinitely long sequences.

Tensor Networks

Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning

no code implementations • 19 Oct 2020 • Tianyu Li, Doina Precup, Guillaume Rabusseau

In this paper, we present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks, which encompass a set of optimization techniques for high-order tensors used in quantum physics and numerical analysis.

Tensor Networks

A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix

2 code implementations • 7 Oct 2020 • Thang Doan, Mehdi Bennani, Bogdan Mazoure, Guillaume Rabusseau, Pierre Alquier

Continual learning (CL) is a setting in which an agent has to learn from an incoming stream of data during its entire lifetime.

Continual Learning

Laplacian Change Point Detection for Dynamic Graphs

1 code implementation • 2 Jul 2020 • Shenyang Huang, Yasmeen Hitti, Guillaume Rabusseau, Reihaneh Rabbany

To solve the above challenges, we propose Laplacian Anomaly Detection (LAD) which uses the spectrum of the Laplacian matrix of the graph structure at each snapshot to obtain low dimensional embeddings.

Anomaly Detection • Change Point Detection
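
A simplified version of this idea: embed each snapshot by the top singular values of its Laplacian and score it against the average of a short window of past embeddings (the paper's actual aggregation of "typical behaviour" is more refined than the mean used here):

```python
import numpy as np

def snapshot_embedding(adj, k):
    """Top-k singular values of the graph Laplacian, L2-normalized."""
    L = np.diag(adj.sum(axis=1)) - adj
    s = np.linalg.svd(L, compute_uv=False)[:k]
    return s / (np.linalg.norm(s) + 1e-12)

def anomaly_scores(snapshots, k=10, window=5):
    """1 - cosine similarity between a snapshot and recent typical behaviour."""
    embs = [snapshot_embedding(a, k) for a in snapshots]
    scores = []
    for t in range(1, len(embs)):
        typical = np.mean(embs[max(0, t - window):t], axis=0)
        typical /= np.linalg.norm(typical) + 1e-12
        scores.append(1.0 - float(embs[t] @ typical))
    return scores
```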

Tensorized Random Projections

no code implementations • 11 Mar 2020 • Beheshteh T. Rakhshan, Guillaume Rabusseau

We introduce a novel random projection technique for efficiently reducing the dimension of very high-dimensional tensors.

RandomNet: Towards Fully Automatic Neural Architecture Design for Multimodal Learning

no code implementations • 2 Mar 2020 • Stefano Alletto, Shenyang Huang, Vincent Francois-Lavet, Yohei Nakata, Guillaume Rabusseau

Almost all neural architecture search methods are evaluated in terms of the performance (i.e., test accuracy) of the model structures that they find.

Neural Architecture Search

Tensor Networks for Probabilistic Sequence Modeling

1 code implementation • 2 Mar 2020 • Jacob Miller, Guillaume Rabusseau, John Terilla

Tensor networks are a powerful modeling framework developed for computational many-body physics, which have only recently been applied within machine learning.

Language Modelling • Tensor Networks

Neural Architecture Search for Class-incremental Learning

no code implementations • 14 Sep 2019 • Shenyang Huang, Vincent François-Lavet, Guillaume Rabusseau

To understand how to expand a continual learner, we focus on the neural architecture design problem in the context of class-incremental learning: at each time step, the learner must optimize its performance on all classes observed so far by selecting the most competitive neural architecture.

Class Incremental Learning • Incremental Learning • +1

Clustering-Oriented Representation Learning with Attractive-Repulsive Loss

1 code implementation • 18 Dec 2018 • Kian Kenyon-Dean, Andre Cianflone, Lucas Page-Caccia, Guillaume Rabusseau, Jackie Chi Kit Cheung, Doina Precup

The standard loss function used to train neural network classifiers, categorical cross-entropy (CCE), seeks to maximize accuracy on the training data; building useful representations is not a necessary byproduct of this objective.

Clustering • General Classification • +1
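
In the spirit of the title, a loss of this family pulls each representation toward its own class prototype and pushes it away from the others. A hedged sketch (illustrative; the centroids, margin, and hinge below are modelling choices made here, not the paper's exact formulation):

```python
import torch

def attractive_repulsive_loss(z, y, centroids, margin=1.0):
    """z: (B, D) embeddings, y: (B,) integer labels, centroids: (C, D)."""
    dists = torch.cdist(z, centroids)            # (B, C) distances
    idx = torch.arange(len(y))
    attract = dists[idx, y]                      # pull toward own centroid
    mask = torch.ones_like(dists, dtype=torch.bool)
    mask[idx, y] = False
    # hinge: push at least `margin` away from every other centroid
    repel = torch.relu(margin - dists[mask].view(len(y), -1))
    return attract.mean() + repel.mean()
```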

Hierarchical Methods of Moments

1 code implementation • NeurIPS 2017 • Matteo Ruffini, Guillaume Rabusseau, Borja Balle

Spectral methods of moments provide a powerful tool for learning the parameters of latent variable models.

Tensor Decomposition

Sequential Coordination of Deep Models for Learning Visual Arithmetic

no code implementations • ICLR 2018 • Eric Crawford, Guillaume Rabusseau, Joelle Pineau

Achieving machine intelligence requires a smooth integration of perception and reasoning, yet models developed to date tend to specialize in one or the other; sophisticated manipulation of symbols acquired from rich perceptual spaces has so far proved elusive.

Connecting Weighted Automata and Recurrent Neural Networks through Spectral Learning

no code implementations • 4 Jul 2018 • Guillaume Rabusseau, Tianyu Li, Doina Precup

In this paper, we unravel a fundamental connection between weighted finite automata (WFAs) and second-order recurrent neural networks (2-RNNs): in the case of sequences of discrete symbols, WFAs and 2-RNNs with linear activation functions are expressively equivalent.
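
The equivalence is concrete: stacking the WFA transition matrices into an order-3 tensor gives a linear 2-RNN whose bilinear update over one-hot inputs reproduces the WFA's matrix products exactly. A small numeric check (toy dimensions, illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 2                       # hidden size, alphabet size
A = {a: rng.normal(size=(n, n)) for a in range(k)}   # WFA transitions
alpha, omega = rng.normal(size=n), rng.normal(size=n)

# The same machine as a linear 2-RNN: one order-3 tensor whose slice
# along the input mode for symbol a is the matrix A[a].
T = np.stack([A[a] for a in range(k)], axis=1)        # shape (n, k, n)

word = [0, 1, 1, 0]

# WFA computation: alpha^T A_{w1} ... A_{wn} omega
h = alpha
for a in word:
    h = h @ A[a]
f_wfa = h @ omega

# Linear 2-RNN with one-hot inputs: h_t = T x_1 h_{t-1} x_2 e_{w_t}
h = alpha
for a in word:
    x = np.eye(k)[a]
    h = np.einsum('i,ijk,j->k', h, T, x)
f_rnn = h @ omega

assert np.allclose(f_wfa, f_rnn)
print(f_wfa, f_rnn)
```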

Learning Graph Weighted Models on Pictures

no code implementations • 21 Jun 2018 • Philip Amortila, Guillaume Rabusseau

Graph Weighted Models (GWMs) have recently been proposed as a natural generalization of weighted automata over strings and trees to arbitrary families of labeled graphs (and hypergraphs).

Tensor Regression Networks with various Low-Rank Tensor Approximations

2 code implementations • 27 Dec 2017 • Xingwei Cao, Guillaume Rabusseau

We evaluate the compressive and regularization performances of the proposed model with both deep and shallow convolutional neural networks.

regression

Multitask Spectral Learning of Weighted Automata

no code implementations • NeurIPS 2017 • Guillaume Rabusseau, Borja Balle, Joelle Pineau

We first present a natural notion of relatedness between WFAs by considering the extent to which several WFAs can share a common underlying representation.

On overfitting and asymptotic bias in batch reinforcement learning with partial observability

no code implementations • 22 Sep 2017 • Vincent Francois-Lavet, Guillaume Rabusseau, Joelle Pineau, Damien Ernst, Raphael Fonteneau

This paper provides an analysis of the tradeoff between asymptotic bias (suboptimality with unlimited data) and overfitting (additional suboptimality due to limited data) in the context of reinforcement learning with partial observability.

reinforcement-learning • Reinforcement Learning (RL)

Neural Network Based Nonlinear Weighted Finite Automata

no code implementations • 13 Sep 2017 • Tianyu Li, Guillaume Rabusseau, Doina Precup

Weighted finite automata (WFA) can expressively model functions defined over strings but are inherently linear models.

Low-Rank Regression with Tensor Responses

no code implementations • NeurIPS 2016 • Guillaume Rabusseau, Hachem Kadri

This paper proposes an efficient algorithm (HOLRR) to handle regression tasks where the outputs have a tensor structure.

regression
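
HOLRR constrains the regression coefficient tensor to low multilinear rank. As a rough approximation of that objective (a two-step sketch, not the paper's algorithm), one can fit an unconstrained ridge regression and then truncate the coefficient tensor with a sequential HOSVD:

```python
import numpy as np

def truncated_hosvd(W, ranks):
    """Truncate a tensor to the given multilinear ranks (sequential HOSVD)."""
    core, factors = W, []
    for mode, r in enumerate(ranks):
        mat = np.moveaxis(core, mode, 0).reshape(core.shape[mode], -1)
        U = np.linalg.svd(mat, full_matrices=False)[0][:, :r]
        factors.append(U)
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    for mode, U in enumerate(factors):   # expand back to the original shape
        core = np.moveaxis(
            np.tensordot(U, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core

def low_rank_tensor_regression(X, Y, ranks, lam=1e-3):
    """X: (N, d) inputs, Y: (N, p1, p2, ...) tensor responses.

    Two-step sketch: ridge solution, then multilinear-rank truncation.
    """
    N, d = X.shape
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y.reshape(N, -1))
    return truncated_hosvd(W.reshape((d,) + Y.shape[1:]), ranks)
```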

Higher-Order Low-Rank Regression

no code implementations • 22 Feb 2016 • Guillaume Rabusseau, Hachem Kadri

This paper proposes an efficient algorithm (HOLRR) to handle regression tasks where the outputs have a tensor structure.

regression

Low-Rank Approximation of Weighted Tree Automata

no code implementations • 4 Nov 2015 • Guillaume Rabusseau, Borja Balle, Shay B. Cohen

We describe a technique to minimize weighted tree automata (WTA), a powerful formalism that subsumes probabilistic context-free grammars (PCFGs) and latent-variable PCFGs.

Learning Negative Mixture Models by Tensor Decompositions

no code implementations • 17 Mar 2014 • Guillaume Rabusseau, François Denis

Building upon a recent paper on tensor decompositions for learning latent variable models, we extend this work to the broader setting of tensors having a symmetric decomposition with positive and negative weights.
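
In symbols, the setting is a symmetric decomposition with signed weights, e.g. for an order-3 tensor:

```latex
T \;=\; \sum_{i=1}^{k} w_i \, a_i \otimes a_i \otimes a_i,
\qquad a_i \in \mathbb{R}^d,\ \ w_i \in \mathbb{R}\ \text{(possibly negative)}
```

Classical moment-based methods for mixtures assume all w_i > 0; allowing negative weights is what makes the mixture "negative".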
