Search Results for author: Julian G. Zilly

Found 2 papers, 0 papers with code

The Negative Pretraining Effect in Sequential Deep Learning and Three Ways to Fix It

no code implementations · 1 Jan 2021 · Julian G. Zilly, Franziska Eckert, Bhairav Mehta, Andrea Censi, Emilio Frazzoli

Negative pretraining is a prominent sequential learning effect in neural networks, in which a pretrained model generalizes worse on a target task than a model trained on that task from scratch.

Today Me, Tomorrow Thee: Efficient Resource Allocation in Competitive Settings using Karma Games

no code implementations · 22 Jul 2019 · Andrea Censi, Saverio Bolognani, Julian G. Zilly, Shima Sadat Mousavi, Emilio Frazzoli

We present a new type of coordination mechanism among multiple agents for the allocation of a finite resource, such as time slots for passing an intersection.
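As a rough illustration of the karma idea, here is a minimal sketch of one allocation round: two agents bid karma for a single resource, the higher bidder wins, and the winner's bid is transferred to the loser so karma is conserved and yielding today buys priority tomorrow. This is an assumption-laden toy (agent names, tie-breaking, and the exact transfer rule are illustrative), not the mechanism specified in the paper.

```python
# Toy karma allocation round (illustrative sketch, not the paper's mechanism).
# Two agents bid karma for one resource; the higher bidder wins and pays its
# bid to the loser, so total karma is conserved across the round.

def karma_round(karma, bids):
    """Allocate one resource between two agents given their karma bids.

    karma: dict agent -> current karma balance
    bids:  dict agent -> bid (must not exceed the agent's balance)
    Returns (winner, updated karma dict).
    """
    a, b = sorted(bids)  # deterministic order; ties go to the later name
    assert bids[a] <= karma[a] and bids[b] <= karma[b], "bid exceeds balance"
    winner, loser = (a, b) if bids[a] > bids[b] else (b, a)
    new_karma = dict(karma)
    new_karma[winner] -= bids[winner]  # winner pays its bid ...
    new_karma[loser] += bids[winner]   # ... transferred to the loser
    return winner, new_karma

winner, balances = karma_round({"car1": 5, "car2": 5},
                               {"car1": 3, "car2": 1})
print(winner, balances)  # car1 wins; karma shifts from car1 to car2
```

The transfer of the winning bid to the losing agent is what makes the scheme self-balancing over repeated interactions: an agent that frequently claims the resource steadily loses priority to agents that yield.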
