no code implementations • 1 Jan 2021 • Julian G. Zilly, Franziska Eckert, Bhairav Mehta, Andrea Censi, Emilio Frazzoli
Negative pretraining is a prominent sequential learning effect in neural networks whereby a pretrained model attains worse generalization performance than a model trained from scratch when both are subsequently trained on a target task.
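A minimal sketch of the comparison protocol behind this claim, assuming a toy linear-regression setup (the tasks, model, and hyperparameters here are illustrative assumptions, not the paper's actual experiments): one model is pretrained on a source task and then fine-tuned on the target task, while a second model is trained on the target task from scratch, and the two are compared on held-out target data.

```python
import numpy as np

# Illustrative sketch only: toy linear tasks stand in for the paper's
# source/target tasks; the effect itself depends on the actual setup.
rng = np.random.default_rng(0)

def make_task(w_true, n=200):
    """Sample a linear regression task y = X w_true + noise."""
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def train(X, y, w_init, lr=0.05, steps=200):
    """Gradient descent on mean squared error from a given init."""
    w = w_init.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

source_X, source_y = make_task(np.array([1.0, -1.0]))  # pretraining task
target_X, target_y = make_task(np.array([-1.0, 1.0]))  # target task
test_X, test_y = make_task(np.array([-1.0, 1.0]))      # held-out target data

w_pre = train(source_X, source_y, np.zeros(2))       # pretrain on source
w_finetuned = train(target_X, target_y, w_pre)       # then fine-tune on target
w_scratch = train(target_X, target_y, np.zeros(2))   # train from scratch

print("fine-tuned test MSE:", mse(test_X, test_y, w_finetuned))
print("from-scratch test MSE:", mse(test_X, test_y, w_scratch))
```

Negative pretraining corresponds to the fine-tuned model's test error exceeding the from-scratch model's; in this convex toy both converge to the same solution, so the sketch only illustrates the protocol, not the effect.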
no code implementations • 22 Jul 2019 • Andrea Censi, Saverio Bolognani, Julian G. Zilly, Shima Sadat Mousavi, Emilio Frazzoli
We present a new type of coordination mechanism among multiple agents for allocating a finite resource, such as time slots for passing an intersection.