no code implementations • 22 Jan 2024 • Zachary Novack, Julian McAuley, Taylor Berg-Kirkpatrick, Nicholas J. Bryan
We propose Diffusion Inference-Time T-Optimization (DITTO), a general-purpose framework for controlling pre-trained text-to-music diffusion models at inference time by optimizing initial noise latents.
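The core idea above, optimizing the initial noise latent through a frozen sampling pipeline to steer the output, can be illustrated with a toy sketch. This is not the authors' implementation: the "sampler" here is a hypothetical stand-in (a fixed well-conditioned linear map `A`), chosen so the whole pipeline is differentiable with an analytic gradient, and the control target is an arbitrary output vector.

```python
import numpy as np

# Toy sketch of inference-time latent optimization in the spirit of
# DITTO: update the initial noise latent z, never the model weights.
rng = np.random.default_rng(0)

# Hypothetical frozen "sampler": a fixed linear map standing in for
# the full diffusion sampling chain (kept near-identity so plain
# gradient descent converges quickly in this toy).
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
target = rng.standard_normal(4)   # desired output feature (assumption)

z = rng.standard_normal(4)        # initial noise latent to optimize
lr = 0.1
for _ in range(200):
    out = A @ z                          # "run the sampler" on the latent
    grad = 2 * A.T @ (out - target)      # grad of ||A z - target||^2 w.r.t. z
    z -= lr * grad                       # update the latent, not the model

loss = float(np.sum((A @ z - target) ** 2))
```

In the real setting the linear map would be replaced by backpropagation (or gradient checkpointing) through the diffusion sampling steps, and the squared error by a differentiable feature-matching objective.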
1 code implementation • 16 Oct 2023 • Zachary Novack, Nikita Srivatsan, Taylor Berg-Kirkpatrick, Julian McAuley
Lead sheets have become commonplace in generative music research, being used as an initial compressed representation for downstream tasks like multitrack music generation and automatic arrangement.
1 code implementation • 6 Feb 2023 • Zachary Novack, Julian McAuley, Zachary C. Lipton, Saurabh Garg
Open vocabulary models (e.g.
1 code implementation • 29 Nov 2022 • Zachary Novack, Simran Kaur, Tanya Marwah, Saurabh Garg, Zachary C. Lipton
A number of competing hypotheses have been proposed to explain why small-batch Stochastic Gradient Descent (SGD) leads to improved generalization over the full-batch regime, with recent work crediting the implicit regularization of various quantities throughout training.