Search Results for author: Léo Tronchon

Found 2 papers, 1 paper with code

Unlocking the conversion of Web Screenshots into HTML Code with the WebSight Dataset

no code implementations • 14 Mar 2024 • Hugo Laurençon, Léo Tronchon, Victor Sanh

Using vision-language models (VLMs) in web development presents a promising strategy to increase efficiency and unblock no-code solutions: by providing a screenshot or a sketch of a UI, a VLM could generate the code to reproduce it, for instance in a language like HTML.
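As a rough illustration of this screenshot-to-HTML workflow, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name HuggingFaceM4/VLM_WebSight_finetuned and the image path are assumptions, not confirmed by this listing; any vision-to-sequence model could be substituted, and some VLMs additionally expect a text prompt.

```python
# Minimal sketch: screenshot-to-HTML with a vision-language model.
# ASSUMPTION: the checkpoint "HuggingFaceM4/VLM_WebSight_finetuned" and the
# image path are placeholders; swap in any vision-to-sequence model you use.
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

checkpoint = "HuggingFaceM4/VLM_WebSight_finetuned"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForVision2Seq.from_pretrained(checkpoint)

# Load a UI screenshot and prepare model inputs.
screenshot = Image.open("ui_screenshot.png")  # placeholder path
inputs = processor(images=screenshot, return_tensors="pt")

# Generate token IDs from the screenshot and decode them into HTML text.
output_ids = model.generate(**inputs, max_new_tokens=1024)
html_code = processor.batch_decode(output_ids, skip_special_tokens=True)[0]
print(html_code)
```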

OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents

1 code implementation • NeurIPS 2023 • Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh

Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks.
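To make the interleaved-document format concrete, here is a sketch of streaming a couple of documents with the datasets library. It assumes the dataset is published as HuggingFaceM4/OBELICS on the Hugging Face Hub and that each record carries parallel "texts" and "images" lists (with None marking the slots filled by the other modality); both the dataset ID and the field names are assumptions based on the dataset card, not on this listing.

```python
# Sketch: stream a few interleaved image-text documents from OBELICS.
# ASSUMPTION: dataset ID "HuggingFaceM4/OBELICS" and the "texts"/"images"
# schema come from the dataset card, not from the search listing above.
from datasets import load_dataset

dataset = load_dataset("HuggingFaceM4/OBELICS", split="train", streaming=True)

# Each document interleaves text passages with image URLs; a None entry in
# one list aligns with a filled entry at the same position in the other.
for doc in dataset.take(2):
    for text, image_url in zip(doc["texts"], doc["images"]):
        if text is not None:
            print("TEXT:", text[:80])
        else:
            print("IMAGE:", image_url)
```

Streaming mode avoids downloading the full web-scale corpus just to inspect its structure.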
