The mission of Papers With Code is to create a free and open resource with Machine Learning papers, code and evaluation tables.
We believe this is best done together with the community and powered by automation.
We've already automated the linking of code to papers, and we are now working on automating the extraction of evaluation metrics from papers.
We hang out on Slack, come join us!
You can also follow us and get in touch on Twitter.
Anyone can contribute!
Want to submit a new code implementation? Search for the paper title, and then add the implementation on the paper page.
Want to add an evaluation table or a task? You'll see edit buttons on the paper and task pages - just go ahead and edit! We've found this a fun way to learn about new areas of machine learning and to stay in tune with research.
Want to help out with automation? Have a look at our repositories on GitHub.
All data is licensed under the CC BY-SA licence, the same licence used by Wikipedia.
Most of the data comes from our own annotation of papers. To ensure broad coverage of Machine Learning tasks, we've parsed the titles of more than 60,000 papers (many papers are named in the form "Method X for Task Y"). In addition, we've manually annotated tasks and datasets in 1,600 arXiv abstracts from the last three months of 2018.
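The title-parsing step above can be sketched roughly as follows. This is a minimal illustration, not the actual sota-extractor code: it assumes titles of the form "Method X for Task Y" and splits on the first " for ".

```python
import re

# Hypothetical pattern for titles shaped like "Method X for Task Y".
# Non-greedy on the method so we split at the first " for ".
TITLE_PATTERN = re.compile(r"^(?P<method>.+?)\s+for\s+(?P<task>.+)$", re.IGNORECASE)

def parse_title(title):
    """Return (method, task) if the title matches "X for Y", else None."""
    match = TITLE_PATTERN.match(title.strip())
    if match:
        return match.group("method"), match.group("task")
    return None

print(parse_title("Mask R-CNN for Object Detection"))
```

A real extractor would need to handle many more title shapes (colons, subtitles, multiple "for" clauses), but this captures the basic idea of mining tasks from paper titles.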
We've read hundreds of papers to populate the most popular ML tasks with evaluation metrics. We are also grateful to the following projects, which provided their data under a free licence, enabling us to include it as well:
The code to scrape and import data is open source: paperswithcode/sota-extractor.