TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems

14 Mar 2016 · Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng

TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards...
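The separation the abstract draws between an interface for expressing a computation and an implementation for executing it rests on the dataflow-graph model: operations are nodes, values flow along edges, and the graph is constructed first and executed later. The following pure-Python toy is a minimal sketch of that idea only; the node classes and helper names here are invented for illustration and are not TensorFlow's actual API.

```python
# Toy dataflow graph: build first, execute later.
# Illustrative sketch only -- not TensorFlow's real API.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # callable that computes this node's value
        self.inputs = inputs  # upstream nodes whose outputs feed op

    def run(self):
        # Recursively evaluate the subgraph below this node
        # (no result caching, for brevity).
        return self.op(*(n.run() for n in self.inputs))

def constant(value):
    # Source node: produces a fixed value, no inputs.
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

def mul(a, b):
    # Stand-in for a heavier kernel such as a matrix multiply.
    return Node(lambda x, y: x * y, (a, b))

# Phase 1: describe the computation z = x*x + y as a graph.
x = constant(3.0)
y = constant(4.0)
z = add(mul(x, x), y)

# Phase 2: execute it, as a TensorFlow session would.
print(z.run())  # 13.0
```

Because the graph is a data structure rather than eagerly executed code, the same description could in principle be handed to different executors (a phone, a GPU worker, a distributed runtime), which is the portability claim the abstract makes.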


