Search Results for author: Surat Teerapittayanon

Found 7 papers, 5 papers with code

StitchNet: Composing Neural Networks from Pre-Trained Fragments

1 code implementation · 5 Jan 2023 · Surat Teerapittayanon, Marcus Comiter, Brad McDanel, H. T. Kung

We then show that these fragments can be stitched together to create neural networks whose accuracy is comparable to that of traditionally trained networks, at a fraction of the computing-resource and data requirements.
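The stitching idea can be sketched as composing fragments of pre-trained networks through a small adapter that maps one fragment's output space into the next fragment's input space. This is an illustrative sketch, not the paper's actual implementation; all names here are hypothetical.

```python
def linear_layer(weights):
    """Build a toy dense layer from a weight matrix
    (one row of weights per output unit, no bias)."""
    def f(x):
        return [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return f

def stitch(frag_a, frag_b, adapter):
    """Compose fragment A with fragment B via a stitching adapter
    that projects A's outputs into the input space B expects."""
    def net(x):
        return frag_b(adapter(frag_a(x)))
    return net
```

For example, a 3-to-2 fragment can be stitched to a 1-to-1 fragment through a 2-to-1 adapter; the adapter is the only piece that needs fitting, since the fragments themselves are frozen.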

DaiMoN: A Decentralized Artificial Intelligence Model Network

1 code implementation · 19 Jul 2019 · Surat Teerapittayanon, H. T. Kung

A main feature of DaiMoN is that it allows peers to verify the accuracy improvement of submitted models without knowing the test labels.

CheckNet: Secure Inference on Untrusted Devices

no code implementations · 17 Jun 2019 · Marcus Comiter, Surat Teerapittayanon, H. T. Kung

CheckNet is like a checksum for neural network inference: it verifies the integrity of inference computations performed by untrusted devices to 1) ensure the inference was actually performed, and 2) ensure the inference was not manipulated by an attacker.

Incomplete Dot Products for Dynamic Computation Scaling in Neural Network Inference

no code implementations · 21 Oct 2017 · Bradley McDanel, Surat Teerapittayanon, H. T. Kung

At inference time, the number of channels used can be dynamically adjusted to trade accuracy for lower power consumption and reduced latency by selecting only an initial subset of channels.

Tasks: Image Classification
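The core operation described in the abstract, computing a dot product over only the first k channels, can be sketched as follows. This is a minimal illustration of the scaling knob, not the paper's training procedure.

```python
def incomplete_dot(x, w, k):
    """Dot product over only the first k channels.

    Using all channels gives the full-precision result; smaller k
    cuts compute (and thus power and latency) at some accuracy cost.
    """
    return sum(xi * wi for xi, wi in zip(x[:k], w[:k]))
```

At inference time, k becomes a runtime knob: the same trained weights serve every operating point, so no retraining or model swap is needed to scale computation down.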

BranchyNet: Fast Inference via Early Exiting from Deep Neural Networks

3 code implementations · 6 Sep 2017 · Surat Teerapittayanon, Bradley McDanel, H. T. Kung

Deep neural networks are state-of-the-art methods for many learning tasks due to their ability to extract increasingly better features at each network layer.
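The early-exiting idea named in the title can be sketched as attaching side-branch classifiers at intermediate layers and exiting at the first branch that is already confident, e.g. when the entropy of its prediction falls below a threshold. A minimal sketch, with all names illustrative:

```python
import math

def entropy(probs):
    """Shannon entropy of a probability vector; low entropy means the
    classifier is confident in its prediction."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def branchy_infer(x, branches, thresholds):
    """Run side-branch classifiers in order and exit at the first one
    whose prediction is confident enough; the final exit always answers."""
    for branch, t in zip(branches[:-1], thresholds):
        probs = branch(x)
        if entropy(probs) < t:
            return probs          # early exit: skip the remaining layers
    return branches[-1](x)        # fell through to the last exit
```

Easy inputs exit early and save most of the network's computation; only hard inputs pay for the full depth.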

Embedded Binarized Neural Networks

2 code implementations · 6 Sep 2017 · Bradley McDanel, Surat Teerapittayanon, H. T. Kung

Beyond minimizing the memory required to store weights, as in a BNN, we show that it is essential to minimize the memory used for temporaries which hold intermediate results between layers in feedforward inference.
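One standard way to bound the memory used for temporaries in feedforward inference is to ping-pong between two fixed buffers sized to the widest layer, rather than allocating a fresh buffer per layer. A sketch of that buffer-reuse pattern (illustrative only; not the paper's embedded implementation):

```python
def feedforward_two_buffers(layers, x, max_width):
    """Evaluate a feedforward net while reusing just two fixed-size
    buffers for intermediate results: each layer reads from one buffer
    and writes into the other, then the roles swap."""
    buf = [[0.0] * max_width, [0.0] * max_width]
    n = len(x)
    buf[0][:n] = x
    cur = 0
    for layer in layers:
        out = layer(buf[cur][:n])   # read current activations
        cur = 1 - cur               # swap roles
        n = len(out)
        buf[cur][:n] = out          # write into the other buffer
    return buf[cur][:n]
```

Peak temporary memory is then 2 × max_width values regardless of depth, which is what makes inference feasible on memory-constrained embedded devices.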

Distributed Deep Neural Networks over the Cloud, the Edge and End Devices

1 code implementation · 6 Sep 2017 · Surat Teerapittayanon, Bradley McDanel, H. T. Kung

In our experiments, compared with the traditional method of offloading raw sensor data for processing in the cloud, DDNN processes most sensor data locally on end devices while achieving high accuracy, reducing communication cost by a factor of more than 20x.

Tasks: Distributed Computing · Object Recognition · +1
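The communication savings described in the abstract can be sketched as: the end device runs a small local model and answers on-device when its prediction is confident, offloading only compact intermediate features (much smaller than raw sensor data) when it is not. An illustrative sketch, with all names hypothetical:

```python
import math

def entropy(probs):
    """Shannon entropy of a probability vector; low means confident."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def ddnn_infer(x, device_model, cloud_model, threshold):
    """Answer locally when the on-device model is confident; otherwise
    offload only the compact features it produced to the cloud model.
    Returns (prediction, number_of_values_transmitted)."""
    probs, features = device_model(x)
    if entropy(probs) < threshold:
        return probs, 0                          # handled entirely on-device
    return cloud_model(features), len(features)  # proxy for communication cost
```

Because most inputs are easy, most inferences transmit nothing at all, which is the source of the large reduction in communication cost relative to always shipping raw data to the cloud.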
