Energy Efficient Placement of ML-Based Services in IoT Networks

The Internet of Things (IoT) is gaining momentum in its quest to bridge the gap between the physical and the digital world. The main goal of the IoT is the creation of smart environments and self-aware things that facilitate a variety of services such as smart transport, climate monitoring, and e-health. Huge volumes of data are expected to be collected by the connected sensors/things; traditionally, these data are processed centrally in large data centers in the core network, which inevitably leads to excessive transport power consumption as well as added latency overheads. Instead, fog computing has been proposed by researchers from industry and academia to extend the capability of the cloud right to the point where data is collected, at the sensing layer. This way, primitive tasks that can be hosted in IoT sensors do not need to be sent all the way to the cloud for processing. In this paper, we propose energy efficient embedding of machine learning (ML) models over a cloud-fog network using a Mixed Integer Linear Programming (MILP) optimization model. We exploit virtualization in our framework to provide service abstraction of deep neural network (DNN) layers that can be composed into a set of VMs interconnected by virtual links. We constrain the number of VMs that can be processed at the IoT layer and study the impact on the performance of the cloud-fog approach.
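
The abstract does not give the full formulation, but the placement problem it describes can be sketched as a small MILP: binary variables assign DNN-layer VMs to cloud, fog, or IoT nodes, the objective sums processing and transport energy, and one constraint caps the number of VMs hosted at the IoT layer. The sketch below uses the open-source PuLP solver; all node names, energy figures, and VM demands are illustrative assumptions, not values from the paper.

```python
# Hypothetical, minimal MILP sketch of energy-aware DNN-layer VM placement
# over a cloud-fog-IoT hierarchy. Numbers are illustrative only.
import pulp

# Candidate hosting tiers: per-unit processing energy and capacity (arbitrary units).
nodes = {
    "cloud": {"energy": 1.0, "capacity": 100},
    "fog":   {"energy": 2.5, "capacity": 30},
    "iot":   {"energy": 4.0, "capacity": 10},
}
# VMs abstracting DNN layers, each with a processing demand.
vms = {"conv1": 6, "conv2": 4, "fc1": 3, "fc2": 2}
# Crude stand-in for virtual-link transport energy when a VM sits far from the sensors.
transport = {"cloud": 3.0, "fog": 1.0, "iot": 0.0}

max_iot_vms = 1  # cap on VMs processed at the IoT layer

prob = pulp.LpProblem("dnn_vm_placement", pulp.LpMinimize)

# x[v][n] = 1 if VM v is placed on node n.
x = pulp.LpVariable.dicts("x", (vms, nodes), cat="Binary")

# Objective: total processing plus transport energy.
prob += pulp.lpSum(
    x[v][n] * vms[v] * (nodes[n]["energy"] + transport[n])
    for v in vms for n in nodes
)

# Each VM is placed on exactly one node.
for v in vms:
    prob += pulp.lpSum(x[v][n] for n in nodes) == 1

# Per-node capacity constraints.
for n in nodes:
    prob += pulp.lpSum(x[v][n] * vms[v] for v in vms) <= nodes[n]["capacity"]

# Constrain how many VMs the IoT layer may host.
prob += pulp.lpSum(x[v]["iot"] for v in vms) <= max_iot_vms

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for v in vms:
    host = next(n for n in nodes if pulp.value(x[v][n]) > 0.5)
    print(f"{v} -> {host}")
```

Varying `max_iot_vms` in such a toy model mimics the paper's study of how constraining IoT-layer processing shifts the energy trade-off between local and cloud placement.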
