Search Results for author: Takayuki Nishio

Found 23 papers, 2 papers with code

$\Lambda$-Split: A Privacy-Preserving Split Computing Framework for Cloud-Powered Generative AI

1 code implementation • 23 Oct 2023 • Shoki Ohta, Takayuki Nishio

To mitigate these concerns, we introduce $\Lambda$-Split, a split computing framework to facilitate computational offloading while simultaneously fortifying data privacy against risks such as eavesdropping and unauthorized access.

Privacy Preserving
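
The abstract only describes the split-computing idea at a high level, so here is a minimal, generic sketch of offloading with a model split (layer sizes, the split point, and the toy nn.Sequential model are placeholders, not the $\Lambda$-Split architecture): the device keeps the head, the cloud keeps the tail, and only intermediate activations leave the device.

# Minimal split-computing sketch (illustrative only, not the Lambda-Split codebase).
# The model is cut into a device-side "head" and a cloud-side "tail": the device
# transmits only hidden activations, never the raw input or the final output.
import torch
import torch.nn as nn

full_model = nn.Sequential(             # stand-in for a larger generative model
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
split_point = 2                         # hypothetical cut position
head = full_model[:split_point]         # runs on the user device
tail = full_model[split_point:]         # runs in the cloud

x = torch.randn(1, 128)                 # privacy-sensitive input stays on-device
with torch.no_grad():
    hidden = head(x)                    # only this tensor crosses the network
    output = tail(hidden)               # the cloud completes the inference
print(output.shape)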

Tram-FL: Routing-based Model Training for Decentralized Federated Learning

no code implementations • 9 Aug 2023 • Kota Maejima, Takayuki Nishio, Asato Yamazaki, Yuko Hara-Azumi

In decentralized federated learning (DFL), the substantial traffic generated by frequent inter-node communication and the presence of non-independent and identically distributed (non-IID) data make it difficult to obtain high-accuracy models.

Federated Learning
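
One plausible reading of routing-based training, sketched below with placeholder data, route, and hyperparameters (this is not the authors' implementation): a single model is forwarded along a route of nodes and trained on each node's local, possibly non-IID, data in turn.

# Rough sketch of routing-based model training: the global model "rides" from
# node to node and is updated on each node's local dataset before moving on.
import torch
import torch.nn as nn

def train_locally(model, data, targets, epochs=1, lr=0.01):
    """One node's local update on its own data before forwarding the model."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), targets).backward()
        opt.step()
    return model

model = nn.Linear(20, 5)                         # toy global model
route = [0, 2, 1, 3]                             # hypothetical visiting order of nodes
node_data = {i: (torch.randn(32, 20), torch.randint(0, 5, (32,))) for i in range(4)}

for node in route:                               # sequential, router-style training
    x, y = node_data[node]
    model = train_locally(model, x, y)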

CSI-Inpainter: Enabling Visual Scene Recovery from CSI Time Sequences for Occlusion Removal

no code implementations • 9 May 2023 • Cheng Chen, Shoki Ohta, Takayuki Nishio, Mehdi Bennis, Jihong Park, Mohamed Wahib

This work introduces CSI-Inpainter, a pioneering approach to occlusion removal using Channel State Information (CSI) time sequences, propelling the application of wireless signal processing into the realm of visual scene recovery.

Image Inpainting Image Restoration

Point Cloud-based Proactive Link Quality Prediction for Millimeter-wave Communications

no code implementations • 2 Jan 2023 • Shoki Ohta, Takayuki Nishio, Riichi Kudo, Kahoko Takahashi, Hisashi Nagata

The experimental results showed that our proposed method can predict large future attenuation of mmWave received signal strength and throughput induced by line-of-sight (LOS) path blockage by pedestrians, with accuracy comparable or superior to that of image-based prediction methods.

Time Series Analysis

Watch from sky: machine-learning-based multi-UAV network for predictive police surveillance

no code implementations • 6 Mar 2022 • Ryusei Sugano, Ryoichi Shinkuma, Takayuki Nishio, Sohei Itahara, Narayan B. Mandayam

This paper presents the watch-from-sky framework, where multiple unmanned aerial vehicles (UAVs) play four roles, i.e., sensing, data forwarding, computing, and patrolling, for predictive police surveillance.

BIG-bench Machine Learning reinforcement-learning +1

Communication-oriented Model Fine-tuning for Packet-loss Resilient Distributed Inference under Highly Lossy IoT Networks

no code implementations • 17 Dec 2021 • Sohei Itahara, Takayuki Nishio, Yusuke Koda, Koji Yamamoto

However, there is generally a communication system-level trade-off between latency and reliability; thus, to provide accurate DI results, a reliable but high-latency communication system must be adopted, which results in non-negligible end-to-end latency of the DI.

Frame-Capture-Based CSI Recomposition Pertaining to Firmware-Agnostic WiFi Sensing

no code implementations • 29 Oct 2021 • Ryosuke Hanahara, Sohei Itahara, Kota Yamashita, Yusuke Koda, Akihito Taya, Takayuki Nishio, Koji Yamamoto

This indicates that WiFi sensing leveraging the BFM matrix is more practical to implement on pre-installed APs.

Beamforming Feedback-based Model-Driven Angle of Departure Estimation Toward Legacy Support in WiFi Sensing: An Experimental Study

no code implementations • 27 Oct 2021 • Sohei Itahara, Sota Kondo, Kota Yamashita, Takayuki Nishio, Koji Yamamoto, Yusuke Koda

Moreover, the evaluations performed in this study revealed that BFF-based MUSIC achieves an AoD estimation error comparable to that of CSI-based MUSIC, even though BFF is a highly compressed version of the CSI in IEEE 802.11ac/ax.
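
For context on what MUSIC computes, here is a generic MUSIC angle-estimation sketch on a toy uniform linear array (the array geometry, noise level, and source angle are made up; the paper's variant feeds compressed beamforming feedback rather than raw CSI).

# Minimal MUSIC pseudospectrum sketch: estimate the arrival/departure angle of
# a single source from array snapshots via the noise-subspace projection.
import numpy as np

def music_spectrum(snapshots, n_sources, n_antennas, d=0.5):
    """snapshots: (n_antennas, n_snapshots) complex array observations."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                      # ascending eigenvalues
    En = eigvecs[:, : n_antennas - n_sources]                 # noise subspace
    angles = np.deg2rad(np.arange(-90, 90, 0.5))
    spectrum = []
    for theta in angles:
        a = np.exp(-2j * np.pi * d * np.arange(n_antennas) * np.sin(theta))
        denom = a.conj() @ En @ En.conj().T @ a
        spectrum.append(1.0 / np.abs(denom))                  # peaks at source angles
    return np.rad2deg(angles), np.array(spectrum)

# Toy example: one source at 20 degrees on a 4-element half-wavelength ULA.
n_ant, theta_true = 4, np.deg2rad(20)
a_true = np.exp(-2j * np.pi * 0.5 * np.arange(n_ant) * np.sin(theta_true))
X = np.outer(a_true, np.random.randn(200) + 1j * np.random.randn(200))
X += 0.1 * (np.random.randn(n_ant, 200) + 1j * np.random.randn(n_ant, 200))
angles, P = music_spectrum(X, n_sources=1, n_antennas=n_ant)
print("estimated angle:", angles[np.argmax(P)])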

Packet-Loss-Tolerant Split Inference for Delay-Sensitive Deep Learning in Lossy Wireless Networks

no code implementations • 28 Apr 2021 • Sohei Itahara, Takayuki Nishio, Koji Yamamoto

This study addresses the problem of incremental retransmission latency caused by packet loss in lossy IoT networks.
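
A rough sketch of the lossy-link step under an assumed setup (packet size, loss rate, and the toy head/tail models are placeholders, not the paper's protocol): the head's feature tensor is packetized, lost packets are zero-filled instead of retransmitted, and the tail runs on the corrupted features; fine-tuning with dropout on the features would emulate this corruption.

# Packet-loss-tolerant split inference, heavily simplified: no retransmission,
# so no retransmission latency; the tail must tolerate zero-filled features.
import torch
import torch.nn as nn

head = nn.Sequential(nn.Linear(64, 128), nn.ReLU())   # runs on the IoT device
tail = nn.Linear(128, 10)                             # runs on the edge server

def transmit_lossy(features, packet_size=16, loss_rate=0.2):
    """Drop whole 'packets' (chunks of the feature vector) and zero-fill them."""
    out = features.clone()
    n_packets = features.shape[1] // packet_size
    for p in range(n_packets):
        if torch.rand(1).item() < loss_rate:
            out[:, p * packet_size:(p + 1) * packet_size] = 0.0
    return out

x = torch.randn(1, 64)
with torch.no_grad():
    received = transmit_lossy(head(x))   # lost packets are simply skipped
    logits = tail(received)
print(logits.shape)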

Decentralized and Model-Free Federated Learning: Consensus-Based Distillation in Function Space

no code implementations • 1 Apr 2021 • Akihito Taya, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto

Because FL algorithms can hardly guarantee the convergence of the parameters of machine learning (ML) models, this paper focuses on the convergence of ML models in function space.

Federated Learning Knowledge Distillation
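
A speculative sketch of a consensus step in function space (the excerpt does not give the actual update rule; the probe inputs, line topology, and distillation loss below are assumptions): neighboring nodes average their models' outputs on shared inputs, and each node distills that consensus function back into its own model instead of averaging parameters.

# Function-space consensus via output averaging and local distillation (toy).
import torch
import torch.nn as nn
import torch.nn.functional as F

shared_inputs = torch.randn(128, 16)                   # common probe inputs
nodes = [nn.Linear(16, 4) for _ in range(4)]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}     # hypothetical line topology

for i, model in enumerate(nodes):
    # Consensus target: average of own and neighbors' output functions.
    with torch.no_grad():
        outs = [nodes[j](shared_inputs) for j in [i] + neighbors[i]]
        target = torch.stack(outs).mean(dim=0)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(3):                                  # distill toward the consensus
        opt.zero_grad()
        F.mse_loss(model(shared_inputs), target).backward()
        opt.step()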

Zero-Shot Adaptation for mmWave Beam-Tracking on Overhead Messenger Wires through Robust Adversarial Reinforcement Learning

no code implementations • 16 Feb 2021 • Masao Shinzaki, Yusuke Koda, Koji Yamamoto, Takayuki Nishio, Masahiro Morikura, Yushi Shirato, Daisei Uchida, Naoki Kita

Second, we demonstrate the feasibility of \textit{zero-shot adaptation} as a solution, where a learning agent adapts to environmental parameters unseen during training.

When Wireless Communications Meet Computer Vision in Beyond 5G

no code implementations • 13 Oct 2020 • Takayuki Nishio, Yusuke Koda, Jihong Park, Mehdi Bennis, Klaus Doppler

This article articulates the emerging paradigm, sitting at the confluence of computer vision and wireless communication, to enable beyond-5G/6G mission-critical applications (autonomous/remote-controlled vehicles, visuo-haptic VR, and other cyber-physical applications).

Image Reconstruction

MAB-based Client Selection for Federated Learning with Uncertain Resources in Mobile Networks

no code implementations • 29 Sep 2020 • Naoya Yoshida, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto

This paper proposes a multi-armed bandit (MAB)-based client selection method to solve the exploration and exploitation trade-off and reduce the time consumption for FL in mobile networks.

Networking and Internet Architecture
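
To illustrate the bandit view of client selection (the paper's exact arm/reward formulation is not given in this excerpt; the reward function, client count, and UCB1 rule below are assumptions): each client is an arm, the reward reflects how quickly it returned a useful update, and the server balances exploring unknown clients against exploiting fast ones.

# UCB1-style client selection sketch for FL rounds.
import math
import random

n_clients, n_rounds = 10, 50
counts = [0] * n_clients          # how often each client was selected
values = [0.0] * n_clients        # running mean reward (e.g., 1 / update time)

def observe_reward(client):
    """Placeholder for the measured reward, e.g. inverse of upload latency."""
    return random.random()

for t in range(1, n_rounds + 1):
    untried = [c for c in range(n_clients) if counts[c] == 0]
    if untried:
        chosen = untried[0]                            # play every arm once first
    else:
        ucb = [values[c] + math.sqrt(2 * math.log(t) / counts[c])
               for c in range(n_clients)]
        chosen = max(range(n_clients), key=lambda c: ucb[c])
    r = observe_reward(chosen)
    counts[chosen] += 1
    values[chosen] += (r - values[chosen]) / counts[chosen]   # incremental mean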

Estimation of Individual Device Contributions for Incentivizing Federated Learning

no code implementations • 20 Sep 2020 • Takayuki Nishio, Ryoichi Shinkuma, Narayan B. Mandayam

Federated learning (FL) is an emerging technique used to train a machine-learning model collaboratively using the data and computation resources of mobile devices without exposing privacy-sensitive user data.

Federated Learning

Distillation-Based Semi-Supervised Federated Learning for Communication-Efficient Collaborative Training with Non-IID Private Data

no code implementations • 14 Aug 2020 • Sohei Itahara, Takayuki Nishio, Yusuke Koda, Masahiro Morikura, Koji Yamamoto

To this end, based on the idea of leveraging an unlabeled open dataset, we propose a distillation-based semi-supervised FL (DS-FL) algorithm that exchanges the outputs of local models among mobile devices, instead of model parameter exchange employed by the typical frameworks.

Data Augmentation Federated Learning
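
A minimal sketch of output exchange over an open dataset in the spirit described above (the aggregation rule, open-dataset size, and distillation loss are assumptions, not the DS-FL specification): clients upload soft labels on a shared unlabeled dataset instead of model parameters, and each client distills the aggregated labels into its own model.

# Output-exchange FL: share predictions, not weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

open_data = torch.randn(256, 32)                    # shared unlabeled open dataset
clients = [nn.Linear(32, 10) for _ in range(3)]     # local models

# 1) Each client uploads only its predictions on the open dataset.
with torch.no_grad():
    soft_labels = torch.stack([F.softmax(m(open_data), dim=1) for m in clients])

# 2) The server aggregates the outputs (simple average here).
global_labels = soft_labels.mean(dim=0)

# 3) Each client distills the aggregated labels into its own model.
for model in clients:
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(5):
        opt.zero_grad()
        loss = F.kl_div(F.log_softmax(model(open_data), dim=1),
                        global_labels, reduction="batchmean")
        loss.backward()
        opt.step()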

Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning

no code implementations • 21 Apr 2020 • Sohei Itahara, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto

The key idea of the proposed method is to obtain a "good" subnetwork from the original NN using the unlabeled data based on the lottery hypothesis.

Denoising Federated Learning +3
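
As a point of reference for the subnetwork-extraction step, below is a generic magnitude-pruning sketch (the paper derives its mask from unsupervised pre-training on unlabeled data; the simple weight-magnitude criterion and prune ratio here are assumptions).

# Extract a lottery-ticket-style subnetwork by keeping only large-magnitude weights.
import torch
import torch.nn as nn

model = nn.Linear(100, 10)
prune_ratio = 0.8                                 # keep only the largest 20% of weights

with torch.no_grad():
    w = model.weight.abs().flatten()
    threshold = torch.quantile(w, prune_ratio)    # magnitude cut-off
    mask = (model.weight.abs() >= threshold).float()
    model.weight.mul_(mask)                       # zero out pruned connections

# Only the surviving (non-zero) weights would be trained and exchanged in FL,
# shrinking both computation and communication.
print(f"remaining weights: {int(mask.sum())} / {mask.numel()}")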

Differentially Private AirComp Federated Learning with Power Adaptation Harnessing Receiver Noise

no code implementations • 14 Apr 2020 • Yusuke Koda, Koji Yamamoto, Takayuki Nishio, Masahiro Morikura

To this end, a differentially private AirComp-based FL is designed in this study, where the key idea is to inherently harness the receiver noise perturbation injected into the aggregated global models, thereby preventing the inference of clients' private data.

Networking and Internet Architecture Signal Processing
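
A toy numerical sketch of the idea under an assumed channel model (the clipping bound, power scale, and noise level are placeholders, not the paper's scheme): clipped client updates superpose over the air, the receiver's own Gaussian noise perturbs the aggregate, and lowering transmit power raises the effective noise level, i.e., strengthens the privacy protection.

# AirComp aggregation where receiver noise doubles as DP perturbation (toy model).
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 10, 1000
clip = 1.0
updates = rng.normal(size=(n_clients, dim))

# Clip each client's update to bound its sensitivity.
norms = np.linalg.norm(updates, axis=1, keepdims=True)
clipped = updates * np.minimum(1.0, clip / norms)

power_scale = 0.5         # power-adaptation knob: smaller -> more relative noise
receiver_noise_std = 0.1  # thermal noise at the receiver

# Signals superpose in the analog domain; receiver noise is added "for free".
received = power_scale * clipped.sum(axis=0) + rng.normal(0, receiver_noise_std, dim)
aggregate = received / (power_scale * n_clients)   # noisy global update

effective_noise_std = receiver_noise_std / (power_scale * n_clients)
print("effective per-coordinate noise std:", effective_noise_std)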

Deep Reinforcement Learning-Based Channel Allocation for Wireless LANs with Graph Convolutional Networks

no code implementations • 17 May 2019 • Kota Nakashima, Shotaro Kamiya, Kazuki Ohtsu, Koji Yamamoto, Takayuki Nishio, Masahiro Morikura

In densely deployed WLANs, the number of available AP topologies is extremely large, and thus we extract the features of the topological structures based on GCNs.

reinforcement-learning Reinforcement Learning (RL)
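
To make the GCN feature-extraction step concrete, here is a bare-bones graph-convolution layer over a small AP contention graph (the adjacency matrix, per-AP features, and layer width are made-up examples; the paper's network depth and DRL head are not shown in this excerpt).

# One GCN propagation step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],          # adjacency: which APs contend with which
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))          # per-AP features, e.g. current channel, load

A_hat = A + np.eye(4)                            # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
W = rng.normal(size=(3, 8))                      # learnable layer weights

H = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)   # GCN layer + ReLU
# H encodes each AP's neighborhood structure and would feed the DRL policy
# that picks channel assignments.
print(H.shape)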

Hybrid-FL for Wireless Networks: Cooperative Learning Mechanism Using Non-IID Data

no code implementations • 17 May 2019 • Naoya Yoshida, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto, Ryo Yonetani

Therefore, to mitigate the degradation induced by non-IID data, we assume that a limited number (e.g., less than 1%) of clients allow their data to be uploaded to a server, and we propose a hybrid learning mechanism referred to as Hybrid-FL, wherein the server updates the model using the data gathered from the clients and aggregates the model with the models trained by clients.

Federated Learning
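
A rough sketch of one Hybrid-FL-style round as described in the abstract (the dataset sizes, equal-weight averaging, and toy models below are assumptions, not the paper's exact protocol): the server trains directly on the small uploaded dataset and averages the result with the models trained locally by the other clients.

# Hybrid round: server-side training on uploaded data + aggregation with client models.
import copy
import torch
import torch.nn as nn

def local_train(model, x, y, steps=5, lr=0.05):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    return model

global_model = nn.Linear(16, 4)
uploaded_x, uploaded_y = torch.randn(8, 16), torch.randint(0, 4, (8,))   # tiny uploaded set

# Server-side training on the small uploaded dataset.
server_model = local_train(copy.deepcopy(global_model), uploaded_x, uploaded_y)

# Ordinary federated updates from clients that keep their data local.
client_models = [local_train(copy.deepcopy(global_model),
                             torch.randn(32, 16), torch.randint(0, 4, (32,)))
                 for _ in range(3)]

# Aggregate: simple average of server-trained and client-trained parameters.
models = client_models + [server_model]
with torch.no_grad():
    for name, param in global_model.named_parameters():
        param.copy_(torch.stack([dict(m.named_parameters())[name] for m in models]).mean(0))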

Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge

1 code implementation • 23 Apr 2018 • Takayuki Nishio, Ryo Yonetani

Specifically, FedCS solves a client selection problem with resource constraints, which allows the server to aggregate as many client updates as possible and to accelerate performance improvement in ML models.

Edge-computing Federated Learning +1
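
A greedy flavor of the deadline-constrained selection problem (a simplification of FedCS; the real algorithm models sequential uploads and bandwidth sharing in more detail, and the deadline and time estimates below are invented): pack as many clients as possible whose estimated update-and-upload times fit within the round deadline.

# Greedy client packing under a per-round deadline.
deadline = 60.0                     # seconds per FL round (hypothetical)
# (client_id, estimated time to compute and upload its update)
estimates = [(0, 12.0), (1, 35.0), (2, 8.0), (3, 20.0), (4, 50.0), (5, 15.0)]

selected, elapsed = [], 0.0
for cid, t in sorted(estimates, key=lambda e: e[1]):   # fastest clients first
    if elapsed + t <= deadline:
        selected.append(cid)
        elapsed += t

print("selected clients:", selected, "estimated round time:", elapsed)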
