Search Results for author: Mohammad Vahid Jamali

Found 5 papers, 1 paper with code

ProductAE: Toward Deep Learning Driven Error-Correction Codes of Large Dimensions

no code implementations • 29 Mar 2023 • Mohammad Vahid Jamali, Hamid Saber, Homayoon Hatami, Jung Hyun Bae

In this paper, we propose Product Autoencoder (ProductAE) -- a computationally-efficient family of deep learning driven (encoder, decoder) pairs -- aimed at enabling the training of relatively large codes (both encoder and decoder) with a manageable training complexity.

Decoder
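
For context, the abstract above describes training a large (k = k1·k2) code as the product of two much smaller trainable component codes. Below is a rough, hypothetical PyTorch sketch of that product-style encoding idea; the module names, layer sizes, and power normalization are assumptions for illustration, not the authors' implementation.

# Hypothetical sketch of a product-style neural encoder: a (k1, k2) message
# block is encoded row-wise by one small neural encoder and column-wise by
# another, so only (k1, n1)- and (k2, n2)-sized components are ever trained.
import torch
import torch.nn as nn


class SmallEncoder(nn.Module):
    """Maps length-k message vectors to length-n real-valued codewords."""
    def __init__(self, k: int, n: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(k, hidden), nn.ReLU(),
            nn.Linear(hidden, n),
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        return self.net(u)


class ProductEncoderSketch(nn.Module):
    """Encodes a (k1, k2) message matrix into an (n1, n2) coded matrix."""
    def __init__(self, k1, n1, k2, n2):
        super().__init__()
        self.row_enc = SmallEncoder(k2, n2)   # applied to each of the k1 rows
        self.col_enc = SmallEncoder(k1, n1)   # applied to each of the n2 columns

    def forward(self, msg: torch.Tensor) -> torch.Tensor:
        # msg: (batch, k1, k2)
        rows = self.row_enc(msg)                    # (batch, k1, n2)
        cols = self.col_enc(rows.transpose(1, 2))   # (batch, n2, n1)
        code = cols.transpose(1, 2)                 # (batch, n1, n2)
        # Power-normalize so the channel SNR is well defined.
        return code / code.pow(2).mean(dim=(1, 2), keepdim=True).sqrt()


enc = ProductEncoderSketch(k1=10, n1=15, k2=10, n2=15)
x = enc(torch.randint(0, 2, (32, 10, 10)).float())  # overall (100, 225)-sized code

The point of such a construction is that only the small component encoders (and their matching decoders) need to be trained, so training complexity scales with the component codes rather than with the full k1·k2-dimensional code.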

Machine Learning-Aided Efficient Decoding of Reed-Muller Subcodes

no code implementations • 16 Jan 2023 • Mohammad Vahid Jamali, Xiyang Liu, Ashok Vardhan Makkuva, Hessam Mahdavifar, Sewoong Oh, Pramod Viswanath

Next, we derive the soft-decision based version of our algorithm, called soft-subRPA, that not only improves upon the performance of subRPA but also enables a differentiable decoding algorithm.
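
RPA-style decoders for Reed-Muller codes work by projecting the received LLRs onto cosets of one-dimensional subspaces, decoding the projections recursively, and aggregating the results. The snippet below only illustrates the kind of soft, LLR-domain projection step that makes such a pipeline differentiable; it is an assumption-laden sketch, not the authors' soft-subRPA code.

# Illustrative soft projection onto the cosets of the subspace {0, b}:
# each coset {z, z ^ b} contributes one projected LLR via the box-plus rule.
import torch


def soft_project(llr: torch.Tensor, b: int) -> torch.Tensor:
    """llr: (batch, 2**m) channel LLRs indexed by z in F_2^m; b must be nonzero."""
    n = llr.shape[-1]
    idx = torch.arange(n)
    mate = idx ^ b                    # partner of z inside the coset {z, z ^ b}
    keep = idx < mate                 # keep one representative per coset
    a, c = llr[:, idx[keep]], llr[:, mate[keep]]
    # Box-plus combination of the two LLRs in each coset (numerically naive).
    return 2.0 * torch.atanh(torch.tanh(a / 2.0) * torch.tanh(c / 2.0))


llrs = torch.randn(4, 2 ** 7, requires_grad=True)   # length-128 code LLRs
proj = soft_project(llrs, b=1)                        # shape (4, 64)
proj.sum().backward()                                 # gradients flow through

Because the soft projection is smooth, gradients can be backpropagated through the whole decoding pipeline, which is what enables the differentiable, trainable variant described in the abstract.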

ProductAE: Towards Training Larger Channel Codes based on Neural Product Codes

no code implementations • 9 Oct 2021 • Mohammad Vahid Jamali, Hamid Saber, Homayoon Hatami, Jung Hyun Bae

Due to the dimensionality challenge in channel coding, it is prohibitively complex to design and train relatively large neural channel codes via deep learning techniques.

Decoder

KO codes: Inventing Nonlinear Encoding and Decoding for Reliable Wireless Communication via Deep-learning

1 code implementation • 29 Aug 2021 • Ashok Vardhan Makkuva, Xiyang Liu, Mohammad Vahid Jamali, Hessam Mahdavifar, Sewoong Oh, Pramod Viswanath

In this paper, we construct KO codes, a computationally efficient family of deep-learning driven (encoder, decoder) pairs that outperform the state-of-the-art reliability performance on the standardized AWGN channel.

Benchmarking Decoder
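
KO codes are built on the Kronecker/Plotkin tree underlying Reed-Muller and polar codes, with the linear combining maps replaced by small neural networks; this framing comes from the KO codes paper itself rather than from the snippet above. Below is a minimal, hypothetical sketch of one such learned combining block, not the paper's architecture.

# Hypothetical nonlinear Plotkin-style combining block: where a classical
# encoder would map each coordinate pair (u_i, v_i) to (u_i + v_i, v_i),
# a learned block applies a small neural network to the pair instead.
import torch
import torch.nn as nn


class NeuralPlotkinBlock(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(2, hidden), nn.SELU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # u, v: (batch, n/2) real symbols.  Output: (batch, n) coded block.
        pair = torch.stack([u, v], dim=-1)     # (batch, n/2, 2)
        out = self.g(pair)                     # (batch, n/2, 2)
        return out.reshape(u.shape[0], -1)     # (batch, n)


block = NeuralPlotkinBlock()
u, v = torch.randn(8, 16), torch.randn(8, 16)
codeword_half = block(u, v)                    # (8, 32)

A full encoder would stack such blocks along the Kronecker tree and pair them with matching neural decoding blocks; none of that structure is shown here.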

Reed-Muller Subcodes: Machine Learning-Aided Design of Efficient Soft Recursive Decoding

no code implementations • 2 Feb 2021 • Mohammad Vahid Jamali, Xiyang Liu, Ashok Vardhan Makkuva, Hessam Mahdavifar, Sewoong Oh, Pramod Viswanath

To lower the complexity of our decoding algorithm, referred to as subRPA in this paper, we investigate different ways for pruning the projections.

Information Theory
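
"Pruning the projections" can be read as decoding with only a subset of the 2^m − 1 available one-dimensional projections instead of all of them. The snippet below is purely illustrative: a uniform random subset is used as a stand-in for the selection strategies actually studied in the paper.

# Illustrative only: pick a budget-limited subset of the nonzero b's in F_2^m;
# each surviving b would then drive one projection/decoding branch.
import random

m = 7
all_projections = list(range(1, 2 ** m))           # nonzero b in F_2^m
budget = 16                                         # complexity knob
pruned = random.sample(all_projections, budget)     # subset actually decoded with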
