no code implementations • 29 Mar 2023 • Mohammad Vahid Jamali, Hamid Saber, Homayoon Hatami, Jung Hyun Bae
In this paper, we propose Product Autoencoder (ProductAE) -- a computationally efficient family of deep-learning-driven (encoder, decoder) pairs -- aimed at enabling the training of relatively large codes (both encoder and decoder) with manageable training complexity.
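ProductAE builds on the classical product-code idea of composing two short constituent codes into one large code. The sketch below shows only the classical construction -- encoding a k1 x k2 message block row-wise and then column-wise -- using toy single-parity-check codes; in ProductAE the constituent encoders and decoders are instead trainable neural networks, which this snippet does not attempt to reproduce.

```python
import numpy as np

def spc_encode(rows):
    """Append an even-parity bit to each row of a 2-D bit array
    (a toy stand-in for a constituent encoder)."""
    parity = rows.sum(axis=1, keepdims=True) % 2
    return np.concatenate([rows, parity], axis=1)

def product_encode(msg):
    """Classical product-code encoding of a k1 x k2 message block:
    encode every row with C1, then every column of the result with C2."""
    rowwise = spc_encode(msg)          # k1 x (n2)
    colwise = spc_encode(rowwise.T).T  # (n1) x (n2)
    return colwise

# A 2x2 message becomes a 3x3 codeword whose rows and columns
# all satisfy the single-parity-check constraint.
msg = np.array([[1, 0], [1, 1]])
cw = product_encode(msg)
assert (cw.sum(axis=0) % 2 == 0).all()
assert (cw.sum(axis=1) % 2 == 0).all()
```

The appeal of this structure for training is that the long (n1*n2)-bit code is handled through two much smaller constituent codes, which is the complexity lever the abstract refers to.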
no code implementations • 16 Jan 2023 • Mohammad Vahid Jamali, Xiyang Liu, Ashok Vardhan Makkuva, Hessam Mahdavifar, Sewoong Oh, Pramod Viswanath
Next, we derive the soft-decision based version of our algorithm, called soft-subRPA, that not only improves upon the performance of subRPA but also enables a differentiable decoding algorithm.
no code implementations • 9 Oct 2021 • Mohammad Vahid Jamali, Hamid Saber, Homayoon Hatami, Jung Hyun Bae
Due to the dimensionality challenge in channel coding, it is prohibitively complex to design and train relatively large neural channel codes via deep learning techniques.
1 code implementation • 29 Aug 2021 • Ashok Vardhan Makkuva, Xiyang Liu, Mohammad Vahid Jamali, Hessam Mahdavifar, Sewoong Oh, Pramod Viswanath
In this paper, we construct KO codes, a computationally efficient family of deep-learning-driven (encoder, decoder) pairs that outperform the state-of-the-art in reliability on the standardized AWGN channel.
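KO codes are described in the paper as inheriting the recursive structure of Reed-Muller and polar codes, whose building block is the Plotkin (u, u+v) combination. The snippet below sketches only that classical combining step over F_2, purely for illustration; KO codes replace such fixed linear combining maps with learned neural components, which is not shown here.

```python
import numpy as np

def plotkin(u, v):
    """Classical Plotkin combination: map codewords u, v of length n
    to the length-2n codeword (u, u XOR v). Reed-Muller codes are built
    by applying this recursively; KO codes learn the combining maps."""
    return np.concatenate([u, (u + v) % 2])

# Combining two length-2 words yields a length-4 word.
out = plotkin(np.array([1, 0]), np.array([0, 1]))
assert out.tolist() == [1, 0, 1, 1]
```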
no code implementations • 2 Feb 2021 • Mohammad Vahid Jamali, Xiyang Liu, Ashok Vardhan Makkuva, Hessam Mahdavifar, Sewoong Oh, Pramod Viswanath
To lower the complexity of our decoding algorithm, referred to as subRPA in this paper, we investigate different ways for pruning the projections.
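The RPA algorithm that subRPA prunes decodes Reed-Muller codes by projecting the received word onto cosets of one-dimensional subspaces of F_2^m; subRPA keeps only a subset of these projections. The sketch below shows just the enumeration-and-selection skeleton of that idea: it lists the candidate one-dimensional subspaces and keeps a fixed budget of them. The random selection rule here is a hypothetical stand-in -- the paper investigates specific (e.g. minimum-weight-based) pruning criteria, and the actual projection/aggregation decoding steps are omitted.

```python
import numpy as np
from itertools import product

def one_dim_subspaces(m):
    """Each nonzero vector of F_2^m spans a distinct 1-D subspace,
    so there are 2^m - 1 candidate projections for RPA decoding."""
    return [np.array(v) for v in product([0, 1], repeat=m) if any(v)]

def prune_projections(subspaces, budget, rng):
    """Keep only `budget` projections. Uniform random choice is an
    illustrative placeholder for the selection rules studied in subRPA."""
    idx = rng.choice(len(subspaces), size=budget, replace=False)
    return [subspaces[i] for i in idx]

subs = one_dim_subspaces(3)
assert len(subs) == 7  # 2^3 - 1 candidate subspaces
kept = prune_projections(subs, budget=4, rng=np.random.default_rng(0))
assert len(kept) == 4
```

Running the recursive projection/aggregation over 4 instead of 7 subspaces is the complexity saving the abstract refers to, at the cost of some decoding performance.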
Information Theory