Search Results for author: Panos P. Markopoulos

Found 7 papers, 1 paper with code

Convolutional Neural Network Compression via Dynamic Parameter Rank Pruning

no code implementations15 Jan 2024 Manish Sharma, Jamison Heard, Eli Saber, Panos P. Markopoulos

To address these issues, we propose an efficient training method for CNN compression via dynamic parameter rank pruning.

Neural Network Compression
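
The excerpt names the technique but not its mechanics. As a loose illustration only (not the paper's dynamic, training-time method), the sketch below compresses a convolution kernel by truncating the rank of its matricized weight tensor with an SVD; the layer sizes and kept rank are arbitrary choices.

```python
# Loose illustration only (not the paper's dynamic, training-time method):
# compress a convolution kernel by truncating the rank of its matricized
# weight tensor with an SVD and storing the two factors instead.
import numpy as np

rng = np.random.default_rng(0)
C_out, C_in, k = 64, 32, 3                        # hypothetical layer sizes
W = rng.standard_normal((C_out, C_in, k, k))      # conv kernel
W_mat = W.reshape(C_out, C_in * k * k)            # matricize to C_out x (C_in*k*k)

U, s, Vt = np.linalg.svd(W_mat, full_matrices=False)

r = 16                                            # kept rank (arbitrary choice)
W_low = (U[:, :r] * s[:r]) @ Vt[:r, :]            # rank-r reconstruction

orig_params = W_mat.size
factored_params = U[:, :r].size + r + Vt[:r, :].size
rel_err = np.linalg.norm(W_mat - W_low) / np.linalg.norm(W_mat)

print(f"parameters: {orig_params} -> {factored_params}")
print(f"relative reconstruction error at rank {r}: {rel_err:.3f}")
```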

Robust Singular Values based on L1-norm PCA

no code implementations21 Oct 2022 Duc Le, Panos P. Markopoulos

The L2-norm (sum of squared values) formulation of PCA promotes peripheral data points and, thus, makes PCA sensitive to outliers.

Image Compression
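
A quick numerical check of the sensitivity claim above, on synthetic data: the leading L2 principal direction can rotate sharply once a single strong outlier is appended. The data dimensions and outlier magnitude are arbitrary.

```python
# Numerical check on synthetic data: the leading L2 principal direction
# rotates sharply once a single strong outlier is appended.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2)) @ np.diag([3.0, 0.3])   # spread mostly along x

def leading_pc(X):
    """Leading L2 principal direction of the centered data."""
    Xc = X - X.mean(axis=0)
    return np.linalg.svd(Xc, full_matrices=False)[2][0]

pc_clean = leading_pc(X)
pc_outlier = leading_pc(np.vstack([X, [0.0, 60.0]]))       # add one outlier

angle = np.degrees(np.arccos(min(1.0, abs(pc_clean @ pc_outlier))))
print(f"rotation of the leading L2 PC caused by one outlier: {angle:.1f} degrees")
```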

Minimum Mean-Squared-Error Autocorrelation Processing in Coprime Arrays

no code implementations21 Oct 2020 Dimitris G. Chachlakis, Tongdi Zhou, Fauzia Ahmad, Panos P. Markopoulos

Coprime arrays enable Direction-of-Arrival (DoA) estimation of more sources than the number of physical sensors.
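
A small sketch of where this capability comes from, under the standard coprime geometry with one subarray at multiples of N and another at multiples of M (the pair M = 3, N = 5 is an arbitrary choice): the difference coarray contains far more unique lags than there are physical sensors.

```python
# Sketch of where the extra degrees of freedom come from: the difference
# coarray of a coprime array has many more unique lags than physical sensors.
# The coprime pair M = 3, N = 5 is an arbitrary choice.
import numpy as np

M, N = 3, 5
pos1 = N * np.arange(M)                       # M sensors at multiples of N
pos2 = M * np.arange(2 * N)                   # 2N sensors at multiples of M
positions = np.union1d(pos1, pos2)            # shared sensor at 0 counted once

lags = np.unique(positions[:, None] - positions[None, :])
print(f"physical sensors:    {positions.size}")
print(f"unique coarray lags: {lags.size}")
```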

Structured Autocorrelation Matrix Estimation for Coprime Arrays

no code implementations27 Aug 2020 Dimitris G. Chachlakis, Panos P. Markopoulos

A coprime array receiver processes a collection of received-signal snapshots to estimate the autocorrelation matrix of a larger (virtual) uniform linear array, known as coarray.

Direction of Arrival Estimation
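
For context, a minimal sketch of the baseline (unstructured) coarray estimate that such methods build on, not the paper's structured estimator: form the sample autocorrelation of the physical array from snapshots, then average its entries over equal coarray lags. The simulated source angles, snapshot count, and noise level are arbitrary.

```python
# Baseline sketch (not the paper's structured estimator): estimate the sample
# autocorrelation of the physical coprime array from snapshots, then average
# its entries over equal coarray lags to obtain the virtual-ULA values.
# Source angles, snapshot count, and noise level below are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
M, N = 3, 5
positions = np.union1d(N * np.arange(M), M * np.arange(2 * N))   # coprime array
L, T = positions.size, 500

thetas = np.deg2rad([-20.0, 35.0])                               # two far-field sources
A = np.exp(1j * np.pi * positions[:, None] * np.sin(thetas)[None, :])
S = (rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((L, T)) + 1j * rng.standard_normal((L, T)))
X = A @ S + noise                                                # received snapshots

R = X @ X.conj().T / T                                           # sample autocorrelation

lag_table = positions[:, None] - positions[None, :]
coarray = {int(lag): R[lag_table == lag].mean() for lag in np.unique(lag_table)}

print("estimated autocorrelation at lag 0:", np.round(coarray[0], 3))
print("number of estimated coarray lags:  ", len(coarray))
```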

The Exact Solution to Rank-1 L1-norm TUCKER2 Decomposition

1 code implementation31 Oct 2017 Panos P. Markopoulos, Dimitris G. Chachlakis, Evangelos E. Papalexakis

We study rank-1 L1-norm-based TUCKER2 (L1-TUCKER2) decomposition of 3-way tensors, treated as a collection of $N$ $D \times M$ matrices that are to be jointly decomposed.

Combinatorial Optimization
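
A brute-force sketch, assuming the rank-1 L1-TUCKER2 metric is the maximization of $\sum_n |u^\top X_n v|$ over unit-norm $u$ and $v$: for any fixed sign pattern $b \in \{\pm 1\}^N$ the inner problem is solved by the top singular vectors of $\sum_n b_n X_n$, so exhausting the sign patterns yields the exact optimum on toy sizes.

```python
# Brute-force sketch, assuming the rank-1 L1-TUCKER2 metric
#   max_{||u||=||v||=1}  sum_n |u^T X_n v| .
# For a fixed sign pattern b, the inner problem u^T (sum_n b_n X_n) v is solved
# by the top singular vectors, so searching all sign patterns is exact.
import itertools
import numpy as np

rng = np.random.default_rng(3)
N, D, M = 5, 4, 3
X = rng.standard_normal((N, D, M))                  # N matrices of size D x M

best_val, best_uv = -np.inf, None
for b in itertools.product([1.0, -1.0], repeat=N):
    A = np.tensordot(np.array(b), X, axes=1)        # sum_n b_n X_n
    U, s, Vt = np.linalg.svd(A)
    if s[0] > best_val:
        best_val, best_uv = s[0], (U[:, 0], Vt[0])

u, v = best_uv
metric = np.abs(np.einsum('d,ndm,m->n', u, X, v)).sum()
print(f"exact rank-1 L1-TUCKER2 metric: {best_val:.4f} (check: {metric:.4f})")
```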

Efficient L1-Norm Principal-Component Analysis via Bit Flipping

no code implementations6 Oct 2016 Panos P. Markopoulos, Sandipan Kundu, Shubham Chamadia, Dimitris A. Pados

It was shown recently that the $K$ L1-norm principal components (L1-PCs) of a real-valued data matrix $\mathbf X \in \mathbb R^{D \times N}$ ($N$ data samples of $D$ dimensions) can be exactly calculated with cost $\mathcal{O}(2^{NK})$ or, when advantageous, $\mathcal{O}(N^{dK - K + 1})$ where $d=\mathrm{rank}(\mathbf X)$, $K<d$ [1],[2].

Dimensionality Reduction
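
A minimal sketch of the bit-flipping idea for $K = 1$, assuming the standard reformulation $\max_{\|q\|=1} \|X^\top q\|_1 = \max_{b \in \{\pm 1\}^N} \|Xb\|_2$ with $q = Xb/\|Xb\|_2$: greedily flip whichever bit improves the metric until no single flip helps. This illustrates the idea only and is not a reproduction of the paper's exact algorithm.

```python
# Minimal sketch of the bit-flipping idea for K = 1, assuming the reformulation
#   max_{||q||=1} ||X^T q||_1 = max_{b in {+-1}^N} ||X b||_2,  q = X b / ||X b||_2.
# Greedily flip whichever bit increases ||X b||_2 until no single flip helps.
import numpy as np

def l1_pc_bitflip(X, max_sweeps=100, seed=0):
    """Approximate L1 principal component of X (D x N) by greedy bit flipping."""
    _, N = X.shape
    b = np.random.default_rng(seed).choice([-1.0, 1.0], size=N)
    best = np.linalg.norm(X @ b)
    for _ in range(max_sweeps):
        improved = False
        for n in range(N):
            b[n] = -b[n]                      # tentative flip of bit n
            val = np.linalg.norm(X @ b)
            if val > best:
                best, improved = val, True    # keep the flip
            else:
                b[n] = -b[n]                  # revert
        if not improved:
            break
    q = X @ b
    return q / np.linalg.norm(q), best

rng = np.random.default_rng(4)
X = rng.standard_normal((5, 12))              # D = 5 dimensions, N = 12 samples
q, metric = l1_pc_bitflip(X)
# ||X^T q||_1 >= ||X b||_2 always holds for q built from b this way.
print(f"||X^T q||_1 = {np.abs(X.T @ q).sum():.4f},  ||X b||_2 = {metric:.4f}")
```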

Some Options for L1-Subspace Signal Processing

no code implementations4 Sep 2013 Panos P. Markopoulos, George N. Karystinos, Dimitris A. Pados

We describe ways to define and calculate $L_1$-norm signal subspaces which are less sensitive to outlying data than $L_2$-calculated subspaces.

Dimensionality Reduction, Direction of Arrival Estimation
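
In contrast to the greedy bit-flipping sketch above, the $K = 1$ L1 subspace can be computed exactly by exhausting all sign patterns under the same $\|Xb\|_2$ reformulation; this is exponential in $N$ and meant only as a toy illustration. The exact L1 principal component attains an L1 projection metric at least as large as that of the ordinary L2 principal component.

```python
# Toy exact computation of the K = 1 L1 principal component, assuming the same
# reformulation  max_{||q||=1} ||X^T q||_1 = max_{b in {+-1}^N} ||X b||_2,
# but exhausting all sign patterns (exponential in N, toy sizes only).
import itertools
import numpy as np

def l1_pc_exact(X):
    best_val, best_b = -np.inf, None
    for b in itertools.product([1.0, -1.0], repeat=X.shape[1]):
        val = np.linalg.norm(X @ np.array(b))
        if val > best_val:
            best_val, best_b = val, np.array(b)
    q = X @ best_b
    return q / np.linalg.norm(q)

rng = np.random.default_rng(5)
X = rng.standard_normal((3, 8))               # D = 3 dimensions, N = 8 samples
q_l1 = l1_pc_exact(X)
q_l2 = np.linalg.svd(X, full_matrices=False)[0][:, 0]   # ordinary L2 PC

# The exact L1-PC attains the largest possible L1 projection metric.
print(f"L1 metric of exact L1-PC: {np.abs(X.T @ q_l1).sum():.4f}")
print(f"L1 metric of L2-PC:       {np.abs(X.T @ q_l2).sum():.4f}")
```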
