Exact high-dimensional asymptotics for Support Vector Machine

13 May 2019  ·  Haoyang Liu ·

The Support Vector Machine (SVM) is one of the most widely used classification methods. In this paper, we consider the soft-margin SVM applied to data points with independent features, where the sample size $n$ and the feature dimension $p$ grow to $\infty$ at a fixed ratio $p/n\rightarrow \delta$. We propose a set of equations that exactly characterizes the asymptotic behavior of the SVM. In particular, we give exact formulas for (1) the variability of the optimal coefficients, (2) the proportion of data points lying on the margin boundary (i.e., the number of support vectors divided by $n$), (3) the final objective function value, and (4) the expected misclassification error on new data points, which in particular yields an exact formula for the optimal tuning parameter under a given data-generating mechanism. We first establish these formulas in the case where the label $y\in\{+1,-1\}$ is independent of the feature $x$. The results are then generalized to the case where the label $y\in\{+1,-1\}$ is allowed to depend on the feature $x$ through a linear combination $a_0^Tx$. These formulas for the non-smooth hinge loss are analogous to the recent results of \citet{sur2018modern} for the smooth logistic loss. Our approach is based on heuristic leave-one-out calculations.
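Although the paper derives its asymptotic equations analytically, the regime it studies is straightforward to probe empirically. Below is a minimal simulation sketch (not from the paper): it fits a soft-margin linear SVM on Gaussian data with $p = \delta n$ independent features and labels depending on $x$ only through a linear combination $a_0^Tx$, then records the fraction of support vectors and the misclassification error on fresh data. The logistic link for generating labels, the helper name `run_trial`, and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def run_trial(n=2000, delta=0.5, C=1.0, signal=2.0, seed=0):
    """Fit a soft-margin linear SVM in the proportional regime p = delta * n."""
    rng = np.random.default_rng(seed)
    p = int(delta * n)
    X = rng.standard_normal((n, p))          # independent N(0, 1) features
    a0 = rng.standard_normal(p)
    a0 /= np.linalg.norm(a0)                 # unit-norm true direction

    # Labels depend on x only through a0^T x; a logistic link is one
    # illustrative choice of "general dependence", not the paper's.
    prob = 1.0 / (1.0 + np.exp(-signal * (X @ a0)))
    y = np.where(rng.random(n) < prob, 1, -1)

    svm = LinearSVC(C=C, loss="hinge", dual=True, max_iter=50_000).fit(X, y)

    # Support vectors: points with margin y * f(x) <= 1 (up to tolerance).
    margins = y * svm.decision_function(X)
    frac_sv = np.mean(margins <= 1.0 + 1e-6)

    # Misclassification error on fresh data from the same mechanism.
    X_new = rng.standard_normal((n, p))
    prob_new = 1.0 / (1.0 + np.exp(-signal * (X_new @ a0)))
    y_new = np.where(rng.random(n) < prob_new, 1, -1)
    test_err = np.mean(svm.predict(X_new) != y_new)
    return frac_sv, test_err

for delta in (0.2, 0.5, 1.0, 2.0):
    frac_sv, test_err = run_trial(delta=delta)
    print(f"delta={delta:.1f}: frac support vectors={frac_sv:.3f}, "
          f"test error={test_err:.3f}")
```

As $\delta$, $C$, and the signal strength vary, the empirical fractions and errors produced by such a simulation are the quantities that the paper's exact asymptotic equations are meant to predict in the limit.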
