no code implementations • 5 Apr 2024 • Mahesh Lorik Yadav, Harish Guruprasad Ramaswamy, Chandrashekar Lakshminarayanan
Unlike deep linear networks, the DLGN is capable of learning non-linear features (which are then linearly combined), and unlike ReLU networks, these features are ultimately simple -- each feature is effectively an indicator function for a region compactly described as an intersection of (number of layers) half-spaces in the input space.
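A minimal sketch of the kind of feature described above: an indicator function for the intersection of half-spaces, one per layer. The weights and biases here are illustrative placeholders, not values from the paper, and the function name is hypothetical.

```python
def halfspace_indicator_feature(x, weights, biases):
    """Return 1.0 if x lies in every half-space {z : w.z + b > 0},
    i.e. in the intersection region, else 0.0."""
    return float(all(
        sum(wi * xi for wi, xi in zip(w, x)) + b > 0
        for w, b in zip(weights, biases)
    ))

# Illustrative example: two half-spaces in 2D (x1 > 0 and x2 > 0),
# whose intersection is the open positive quadrant.
weights = [[1.0, 0.0], [0.0, 1.0]]
biases = [0.0, 0.0]
print(halfspace_indicator_feature([0.5, 0.5], weights, biases))   # inside -> 1.0
print(halfspace_indicator_feature([-0.5, 0.5], weights, biases))  # outside -> 0.0
```

Each feature is thus fully described by a small set of hyperplanes (one per layer), which is what makes it "simple" relative to the piecewise structure a ReLU network can express.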
no code implementations • NeurIPS 2020 • Shiv Kumar Tavker, Harish Guruprasad Ramaswamy, Harikrishna Narasimhan
We present a statistically consistent algorithm for constrained classification problems where the objective (e.g. F-measure, G-mean) and the constraints (e.g. demographic parity, coverage) are defined by general functions of the confusion matrix.
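To make "general functions of the confusion matrix" concrete, here is an illustrative sketch (not the paper's algorithm) that computes one such objective, the G-mean, from a binary confusion matrix; the helper names are assumptions for this example.

```python
import math

def confusion_matrix(y_true, y_pred):
    """Binary confusion-matrix entries (tp, fp, fn, tn)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def g_mean(tp, fp, fn, tn):
    """Geometric mean of sensitivity (TPR) and specificity (TNR),
    a non-decomposable function of the confusion matrix."""
    tpr = tp / (tp + fn) if tp + fn else 0.0
    tnr = tn / (tn + fp) if tn + fp else 0.0
    return math.sqrt(tpr * tnr)

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
print(g_mean(*confusion_matrix(y_true, y_pred)))  # -> 0.666...
```

Objectives like this are not simple averages over examples, which is why consistency for optimizing them under confusion-matrix-based constraints requires care.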