2 code implementations • 9 Apr 2021 • Matthew Sotoudeh, Aditya V. Thakur
This has motivated a large number of techniques for finding unsafe behavior in DNNs.
1 code implementation • 9 Jan 2021 • Matthew Sotoudeh, Aditya V. Thakur
Formally, DNNs are complicated vector-valued functions that come in a variety of sizes and are used in a variety of applications.
1 code implementation • 14 Sep 2020 • Matthew Sotoudeh, Aditya V. Thakur
In this paper, we argue that analogy making should be seen as a core primitive in software engineering.
1 code implementation • 11 Sep 2020 • Matthew Sotoudeh, Aditya V. Thakur
We present a framework, parameterized by the abstract domain and the activation functions used in the DNN, that can be used to construct a corresponding ANN.
1 code implementation • 17 Aug 2019 • Matthew Sotoudeh, Aditya V. Thakur
Analysis and manipulation of trained neural networks is a challenging and important problem.
2 code implementations • NeurIPS 2019 • Matthew Sotoudeh, Aditya V. Thakur
A linear restriction of a function is the same function with its domain restricted to points on a given line.
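The definition above can be sketched in a few lines of Python (a minimal illustration assuming NumPy; the example function, weights, and endpoints are hypothetical and not taken from the paper):

```python
import numpy as np

def linear_restriction(f, start, end):
    """Restrict f to the line through `start` and `end`.

    Parameterizing the line as x(t) = start + t * (end - start) turns the
    restriction into a one-dimensional function g(t) = f(x(t)); t in [0, 1]
    traces the segment from `start` to `end`.
    """
    start = np.asarray(start, dtype=float)
    end = np.asarray(end, dtype=float)
    return lambda t: f(start + t * (end - start))

# Illustrative example: restrict a tiny piecewise-linear function
# (a ReLU of a dot product, standing in for one neuron) to a line in R^2.
f = lambda x: np.maximum(x @ np.array([1.0, -0.5]), 0.0)
g = linear_restriction(f, [0.0, 0.0], [2.0, 2.0])

# At t = 0.5, the input is [1.0, 1.0], so g(0.5) = max(1.0 - 0.5, 0) = 0.5.
```

Because a ReLU network is piecewise linear, its restriction to a line is a piecewise-linear function of the single parameter t, which is what makes such restrictions tractable to compute exactly.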
no code implementations • 20 Feb 2018 • Matthew Sotoudeh, Sara S. Baghsorkhi
For DeepSpeech, DeepThin-compressed networks achieve better test loss than all other compression methods, reaching a 28% better result than rank factorization, 27% better than pruning, 20% better than hand-tuned same-size networks, and 12% better than HashedNets.