no code implementations • 19 Oct 2021 • Atharv Sonwane, Sharad Chitlangia, Tirtharaj Dash, Lovekesh Vig, Gautam Shroff, Ashwin Srinivasan
The ability to solve Bongard problems is an example of such a test.
no code implementations • 21 Jul 2021 • Tirtharaj Dash, Sharad Chitlangia, Aditya Ahuja, Ashwin Srinivasan
We present a survey of ways in which existing scientific knowledge is included when constructing models with neural networks.
1 code implementation • 7 Jun 2021 • Vijay Janapa Reddi, Brian Plancher, Susan Kennedy, Laurence Moroney, Pete Warden, Anant Agarwal, Colby Banbury, Massimo Banzi, Matthew Bennett, Benjamin Brown, Sharad Chitlangia, Radhika Ghosal, Sarah Grafman, Rupert Jaeger, Srivatsan Krishnan, Maximilian Lam, Daniel Leiker, Cara Mann, Mark Mazumder, Dominic Pajak, Dhilan Ramaprasad, J. Evan Smith, Matthew Stewart, Dustin Tingley
Broadening access to both computational and educational resources is critical to diffusing machine-learning (ML) innovation.
1 code implementation • CVPR 2022 • Hanjiang Hu, Zuxin Liu, Sharad Chitlangia, Akhil Agnihotri, Ding Zhao
To this end, we introduce an easy-to-compute information-theoretic surrogate metric to quickly and quantitatively evaluate LiDAR placement for 3D detection of different types of objects.
no code implementations • 27 Feb 2021 • Tirtharaj Dash, Sharad Chitlangia, Aditya Ahuja, Ashwin Srinivasan
We present a survey of ways in which domain-knowledge has been included when constructing models with neural networks.
no code implementations • 25 Jun 2020 • Ajay Subramanian, Sharad Chitlangia, Veeky Baths
In this paper, we comprehensively review a large number of findings in both neuroscience and psychology that support reinforcement learning as a promising candidate for modeling learning and decision making in the brain.
1 code implementation • 2 Oct 2019 • Srivatsan Krishnan, Maximilian Lam, Sharad Chitlangia, Zishen Wan, Gabriel Barth-Maron, Aleksandra Faust, Vijay Janapa Reddi
We believe that this is the first of many future works on enabling computationally energy-efficient and sustainable reinforcement learning.