no code implementations • 2 Sep 2021 • Henry W. J. Reeve, Timothy I. Cannings, Richard J. Samworth
We formulate the problem as one of constrained optimisation, where we seek a low-complexity, data-dependent selection set on which, with a guaranteed probability, the regression function is uniformly at least as large as the threshold; subject to this constraint, we would like the region to contain as much mass under the marginal feature distribution as possible.
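In notation assumed here for illustration (a regression function $\eta$, threshold $\tau$, marginal feature distribution $\mu$, confidence level $1-\alpha$ and a low-complexity class $\mathcal{A}$ of candidate sets, none of which are fixed by the snippet above), a minimal sketch of that constrained optimisation reads:

```latex
% Illustrative sketch only; the symbols are assumptions, not necessarily the paper's notation.
% Seek a data-dependent selection set of maximal marginal mass, subject to the regression
% function exceeding the threshold uniformly on the set with guaranteed probability
% (taken over the training data).
\[
  \text{maximise } \mu(\hat{A}) \text{ over data-dependent } \hat{A} \in \mathcal{A}
  \quad \text{subject to} \quad
  \mathbb{P}\Bigl( \inf_{x \in \hat{A}} \eta(x) \ge \tau \Bigr) \ge 1 - \alpha .
\]
```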
no code implementations • 8 Jun 2021 • Henry W. J. Reeve, Timothy I. Cannings, Richard J. Samworth
In transfer learning, we wish to make inference about a target population when we have access to data both from the distribution itself, and from a different but related source distribution.
2 code implementations • 8 Feb 2021 • Jacob R. Bradley, Timothy I. Cannings
Based on this model, we then propose a new procedure for estimating biomarkers such as tumour mutation burden and tumour indel burden.
no code implementations • 25 Nov 2019 • Timothy I. Cannings
Random projections offer an appealing and flexible approach to a wide range of large-scale statistical problems.
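As a hedged illustration of the basic idea only (not code from the paper; the dimensions, the Gaussian projection matrix and the toy data below are assumptions), a single random projection compresses high-dimensional features before any simple method is applied:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional data: n observations in p dimensions (assumed sizes).
n, p, d = 200, 1000, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0                               # signal in the first few coordinates
y = (X @ beta + rng.standard_normal(n) > 0).astype(int)

# A single Gaussian random projection from p dimensions down to d.
A = rng.standard_normal((d, p)) / np.sqrt(d)
X_proj = X @ A.T                             # projected features, shape (n, d)

# Pairwise distances are approximately preserved, so simple procedures can be
# run on X_proj; aggregating over many independent projections A is one common
# strategy (the aggregation scheme is left unspecified here).
```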
no code implementations • 29 May 2018 • Timothy I. Cannings, Yingying Fan, Richard J. Samworth
One consequence of these results is that the $k$nn and SVM classifiers are robust to imperfect training labels, in the sense that the rate of convergence of the excess risks of these classifiers remains unchanged; in fact, our theoretical and empirical results even show that in some cases, imperfect labels may improve the performance of these methods.
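A hedged, purely illustrative simulation of this robustness (scikit-learn's KNeighborsClassifier, the noise rate and the Gaussian toy model below are assumptions, not the paper's experiments):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Two Gaussian classes in R^2 (assumed toy model).
n = 2000
y = rng.integers(0, 2, size=n)
X = rng.standard_normal((n, 2)) + 1.5 * y[:, None]

# Corrupt a fixed proportion of the training labels at random.
noise_rate = 0.2
flip = rng.random(n) < noise_rate
y_noisy = np.where(flip, 1 - y, y)

# Independent clean test set.
y_test = rng.integers(0, 2, size=n)
X_test = rng.standard_normal((n, 2)) + 1.5 * y_test[:, None]

# Compare test accuracy after training on clean versus corrupted labels.
for labels, name in [(y, "clean"), (y_noisy, "noisy")]:
    clf = KNeighborsClassifier(n_neighbors=25).fit(X, labels)
    print(f"{name} training labels: test accuracy {clf.score(X_test, y_test):.3f}")
```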
no code implementations • 3 Apr 2017 • Timothy I. Cannings, Thomas B. Berrett, Richard J. Samworth
We derive a new asymptotic expansion for the global excess risk of a local-$k$-nearest neighbour classifier, where the choice of $k$ may depend upon the test point.
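A hedged sketch of a classifier in which $k$ varies with the test point (the rule for choosing $k$ below, based on a local distance scale, is a hypothetical heuristic for illustration, not the expansion-optimal choice derived in the paper):

```python
import numpy as np

def local_knn_predict(X_train, y_train, X_test, k_of_x):
    """Majority-vote nearest-neighbour rule where k = k_of_x(x, dists) may differ across test points."""
    preds = np.empty(len(X_test), dtype=y_train.dtype)
    for i, x in enumerate(X_test):
        dists = np.linalg.norm(X_train - x, axis=1)
        k = int(np.clip(k_of_x(x, dists), 1, len(X_train)))
        nearest = np.argsort(dists)[:k]
        preds[i] = np.round(y_train[nearest].mean())   # binary labels in {0, 1}
    return preds

# Example local choice of k (purely illustrative): use more neighbours where the
# nearest training point is close relative to the typical distance.
def k_of_x(x, dists):
    scale = np.median(dists)
    return 10 + int(20 * np.exp(-dists.min() / scale))
```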