no code implementations • 13 May 2023 • Md. Rabiul Islam, Daniel Massicotte, Philippe Y. Massicotte, Wei-Ping Zhu
Existing approaches employ very large and complex deep ConvNets or 2SRNN-based domain adaptation methods to approximate the distribution shift caused by this inter-session and inter-subject data variability.
no code implementations • 9 Jan 2023 • Jean-Sébastien Dessureault, Daniel Massicotte
The primary contribution of the AI$^{2}$ framework is that it allows a user to call machine learning algorithms in English, making its interface easier to use.
no code implementations • 17 Jun 2022 • Jean-Sébastien Dessureault, Daniel Massicotte
The algorithm can also find the optimal number of clusters for the FCM and k-means algorithms, according to the consistency of the clusters as measured by the Silhouette Index (SI).
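The selection criterion described above (picking the number of clusters that maximizes the Silhouette Index) can be sketched as follows. This is an illustrative example, not the authors' code: the synthetic three-blob data, the k-means implementation, and the search range for k are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic data with 3 well-separated blobs (assumed for illustration).
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 5.0, 10.0)])

# Sweep candidate cluster counts and keep the k with the highest SI.
best_k, best_si = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    si = silhouette_score(X, labels)
    if si > best_si:
        best_k, best_si = k, si

print(best_k)  # the k whose clusters are most internally consistent
```

On data this cleanly separated, the SI peaks at the true number of blobs; the same sweep applies unchanged to FCM by swapping in a fuzzy clustering step.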
no code implementations • 17 Jun 2022 • Jean-Sébastien Dessureault, Daniel Massicotte
This paper proposes a novel metric named "Explainable Global Error Weighted on Feature Importance" (xGEWFI).
no code implementations • 17 Jun 2022 • Jean-Sébastien Dessureault, Daniel Massicotte
The method applies a regression or a classification, evaluates the results, and gives a diagnosis about the best dimensionality reduction process in this specific supervised learning context.
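The evaluation loop that snippet describes, fitting a downstream model under each candidate dimensionality-reduction process and diagnosing the best one, can be sketched like this. It is a minimal sketch under assumed choices (Iris data, PCA as the only reduction candidate, logistic regression as the downstream classifier), not the paper's actual procedure.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)

# Candidate pipelines: no reduction vs. PCA to 2 components (assumed).
candidates = {
    "none": make_pipeline(LogisticRegression(max_iter=500)),
    "pca2": make_pipeline(PCA(n_components=2), LogisticRegression(max_iter=500)),
}

# Evaluate each candidate on the supervised task and diagnose the winner.
scores = {name: cross_val_score(p, X, y, cv=5).mean() for name, p in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The same harness works for regression by swapping the estimator and scoring metric.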
no code implementations • 20 Nov 2021 • Jean-Sebastien Dessureault, Daniel Massicotte
A clustering algorithm is an unsupervised method.
no code implementations • 22 Dec 2020 • Jean-Sébastien Dessureault, Jonathan Simard, Daniel Massicotte
Based on the resulting clusters and VI, a linear regression is applied to predict the VI of each district of a city.
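A hypothetical sketch of the pipeline that snippet outlines: cluster the districts, then fit a linear regression to predict a vulnerability index (VI) per district. The features, the synthetic VI, and the use of one-hot cluster membership as a regression input are all invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Invented district features (e.g. income, density) and a synthetic VI.
X = rng.normal(size=(120, 2))
vi = 0.7 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0, 0.05, 120)

# Step 1: cluster the districts.
clusters = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)

# Step 2: regress VI on the raw features plus one-hot cluster membership.
features = np.hstack([X, np.eye(4)[clusters]])
reg = LinearRegression().fit(features, vi)
print(round(reg.score(features, vi), 3))  # in-sample R^2
```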
no code implementations • 8 Jun 2019 • Md. Rabiul Islam, Daniel Massicotte, Francois Nougarou, Philippe Massicotte, Wei-Ping Zhu
Without using any pre-trained models, our proposed S-ConvNet and All-ConvNet achieve recognition accuracy highly competitive with more complex state-of-the-art methods for neuromuscular activity recognition based on instantaneous HD-sEMG images, while using a ~12× smaller dataset and far fewer learnable parameters.