Transfer learning is a methodology where weights from a model trained on one task are reused on another task, either (a) to construct a fixed feature extractor, or (b) as a weight initialization for fine-tuning.
(Image credit: Subodh Malgonde)
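A minimal sketch of the two usage modes, assuming a PyTorch/torchvision setting; ResNet-18 and the 10-class head are illustrative choices, not part of this page:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pre-trained on ImageNet.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# (a) Fixed feature extractor: freeze all pre-trained weights and
# train only a newly attached task-specific head.
for param in backbone.parameters():
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # 10 = classes in the new task
head_optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=1e-2)

# (b) Fine-tuning: start from the pre-trained weights but keep all
# layers trainable, typically with a smaller learning rate.
finetune_model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
finetune_model.fc = nn.Linear(finetune_model.fc.in_features, 10)
ft_optimizer = torch.optim.SGD(finetune_model.parameters(), lr=1e-3, momentum=0.9)
```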
Forecasting models that are trained across sets of many time series, known as Global Forecasting Models (GFMs), have recently shown promising results in forecasting competitions and real-world applications, outperforming many state-of-the-art univariate forecasting techniques.
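A toy illustration of the global idea, assuming scikit-learn and synthetic data; the Ridge model, the lag count, and the series are placeholders, not any paper's method:

```python
import numpy as np
from sklearn.linear_model import Ridge

# A minimal "global" forecaster: one model fit on lag windows pooled
# from every series, instead of a separate model per series.
def make_windows(series, lags=4):
    X, y = [], []
    for t in range(lags, len(series)):
        X.append(series[t - lags:t])
        y.append(series[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
all_series = [rng.normal(size=60).cumsum() for _ in range(20)]  # toy dataset

X = np.vstack([make_windows(s)[0] for s in all_series])
y = np.concatenate([make_windows(s)[1] for s in all_series])

global_model = Ridge().fit(X, y)  # a single model trained across all series
next_value = global_model.predict(all_series[0][-4:].reshape(1, -1))
```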
The target scenario is Acoustic Model training based on this platform.
In this paper, a new approach is proposed for designing transferable soft sensors.
In this paper, we tackle an open research question in transfer learning, which is selecting a model initialization to achieve high performance on a new task, given several pre-trained models.
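The paper's selection criterion is not given here; a common proxy, shown below as a hypothetical sketch, is to rank candidate checkpoints by a quick linear-probe score on the new task's data:

```python
import torch
from torchvision import models
from sklearn.linear_model import LogisticRegression

# Illustrative sketch (not the paper's method): score each candidate
# pre-trained backbone by fitting a linear classifier on its frozen
# features, then pick the highest-scoring initialization.
def linear_probe_score(backbone, X, y):
    backbone.fc = torch.nn.Identity()  # expose penultimate features
    backbone.eval()
    with torch.no_grad():
        feats = backbone(X).numpy()
    # Training accuracy is used here only as a cheap stand-in for a
    # proper held-out evaluation.
    return LogisticRegression(max_iter=1000).fit(feats, y).score(feats, y)

X = torch.randn(64, 3, 224, 224)          # toy stand-in for the new task's data
y = torch.randint(0, 2, (64,)).numpy()

candidates = {
    "resnet18": models.resnet18(weights=models.ResNet18_Weights.DEFAULT),
    "resnet34": models.resnet34(weights=models.ResNet34_Weights.DEFAULT),
}
best = max(candidates, key=lambda name: linear_probe_score(candidates[name], X, y))
```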
The common encoder in our architecture can capture useful common features present in the different tasks.
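A generic sketch of such a shared-encoder multi-task layout; the dimensions and head count are made up for illustration and this is not the paper's exact architecture:

```python
import torch
import torch.nn as nn

# One common encoder feeds several task-specific heads, so features
# shared across tasks are learned once.
class SharedEncoderMTL(nn.Module):
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, d) for d in task_out_dims]
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.encoder(x))

model = SharedEncoderMTL(in_dim=32, hidden_dim=64, task_out_dims=[10, 3])
out = model(torch.randn(8, 32), task_id=0)  # logits for task 0
```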
Cross-prompt automated essay scoring (AES) requires the system to use non-target-prompt essays to award scores to a target-prompt essay.
To cope with the forgetting problem, many class-incremental learning (CIL) methods transfer the knowledge of old classes by storing a small set of exemplar samples in a size-constrained memory buffer.
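A hedged sketch of such a size-constrained exemplar buffer; random selection is used for brevity (herding-based selection is a common alternative), and nothing below is a specific paper's method:

```python
import random

# When a new class arrives, the per-class quota shrinks so the total
# number of stored exemplars never exceeds the fixed capacity.
class ExemplarBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.exemplars = {}  # class_id -> list of stored samples

    def add_class(self, class_id, samples):
        self.exemplars[class_id] = list(samples)
        quota = self.capacity // len(self.exemplars)
        # Shrink every class's exemplar set to the new quota.
        for cid in self.exemplars:
            if len(self.exemplars[cid]) > quota:
                self.exemplars[cid] = random.sample(self.exemplars[cid], quota)

buffer = ExemplarBuffer(capacity=200)
buffer.add_class(0, range(1000))
buffer.add_class(1, range(1000))  # each class now holds at most 100 exemplars
```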
Using gesture recognition as a case study, we show SOEL can be used for online few-shot learning of new classes of pre-recorded gesture data and rapid online learning of new gestures from data streamed live from a Dynamic Active-pixel Vision Sensor to an Intel Loihi neuromorphic research processor.
With model interpretability being of paramount importance, especially in the healthcare field, this study utilises LIME explanations to distinguish PD from non-PD, using visual superpixels on the DaTscans.
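A minimal sketch of LIME superpixel explanations using the lime package; the classifier function and the input image below are placeholders for the study's actual DaTscan model and data:

```python
import numpy as np
from lime import lime_image

# Dummy 2-class probability scores (e.g., PD vs non-PD), for
# illustration only; a real classifier would go here.
def predict_proba(images):
    return np.tile([0.3, 0.7], (len(images), 1))

image = np.random.rand(128, 128, 3)  # placeholder for a DaTscan image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_proba, top_labels=1, num_samples=100
)
# Highlight the superpixels most responsible for the top predicted class.
highlighted, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
```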