Learning Semantic Representations
12 papers with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in Learning Semantic Representations.
Latest papers with no code
NuTime: Numerically Multi-Scaled Embedding for Large-Scale Time Series Pretraining
In this work, we make key technical contributions tailored to the numerical properties of time-series data that allow the model to scale to large datasets, e.g., millions of temporal sequences.
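The NuTime architecture itself is not reproduced here; as a hedged illustration of the general idea of a numerically-aware, multi-scale embedding, the sketch below (hypothetical names, not the paper's code) normalizes each window by its own mean and scale while keeping those statistics as explicit features, so magnitude information survives normalization:

```python
import math

def embed_window(window):
    """Toy 'numerically multi-scaled' featurization (illustrative only):
    represent a window by its mean, its scale, and the scale-normalized
    values, so the numeric magnitude is preserved as explicit features
    rather than discarded by normalization."""
    mean = sum(window) / len(window)
    var = sum((x - mean) ** 2 for x in window) / len(window)
    scale = math.sqrt(var) or 1.0  # guard against flat (zero-variance) windows
    normalized = [(x - mean) / scale for x in window]
    return {"mean": mean, "scale": scale, "values": normalized}

emb = embed_window([10.0, 20.0, 30.0, 40.0])
```

A model can then embed the mean and scale tokens separately from the normalized shape, which is one plausible reading of "numerically multi-scaled".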
Learning Semantic Representations to Verify Hardware Designs
We evaluate Design2Vec on three real-world hardware designs, including an industrial chip used in commercial data centers.
SEEC: Semantic Vector Federation across Edge Computing Environments
Specifically, for scenarios where multiple edge locations can engage in joint learning, we adapt the recently proposed federated learning techniques for semantic vector embedding.
IITK at the FinSim Task: Hypernym Detection in Financial Domain via Context-Free and Contextualized Word Embeddings
We leverage both context-dependent and context-independent word embeddings in our analysis.
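One common way such embeddings are used for hypernym detection is to rank candidate hypernyms by vector similarity to the term. The following sketch uses hand-made toy vectors (not FinSim data or the paper's actual model) and plain cosine similarity to show the idea:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy, hand-made embeddings for illustration only.
embeddings = {
    "bond":   [0.9, 0.1, 0.2],
    "debt":   [0.8, 0.2, 0.1],
    "equity": [0.1, 0.9, 0.2],
}

def rank_hypernyms(term, candidates):
    """Rank candidate hypernyms by cosine similarity to the term vector."""
    return sorted(candidates,
                  key=lambda c: cosine(embeddings[term], embeddings[c]),
                  reverse=True)

print(rank_hypernyms("bond", ["debt", "equity"]))  # "debt" ranks first
```

In practice the context-dependent (e.g. BERT-style) and context-independent (e.g. word2vec-style) vectors would come from pretrained models rather than a hand-built table.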
On the Limits of Learning to Actively Learn Semantic Representations
We conclude that the current applicability of LTAL for improving data efficiency in learning semantic meaning representations is limited.
Towards Deep and Representation Learning for Talent Search at LinkedIn
In this paper, we present the results of our application of deep and representation learning models on LinkedIn Recruiter.
Multiplicative Tree-Structured Long Short-Term Memory Networks for Semantic Representations
In addition to syntactic trees, we also investigate the use of Abstract Meaning Representation in tree-structured models, in order to incorporate both syntactic and semantic information from the sentence.
Investigating Inner Properties of Multimodal Representation and Semantic Compositionality with Brain-based Componential Semantics
Considering that multimodal models are originally motivated by human concept representations, we assume that correlating multimodal representations with brain-based semantics would interpret their inner properties to answer the above questions.
Dynamic Compositional Neural Networks over Tree Structure
Tree-structured neural networks have proven to be effective in learning semantic representations by exploiting syntactic information.
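The core mechanism behind such models is recursive composition: a phrase vector is built bottom-up from the vectors of its syntactic children. The sketch below shows that recursion with a deliberately simple elementwise average and toy word vectors; real Tree-LSTMs use gated, learned composition functions instead:

```python
def compose(tree, embed):
    """Recursively build a phrase vector from a binary parse tree.
    Illustrative only: composition here is an elementwise average,
    standing in for a learned Tree-LSTM cell."""
    if isinstance(tree, str):      # leaf node: a word
        return embed[tree]
    left, right = tree             # internal node: (left_subtree, right_subtree)
    lv = compose(left, embed)
    rv = compose(right, embed)
    return [(a + b) / 2.0 for a, b in zip(lv, rv)]

# Toy word vectors (hypothetical).
embed = {"the": [1.0, 0.0], "cat": [0.0, 1.0], "sleeps": [0.5, 0.5]}
sentence = (("the", "cat"), "sleeps")   # parse: ((the cat) sleeps)
vec = compose(sentence, embed)
```

The key point the sketch preserves is that the composition order follows the parse tree, so different syntactic analyses of the same words yield different sentence vectors.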