Learning Semantic Representations

12 papers with code • 0 benchmarks • 1 dataset

Latest papers with no code

NuTime: Numerically Multi-Scaled Embedding for Large-Scale Time Series Pretraining

no code yet • 11 Oct 2023

In this work, we make key technical contributions that are tailored to the numerical properties of time-series data and allow the model to scale to large datasets, e.g., millions of temporal sequences.
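
The abstract does not specify the architecture, but a common way to handle the numeric range of raw time series is to embed each window's normalized shape and its statistics separately. The sketch below illustrates that idea with made-up names and dimensions; it is not NuTime's actual model.

```python
# Hypothetical multi-scale window embedding for a raw time series.
# Window statistics (mean, std) are embedded separately from the
# normalized values, one way to cope with wildly varying magnitudes.
import numpy as np

rng = np.random.default_rng(0)
d_model, window = 64, 16

# Learned projections (randomly initialized here for illustration).
W_val = rng.normal(0, 0.02, (window, d_model))   # normalized window values
W_mean = rng.normal(0, 0.02, (d_model,))         # window mean
W_std = rng.normal(0, 0.02, (d_model,))          # window scale

def embed_series(x: np.ndarray) -> np.ndarray:
    """Split a 1-D series into windows and embed each as one token."""
    n = len(x) // window
    tokens = []
    for w in x[: n * window].reshape(n, window):
        mu, sigma = w.mean(), w.std() + 1e-8
        z = (w - mu) / sigma                     # shape-normalized values
        tokens.append(z @ W_val + mu * W_mean + np.log(sigma) * W_std)
    return np.stack(tokens)                      # (n_windows, d_model)

tokens = embed_series(rng.normal(5.0, 100.0, 256))
print(tokens.shape)  # (16, 64)
```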

Learning Semantic Representations to Verify Hardware Designs

no code yet • NeurIPS 2021

We evaluate Design2Vec on three real-world hardware designs, including an industrial chip used in commercial data centers.

SEEC: Semantic Vector Federation across Edge Computing Environments

no code yet • 30 Aug 2020

Specifically, for scenarios where multiple edge locations can engage in joint learning, we adapt recently proposed federated learning techniques for semantic vector embedding.
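
As a point of reference for what adapting federated learning to semantic vector embedding can mean, here is a minimal federated-averaging (FedAvg, McMahan et al.) step over per-site embedding tables. The aggregation rule is the generic scheme, not SEEC's specific protocol; all shapes and counts are illustrative.

```python
# Minimal FedAvg sketch: each edge site trains a local embedding table
# and a server combines them, weighted by local sample counts.
import numpy as np

def fed_avg(local_embeddings, sample_counts):
    """Weighted average of per-site embedding matrices (FedAvg)."""
    total = sum(sample_counts)
    agg = np.zeros_like(local_embeddings[0])
    for emb, n in zip(local_embeddings, sample_counts):
        agg += (n / total) * emb
    return agg

# Three edge sites with differently trained 1000 x 64 embedding tables.
rng = np.random.default_rng(1)
sites = [rng.normal(size=(1000, 64)) for _ in range(3)]
global_emb = fed_avg(sites, sample_counts=[500, 2000, 800])
print(global_emb.shape)  # (1000, 64)
```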

IITK at the FinSim Task: Hypernym Detection in Financial Domain via Context-Free and Contextualized Word Embeddings

no code yet • FinNLP (COLING) 2020

We leverage both context-dependent and context-independent word embeddings in our analysis.
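
A minimal sketch of what combining context-independent and context-dependent embeddings can look like: a static lookup vector concatenated with a context-sensitive one before classification. Both encoders below (`static_lookup`, `contextual_encode`) are toy placeholders, not the systems the paper actually used.

```python
# Concatenating a context-free vector (GloVe-style lookup) with a
# contextualized one (stand-in for a transformer hidden state).
import numpy as np

rng = np.random.default_rng(2)
VOCAB = {"bond": 0, "equity": 1, "swap": 2}
static_table = rng.normal(size=(len(VOCAB), 50))  # stand-in for GloVe

def static_lookup(word: str) -> np.ndarray:
    return static_table[VOCAB[word]]

def contextual_encode(sentence: list[str], idx: int) -> np.ndarray:
    # Toy "contextual" encoder: mixes the target word with its neighbors
    # so the vector depends on the sentence (shapes only, no learning).
    neighbors = sum(static_lookup(w) for w in sentence) / len(sentence)
    return np.tanh(static_lookup(sentence[idx]) + 0.5 * neighbors)

sentence = ["bond", "swap", "equity"]
features = np.concatenate([static_lookup("bond"),
                           contextual_encode(sentence, 0)])
print(features.shape)  # (100,) -> input to a hypernym classifier
```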

On the Limits of Learning to Actively Learn Semantic Representations

no code yet • CoNLL 2019

We conclude that the current applicability of LTAL for improving data efficiency in learning semantic meaning representations is limited.

Towards Deep and Representation Learning for Talent Search at LinkedIn

no code yet • 17 Sep 2018

In this paper, we present the results of our application of deep and representation learning models on LinkedIn Recruiter.

Multiplicative Tree-Structured Long Short-Term Memory Networks for Semantic Representations

no code yet • SemEval 2018

In addition to syntactic trees, we also investigate the use of Abstract Meaning Representation in tree-structured models, in order to incorporate both syntactic and semantic information from the sentence.
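
For orientation, below is a standard child-sum Tree-LSTM cell (Tai et al., 2015), the baseline such work builds on; the multiplicative variant changes how child states interact with the input, which this sketch does not reproduce. Weights are random placeholders.

```python
# Child-sum Tree-LSTM cell: one forget gate per child lets the cell
# weight each subtree's memory separately when composing a node.
import numpy as np

rng = np.random.default_rng(3)
d = 32
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W = {g: rng.normal(0, 0.1, (d, d)) for g in "ifou"}  # input projections
U = {g: rng.normal(0, 0.1, (d, d)) for g in "ifou"}  # child-state projections

def tree_lstm_cell(x, child_h, child_c):
    """x: (d,) node input; child_h/child_c: lists of child states."""
    h_sum = sum(child_h) if child_h else np.zeros(d)
    i = sigmoid(W["i"] @ x + U["i"] @ h_sum)
    o = sigmoid(W["o"] @ x + U["o"] @ h_sum)
    u = np.tanh(W["u"] @ x + U["u"] @ h_sum)
    c = i * u
    for h_k, c_k in zip(child_h, child_c):
        f_k = sigmoid(W["f"] @ x + U["f"] @ h_k)  # per-child forget gate
        c += f_k * c_k
    return o * np.tanh(c), c

h, c = tree_lstm_cell(rng.normal(size=d),
                      [rng.normal(size=d)] * 2,
                      [rng.normal(size=d)] * 2)
print(h.shape, c.shape)  # (32,) (32,)
```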

Investigating Inner Properties of Multimodal Representation and Semantic Compositionality with Brain-based Componential Semantics

no code yet • 15 Nov 2017

Considering that multimodal models are originally motivated by human concept representations, we assume that correlating multimodal representations with brain-based semantics would reveal their inner properties and answer the above questions.

Dynamic Compositional Neural Networks over Tree Structure

no code yet • 11 May 2017

Tree-structured neural networks have proven to be effective in learning semantic representations by exploiting syntactic information.
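
The simplest instance of such a model is a recursive network that applies one composition function bottom-up over a binary parse tree, as sketched below; the paper's dynamic composition selects among multiple functions per node, which this toy version omits.

```python
# Bare-bones recursive network: compose word vectors along the syntax
# of a binary parse tree to get one vector for the whole sentence.
import numpy as np

rng = np.random.default_rng(4)
d = 16
W = rng.normal(0, 0.1, (d, 2 * d))  # single composition function
vecs = {w: rng.normal(size=d) for w in ["the", "cat", "sat"]}

def compose(tree):
    """tree is a word (leaf) or a (left, right) pair (internal node)."""
    if isinstance(tree, str):
        return vecs[tree]
    left, right = tree
    return np.tanh(W @ np.concatenate([compose(left), compose(right)]))

# Parse of "the cat sat" as ((the cat) sat).
sentence_vec = compose((("the", "cat"), "sat"))
print(sentence_vec.shape)  # (16,)
```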