Language Models

GLM

Introduced by Zeng et al. in GLM-130B: An Open Bilingual Pre-trained Model

GLM is a bilingual (English and Chinese) pre-trained, transformer-based language model that follows the traditional decoder-only autoregressive architecture. It is trained with an autoregressive blank-infilling objective: spans of the input text are masked out, and the model generates the missing spans autoregressively.

Source: GLM-130B: An Open Bilingual Pre-trained Model
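To make the blank-infilling objective concrete, below is a minimal Python sketch of how one training example might be constructed. It is an illustration only, not the official implementation: it assumes a single masked span and placeholder special tokens ([MASK], [S], [E]); the function name build_blank_infilling_example and its parameters are hypothetical. The actual GLM training setup additionally uses multiple spans, different mask granularities, and 2D positional encodings.

# Minimal sketch (not the official GLM code) of building one
# autoregressive blank-infilling example.
import random

def build_blank_infilling_example(tokens, span_ratio=0.15, rng=None):
    """Corrupt a token sequence and build the autoregressive infilling target.

    Part A: the original sequence with the sampled span replaced by "[MASK]".
    Part B: the masked span, introduced by "[S]" and terminated by "[E]";
            the model predicts Part B left to right while attending to Part A.
    """
    rng = rng or random.Random(0)
    n = len(tokens)
    span_len = max(1, int(n * span_ratio))
    start = rng.randrange(0, n - span_len + 1)
    span = tokens[start:start + span_len]

    part_a = tokens[:start] + ["[MASK]"] + tokens[start + span_len:]
    part_b = ["[S]"] + span + ["[E]"]

    model_input = part_a + part_b[:-1]        # teacher forcing: shift by one
    # Loss is computed only on Part B positions (the blank being filled in).
    targets = [None] * len(part_a) + part_b[1:]
    return model_input, targets

tokens = "GLM is a bilingual pre-trained language model".split()
inp, tgt = build_blank_infilling_example(tokens)
print(inp)
print(tgt)

In this sketch the model sees the corrupted context plus the already-generated span tokens, and the loss is restricted to the span being filled in, which is the essence of combining autoregressive generation with blank infilling.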

Tasks


Task                      Papers   Share
Language Modelling        6        14.63%
Quantization              3        7.32%
Question Answering        2        4.88%
Denoising                 2        4.88%
Large Language Model      2        4.88%
Semantic Segmentation     2        4.88%
Dialogue Generation       1        2.44%
Chatbot                   1        2.44%
Knowledge Graphs          1        2.44%

