Search Results for author: Renlong Jie

Found 7 papers, 0 papers with code

Unsupervised Extractive Summarization with Learnable Length Control Strategies

no code implementations · 12 Dec 2023 · Renlong Jie, Xiaojun Meng, Xin Jiang, Qun Liu

Unlike centrality-based ranking methods, our extractive scorer can be trained in an end-to-end manner, without requiring any positional assumptions.

Extractive Summarization · Sentence · +1
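
The entry above describes an extractive scorer that is trainable end-to-end and uses no positional features. As a rough illustration only (this is an assumed architecture, not the authors' published model), such a scorer can be a small context encoder over precomputed sentence embeddings whose outputs remain fully differentiable:

```python
# Minimal sketch of an end-to-end trainable extractive sentence scorer
# (assumed architecture, not the paper's exact model). No positional
# features are used; scores depend only on sentence content and context.
import torch
import torch.nn as nn

class SentenceScorer(nn.Module):
    def __init__(self, dim=768, hidden=256):
        super().__init__()
        # Bidirectional GRU lets each score depend on document context.
        self.encoder = nn.GRU(dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, sent_embs):          # (batch, n_sents, dim)
        ctx, _ = self.encoder(sent_embs)   # (batch, n_sents, 2*hidden)
        return self.head(ctx).squeeze(-1)  # (batch, n_sents) selection logits

scorer = SentenceScorer()
scores = scorer(torch.randn(2, 10, 768))
# The scores are differentiable, so any downstream unsupervised loss
# (e.g., reconstruction-based) can train the scorer end-to-end.
```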

Prompt-Based Length Controlled Generation with Reinforcement Learning

no code implementations · 23 Aug 2023 · Renlong Jie, Xiaojun Meng, Lifeng Shang, Xin Jiang, Qun Liu

Large language models (LLMs) like ChatGPT and GPT-4 have attracted great attention given their surprising performance on a wide range of NLP tasks.

Reinforcement Learning

Enhancing Coherence of Extractive Summarization with Multitask Learning

no code implementations · 22 May 2023 · Renlong Jie, Xiaojun Meng, Lifeng Shang, Xin Jiang, Qun Liu

This study proposes a multitask learning architecture for extractive summarization with coherence boosting.

Extractive Summarization · Sentence
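
A common way to realize the coherence boosting described above is to add a weighted auxiliary coherence loss to the extractive selection loss. The sketch below is a generic multitask objective under our own assumptions, not the paper's exact formulation:

```python
# Generic multitask objective: extractive selection loss plus a
# lambda-weighted auxiliary coherence loss. All names are illustrative.
import torch
import torch.nn.functional as F

def multitask_loss(sent_logits, sent_labels, coh_logits, coh_labels, lam=0.5):
    loss_ext = F.binary_cross_entropy_with_logits(sent_logits, sent_labels)
    loss_coh = F.binary_cross_entropy_with_logits(coh_logits, coh_labels)
    return loss_ext + lam * loss_coh

# Toy usage: 8 sentences scored for selection, 4 pairs scored for coherence.
loss = multitask_loss(torch.randn(8), torch.randint(0, 2, (8,)).float(),
                      torch.randn(4), torch.randint(0, 2, (4,)).float())
```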

Differentiable Neural Architecture Search with Morphism-based Transformable Backbone Architectures

no code implementations · 14 Jun 2021 · Renlong Jie, Junbin Gao

It extends existing work on differentiable neural architecture search by making the backbone architecture transformable, rather than fixed, during training.

Language Modelling · Neural Architecture Search · +2
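
Differentiable architecture search, the framework this paper builds on, is typically realized by mixing candidate operations with softmax-normalized architecture weights that are trained by gradient descent alongside the network weights. The sketch below shows only this general DARTS-style mechanism; the paper's morphism-based transformable backbone is not reproduced here:

```python
# DARTS-style mixed operation: candidate ops are blended with
# softmax-normalized architecture logits (alpha), which receive
# gradients just like ordinary network weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),
            nn.Linear(dim, dim),
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),
        ])
        # One architecture logit per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

y = MixedOp()(torch.randn(4, 64))  # gradients flow to weights and alpha
```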

Adaptive Hierarchical Hyper-gradient Descent

no code implementations · 17 Aug 2020 · Renlong Jie, Junbin Gao, Andrey Vasnev, Minh-Ngoc Tran

In this study, we investigate learning rate adaptation at different levels within the hyper-gradient descent framework, and propose a method that adaptively learns optimizer parameters by combining multiple levels of learning rates in a hierarchical structure.

Meta-Learning
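
The underlying hyper-gradient descent framework (Baydin et al.) updates the learning rate itself by gradient descent, using the fact that the gradient of the current loss with respect to the previous step's learning rate is -g_t · g_{t-1}. Below is a minimal single-level sketch of that base method; the paper's hierarchical, multi-level combination is its contribution and is not shown:

```python
# Single-level hyper-gradient descent on a flattened 1-D parameter vector.
import numpy as np

def hypergrad_sgd(grad_fn, w, alpha=0.01, beta=1e-4, steps=100):
    """SGD whose learning rate alpha is itself adapted each step
    using the hyper-gradient dL/dalpha = -g_t . g_{t-1}."""
    g_prev = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        alpha += beta * np.dot(g, g_prev)  # hyper-gradient update on alpha
        w = w - alpha * g                  # ordinary SGD step
        g_prev = g
    return w, alpha

# Example: minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w_opt, lr = hypergrad_sgd(lambda w: w, np.ones(5))
```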

Regularized Flexible Activation Function Combinations for Deep Neural Networks

no code implementations · 26 Jul 2020 · Renlong Jie, Junbin Gao, Andrey Vasnev, Minh-Ngoc Tran

Based on this, we implement a novel family of flexible activation functions that can replace sigmoid or tanh in LSTM cells, as well as a new family built by combining ReLU and ELU.

Image Compression · Philosophy · +2
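
One plausible form for such a combined activation (an assumption for illustration; the paper's exact parameterization and regularizer may differ) is a trainable mixture of ReLU and ELU:

```python
# Flexible activation as a trainable convex mixture of ReLU and ELU
# (assumed form, not the paper's exact definition).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReLUELUMix(nn.Module):
    def __init__(self):
        super().__init__()
        # Unconstrained logit squashed to a mixing weight in (0, 1);
        # learned by backprop, and a penalty on it could serve as the
        # regularization referenced in the paper's title.
        self.logit = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        p = torch.sigmoid(self.logit)
        return p * F.relu(x) + (1 - p) * F.elu(x)

y = ReLUELUMix()(torch.randn(3, 4))
```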

Combined Flexible Activation Functions for Deep Neural Networks

no code implementations · 25 Sep 2019 · Renlong Jie, Junbin Gao, Andrey Vasnev, Minh-Ngoc Tran

Based on this, we develop two novel flexible activation functions that can be implemented in LSTM cells and auto-encoder layers.

Image Classification · Philosophy · +2
