Search Results for author: Jinchang Hou

Found 2 papers, 2 papers with code

CLHA: A Simple yet Effective Contrastive Learning Framework for Human Alignment

1 code implementation • 25 Mar 2024 • Feiteng Fang, Liang Zhu, Min Yang, Xi Feng, Jinchang Hou, Qixuan Zhao, Chengming Li, Xiping Hu, Ruifeng Xu

Reinforcement learning from human feedback (RLHF) is a crucial technique for aligning large language models (LLMs) with human preferences, ensuring these LLMs behave in ways that are beneficial and comprehensible to users.

Contrastive Learning • reinforcement-learning

E-EVAL: A Comprehensive Chinese K-12 Education Evaluation Benchmark for Large Language Models

1 code implementation • 29 Jan 2024 • Jinchang Hou, Chang Ao, Haihong Wu, Xiangtao Kong, Zhigang Zheng, Daijia Tang, Chengming Li, Xiping Hu, Ruifeng Xu, Shiwen Ni, Min Yang

LLMs and education are becoming increasingly integrated; however, there is currently no benchmark for evaluating LLMs that focuses on the Chinese K-12 education domain.

Ethics • Multiple-choice
