Search Results for author: Tianyu Shi

Found 15 papers, 5 papers with code

WcDT: World-centric Diffusion Transformer for Traffic Scene Generation

1 code implementation • 2 Apr 2024 • Chen Yang, Aaron Xuxiang Tian, Dong Chen, Tianyu Shi, Arsalan Heydarian

To enhance scene diversity and stochasticity, the historical trajectory data is first preprocessed and encoded into a latent space using Denoising Diffusion Probabilistic Models (DDPM) enhanced with Diffusion Transformer (DiT) blocks.

Tasks: Autonomous Driving • Denoising • +1
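As a rough illustration of the encoding step the snippet above describes, here is a minimal PyTorch sketch of a timestep-conditioned, DiT-style transformer block together with the standard DDPM forward-noising step. All names, dimensions, and the conditioning scheme are illustrative assumptions, not the paper's released code.

```python
# Hypothetical sketch: DDPM forward noising plus a DiT-style denoiser block
# applied to trajectory features. Shapes and names are assumptions.
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    """A transformer block conditioned on the diffusion timestep (DiT-style)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.t_proj = nn.Linear(dim, dim)  # timestep conditioning

    def forward(self, x, t_emb):
        h = self.norm1(x + self.t_proj(t_emb).unsqueeze(1))
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))

def ddpm_noise(x0, t, betas):
    """Standard DDPM forward process: sample from q(x_t | x_0)."""
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(-1, 1, 1)
    eps = torch.randn_like(x0)
    return alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * eps, eps

# Usage: a batch of 8 agent trajectories, 10 timesteps, 64-d features.
x0 = torch.randn(8, 10, 64)
betas = torch.linspace(1e-4, 0.02, 1000)
t = torch.randint(0, 1000, (8,))
xt, eps = ddpm_noise(x0, t, betas)
t_emb = torch.randn(8, 64)            # stand-in for a sinusoidal embedding
pred_eps = DiTBlock(64)(xt, t_emb)    # denoiser predicts the added noise
```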

CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models

1 code implementation • 2 Apr 2024 • Xuechen Liang, Meiling Tao, Tianyu Shi, Yiting Xie

Open large language models (LLMs) have significantly advanced the field of natural language processing, showing impressive performance across various tasks. Despite these advances, their effective operation still relies heavily on human input to guide the dialogue flow accurately; agent tuning, in which the model is adjusted by humans to respond better to such guidance, is a crucial optimization technique. To address this dependency, our work introduces the TinyAgent model, trained on a meticulously curated high-quality dataset.

RoleCraft-GLM: Advancing Personalized Role-Playing in Large Language Models

1 code implementation • 17 Dec 2023 • Meiling Tao, Xuechen Liang, Tianyu Shi, Lei Yu, Yiting Xie

This study presents RoleCraft-GLM, an innovative framework aimed at enhancing personalized role-playing with Large Language Models (LLMs).

Tasks: Language Modelling

A Fully Data-Driven Approach for Realistic Traffic Signal Control Using Offline Reinforcement Learning

no code implementations • 27 Nov 2023 • Jianxiong Li, Shichao Lin, Tianyu Shi, Chujie Tian, Yu Mei, Jian Song, Xianyuan Zhan, Ruimin Li

Specifically, we combine well-established traffic flow theory with machine learning to construct a reward inference model that infers reward signals from coarse-grained traffic data.

Tasks: Offline RL • Reinforcement Learning (RL)
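To make the idea in the snippet concrete, here is a minimal sketch of one way such a reward-inference model could look: a triangular fundamental diagram from traffic flow theory supplies a delay-based supervision signal, and a scikit-learn regressor learns to map coarse detector measurements to that reward. All constants, units, and the delay proxy are illustrative assumptions, not the paper's model.

```python
# Hypothetical reward-inference sketch: traffic flow theory generates the
# labels, a regressor maps coarse observations (flow, density) to reward.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

V_FREE, K_JAM, K_CRIT = 15.0, 0.15, 0.03   # m/s, veh/m, veh/m (assumed)

def theoretical_delay(flow, density):
    """Delay proxy from a triangular fundamental diagram: travel time at the
    prevailing speed minus travel time at free-flow speed, per unit length."""
    speed = np.where(density <= K_CRIT, V_FREE,
                     flow / np.maximum(density, 1e-6))
    return 1.0 / np.maximum(speed, 0.1) - 1.0 / V_FREE

# Coarse-grained observations (e.g., 5-minute detector aggregates).
rng = np.random.default_rng(0)
density = rng.uniform(0.0, K_JAM, 2000)
flow = V_FREE * np.minimum(density, K_CRIT) * rng.uniform(0.8, 1.0, 2000)

X = np.column_stack([flow, density])
y = -theoretical_delay(flow, density)      # reward = negative delay

reward_model = GradientBoostingRegressor().fit(X, y)
# The inferred reward can then label logged transitions for offline RL.
print(reward_model.predict(X[:3]))
```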

Improving the generalizability and robustness of large-scale traffic signal control

no code implementations • 2 Jun 2023 • Tianyu Shi, Francois-Xavier Devailly, Denis Larocque, Laurent Charlin

Building on the previous state-of-the-art model, which uses a decentralized approach for large-scale traffic signal control with graph convolutional networks (GCNs), we first learn models using a distributional reinforcement learning (DisRL) approach.

Tasks: Distributional Reinforcement Learning • Multi-agent Reinforcement Learning • +2
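For readers unfamiliar with the combination named in the snippet, here is a minimal PyTorch sketch of the two ingredients: a dense graph convolution over the intersection graph and a distributional (quantile-based, QR-DQN-style) value head. Layer sizes, the adjacency stand-in, and the head design are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical sketch: GCN feature extraction + quantile value head.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Dense GCN layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):          # a_hat: normalized adjacency
        return torch.relu(a_hat @ self.lin(h))

class DistributionalHead(nn.Module):
    """Predict N quantiles of the return per action at each node."""
    def __init__(self, dim, n_actions, n_quantiles=32):
        super().__init__()
        self.n_actions, self.n_quantiles = n_actions, n_quantiles
        self.out = nn.Linear(dim, n_actions * n_quantiles)

    def forward(self, h):
        return self.out(h).view(*h.shape[:-1], self.n_actions,
                                self.n_quantiles)

# Usage: 4 intersections, 16-d observations, 2 signal phases per node.
h = torch.randn(4, 16)
a_hat = torch.eye(4)                       # stand-in normalized adjacency
z = GCNLayer(16, 32)(h, a_hat)
quantiles = DistributionalHead(32, n_actions=2)(z)
q_values = quantiles.mean(dim=-1)          # act greedily on the mean return
```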

Fast Rule-Based Decoding: Revisiting Syntactic Rules in Neural Constituency Parsing

no code implementations • 16 Dec 2022 • Tianyu Shi, Zhicheng Wang, Liyin Xiao, Cong Liu

Most recent studies on neural constituency parsing focus on encoder structures, while comparatively little work is devoted to decoders.

Tasks: Constituency Parsing

Order-sensitive Neural Constituency Parsing

no code implementations • 1 Nov 2022 • Zhicheng Wang, Tianyu Shi, Liyin Xiao, Cong Liu

We propose a novel algorithm that improves on the previous neural span-based CKY decoder for constituency parsing.

Tasks: Constituency Parsing
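For context, here is a minimal sketch of the span-based CKY decoding that this line of work builds on: given a score for every span, dynamic programming finds the highest-scoring binary tree. This is the textbook baseline with random stand-in scores, not the paper's improved algorithm.

```python
# Hypothetical baseline sketch: span-based CKY decoding over span scores.
import numpy as np

def cky_decode(span_scores):
    """span_scores[i, j]: model score for span (i, j), 0 <= i < j <= n."""
    n = span_scores.shape[0] - 1
    best = np.zeros((n + 1, n + 1))
    split = np.zeros((n + 1, n + 1), dtype=int)
    for i in range(n):                       # length-1 spans (single words)
        best[i, i + 1] = span_scores[i, i + 1]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            ks = np.arange(i + 1, j)
            totals = best[i, ks] + best[ks, j]
            best[i, j] = span_scores[i, j] + totals.max()
            split[i, j] = int(ks[np.argmax(totals)])
    def backtrack(i, j):                     # recover the best binary tree
        if j - i == 1:
            return (i, j)
        k = split[i, j]
        return ((i, j), backtrack(i, k), backtrack(k, j))
    return best[0, n], backtrack(0, n)

rng = np.random.default_rng(0)
score, tree = cky_decode(rng.random((6, 6)))  # a 5-word "sentence"
print(score, tree)
```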

WILD-SCAV: Benchmarking FPS Gaming AI on Unity3D-based Environments

1 code implementation • 14 Oct 2022 • Xi Chen, Tianyu Shi, Qingpeng Zhao, Yuchen Sun, Yunfei Gao, Xiangjun Wang

It provides realistic 3D environments of variable complexity, various tasks, and multiple modes of interaction, where agents can learn to perceive 3D environments, navigate and plan, compete and cooperate in a human-like manner.

Tasks: Atari Games • Benchmarking • +3

Bilateral Deep Reinforcement Learning Approach for Better-than-human Car Following Model

no code implementations • 3 Mar 2022 • Tianyu Shi, Yifei Ai, Omar ElSamadisy, Baher Abdulhai

We propose a Deep Reinforcement Learning (DRL) framework for car-following control that integrates bilateral information into both the state and the reward function, based on the bilateral control model (BCM).

Tasks: Autonomous Driving • Multi-agent Reinforcement Learning • +2
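To illustrate the bilateral idea the snippet describes, here is a minimal sketch of a state and reward that use information from both the leading and the following vehicle, in the spirit of Horn's bilateral control model. The gains and the reward shape are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical sketch: bilateral state and reward for car-following DRL.
import numpy as np

def bcm_state(ego_v, lead_gap, lead_v, follow_gap, follow_v):
    """Bilateral state: gap and relative speed for both neighbours."""
    return np.array([lead_gap, lead_v - ego_v, follow_gap, ego_v - follow_v])

def bcm_reward(state, k_d=0.5, k_v=0.5):
    """Penalize gap imbalance and relative speed to both neighbours,
    the quantities a BCM-style control law drives toward zero."""
    lead_gap, dv_lead, follow_gap, dv_follow = state
    return (-k_d * (lead_gap - follow_gap) ** 2
            - k_v * (dv_lead ** 2 + dv_follow ** 2))

s = bcm_state(ego_v=20.0, lead_gap=30.0, lead_v=21.0,
              follow_gap=25.0, follow_v=19.5)
print(s, bcm_reward(s))
```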

Efficient Connected and Automated Driving System with Multi-agent Graph Reinforcement Learning

no code implementations • 6 Jul 2020 • Tianyu Shi, Jiawei Wang, Yuankai Wu, Luis Miranda-Moreno, Lijun Sun

Instead of learning a reliable behavior for the ego automated vehicle alone, we focus on improving the outcomes of the overall transportation system by allowing each automated vehicle to learn to cooperate with the others and to regulate human-driven traffic flow.

Tasks: Decision Making • reinforcement-learning • +1

Driving Decision and Control for Autonomous Lane Change based on Deep Reinforcement Learning

no code implementations • 23 Apr 2019 • Tianyu Shi, Pin Wang, Xuxin Cheng, Ching-Yao Chan, Ding Huang

We apply a Deep Q-Network (DQN), with safety taken into consideration during the task, to decide whether to conduct the lane-change maneuver.

Tasks: Autonomous Driving • Decision Making • +3
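A minimal sketch of the kind of decision layer the snippet describes: a DQN scores the discrete options (keep lane / change lane), and a simple safety check can veto an unsafe lane change by masking that action. The network size, state layout, and gap-acceptance rule are illustrative assumptions.

```python
# Hypothetical sketch: DQN action selection with a safety mask.
import torch
import torch.nn as nn

KEEP, CHANGE = 0, 1

# Q-network over a small kinematic state (e.g., speeds and gaps, normalized).
q_net = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 2))

def is_change_safe(gap_target_lead, gap_target_rear, min_gap=10.0):
    """Simple gap-acceptance rule used to veto an unsafe lane change."""
    return gap_target_lead > min_gap and gap_target_rear > min_gap

def decide(state, gap_lead, gap_rear):
    with torch.no_grad():
        q = q_net(state)
        if not is_change_safe(gap_lead, gap_rear):
            q[CHANGE] = -float("inf")      # mask the unsafe action
        return int(q.argmax())

state = torch.randn(6)
print(decide(state, gap_lead=12.0, gap_rear=8.0))  # rear gap too small -> KEEP
```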

Efficient Motion Planning for Automated Lane Change based on Imitation Learning and Mixed-Integer Optimization

1 code implementation • 18 Apr 2019 • Chenyang Xi, Tianyu Shi, Yuankai Wu, Lijun Sun

Traditional motion planning methods suffer from several drawbacks in terms of optimality, efficiency and generalization capability.

Tasks: Action Generation • Autonomous Driving • +2

A Data Driven Method of Optimizing Feedforward Compensator for Autonomous Vehicle

no code implementations • 31 Jan 2019 • Tianyu Shi, Pin Wang, Ching-Yao Chan, Chonghao Zou

A reliable controller is critical for the execution of safe and smooth maneuvers of an autonomous vehicle. The controller must be robust to external disturbances, such as road surface, weather, and wind conditions. It also needs to handle internal parametric variations of vehicle sub-systems, including power-train efficiency, measurement errors, and time delay. Moreover, as in most production vehicles, the low-level control commands for the engine, brake, and steering systems are delivered through separate electronic control units. These factors introduce opacity and ineffectiveness into controller performance. In this paper, we design a feed-forward compensation process via a data-driven method to model and further optimize the controller performance. We apply principal component analysis to extract the most influential features. Subsequently, we adopt a time delay neural network to predict the control error over a future time horizon. Utilizing the predicted error, we then design a feed-forward compensation process to improve the control performance. Finally, we demonstrate the effectiveness of the proposed feed-forward compensation process in simulation scenarios.
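Here is a minimal end-to-end sketch of the pipeline the abstract describes: PCA to pick the most influential components, a time-delay model (a sliding window of past samples fed to a regressor) to predict the control error over a future horizon, and the prediction applied as a feed-forward correction. The window size, horizon, gain, and model choice are illustrative assumptions, and the signals are synthetic stand-ins.

```python
# Hypothetical sketch: PCA + time-delay regression as a feed-forward compensator.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

WINDOW, HORIZON = 10, 5

rng = np.random.default_rng(0)
raw = rng.normal(size=(1000, 8))          # logged vehicle/controller signals
error = np.convolve(raw[:, 0], np.ones(5) / 5, mode="same")  # tracking error

feats = PCA(n_components=3).fit_transform(raw)   # most influential components

# Build time-delay samples: a window of past features -> error HORIZON ahead.
X = np.stack([feats[t - WINDOW:t].ravel()
              for t in range(WINDOW, len(feats) - HORIZON)])
y = error[WINDOW + HORIZON : len(feats)]

tdnn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

def feedforward_command(base_cmd, recent_feats, gain=0.8):
    """Compensate the base command with the predicted future error."""
    pred_err = tdnn.predict(recent_feats.ravel()[None, :])[0]
    return base_cmd - gain * pred_err

print(feedforward_command(0.2, feats[-WINDOW:]))
```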
