no code implementations • 19 Apr 2024 • Huilin Yin, Shengkai Su, Yinjia Lin, Pengju Zhen, Karin Festl, Daniel Watzenig
Our experiments and comprehensive analysis demonstrate that the proposed method enables the AGV to complete path-planning tasks with continuous actions more rapidly in our environments.
1 code implementation • 19 Apr 2024 • Huilin Yin, Jiaxiang Li, Pengju Zhen, Jun Yan
This method searches for sensitive regions of trajectory prediction models and generates adversarial trajectories using a vehicle-following method that incorporates information about forthcoming trajectories.
1 code implementation • 19 Dec 2023 • Wengang Guo, Jiayi Yang, Huilin Yin, Qijun Chen, Wei Ye
Experimental results demonstrate that our method PICNN (the combination of standard CNNs with our proposed pathway) exhibits greater interpretability than standard CNNs while achieving higher or comparable discriminative power.
no code implementations • 25 Oct 2022 • Huan Hua, Jun Yan, Xi Fang, Weiquan Huang, Huilin Yin, Wancheng Ge
This framework mitigates the influence of non-robust features, thereby strengthening adversarial robustness.
1 code implementation • 8 Jun 2022 • Jun Yan, Huilin Yin, Xiaoyang Deng, Ziming Zhao, Wancheng Ge, Hao Zhang, Gerhard Rigoll
Since adversarial vulnerability can be regarded as a high-frequency phenomenon, it is essential to regulate the adversarially-trained neural network models in the frequency domain.
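The frequency-domain view above can be illustrated with a simple band split: decompose an image into low- and high-frequency components with a centered FFT and a circular mask. This is a minimal sketch of the general idea, not the paper's actual regularization scheme; the function name and mask radius are illustrative.

```python
import numpy as np

def frequency_split(img, radius=8):
    """Split a grayscale image into low- and high-frequency components
    using a centered 2-D FFT and an ideal circular low-pass mask.
    (Illustrative sketch; the paper's regularization may differ.)"""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~mask)).real
    return low, high

img = np.random.rand(32, 32)
low, high = frequency_split(img)
# the two bands are complementary: they sum back to the original image
assert np.allclose(low + high, img)
```

Regulating the model's response to the `high` component is one way to connect adversarial vulnerability to high-frequency content.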
no code implementations • 11 Nov 2021 • Wei Zhou, Dong Chen, Jun Yan, Zhaojian Li, Huilin Yin, Wanchen Ge
In this paper, we formulate the lane-changing decision making of multiple AVs in a mixed-traffic highway environment as a multi-agent reinforcement learning (MARL) problem, where each AV makes lane-changing decisions based on the motions of both neighboring AVs and human-driven vehicles (HDVs).
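The MARL formulation above can be sketched as follows: each AV observes the relative lane, position, and speed of nearby vehicles (AV or HDV) and selects a lane-change action. All names here (`Vehicle`, `observe`, `LANE_ACTIONS`) are hypothetical placeholders, not the paper's code; a random policy stands in for the learned one.

```python
import numpy as np

LANE_ACTIONS = ["left", "keep", "right"]  # discrete lane-change actions

class Vehicle:
    def __init__(self, lane, pos, vel, is_av):
        self.lane, self.pos, self.vel, self.is_av = lane, pos, vel, is_av

def observe(agent, vehicles, horizon=50.0):
    """One AV's observation: relative lane, longitudinal position, and
    speed of every other vehicle within a sensing horizon, plus a flag
    marking whether the neighbor is an AV or an HDV."""
    rows = [(v.lane - agent.lane, v.pos - agent.pos,
             v.vel - agent.vel, float(v.is_av))
            for v in vehicles
            if v is not agent and abs(v.pos - agent.pos) <= horizon]
    return np.array(rows) if rows else np.zeros((0, 4))

# in the MARL loop, each AV would feed observe(...) to its policy network;
# a random choice stands in for the learned policy here
vehicles = [Vehicle(0, 0.0, 30.0, True), Vehicle(1, 20.0, 28.0, False)]
obs = observe(vehicles[0], vehicles)
action = LANE_ACTIONS[np.random.default_rng(0).integers(len(LANE_ACTIONS))]
```

The per-agent observation plus a shared environment step is the standard shape of a MARL problem; the paper's actual state, action, and reward definitions may differ.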
1 code implementation • 10 Aug 2021 • Jun Yan, Xiaoyang Deng, Huilin Yin, Wancheng Ge
Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which mislead neural networks into making prediction errors through small perturbations of the input images.
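The classic illustration of such small perturbations is the fast gradient sign method (FGSM): step the input along the sign of the loss gradient, bounded by a budget ε. The sketch below applies it to a binary logistic-regression "network" so the gradient can be written analytically; real attacks target deep models the same way, and the paper may evaluate other attacks entirely.

```python
import numpy as np

def fgsm_attack(w, b, x, y, eps=0.03):
    """One-step FGSM on a logistic-regression model (minimal sketch):
    perturb x by eps in the sign direction of the cross-entropy gradient."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid prediction
    grad_x = (p - y) * w           # d(cross-entropy)/dx for this model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=16)
x = rng.random(16)
x_adv = fgsm_attack(w, 0.0, x, y=1.0)
# the perturbation stays within the eps budget per pixel
assert np.abs(x_adv - x).max() <= 0.03 + 1e-9
```

Even with ε as small as a few gray levels, such perturbations can flip the prediction of undefended models, which is what motivates adversarial training.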