Search Results for author: Dalong Zhang

Found 5 papers, 1 paper with code

Towards Highly Realistic Artistic Style Transfer via Stable Diffusion with Step-aware and Layer-aware Prompt

1 code implementation • 17 Apr 2024 • Zhanjie Zhang, Quanwei Zhang, Huaizhong Lin, Wei Xing, Juncheng Mo, Shuaicheng Huang, Jinheng Xie, Guangyuan Li, Junsheng Luan, Lei Zhao, Dalong Zhang, Lixia Chen

To address the above problems, we propose LSAST, a novel pre-trained diffusion-based artistic style transfer method that generates highly realistic artistic stylized images while preserving the content structure of the input content images, without introducing obvious artifacts or disharmonious style patterns.

Generative Adversarial Network · Style Transfer

AntDT: A Self-Adaptive Distributed Training Framework for Leader and Straggler Nodes

no code implementations • 15 Apr 2024 • Youshao Xiao, Lin Ju, Zhenglei Zhou, Siyuan Li, ZhaoXin Huan, Dalong Zhang, Rujie Jiang, Lin Wang, Xiaolu Zhang, Lei Liang, Jun Zhou

Previous works address only some types of stragglers and cannot adaptively handle the various stragglers that arise in practice.

InferTurbo: A Scalable System for Boosting Full-graph Inference of Graph Neural Network over Huge Graphs

no code implementations • 1 Jul 2023 • Dalong Zhang, Xianzheng Song, Zhiyang Hu, Yang Li, Miao Tao, Binbin Hu, Lin Wang, Zhiqiang Zhang, Jun Zhou

Inspired by the "think-like-a-vertex" philosophy, a GAS-like (Gather-Apply-Scatter) schema is proposed to describe the computation paradigm and data flow of GNN inference.

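To make the Gather-Apply-Scatter data flow mentioned in the InferTurbo snippet above concrete, here is a minimal sketch of one GAS-style layer over a small graph. It assumes mean aggregation and a simple linear update, which are illustrative choices rather than details taken from InferTurbo; all names (such as `gas_layer`) are hypothetical.

```python
import numpy as np

# Minimal sketch of a GAS-style (Gather-Apply-Scatter) pass for GNN inference.
# The mean aggregation and tanh/linear update are illustrative choices only.

def gas_layer(node_feats, edges, weight):
    """One GAS iteration: gather neighbor messages, apply an update, scatter results.

    node_feats: (N, D) array of node features
    edges:      list of (src, dst) pairs
    weight:     (2*D, D) projection matrix for the toy update step
    """
    num_nodes, dim = node_feats.shape

    # Gather: each destination node collects messages from its in-neighbors.
    agg = np.zeros_like(node_feats)
    deg = np.zeros(num_nodes)
    for src, dst in edges:
        agg[dst] += node_feats[src]
        deg[dst] += 1
    agg /= np.maximum(deg, 1)[:, None]  # mean aggregation

    # Apply: combine self features with aggregated messages and transform.
    combined = np.concatenate([node_feats, agg], axis=1)
    updated = np.tanh(combined @ weight)

    # Scatter: the updated states become the messages for the next layer.
    return updated


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(4, 8))
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    w = rng.normal(size=(16, 8))
    print(gas_layer(feats, edges, w).shape)  # (4, 8)
```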

SCMA Codebook Design Based on Uniquely Decomposable Constellation Groups

no code implementations • 27 Feb 2021 • Xuewan Zhang, Dalong Zhang, Liuqing Yang, Gangtao Han, Hsiao-Hwa Chen, Di Zhang

Thus, the BER performance of the proposed codebook design approach outperforms that of existing codebook design schemes in both uncoded and coded SCMA systems, especially for large codebooks.

DSSLP: A Distributed Framework for Semi-supervised Link Prediction

no code implementations • 27 Feb 2020 • Dalong Zhang, Xianzheng Song, Ziqi Liu, Zhiqiang Zhang, Xin Huang, Lin Wang, Jun Zhou

Instead of training the model on the whole graph, DSSLP trains on the k-hop neighborhood of the nodes in each mini-batch, which reduces the scale of the input graph and allows the training procedure to be distributed.

Link Prediction
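As an illustration of the k-hop neighborhood idea described in the DSSLP snippet above, the sketch below expands a mini-batch of seed nodes to all nodes within k hops using a breadth-first search over an adjacency dictionary. The function name and the in-memory adjacency representation are assumptions made for the example, not DSSLP's actual interface.

```python
from collections import deque

# Illustrative sketch of extracting the k-hop neighborhood of a mini-batch of
# seed nodes; a stand-in for the subgraph-based training described above.

def k_hop_neighborhood(adj, batch_nodes, k):
    """Return all nodes reachable from `batch_nodes` within `k` hops.

    adj:         dict mapping node -> list of neighbor nodes
    batch_nodes: iterable of seed nodes in the current mini-batch
    k:           number of hops to expand
    """
    visited = set(batch_nodes)
    frontier = deque((n, 0) for n in batch_nodes)
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nbr in adj.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append((nbr, depth + 1))
    return visited


if __name__ == "__main__":
    adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
    print(sorted(k_hop_neighborhood(adj, [0], 2)))  # [0, 1, 2, 3]
```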
