Search Results for author: Enzhi Zhang

Found 5 papers, 2 papers with code

Adaptive Patching for High-resolution Image Segmentation with Transformers

no code implementations • 15 Apr 2024 • Enzhi Zhang, Isaac Lyngaas, Peng Chen, Xiao Wang, Jun Igarashi, Yuankai Huo, Mohamed Wahib, Masaharu Munetomo

For high-resolution images, e.g., microscopic pathology images, the quadratic compute and memory cost of attention prohibits the use of attention-based models at the smaller patch sizes that are favorable for segmentation.

Friction Image Segmentation +2
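
The cost argument is easy to see with a quick calculation. The sketch below is illustrative only: the image and patch sizes are assumed numbers, not values from the paper, and it simply counts tokens and the quadratic number of pairwise attention entries per layer.

```python
# Illustrative only: token count and self-attention cost for a square image
# split into non-overlapping patches. The sizes are assumptions, not the paper's.

def attention_cost(image_size: int, patch_size: int) -> tuple[int, int]:
    """Return (num_tokens, pairwise_attention_entries) for one attention layer."""
    tokens_per_side = image_size // patch_size
    num_tokens = tokens_per_side ** 2
    return num_tokens, num_tokens ** 2  # the attention matrix grows quadratically

for patch in (64, 32, 16):
    tokens, pairs = attention_cost(image_size=8192, patch_size=patch)
    print(f"patch {patch:>2}: {tokens:>8,} tokens -> {pairs:>16,} attention entries")
```

Halving the patch size quadruples the token count and multiplies the attention matrix by sixteen, which is why small patches quickly become infeasible at pathology-image resolutions.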

Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules

1 code implementation • NeurIPS 2023 • Zhiyuan Liu, Yaorui Shi, An Zhang, Enzhi Zhang, Kenji Kawaguchi, Xiang Wang, Tat-Seng Chua

Our results show that a subgraph-level tokenizer and a sufficiently expressive decoder with remask decoding have a large impact on the encoder's representation learning.

Representation Learning Self-Supervised Learning
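
As a rough intuition for the remask idea, here is a minimal NumPy sketch: the masked node positions are overwritten again after encoding, so the decoder has to reconstruct them from context rather than copy them. The encoder and decoder are placeholder linear maps and the whole setup is an assumption for illustration, not the tokenizer or decoder designs studied in the paper.

```python
# Minimal sketch (assumptions throughout): "remask decoding" re-inserts a mask
# token at the masked node positions after encoding, so the decoder cannot
# simply copy the encoder output for those nodes.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 8, 16
node_feats = rng.normal(size=(num_nodes, dim))
mask_token = np.zeros(dim)

masked_idx = rng.choice(num_nodes, size=3, replace=False)   # 1) mask some nodes
enc_in = node_feats.copy()
enc_in[masked_idx] = mask_token

W_enc = rng.normal(size=(dim, dim)) / np.sqrt(dim)           # 2) placeholder encoder
hidden = enc_in @ W_enc

hidden[masked_idx] = mask_token                              # 3) remask before decoding

W_dec = rng.normal(size=(dim, dim)) / np.sqrt(dim)           # 4) placeholder decoder
recon = hidden @ W_dec
loss = np.mean((recon[masked_idx] - node_feats[masked_idx]) ** 2)  # loss on masked nodes only
print(f"reconstruction loss on masked nodes: {loss:.3f}")
```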

ReLM: Leveraging Language Models for Enhanced Chemical Reaction Prediction

1 code implementation • 20 Oct 2023 • Yaorui Shi, An Zhang, Enzhi Zhang, Zhiyuan Liu, Xiang Wang

Predicting chemical reactions, a fundamental challenge in chemistry, involves forecasting the resulting products from a given reaction process.

Chemical Reaction Prediction
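
For readers unfamiliar with the task, the toy snippet below shows one common way product prediction is framed: reactant SMILES strings in, product SMILES out. The example reaction and the prompt format are illustrative assumptions, not taken from ReLM.

```python
# Illustrative only: a textbook esterification, not an example from the paper.
reaction = {
    "reactants": ["CC(=O)O", "CCO"],   # acetic acid + ethanol (SMILES)
    "product": "CC(=O)OCC",            # ethyl acetate (SMILES)
}

def format_prompt(reactants: list[str]) -> str:
    """Hypothetical prompt format for a language-model-based predictor."""
    return "Reactants: " + ".".join(reactants) + " -> Product: ?"

print(format_prompt(reaction["reactants"]))
```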

Accelerating the Evolutionary Algorithms by Gaussian Process Regression with ε-greedy acquisition function

no code implementations • 13 Oct 2022 • Rui Zhong, Enzhi Zhang, Masaharu Munetomo

Based on this hypothesis, in each generation of optimization we replace the worst individual in the Evolutionary Algorithm (EA) population with the elite individual, so that the elite participates in the evolution process.

Bayesian Optimization Evolutionary Algorithms +2
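
The elite-replacement step described in the abstract can be sketched in a few lines. The toy example below uses a sphere objective and a simple mutation-only EA; how the elite is actually produced (e.g. via the GP surrogate and the ε-greedy acquisition) is abstracted into a placeholder function, so this is a sketch of the idea rather than the paper's algorithm.

```python
# Toy sketch: each generation the worst individual is replaced by an "elite"
# before variation/selection, as the abstract describes. `propose_elite` is a
# placeholder for however the elite is obtained (e.g. a surrogate-guided point).
import random

def fitness(x):                        # toy objective: sphere function (minimize)
    return sum(v * v for v in x)

def propose_elite(best):               # placeholder: small perturbation of the best-so-far
    return [v + random.gauss(0, 0.05) for v in best]

dim, pop_size = 5, 20
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

for gen in range(100):
    pop.sort(key=fitness)              # best first
    pop[-1] = propose_elite(pop[0])    # replace the worst individual with the elite
    offspring = [[v + random.gauss(0, 0.1) for v in ind] for ind in pop]
    pop = sorted(pop + offspring, key=fitness)[:pop_size]

print(f"best fitness after 100 generations: {fitness(pop[0]):.4f}")
```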

Accelerating the Genetic Algorithm for Large-scale Traveling Salesman Problems by Cooperative Coevolutionary Pointer Network with Reinforcement Learning

no code implementations • 27 Sep 2022 • Rui Zhong, Enzhi Zhang, Masaharu Munetomo

In this paper, we propose a two-stage optimization strategy for solving Large-scale Traveling Salesman Problems (LSTSPs), named CCPNRL-GA. First, we hypothesize that the participation of a well-performing individual as an elite can accelerate the convergence of optimization.

valid
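
To make the cooperative-coevolution part of the title concrete, the sketch below decomposes a large random TSP instance into spatial clusters, solves each cluster with a cheap nearest-neighbour heuristic, and stitches the sub-tours together. This is a generic illustration of that decomposition idea under stated assumptions, not the CCPNRL-GA pipeline itself, which per the title combines a genetic algorithm with a cooperative coevolutionary pointer network and reinforcement learning.

```python
# Hedged sketch of the generic decomposition idea: cluster cities spatially,
# solve each cluster with a cheap heuristic (a stand-in for a learned solver),
# then concatenate the sub-tours. Not the CCPNRL-GA method itself.
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(200)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Decompose: k angular sectors around the centroid act as crude clusters.
k = 4
cx = sum(x for x, _ in cities) / len(cities)
cy = sum(y for _, y in cities) / len(cities)
clusters = [[] for _ in range(k)]
for c in cities:
    ang = math.atan2(c[1] - cy, c[0] - cx) % (2 * math.pi)
    clusters[min(k - 1, int(ang / (2 * math.pi) * k))].append(c)

# Solve each sub-problem with nearest-neighbour, then stitch the sub-tours.
def nearest_neighbour(points):
    tour, rest = [points[0]], points[1:]
    while rest:
        nxt = min(rest, key=lambda p: dist(tour[-1], p))
        tour.append(nxt)
        rest.remove(nxt)
    return tour

full_tour = [c for cluster in clusters if cluster for c in nearest_neighbour(cluster)]
length = sum(dist(full_tour[i], full_tour[(i + 1) % len(full_tour)])
             for i in range(len(full_tour)))
print(f"tour length over {len(full_tour)} cities: {length:.2f}")
```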
