16 papers with code • 12 benchmarks • 8 datasets
Latest papers
Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers
Retrieval-Augmented Generation (RAG) is a prevalent approach to combining a private knowledge base of documents with Large Language Models (LLMs) to build Generative Q&A (Question-Answering) systems.
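The "hybrid query-based retrieval" in the title refers to blending sparse (keyword) and dense (semantic) relevance scores. A minimal sketch of that pattern, using term overlap as a stand-in for BM25 and a bag-of-words cosine as a stand-in for embedding similarity (the paper's actual retrievers and weighting are not shown here):

```python
from collections import Counter
import math

def keyword_score(query, doc):
    """Sparse score: raw term-overlap count (stand-in for BM25)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def dense_score(query, doc):
    """Toy 'semantic' score: cosine over bag-of-words vectors.
    A real system would use learned embedding vectors instead."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    nq = math.sqrt(sum(v * v for v in q.values()))
    nd = math.sqrt(sum(v * v for v in d.values()))
    return dot / (nq * nd) if nq and nd else 0.0

def hybrid_retrieve(query, docs, alpha=0.5, k=1):
    """Blend sparse and dense scores, return the top-k documents.
    The retrieved passages would then be prepended to the LLM prompt."""
    scored = [(alpha * keyword_score(query, d) + (1 - alpha) * dense_score(query, d), d)
              for d in docs]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [d for _, d in scored[:k]]

docs = [
    "RAG pipelines ground LLM answers in retrieved documents",
    "Vision transformers process images as patch sequences",
]
top = hybrid_retrieve("how does RAG ground LLM answers", docs, k=1)
```

The `alpha` blend weight is a hypothetical knob; in practice it is tuned per corpus.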
Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation
Motivated by leveraging Stronger pre-trained models and Fewer trainable parameters for Superior generalizability, we introduce Rein, a robust fine-tuning approach that parameter-efficiently harnesses VFMs for DGSS.
Llama 2: Open Foundation and Fine-Tuned Chat Models
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.
Dual Cross-Attention for Medical Image Segmentation
DCA addresses the semantic gap between encoder and decoder features by sequentially capturing channel and spatial dependencies across multi-scale encoder features.
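"Sequentially capturing channel and spatial dependencies" can be illustrated with a simplified squeeze-style gate applied channel-first, then location-wise. This is a generic sketch of sequential channel/spatial attention, not the paper's actual DCA blocks:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_then_spatial_attention(x):
    """Sequential attention over a feature map x of shape (C, H, W).

    Step 1 (channel): gate each channel by its globally pooled response.
    Step 2 (spatial): gate each location by its channel-mean response.
    """
    c_gate = sigmoid(x.mean(axis=(1, 2)))        # (C,)  per-channel weight
    x = x * c_gate[:, None, None]                # reweight channels
    s_gate = sigmoid(x.mean(axis=0))             # (H, W) per-location weight
    return x * s_gate[None, :, :]                # reweight locations

feat = np.random.default_rng(0).normal(size=(8, 16, 16))
out = channel_then_spatial_attention(feat)       # same shape as input
```

The output keeps the input's shape, so the block can be dropped between encoder and decoder stages.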
MTet: Multi-domain Translation for English and Vietnamese
We introduce MTet, the largest publicly available parallel corpus for English-Vietnamese translation.
Gradient Gating for Deep Multi-Rate Learning on Graphs
We present Gradient Gating (G²), a novel framework for improving the performance of Graph Neural Networks (GNNs).
An efficient encoder-decoder architecture with top-down attention for speech separation
In addition, a large-size version of TDANet obtained SOTA results on three datasets, with MACs still only 10% of Sepformer and CPU inference time only 24% of Sepformer.
NFL: Robust Learned Index via Distribution Transformation
Based on the characteristics of the transformed keys, we propose a robust After-Flow Learned Index (AFLI).
Visual Spatial Reasoning
Spatial relations are a basic part of human cognition.
Anomaly Detection via Reverse Distillation from One-Class Embedding
Knowledge distillation (KD) achieves promising results on the challenging problem of unsupervised anomaly detection (AD). The representation discrepancy of anomalies in the teacher-student (T-S) model provides essential evidence for AD.
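The core idea, scoring anomalies by the representation discrepancy between a frozen teacher and a student trained only on normal data, can be sketched as a per-location cosine-distance map. The feature shapes and the distance choice here are illustrative, not the paper's exact setup:

```python
import numpy as np

def anomaly_map(teacher_feat, student_feat, eps=1e-8):
    """Per-location anomaly score for features of shape (C, H, W):
    1 - cosine similarity between teacher and student vectors.
    Low where the student reproduces the teacher (normal regions),
    high where it fails to (anomalous regions)."""
    t = teacher_feat / (np.linalg.norm(teacher_feat, axis=0, keepdims=True) + eps)
    s = student_feat / (np.linalg.norm(student_feat, axis=0, keepdims=True) + eps)
    return 1.0 - (t * s).sum(axis=0)             # shape (H, W)

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 4, 4))
student_normal = teacher + 0.01 * rng.normal(size=teacher.shape)  # matches teacher
student_anom = rng.normal(size=teacher.shape)                     # fails to match
score_normal = anomaly_map(teacher, student_normal).mean()
score_anom = anomaly_map(teacher, student_anom).mean()
```

The map can be upsampled to image resolution to localize defects.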