A Systematic Study of Pseudo-Relevance Feedback with LLMs
Study shows LLM-generated pseudo-relevance feedback significantly improves query performance, especially in low-resource tasks.
Nour Jedidi, Jimmy Lin
Using RCT methodology to evaluate AI systems' impact on human performance, revealing methodological challenges and solutions.
Patricia Paskov, Kevin Wei, Shen Zhou Hong et al.
Using cross-species transfer learning to enhance electrophysiology-to-transcriptomics mapping accuracy in cortical GABAergic interneurons.
Theo Schwider, Ramin Ramezani
MLP layers in Transformers perform binary routing; validated in GPT-2, where removing MLP layers increases perplexity by 43.3%.
Peter Balogh
GLM-OCR combines CogViT visual encoder and GLM language decoder to enhance document understanding efficiency.
Shuaiqi Duan, Yadong Xue, Weihan Wang et al.
Efficient approximation of analytic and L^p functions using height-augmented ReLU networks, significantly improving approximation rates.
ZeYu Li, FengLei Fan, TieYong Zeng
We release a large bilingual library dataset for GND-based multi-label classification.
Jennifer D'Souza, Sameer Sadruddin, Maximilian Kähler et al.
LLM-assisted MIPVU rule script generation enables interpretable Chinese metaphor identification; protocol choice is the main source of variation.
Weihang Huang, Mengna Liu
RAGPerf is an end-to-end benchmarking framework for retrieval-augmented generation systems, supporting various datasets and embedding models with negligible performance overhead.
Shaobo Li, Yirui Zhou, Yuan Xu et al.
Using structured linked data as a memory layer improves RAG retrieval accuracy by 29.6% in a standard RAG pipeline and 29.8% in an agentic pipeline.
Andrea Volpini, Elie Raad, Beatrice Gamba et al.
This paper presents an event-driven E-Skin system with dynamic binary scanning and real-time SNN classification, achieving a 12.8x scan reduction and 92.11% accuracy.
Gaishan Li, Zhengnan Fu, Anubhab Tripathi et al.
Introduced the DOWIS dataset to evaluate SLLMs in multilingual settings, finding that text prompts outperform spoken prompts.
Maike Züfle, Sara Papi, Fabian Retkowski et al.
N-gram models predict reading time best due to sensitivity to simple statistics.
James A. Michaelov, Roger P. Levy
Proposed a Variational Latent Equilibrium (VLE) method to approximate BPTT in a biologically plausible manner, enhancing complex spatiotemporal pattern learning.
Simon Brandt, Paul Haider, Walter Senn et al.
Introduced NEMO-DE and NEEF-DE evolutionary frameworks for near-field multi-source localization, avoiding grid mismatch errors.
Seyed Jalaleddin Mousavirad, Parisa Ramezani, Mattias O'Nils et al.