Why Architecture Choice Matters in Symbolic Regression
Study shows architecture choice crucial for symbolic regression target recovery using EML operator.
Chakshu Gupta
The Structure-Guided Diffusion Model (SGDM) integrates structural information to enhance EEG-based visual reconstruction fidelity.
Yongxiang Lian, Yueyang Cang, Pingge Hu et al.
Lsys encoding excels in neural network evolution, achieving a food count of 3802, surpassing Matrix encoding.
Alexander Stuy, Nodin Weddington
MARS model achieves 21x training speedup and significant performance improvement through parallelization and subtractive skip connections.
Coşku Can Horuz, Andrea Ceni, Claudio Gallicchio et al.
Similarity-based portfolio construction enhances black-box optimization via k-nearest neighbor fine-tuning.
Catalin-Viorel Dinu, Diederick Vermetten, Carola Doerr
Combining convolution and delay learning in RSNNs achieves 52x inference speedup and 99% parameter savings on audio tasks.
Lúcio Folly Sanches Zebendo, Eleonora Cicciarella, Michele Rossi
Neuromorphic parameter estimation for power converter health monitoring using spiking neural networks, achieving ~270x energy reduction.
Hyeongmeen Baik, Hamed Poursiami, Maryam Parsa et al.
Simulating mouse cortical neurogenesis generates a minimal circuit of 85 neurons, achieving over 90% accuracy on MNIST after one training epoch.
Duan Zhou
The cHM algorithm is a universal framework for continuous optimization, excelling on 28 benchmark functions.
Piotr A. Kowalski, Szymon Kucharczyk, Jacek Mańdziuk
LLMs as semantic interfaces and ethical mediators in neuro-digital ecosystems, introducing Neuro-Linguistic Integration.
Alexander V. Shenderuk-Zhidkov, Alexander E. Hramov
A synthesizable RTL architecture for predictive coding networks, supporting local prediction-error dynamics, executed directly in hardware.
Timothy Oh
Utilizing a Quadratic Surrogate Attractor to enhance Particle Swarm Optimization's global convergence and robustness.
Maurizio Clemente, Marcello Canova
Federated few-shot learning on neuromorphic hardware using FedUnion strategy achieves 77.0% accuracy.
Steven Motta, Gioele Nanni
Proposes an SRAM-based CIM accelerator optimized for linear-decay SNNs, achieving a 15.9x to 69x energy-efficiency improvement.
Hongyang Shang, Shuai Dong, Yahan Yang et al.
Stable Spike achieves dual consistency optimization via bitwise AND operations, enhancing SNN recognition performance under ultra-low latency by up to 8.33%.
Yongqi Ding, Kunshan Yang, Linze Li et al.
This paper presents an event-driven E-Skin system with dynamic binary scanning and real-time SNN classification, achieving a 12.8x scan reduction and 92.11% accuracy.
Gaishan Li, Zhengnan Fu, Anubhab Tripathi et al.
Introduces NEMO-DE and NEEF-DE, evolutionary frameworks for near-field multi-source localization that avoid grid-mismatch errors.
Seyed Jalaleddin Mousavirad, Parisa Ramezani, Mattias O'Nils et al.