Federated Few-Shot Learning on Neuromorphic Hardware: An Empirical Study Across Physical Edge Nodes

TL;DR

Federated few-shot learning on physical neuromorphic hardware reaches 77.0% accuracy with the FedUnion (neuron-level concatenation) strategy, while element-wise averaging (FedAvg) degrades accuracy.

cs.NE Β· 2026-03-13
Steven Motta, Gioele Nanni
federated learning Β· neuromorphic hardware Β· few-shot learning Β· STDP Β· edge computing

Key Findings

Methodology

This study presents a method for federated few-shot learning on neuromorphic hardware using BrainChip Akida AKD1000 processors. A two-node federated system was constructed, and approximately 1,580 experimental trials were conducted to evaluate four weight exchange strategies. Neuron-level concatenation (FedUnion) consistently preserved accuracy, while element-wise weight averaging (FedAvg) significantly degraded it. Domain-adaptive fine-tuning of the feature extractor was identified as the primary factor for accuracy improvement.
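As an illustrative sketch (not the authors' implementation, and with made-up weight values), the two contrasting exchange strategies can be shown on plain binary weight matrices: FedAvg averages element-wise and must re-binarize, blurring both nodes' learned prototypes, while FedUnion concatenates neuron rows so every prototype survives intact.

```python
# Hypothetical binary STDP weight matrices from two nodes (illustrative only):
# each inner list is one neuron's binary synapse row, i.e. a learned prototype.
w_node_a = [[1, 0, 1, 0],
            [0, 1, 0, 1]]
w_node_b = [[1, 1, 0, 0],
            [0, 0, 1, 1]]

def fed_avg(wa, wb, threshold=0.5):
    """Element-wise averaging followed by re-binarization.
    Disagreeing synapses average to 0.5 and are resolved by the
    threshold, mixing the two nodes' prototypes."""
    return [[1 if (a + b) / 2 >= threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(wa, wb)]

def fed_union(wa, wb):
    """Neuron-level concatenation: every prototype from both nodes
    survives unchanged; only the neuron population grows."""
    return wa + wb

merged_avg = fed_avg(w_node_a, w_node_b)      # still 2 neurons, prototypes mixed
merged_union = fed_union(w_node_a, w_node_b)  # 4 neurons, prototypes intact
```

With these toy weights, averaging collapses four distinct prototypes into two mixed ones, while the union keeps all four, which mirrors the accuracy gap the paper reports between the two strategies.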

Key Results

  • The FedUnion strategy achieved a federated accuracy of 77.0% when scaling feature dimensionality from 64 to 256 (n=30, p<0.001), outperforming other strategies.
  • Element-wise weight averaging (FedAvg) resulted in a significant accuracy drop (p=0.002), indicating its unsuitability for federated learning with STDP weights.
  • Experiments showed that wider features benefit federated learning more than individual learning, and binarization impacts federated learning more, suggesting the importance of neuron prototype distinctiveness in cross-node transfer.

Significance

This research is the first to achieve federated learning of STDP-trained models across physical neuromorphic devices, addressing a gap in existing studies. By conducting experiments on actual hardware, it demonstrates the feasibility of federated learning on low-power neuromorphic processors, offering new possibilities for intelligent edge computing devices. This work has significant implications for both academia and industry, providing technical support for implementing intelligent applications in low-power environments.

Technical Contribution

The technical contributions of this paper include the first implementation of federated learning on physical neuromorphic hardware, proposing federated strategies suitable for STDP weights (e.g., FedUnion), and experimentally validating their effectiveness. Additionally, the paper reveals the impact of feature dimensionality expansion on federated learning performance, offering new perspectives for future neuromorphic computing research.

Novelty

This study is the first to implement federated learning of STDP weights on physical neuromorphic hardware. Unlike prior simulation-only work, it runs its experiments on actual devices, validating the feasibility of federated learning in low-power environments.

Limitations

  • Due to the binary nature of STDP weights, element-wise weight averaging (FedAvg) performs poorly in federated learning, leading to accuracy degradation.
  • The experiments were conducted on a two-node system, and the performance in larger networks has not yet been verified.
  • The fine-tuning of the feature extractor relies on specific datasets, which may limit the generalizability of the method.

Future Work

Future research could explore implementing federated learning in systems with more nodes and verify its performance in different application scenarios. Additionally, optimizing the fine-tuning process of the feature extractor could improve the method's generalizability and adaptability.

AI Executive Summary

In the era of neuromorphic computing, achieving efficient machine learning on low-power edge devices has become a critical research topic. Traditional federated learning methods rely on floating-point gradient updates, while the STDP mechanism of neuromorphic hardware produces binary weight updates, posing a challenge for implementing federated learning on such hardware.

This paper presents a method for federated few-shot learning on neuromorphic hardware, using BrainChip Akida AKD1000 processors to construct a two-node federated system. Approximately 1,580 experimental trials were conducted to evaluate four weight exchange strategies: neuron-level concatenation (FedUnion) consistently preserved accuracy, while element-wise weight averaging (FedAvg) degraded it.

The experimental results show that the FedUnion strategy achieved a federated accuracy of 77.0% when scaling feature dimensionality from 64 to 256 (n=30, p<0.001), significantly outperforming other strategies. This indicates that feature quality is the key factor for accuracy improvement, and wider features benefit federated learning more than individual learning.

This research is the first to achieve federated learning of STDP-trained models across physical neuromorphic devices, addressing a gap in existing studies. By conducting experiments on actual hardware, it demonstrates the feasibility of federated learning on low-power neuromorphic processors, offering new possibilities for intelligent edge computing devices.

However, due to the binary nature of STDP weights, element-wise weight averaging (FedAvg) performs poorly in federated learning, leading to accuracy degradation. Additionally, the experiments were conducted on a two-node system, and the performance in larger networks has not yet been verified. Future research could explore implementing federated learning in systems with more nodes and verify its performance in different application scenarios.

Deep Dive

Abstract

Federated learning on neuromorphic hardware remains unexplored because on-chip spike-timing-dependent plasticity (STDP) produces binary weight updates rather than the floating-point gradients assumed by standard algorithms. We build a two-node federated system with BrainChip Akida AKD1000 processors and run approximately 1,580 experimental trials across seven analysis phases. Of four weight-exchange strategies tested, neuron-level concatenation (FedUnion) consistently preserves accuracy while element-wise weight averaging (FedAvg) destroys it (p = 0.002). Domain-adaptive fine-tuning of the upstream feature extractor accounts for most of the accuracy gains, confirming feature quality as the dominant factor. Scaling feature dimensionality from 64 to 256 yields 77.0% best-strategy federated accuracy (n=30, p < 0.001). Two independent asymmetries (wider features help federation more than individual learning, while binarization hurts federation more) point to a shared prototype complementarity mechanism: cross-node transfer scales with the distinctiveness of neuron prototypes.
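One way to make the abstract's "prototype complementarity" reading concrete is a toy distinctiveness score (an illustrative metric, not one defined in the paper): treat each neuron's binary weight row as a prototype and compute the mean pairwise normalized Hamming distance; the hypothesis is that cross-node transfer scales with this quantity.

```python
from itertools import combinations

def mean_prototype_distinctiveness(prototypes):
    """Mean pairwise Hamming distance between binary prototype rows,
    normalized to [0, 1] by the feature width (illustrative metric)."""
    width = len(prototypes[0])
    pairs = list(combinations(prototypes, 2))
    total = sum(
        sum(1 for x, y in zip(p, q) if x != y) / width
        for p, q in pairs
    )
    return total / len(pairs)

# Four hypothetical 8-dimensional prototypes: two near-duplicates
# (low complementarity) alongside two distinct ones.
protos = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1, 0, 1, 1],  # near-duplicate of the first
    [0, 1, 0, 1, 0, 1, 0, 1],
    [1, 1, 1, 1, 0, 0, 0, 0],
]
score = mean_prototype_distinctiveness(protos)
```

Under this reading, widening features from 64 to 256 dimensions gives prototypes more room to differ, raising such a distinctiveness score, while binarization caps each dimension's resolution and lowers it; both effects would hit federation harder than individual learning, matching the two asymmetries the abstract reports.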

cs.NE Β· cs.DC Β· cs.LG

References (20)

  β€’ Guoqiang Bi, M. Poo (1998). Synaptic Modifications in Cultured Hippocampal Neurons: Dependence on Spike Timing, Synaptic Strength, and Postsynaptic Cell Type. 4559 citations.
  β€’ Jake Snell, Kevin Swersky, R. Zemel (2017). Prototypical Networks for Few-shot Learning. 9592 citations.
  β€’ Sinno Jialin Pan, Qiang Yang (2010). A Survey on Transfer Learning. 22814 citations.
  β€’ W. Maass (1996). Networks of Spiking Neurons: The Third Generation of Neural Network Models. 2833 citations.
  β€’ Michael Pfeiffer, T. Pfeil (2018). Deep Learning With Spiking Neurons: Opportunities and Challenges. 693 citations.
  β€’ Weisong Shi, Jie Cao, Quan Zhang et al. (2016). Edge Computing: Vision and Challenges. 6565 citations.
  β€’ N. Skatchkovsky, Hyeryung Jang, O. Simeone (2019). Federated Neuromorphic Learning of Spiking Neural Networks for Low-Power Edge Intelligence. 43 citations.
  β€’ Pete Warden (2018). Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition. 1910 citations.
  β€’ H. B. McMahan, Eider Moore, Daniel Ramage et al. (2016). Communication-Efficient Learning of Deep Networks from Decentralized Data. 23029 citations.
  β€’ Yeshwanth Venkatesha, Youngeun Kim, L. Tassiulas et al. (2021). Federated Learning With Spiking Neural Networks. 66 citations.
  β€’ Mike Davies, N. Srinivasa, Tsung-Han Lin et al. (2018). Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. 3293 citations.
  β€’ Yundong Zhang, Naveen Suda, Liangzhen Lai et al. (2017). Hello Edge: Keyword Spotting on Microcontrollers. 483 citations.
  β€’ P. Kairouz, H. B. McMahan, Brendan Avent et al. (2019). Advances and Open Problems in Federated Learning. 8048 citations.
  β€’ Helin Yang, K. Lam, Liang Xiao et al. (2022). Lead federated neuromorphic learning for wireless edge artificial intelligence. 98 citations.
  β€’ Q. Li, Zeyi Wen, Zhaomin Wu et al. (2019). A Survey on Federated Learning Systems: Vision, Hype and Reality for Data Privacy and Protection. 1332 citations.
  β€’ Jakub KonecnΓ½, H. B. McMahan, Felix X. Yu et al. (2016). Federated Learning: Strategies for Improving Communication Efficiency. 5270 citations.
  β€’ M. Padmanabhan, M. Picheny (2017). Comparison of Parametric Representation for Monosyllabic Word Recognition in Continuously Spoken Sentences. 1871 citations.
  β€’ P. Merolla, J. Arthur, Rodrigo Alvarez-Icaza et al. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. 3758 citations.
  β€’ Yue Zhao, Meng Li, Liangzhen Lai et al. (2018). Federated Learning with Non-IID Data. 3130 citations.
  β€’ Benoit Jacob, S. Kligys, Bo Chen et al. (2017). Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. 3973 citations.