Sharpness-Aware Poisoning: Enhancing Transferability of Injective Attacks on Recommender Systems

TL;DR

SharpAP significantly enhances the transferability of poisoning attacks on recommender systems, showing improved performance across multiple datasets.

cs.LG · Advanced · 2026-04-24
Junsong Xie Yonghui Yang Pengyang Shao Le Wu
recommender systems poisoning attacks model transferability optimization algorithm security

Key Findings

Methodology

This paper introduces a novel attack method called Sharpness-Aware Poisoning (SharpAP), aimed at enhancing the transferability of poisoning attacks on recommender systems. The method applies the sharpness-aware minimization principle to identify an approximately worst-case victim model and optimizes the poisoned data against it, formulating the attack as a min-max-min tri-level optimization problem. Integrating this sharpness-aware step into the iterative attack process yields more robust poisoned data that is less sensitive to structural shifts between the surrogate and victim models.
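As a concrete sketch of the sharpness-aware step described above (a minimal NumPy illustration; the function name, the flat-parameter view, and the radius `rho` are our assumptions, not the paper's code):

```python
import numpy as np

def sam_worst_case(params, grad_fn, rho=0.05):
    """Approximate the worst-case model in a rho-ball around the surrogate
    parameters, following the sharpness-aware minimization principle:
    one ascent step along the normalized loss gradient.

    params  -- flat parameter vector of the surrogate model
    grad_fn -- returns the gradient of the loss at params
    rho     -- neighborhood radius (hypothetical setting)
    """
    g = grad_fn(params)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent direction
    return params + eps  # approximately worst-case victim model

# Toy check: for L(w) = 0.5 * ||w||^2 the gradient is w itself, so the
# perturbed point lies exactly rho further from the origin.
w = np.array([3.0, 4.0])                              # ||w|| = 5
w_adv = sam_worst_case(w, lambda p: p, rho=0.5)       # ||w_adv|| = 5.5
```

The key design choice is normalizing the gradient, so the perturbation size is always `rho` regardless of the loss scale.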

Key Results

  • On the MovieLens-1M dataset, the RevAdv attack using SharpAP improved the H@10 metric from 32% to 64% on the WRMF model, with transferability improvements of 16%, 22%, and 18% on BPR, LightGCN, and SGL models, respectively.
  • On the Gowalla dataset, SharpAP significantly enhanced attack transferability, particularly on the SimGCL model, with approximately a 30% increase in the H@20 metric.
  • Comparative experiments demonstrate that SharpAP consistently outperforms existing methods across multiple real-world datasets, especially in scenarios with significant structural discrepancies between models.

Significance

This research is significant in the field of recommender system security. As recommender systems become increasingly integral across various platforms, their vulnerabilities to complex attacks are of growing concern. SharpAP reveals the fragility of current systems against sophisticated attacks by enhancing the transferability of poisoning attacks. This finding not only advances academic understanding and improvement of system security but also provides practical insights for industry in designing more secure recommender systems.

Technical Contribution

The technical contribution of this paper lies in proposing a new optimization framework, SharpAP, which addresses the limitations of existing methods in model structure transferability by introducing the sharpness-aware minimization principle. SharpAP's tri-level optimization strategy significantly enhances the robustness of poisoned data and the transferability of attacks. Additionally, the paper provides theoretical analysis, establishing a foundation for improving attack transferability by optimizing poisoned data against worst-case models.

Novelty

SharpAP is the first to apply the sharpness-aware minimization principle to poisoning attacks in recommender systems, addressing the transferability issue caused by structural discrepancies between surrogate and victim models. Unlike previous methods, SharpAP considers model structure variations when generating poisoned data, significantly improving attack efficacy.

Limitations

  • SharpAP increases computational complexity, particularly when handling large-scale datasets, as the optimization process may require more computational resources.
  • The method relies on the selection of an appropriate surrogate model; if the surrogate and victim models have significant structural differences, attack effectiveness may be compromised.
  • In certain specific recommender system architectures, the advantages of SharpAP may not be as pronounced as in others.

Future Work

Future research directions include exploring the application of SharpAP in other types of machine learning models, particularly in deep learning. Additionally, investigating ways to further enhance attack transferability without increasing computational complexity is an important research topic.

AI Executive Summary

In the digital age, recommender systems have become essential tools across various platforms, yet their security issues are increasingly evident, especially against poisoning attacks. Traditional poisoning attack methods typically rely on a fixed surrogate model to simulate potential victim models, but this approach neglects structural discrepancies between surrogate and victim models, limiting attack transferability.

This paper introduces a novel attack method called Sharpness-Aware Poisoning (SharpAP), which employs the sharpness-aware minimization principle to identify the approximately worst-case victim model and optimize poisoned data specifically against it. The attack is formulated as a min-max-min tri-level optimization problem; integrating the sharpness-aware step into the iterative attack process produces more robust poisoned data that is less sensitive to structural shifts in the surrogate model.

SharpAP was validated across multiple real-world datasets, demonstrating superior attack performance over existing methods on different recommender system models. Notably, SharpAP significantly improved attack transferability between models with substantial structural differences. For instance, on the MovieLens-1M dataset, the RevAdv attack using SharpAP improved the H@10 metric from 32% to 64% on the WRMF model.

This discovery not only advances academic understanding and improvement of system security but also provides practical insights for industry in designing more secure recommender systems. By enhancing the transferability of poisoning attacks, SharpAP reveals the fragility of current systems against sophisticated attacks.

However, SharpAP increases computational complexity, particularly when handling large-scale datasets, as the optimization process may require more computational resources. Future research directions include exploring the application of SharpAP in other types of machine learning models, particularly in deep learning. Additionally, investigating ways to further enhance attack transferability without increasing computational complexity is an important research topic.

Deep Analysis

Background

Recommender systems play a crucial role in modern information technology, widely used in e-commerce, social media, and other fields. As their importance grows, security issues have become a focus of attention. Poisoning attacks are a common form of attack, where attackers inject fake user profiles to influence recommendation results for unethical purposes. Existing poisoning attack methods typically rely on a fixed surrogate model to simulate potential victim models, but this approach neglects structural discrepancies between surrogate and victim models, limiting attack transferability. Recently, researchers have begun to focus on how to enhance the transferability of poisoning attacks to maintain effectiveness across different model structures.

Core Problem

The core problem facing poisoning attacks on recommender systems is attack transferability. Traditional methods assume that poisoned data generated for the surrogate model can be used to attack other victim models, but this assumption often fails when there are significant structural discrepancies between the surrogate and victim models. This transferability issue limits the effectiveness of poisoning attacks in practical applications. Therefore, improving attack transferability without complete knowledge of the victim model is a pressing challenge.

Innovation

The core innovation of this paper lies in proposing the Sharpness-Aware Poisoning (SharpAP) method, which addresses the transferability issue caused by structural discrepancies between surrogate and victim models by introducing the sharpness-aware minimization principle. SharpAP employs a tri-level optimization framework to identify the approximately worst-case victim model and optimize poisoned data specifically for this model. Unlike previous methods, SharpAP considers model structure variations when generating poisoned data, significantly improving attack efficacy. This innovation provides a new perspective for security research in recommender systems.

Methodology

The implementation process of the SharpAP method is as follows:


  • First, define the victim model space in the recommender system, ensuring that the recommendation loss on poisoned data does not exceed a preset threshold.
  • Second, utilize the sharpness-aware minimization principle to perform a localized search in the neighborhood of the surrogate model parameters, approximately identifying the worst-case model.
  • Then, formulate the poisoning attack problem as a min-max-min tri-level optimization problem, generating more robust poisoned data by integrating SharpAP into the iterative attack process.
  • Finally, validate SharpAP's transferability performance on different recommender system models through experiments, evaluating its attack effectiveness on real-world datasets.
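The steps above can be sketched as one loop. The snippet below is a deliberately tiny stand-in, not the paper's algorithm: quadratic toy losses replace the real recommendation and attack objectives, and a scalar `d` stands in for the poisoned profiles, purely to show how the three optimization levels interact:

```python
import numpy as np

def fit_surrogate(d):
    """Inner min: 'train' the surrogate on poisoned data d.
    Toy closed form: w*(d) = d/2 minimizes the quadratic (w - d/2)^2."""
    return d / 2.0

def attack_grad(w):
    """Gradient of a toy attack loss (w - 1)^2, minimized at w = 1."""
    return 2.0 * (w - 1.0)

def worst_case(w, rho=0.1):
    """Middle max: one sharpness-aware ascent step inside a rho-ball."""
    return w + rho * np.sign(attack_grad(w))

d = 0.0                               # poisoned data (scalar stand-in)
for _ in range(200):
    w = fit_surrogate(d)              # level 3: min over model parameters
    w_adv = worst_case(w)             # level 2: max over the rho neighborhood
    g_d = attack_grad(w_adv) * 0.5    # chain rule through w*(d) = d/2
    d -= 0.1 * g_d                    # level 1: min over the poisoned data
# d settles near 2, so the worst-case surrogate w*(d) sits near the
# attack optimum w = 1 despite the perturbation.
```

Because the outer update is driven by the gradient at the perturbed model `w_adv` rather than at `w` itself, the resulting `d` remains effective across the whole neighborhood of surrogate models, which is the mechanism behind the improved transferability.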

Experiments

The experiments validate SharpAP on three real-world datasets: MovieLens-1M, Amazon-book, and Gowalla. Baselines include RevAdv, RAPU, and DADA, evaluated with metrics such as H@10 and N@10. Ablation studies assess the contribution of each component of SharpAP, and key hyperparameters (e.g., learning rate and maximum number of iterations) are reported to support the reliability and reproducibility of the results.
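For reference, H@K and N@K style metrics can be computed along these lines (an illustrative NumPy helper; the paper's exact definitions may differ, e.g., in tie handling or multiple target items):

```python
import numpy as np

def hit_and_ndcg_at_k(scores, target_item, k=10):
    """H@K: fraction of users whose top-k list contains the target item.
    N@K: NDCG-style credit for the target, discounted by its rank.
    `scores` is a (users x items) matrix of predicted preferences."""
    topk = np.argsort(-scores, axis=1)[:, :k]   # top-k item ids per user
    hits = (topk == target_item)
    hit_any = hits.any(axis=1)
    h_at_k = hit_any.mean()
    pos = hits.argmax(axis=1)                   # rank of the target if hit
    gains = np.where(hit_any, 1.0 / np.log2(pos + 2.0), 0.0)
    n_at_k = gains.mean()
    return h_at_k, n_at_k

scores = np.array([[0.9, 0.1, 0.5],   # user 0: target (item 1) ranked last
                   [0.2, 0.8, 0.1]])  # user 1: target ranked first
h, n = hit_and_ndcg_at_k(scores, target_item=1, k=2)
# only user 1's top-2 contains item 1, so H@2 = 0.5 and N@2 = 0.5
```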

Results

Experimental results show that the SharpAP method consistently outperforms existing methods across multiple datasets. On the MovieLens-1M dataset, the RevAdv attack using SharpAP improved the H@10 metric from 32% to 64% on the WRMF model. On the Gowalla dataset, SharpAP significantly enhanced attack transferability, particularly on the SimGCL model, with approximately a 30% increase in the H@20 metric. Comparative experiments demonstrate that SharpAP consistently outperforms existing methods across multiple real-world datasets, especially in scenarios with significant structural discrepancies between models.

Applications

The SharpAP method has broad application potential in security research for recommender systems. Its direct application scenarios include security assessment of recommender systems in e-commerce platforms and social media. By enhancing the transferability of poisoning attacks, SharpAP can help researchers and engineers better understand and improve the security of recommender systems. Additionally, the method can be used to design more secure recommender systems to withstand potential complex attacks.

Limitations & Outlook

Although the SharpAP method excels in enhancing attack transferability, its increased computational complexity is a non-negligible issue. Particularly when handling large-scale datasets, the optimization process may require more computational resources. Additionally, the method relies on the selection of an appropriate surrogate model; if the surrogate and victim models have significant structural differences, attack effectiveness may be compromised. Future research directions include exploring the application of SharpAP in other types of machine learning models, particularly in deep learning.

Plain Language (accessible to non-experts)

Imagine you're in a kitchen preparing a meal. A recommender system is like a chef, recommending recipes based on your tastes and preferences. Sometimes, however, bad actors might sneak in fake ingredients (fake user profiles) to influence the chef's recommendations. Traditional methods are like using a fixed recipe to simulate the chef, but if the chef changes, the recipe might not work. SharpAP is like a smart assistant that adjusts the ingredients based on different chefs' styles, ensuring the recipe works in various kitchens. Through this method, SharpAP enhances the applicability of fake ingredients across different chefs, much like making fake ingredients effective in different kitchens.

ELI14 (explained like you're 14)

Hey there, friends! Did you know that recommender systems are like a personal shopping assistant online, suggesting products based on what you like? Some bad guys try to trick this assistant into recommending what they want. SharpAP is a new attack method that makes those tricks keep working even when the assistant is swapped for a different one. Imagine a game where the rules quietly change between rounds: SharpAP is like a player who practices against the toughest possible rule change, so its moves still win under many different rule sets. That's what makes SharpAP notable; its attacks stay effective across different "game rules."

Glossary

Sharpness-Aware Poisoning

An attack method that optimizes poisoned data using the sharpness-aware minimization principle to enhance transferability of poisoning attacks.

Used in this paper to improve the transferability of poisoning attacks on recommender systems.

Recommender System

A system that recommends potentially interesting items to users by analyzing user behavior data.

The paper discusses the security issues of recommender systems under poisoning attacks.

Injective Attack

An attack where attackers inject fake user profiles to influence the recommendation results of a system.

The paper explores how to enhance the transferability of injective attacks in recommender systems.

Surrogate Model

A fixed model used to simulate potential victim models, helping attackers generate poisoned data.

SharpAP optimizes the surrogate model to enhance attack transferability.

Victim Model

The target model in a recommender system that is attacked, which attackers typically do not fully understand.

SharpAP identifies the worst-case victim model to optimize poisoned data.

Tri-level Optimization

An optimization framework involving minimization, maximization, and another minimization step to solve complex optimization problems.

SharpAP formulates the poisoning attack problem as a tri-level optimization problem.
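Schematically, writing $D$ for the real interactions, $\hat{D}$ for the injected fake profiles, and $\rho$ for the neighborhood radius (notation ours; the paper's exact formulation may differ), the tri-level problem reads:

```latex
\min_{\hat{D}} \; \max_{\|\epsilon\| \le \rho} \;
  L_{\mathrm{atk}}\!\big(\theta^{*}(\hat{D}) + \epsilon\big)
\quad \text{s.t.} \quad
\theta^{*}(\hat{D}) = \arg\min_{\theta}
  L_{\mathrm{rec}}\big(\theta;\, D \cup \hat{D}\big)
```

The inner minimization trains the surrogate, the middle maximization seeks the worst-case victim model in its neighborhood, and the outer minimization optimizes the poisoned data against that worst case.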

Hit Ratio

A metric for evaluating recommender system performance, indicating the frequency of target items appearing in recommendation lists.

The paper uses hit ratio as a metric to evaluate the effectiveness of SharpAP attacks.

Collaborative Filtering

A technique for making recommendations based on interaction data between users and items.

Recommender systems typically use collaborative filtering to enhance recommendation accuracy.

Gradient-based Attack

A method that achieves attack objectives by optimizing parameterized fake profiles through gradients.

The paper compares gradient-based attacks with SharpAP's performance.

Sharpness-aware Minimization

An optimization strategy that seeks the maximum loss within a local neighborhood of model parameters and then minimizes it.

SharpAP uses the sharpness-aware minimization principle to optimize poisoned data.
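In our notation (not the paper's), the sharpness-aware objective and its standard first-order approximation of the worst-case perturbation are:

```latex
\min_{\theta} \; \max_{\|\epsilon\|_2 \le \rho} L(\theta + \epsilon),
\qquad
\epsilon^{*}(\theta) \approx
  \rho \, \frac{\nabla_{\theta} L(\theta)}{\|\nabla_{\theta} L(\theta)\|_2}
```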

Open Questions (unanswered questions from this research)

  1. How can the transferability of SharpAP be further enhanced without increasing computational complexity? Current methods require significant computational resources when handling large-scale datasets, necessitating more efficient optimization strategies in future research.
  2. What is the effectiveness of SharpAP when applied to deep learning models? While its effectiveness has been validated in recommender systems, further research is needed to determine its applicability to other types of machine learning models.
  3. How does SharpAP perform when there are extreme structural differences between surrogate and victim models? Current research primarily focuses on models with minor structural differences, necessitating exploration of broader application scenarios in the future.
  4. How should an appropriate surrogate model be selected to maximize the effectiveness of SharpAP? The choice of surrogate model significantly impacts attack effectiveness, requiring more systematic selection strategies in future research.
  5. How can the security of SharpAP be ensured in practical applications? While it enhances attack transferability, preventing its malicious use is also a concern that warrants attention.

Applications

Immediate Applications

E-commerce Platform Security Assessment

SharpAP can be used to assess the security of e-commerce platform recommender systems by simulating complex attack scenarios, helping platforms identify potential security vulnerabilities.

Social Media Recommender System Optimization

By enhancing the transferability of poisoning attacks, SharpAP can help social media platforms optimize the security of their recommender systems, preventing malicious attackers from manipulating recommendation results.

Recommender System Security Research

Researchers can use SharpAP for security research in recommender systems, exploring attack effects across different model structures to provide references for designing more secure systems.

Long-term Vision

Cross-platform Recommender System Security Standards

With the application of SharpAP, future security standards for recommender systems across platforms can be established, ensuring consistent security against complex attacks across different platforms.

Adaptive Security Mechanisms in Intelligent Recommender Systems

By introducing SharpAP, future intelligent recommender systems can achieve adaptive security mechanisms, dynamically adjusting defense strategies based on different attack scenarios to enhance overall system security.

Abstract

Recommender Systems (RS) have been shown to be vulnerable to injective attacks, where attackers inject limited fake user profiles to promote the exposure of target items to real users for unethical gains (e.g., economic or political advantages). Since attackers typically lack knowledge of the victim model deployed in the target RS, existing methods resort to using a fixed surrogate model to mimic the potential victim model. Despite considerable progress, we argue that the assumption that poisoned data generated for the surrogate model can be used to attack other victim models is wishful. When there are significant structural discrepancies between the surrogate and victim models, the attack transferability inevitably suffers. Intuitively, if we can identify the worst-case victim model and iteratively optimize the poisoning effect specifically against it, then the generated poisoned data would be better transferred to other victim models. However, exactly identifying the worst-case victim model during the attack process is challenging due to the large space of victim models. To this end, in this work, we propose a novel attack method called Sharpness-Aware Poisoning (SharpAP). Specifically, it employs the sharpness-aware minimization principle to seek the approximately worst-case victim model and optimizes the poisoned data specifically for this worst-case model. The poisoning attack with SharpAP is formulated as a min-max-min tri-level optimization problem. By integrating SharpAP into the iterative process for attacks, our method can generate more robust poisoned data which is less sensitive to the shift of model structure, mitigating the overfitting to the surrogate model. Comprehensive experimental comparisons on three real-world datasets demonstrate that SharpAP can significantly enhance the attack transferability.

cs.LG cs.IR

References (20)

  • Attacking Recommender Systems with Augmented User Profiles. Chen Lin, Si Chen, Hui Li et al., 2020.
  • Data Poisoning Attack against Recommender System Using Incomplete and Perturbed Data. Hengtong Zhang, Changxin Tian, Yaliang Li et al., 2021.
  • Sharpness-Aware Minimization for Efficiently Improving Generalization. Pierre Foret, Ariel Kleiner, H. Mobahi et al., 2020.
  • Revisiting Adversarially Learned Injection Attacks Against Recommender Systems. Jiaxi Tang, Hongyi Wen, Ke Wang, 2020.
  • LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. Xiangnan He, Kuan Deng, Xiang Wang et al., 2020.
  • Unveiling Vulnerabilities of Contrastive Recommender Systems to Poisoning Attacks. Zongwei Wang, Junliang Yu, Min Gao et al., 2023.
  • Uplift Modeling for Target User Attacks on Recommender Systems. Wenjie Wang, Changsheng Wang, Fuli Feng et al., 2024.
  • Matrix Factorization Techniques for Recommender Systems. Y. Koren, Robert M. Bell, C. Volinsky, 2009.
  • BPR: Bayesian Personalized Ranking from Implicit Feedback. Steffen Rendle, Christoph Freudenthaler, Zeno Gantner et al., 2009.
  • Revisiting Injective Attacks on Recommender Systems. Haoyang Li, Shimin Di, Lei Chen, 2022.
  • Sharpness-Aware Data Poisoning Attack. P. He, Han Xu, J. Ren et al., 2023.
  • On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. N. Keskar, Dheevatsa Mudigere, J. Nocedal et al., 2016.
  • Gray-Box Shilling Attack: An Adversarial Learning Approach. Zongwei Wang, Min Gao, Jundong Li et al., 2022.
  • Adversarial Weight Perturbation Helps Robust Generalization. Dongxian Wu, Shutao Xia, Yisen Wang, 2020.
  • An Automatic Weighting Scheme for Collaborative Filtering. Rong Jin, J. Chai, Luo Si, 2004.
  • Poisoning Attacks and Defenses in Recommender Systems: A Survey. Zongwei Wang, Junliang Yu, Min Gao et al., 2024.
  • Generative-Contrastive Graph Learning for Recommendation. Yonghui Yang, Zhengwei Wu, Le Wu et al., 2023.
  • Improving the Shortest Plank: Vulnerability-Aware Adversarial Training for Robust Recommender System. Kaike Zhang, Qi Cao, Yunfan Wu et al., 2024.
  • Toward Trustworthy Recommender Systems: An Analysis of Attack Models and Algorithm Robustness. B. Mobasher, R. Burke, Runa Bhaumik et al., 2007.
  • Targeted Shilling Attacks on GNN-based Recommender Systems. Sihang Guo, Ting Bai, Weihong Deng, 2023.