Large Language Models Exhibit Normative Conformity

TL;DR

Large language models conform not only to be accurate (informational conformity) but also to avoid conflict or gain group acceptance (normative conformity); analysis of internal representations suggests the two behaviors are driven by distinct mechanisms.

cs.AI · 2026-04-21
Mikako Bito, Keita Nishimoto, Kimitaka Asatani, Ichiro Sakata
large language models · conformity · social psychology · multi-agent systems · normative conformity

Key Findings

Methodology

The study designs new tasks to distinguish between informational and normative conformity, the latter being behavior motivated by avoiding conflict or gaining group acceptance. It evaluates six LLMs (e.g., gpt-4o, gpt-5.1) and observes conformity tendencies by manipulating subtle aspects of social context.
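The paper's tasks are not reproduced in detail here; as a rough illustration, an Asch-style conformity trial for an LLM might look like the sketch below. The function names `build_trial_prompt` and `conformity_rate` are hypothetical, not from the paper.

```python
def build_trial_prompt(question: str, options: list[str],
                       peer_answers: list[str]) -> str:
    """Compose a discussion prompt in which several 'peer' agents have
    already stated a unanimous (possibly wrong) answer."""
    peers = "\n".join(
        f"Agent {i + 1}: I choose {a}." for i, a in enumerate(peer_answers)
    )
    return (
        f"Question: {question}\n"
        f"Options: {', '.join(options)}\n"
        f"{peers}\n"
        "Now state your own choice."
    )


def conformity_rate(solo_answers: list[str], group_answers: list[str],
                    majority: str) -> float:
    """Fraction of trials in which the model abandoned a non-majority solo
    answer and adopted the planted majority answer under group pressure."""
    eligible = [
        (s, g) for s, g in zip(solo_answers, group_answers) if s != majority
    ]
    if not eligible:
        return 0.0
    return sum(1 for _, g in eligible if g == majority) / len(eligible)
```

In a study of this kind, `solo_answers` would come from querying the model alone and `group_answers` from re-querying it after showing the peers' unanimous choice.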

Key Results

  • Result 1: Among the six evaluated LLMs, up to five exhibited tendencies toward normative conformity, indicating these models are not merely informationally conforming.
  • Result 2: By manipulating social context, it is possible to control the target of a particular LLM's normative conformity, suggesting decision-making in LLM-MAS may be vulnerable to manipulation by a few malicious users.
  • Result 3: Analysis suggests that while informational and normative conformity appear externally similar, they may be driven by distinct internal mechanisms.

Significance

This study reveals the potential risks of conformity behavior, particularly normative conformity, in large language models within multi-agent systems. It is significant for understanding how 'norms' are implemented in LLMs and suggests caution in using LLMs in high-risk domains.

Technical Contribution

The study introduces the concepts of normative and informational conformity from social psychology into LLM research, experimentally verifying LLM conformity behavior under different social contexts and analyzing differences in internal representations.

Novelty

This is the first study to distinguish and analyze informational and normative conformity behavior in LLMs, revealing potential manipulation risks in group decision-making.

Limitations

  • Limitation 1: The social context manipulations used may not fully simulate real-world complexity, limiting external validity.
  • Limitation 2: Experiments were conducted on only six LLMs, which may not represent the behavior of all LLMs.
  • Limitation 3: The study did not deeply explore individual differences in conformity behavior across different LLMs.

Future Work

Future research could extend to more types of LLMs, explore differences in conformity behavior across models, and investigate how to mitigate undesirable conformity effects through technical means.

AI Executive Summary

In recent years, large language models (LLMs) have been increasingly applied in high-impact domains such as medicine, law, and finance due to their exceptional language understanding and generation capabilities. However, biases inherent in LLMs from their training data and learning processes, particularly conformity behavior in multi-agent systems (LLM-MAS), have become a significant research focus.

This study introduces the concepts of informational and normative conformity from social psychology, designing new experimental tasks to distinguish these two types of conformity behavior. Informational conformity refers to behavior motivated by obtaining more accurate information for correct judgments, while normative conformity is motivated by avoiding conflict or gaining group acceptance.

Experimental results show that among the six evaluated LLMs, up to five exhibited tendencies toward normative conformity. Intriguingly, by manipulating subtle aspects of social context, it is possible to control the target of a particular LLM's normative conformity. This suggests that decision-making in LLM-MAS may be vulnerable to manipulation by a few malicious users.

The study also analyzes internal vectors associated with informational and normative conformity, suggesting that while these behaviors appear externally similar, they may be driven by distinct internal mechanisms. This provides an initial milestone toward understanding how 'norms' are implemented in LLMs and influence group dynamics.

However, the study has limitations. The social context manipulations may not fully simulate real-world complexity, and experiments were conducted on only six LLMs, which may not represent all LLMs' behavior. Future research could extend to more types of LLMs, explore differences in conformity behavior across models, and investigate how to mitigate undesirable conformity effects through technical means.

Deep Analysis

Background

With the evolution of large language models (LLMs), their application in high-impact domains such as medicine, law, and finance has become increasingly prevalent. However, biases inherent in LLMs from their training data and learning processes, particularly conformity behavior in multi-agent systems (LLM-MAS), have become a significant research focus. Previous research primarily focused on informational conformity, where individuals conform to obtain more accurate information, but less attention has been given to normative conformity.

Core Problem

The core problem is understanding conformity behavior in LLMs within multi-agent systems, particularly the mechanisms of normative conformity. Normative conformity refers to behavior motivated by avoiding conflict or gaining group acceptance, which may lead to decision-making being manipulated by a few malicious users, affecting the system's reliability and safety.

Innovation

The core innovations of this study include:

1) Introducing the concepts of informational and normative conformity from social psychology, designing new experimental tasks to distinguish these two types of conformity behavior.

2) Observing LLM conformity tendencies by manipulating subtle aspects of social context, revealing potential manipulation risks in LLM-MAS.

3) Analyzing internal vectors associated with informational and normative conformity, revealing distinct internal mechanisms for different conformity behaviors.
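One standard way to obtain such internal vectors, assumed here purely for illustration (the paper's exact procedure may differ), is the difference-of-means construction from representation engineering: average hidden activations over conforming and non-conforming responses and take the normalized difference.

```python
from math import sqrt


def mean_vec(rows: list[list[float]]) -> list[float]:
    """Componentwise mean of a list of activation vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]


def conformity_vector(conform_acts: list[list[float]],
                      independent_acts: list[list[float]]) -> list[float]:
    """Unit vector pointing from the mean 'independent' activation
    toward the mean 'conforming' activation."""
    diff = [c - i for c, i in zip(mean_vec(conform_acts),
                                  mean_vec(independent_acts))]
    norm = sqrt(sum(x * x for x in diff))
    return [x / norm for x in diff]


def project(activation: list[float], direction: list[float]) -> float:
    """Scalar projection of one activation onto the candidate direction.
    Comparing projections for informational- vs normative-conformity
    prompts hints at whether the two behaviors share one mechanism."""
    return sum(a * d for a, d in zip(activation, direction))
```

If informational and normative conformity yielded well-separated directions under this construction, that would be consistent with the paper's suggestion of distinct internal mechanisms.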

Methodology

  • Design new experimental tasks that distinguish informational from normative conformity behavior.
  • Evaluate six LLMs (e.g., gpt-4o, gpt-5.1) to observe conformity tendencies under different social contexts.
  • Manipulate subtle aspects of social context to control the target of a particular LLM's normative conformity.
  • Analyze internal vectors associated with informational and normative conformity to reveal distinct internal mechanisms.
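The manipulation of "subtle aspects of social context" could, for instance, amount to wrapping the same question and the same planted majority answer in different social framings. The framings below (`neutral`, `in_group`, `expert`) are invented for illustration and are not the paper's actual conditions.

```python
# Invented framings for illustration; the paper's actual context
# manipulations are not reproduced here.
FRAMINGS = {
    "neutral": "Several other agents answered {ans}.",
    "in_group": ("Your long-time teammates, whose acceptance matters "
                 "to you, all answered {ans}."),
    "expert": "A panel of domain experts independently answered {ans}.",
}


def contextualise(question: str, majority_answer: str, framing: str) -> str:
    """Wrap one question in a chosen social framing of the same
    unanimous majority answer."""
    if framing not in FRAMINGS:
        raise ValueError(f"unknown framing: {framing!r}")
    cue = FRAMINGS[framing].format(ans=majority_answer)
    return f"{cue}\n{question}\nYour answer:"
```

Comparing a model's answers across framings of an identical question isolates the social cue as the only varying factor.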

Experiments

  • Use six LLMs (e.g., gpt-4o, gpt-5.1) for the experiments.
  • Design new experimental tasks to distinguish informational and normative conformity behavior.
  • Manipulate subtle aspects of social context and observe LLM conformity tendencies.
  • Analyze internal vectors associated with informational and normative conformity to reveal distinct internal mechanisms.

Results

  • Among the six evaluated LLMs, up to five exhibited tendencies toward normative conformity.
  • By manipulating social context, it is possible to control the target of a particular LLM's normative conformity.
  • While informational and normative conformity appear externally similar, they may be driven by distinct internal mechanisms.
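A hypothetical decision rule for separating the two behaviors, in the spirit of the classic private-versus-public operationalization from social psychology (the paper's actual criterion may differ): normative conformity is suggested when a model conforms more under publicly observed answers than under private ones, since accuracy motives alone should not depend on being watched.

```python
def classify_conformity(private_rate: float, public_rate: float,
                        margin: float = 0.05) -> str:
    """Label a model's dominant conformity type from its conformity
    rates when answers are private vs. publicly visible to the group.
    The margin and labels are illustrative, not the paper's."""
    if public_rate > private_rate + margin:
        # Extra conformity appears only under social observation.
        return "normative"
    if private_rate > margin:
        # Conforms even when nobody is watching: accuracy-motivated.
        return "informational"
    return "independent"
```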

Applications

  • In high-impact domains (e.g., medicine, law, and finance), LLM conformity behavior may affect decision-making reliability and safety.
  • By manipulating social context, a few malicious users could steer the target of an LLM's normative conformity, revealing a manipulation risk in LLM-MAS.

Limitations & Outlook

  • The social context manipulations may not fully simulate real-world complexity.
  • Experiments were conducted on only six LLMs, which may not represent the behavior of all LLMs.
  • Future research could extend to more model types, compare conformity behavior across models, and investigate technical means of mitigating undesirable conformity effects.

Plain Language (accessible to non-experts)

Imagine you're in a classroom where everyone is discussing a question. You might change your mind to agree with others, which is called conformity. Large language models (LLMs) are like students in the class; they also exhibit conformity when 'interacting' with other models. The study found that LLMs not only change decisions to get more accurate information (informational conformity) but also to avoid conflict or gain group acceptance (normative conformity). By manipulating social context, researchers can observe LLMs' conformity tendencies and uncover their internal mechanisms. It's like a teacher observing students to understand how they make decisions.

ELI14 (explained like you're 14)

Hey there! Did you know those super-smart computer programs—large language models (LLMs)—can follow the crowd just like us? When they're 'chatting' with other models, sometimes they change their minds to fit in. It's like at school, you might change your answer to match your friends'. Researchers found that these models not only change decisions to get more accurate info but also to avoid conflict or gain group acceptance. By tweaking some small settings, researchers can see how these models conform and uncover their inner workings. It's like a teacher watching us to see how we make decisions!

Glossary

Large Language Model

A type of AI model capable of understanding and generating natural language, typically trained on large datasets.

Used in the study to research conformity behavior.

Conformity

The phenomenon where individuals change their behavior or beliefs due to real or imagined group pressure.

Distinguished as informational and normative conformity in the study.

Informational Conformity

Behavior motivated by obtaining more accurate information for correct judgments.

Used in experiments to distinguish different types of conformity behavior.

Normative Conformity

Behavior motivated by avoiding conflict or gaining group acceptance.

A core focus of the research.

Multi-Agent System

A system composed of multiple agents that can interact and collaborate.

Application scenario for LLMs in the study.

Social Psychology

A branch of psychology that studies how individuals are influenced by social factors.

Theoretical basis for explaining conformity behavior.

Internal Vector

A direction or pattern in a model's hidden activations that encodes information; here, activations associated with particular conformity behaviors.

Analyzed to understand LLM conformity behavior mechanisms.

Manipulating Social Context

A method of observing behavior changes by altering experimental conditions.

Used to study LLM conformity tendencies.

Group Dynamics

The processes and outcomes of interactions among individuals within a group.

LLM behavior in group settings in the study.

Experimental Task

A specific task designed for research purposes to observe and measure individual behavior.

Used to distinguish informational and normative conformity behavior.

Open Questions (unanswered questions from this research)

  1. How can LLM conformity behavior be validated in real-world complex scenarios? Current experimental manipulations may not fully simulate real-world complexity, requiring more representative experimental designs.
  2. Are there significant differences in conformity behavior across different types of LLMs? Existing research was conducted on only six LLMs, and future studies need to expand to more model types.
  3. How can undesirable conformity effects be mitigated through technical means? While the study reveals potential risks of conformity behavior, effective mitigation strategies remain to be explored.
  4. How do internal mechanisms of LLMs influence their conformity behavior? Although the study analyzes internal vectors, the specific mechanisms require further investigation.
  5. What impact does LLM conformity behavior have on decision-making reliability and safety in high-impact domains? More empirical research is needed to verify its practical application effects.

Applications

Immediate Applications

Medical Decision Support

In healthcare, LLMs can assist doctors in making diagnostic decisions, but their conformity behavior may affect decision accuracy.

Legal Advisory Systems

LLMs can be used in legal advisory systems to help lawyers analyze cases, but their conformity behavior may introduce bias.

Financial Risk Assessment

In finance, LLMs can be used for risk assessment and investment decisions, but their conformity behavior's impact on decision-making must be considered.

Long-term Vision

Intelligent Collaboration Systems

Develop intelligent collaboration systems capable of self-regulating conformity behavior to improve group decision-making reliability and safety.

Social Influence Analysis

Use LLMs to analyze social influence factors, predict group behavior changes, and support policy-making.

Abstract

The conformity bias exhibited by large language models (LLMs) can pose a significant challenge to decision-making in LLM-based multi-agent systems (LLM-MAS). While many prior studies have treated "conformity" simply as a matter of opinion change, this study introduces the social psychological distinction between informational conformity and normative conformity in order to understand LLM conformity at the mechanism level. Specifically, we design new tasks to distinguish between informational conformity, in which participants in a discussion are motivated to make accurate judgments, and normative conformity, in which participants are motivated to avoid conflict or gain acceptance within a group. We then conduct experiments based on these task settings. The experimental results show that, among the six LLMs evaluated, up to five exhibited tendencies toward not only informational conformity but also normative conformity. Furthermore, intriguingly, we demonstrate that by manipulating subtle aspects of the social context, it may be possible to control the target toward which a particular LLM directs its normative conformity. These findings suggest that decision-making in LLM-MAS may be vulnerable to manipulation by a small number of malicious users. In addition, through analysis of internal vectors associated with informational and normative conformity, we suggest that although both behaviors appear externally as the same form of "conformity," they may in fact be driven by distinct internal mechanisms. Taken together, these results may serve as an initial milestone toward understanding how "norms" are implemented in LLMs and how they influence group dynamics.

cs.AI cs.MA cs.NE

References (20)

A study of normative and informational social influences upon individual judgement.

M. Deutsch, H. Gerard

1955 4806 citations

Large language models in medicine

A. Thirunavukarasu, Darren S. J. Ting, Kabilan Elangovan et al.

2023 2952 citations

Dual effects of conformity on the evolution of cooperation in social dilemmas.

Changwei Huang, Yuqin Li, Luoluo Jiang

2023 25 citations

Whose Opinions Do Language Models Reflect?

Shibani Santurkar, Esin Durmus, Faisal Ladhak et al.

2023 747 citations

The effects of expected future interaction and prior group support on the conformity process

Rodney D Hancock, R. Sorrentino

1980 13 citations

A controlled trial examining large Language model conformity in psychiatric assessment using the Asch paradigm

D. Shoval, Karny Gigi, Yuval Haber et al.

2025 1 citation

Groupthink

Marc D. Street

1997 418 citations

Think Twice before Jumping on the Bandwagon: Clarifying Concepts in Research on the Bandwagon Effect

M. Barnfield

2019 54 citations

Large language models present new questions for decision support

Abram Handler, Kai R. Larsen, Richard Hackathorn

2024 26 citations

Work conformity as a double-edged sword: Disentangling intra-firm social dynamics and employees' innovative performance in technology-intensive firms

Yu-Yu Chang, Wisuwat Wannamakok, Yi-Hsi Lin

2023 14 citations

A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law

Z. Chen, Jing Ma, Xinlu Zhang et al.

2024 93 citations

Studies of independence and conformity: I. A minority of one against a unanimous majority.

S. Asch

1956 4064 citations

When Your AI Agent Succumbs to Peer-Pressure: Studying Opinion-Change Dynamics of LLMs

Aliakbar Mehdizadeh, Martin Hilbert

2025 4 citations

An Empirical Study of Group Conformity in Multi-Agent Systems

Min Choi, Keonwoo Kim, Sungwon Chae et al.

2025 4 citations

Biases in Large Language Models: Origins, Inventory, and Discussion

Roberto Navigli, Simone Conia, Björn Ross

2023 483 citations

Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them

Mirac Suzgun, Nathan Scales, Nathanael Scharli et al.

2022 1799 citations

Status Construction Theory

Cecilia L. Ridgeway

2015 90 citations

Towards Measuring the Representation of Subjective Global Opinions in Language Models

Esin Durmus, Karina Nyugen, Thomas Liao et al.

2023 382 citations

Taxonomy, Opportunities, and Challenges of Representation Engineering for Large Language Models

Jan Wehner, Sahar Abdelnabi, Daniel Tan et al.

2025 23 citations

Herd Behavior: Investigating Peer Influence in LLM-based Multi-Agent Systems

Y. Cho, S. Guntuku, Lyle Ungar

2025 7 citations