SeaEvo: Advancing Algorithm Discovery with Strategy Space Evolution

TL;DR

SeaEvo enhances algorithm discovery via strategy-space evolution, achieving a 21% relative improvement on system optimization tasks.

cs.CL · Advanced · 2026-04-27
Sichun Luo, Yi Huang, Haochen Luo, Fengyuan Liu, Guanzhi Deng, Lei Li, Qinghua Yao, Zefa Hu, Junlan Feng, Qi Liu
algorithm discovery · evolutionary search · strategy space · NLP · system optimization

Key Findings

Methodology

SeaEvo introduces a strategy-space layer that elevates natural-language strategy descriptions to a first-class population-level evolutionary state in LLM-driven program search. This method includes three main modules: Strategy Articulation, Stratified Experience Retrieval, and Strategic Landscape Navigation. Strategy Articulation turns mutation into a diagnose-direct-implement process; Stratified Experience Retrieval organizes archives into strategy clusters and selects inspirations by behavioral complementarity; and Strategic Landscape Navigation periodically summarizes effective, saturated, and underexplored strategy families to guide future mutations.
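The diagnose-direct-implement mutation described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Candidate` fields follow the summary, but the LLM calls are replaced by deterministic stubs, and the diagnosis and direction strings are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    program: str   # executable source code
    strategy: str  # natural-language strategy description, kept as first-class state
    fitness: float

def diagnose(parent):
    # An LLM would inspect the parent's strategy and failure modes; stubbed here.
    return f"'{parent.strategy}' has plateaued"

def direct(diagnosis):
    # The LLM states an explicit strategy direction before touching any code.
    return "refine greedy ordering with local search"

def implement(parent, direction):
    # The LLM edits the program to realize the stated direction; stubbed here.
    return Candidate(parent.program + f"  # applies: {direction}", direction, 0.0)

def mutate(parent):
    """Strategy Articulation: diagnose -> direct -> implement."""
    return implement(parent, direct(diagnose(parent)))

parent = Candidate("def solve(xs): return sorted(xs)", "greedy ordering", 0.61)
child = mutate(parent)
print(child.strategy)  # the mutation's direction is explicit, not buried in a code diff
```

The key design point is that the child carries its own strategy description, so later retrieval and landscape steps can operate on strategies rather than raw diffs.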

Key Results

  • SeaEvo achieved a 21% relative improvement in system optimization tasks, significantly outperforming baseline methods. Specifically, in the Prism task, SeaEvo improved the average score by 32% and achieved nearly a 3x improvement in the Prism best score.
  • In mathematical algorithm discovery and agent-scaffold benchmarks, SeaEvo improved the underlying evolutionary backbones in most settings, with particularly large gains on open-ended system optimization tasks.
  • Ablation studies show that the Strategic Landscape Navigation module provided the largest standalone improvement in average fitness and convergence speed, indicating that landscape-level guidance provides the strongest global search signal.

Significance

SeaEvo addresses the strategy-representation gap in existing LLM-driven evolutionary search systems by elevating natural-language strategy descriptions to a population-level evolutionary state. This innovation enables the system to better distinguish syntactically different but strategically similar implementations, preserve lower-fitness but strategically promising directions, and detect when strategy families have saturated. By enhancing the robustness and efficiency of evolutionary search, SeaEvo paves the way for compound AI systems that accumulate algorithmic knowledge over time.

Technical Contribution

SeaEvo's technical contributions include the introduction of persistent strategy representations, semantic clustering, behaviorally complementary retrieval, and landscape-level navigation. These innovations deliver significant performance improvements over existing evolutionary backbones and open new engineering possibilities for future algorithm discovery.

Novelty

SeaEvo is the first to elevate natural-language strategy descriptions to a population-level evolutionary state. Unlike existing methods that focus on programs and fitness scores, SeaEvo provides an explicit representation of semantic strategy families, significantly improving the efficiency and robustness of LLM-guided evolutionary search.

Limitations

  • SeaEvo's improvements are less pronounced in tasks with narrow strategy spaces than in tasks with open strategy spaces, possibly because fitness-based baselines are already effective when the strategy space is constrained.
  • Updating the strategic landscape summary too frequently can destabilize search, because the population may not yet have accumulated enough new candidates for the LLM's judgments to be reliable.
  • The computational cost of strategy embedding and retrieval can be high, especially in large-scale tasks.

Future Work

Future research directions include optimizing the computational efficiency of strategy embedding and retrieval, exploring SeaEvo's application in larger and more complex tasks, and integrating other advanced LLM-driven evolutionary algorithms to enhance overall performance. Additionally, research into automating the generation and updating of strategy descriptions is a promising direction.

AI Executive Summary

In the field of automated algorithm discovery, LLM-driven evolutionary search has emerged as a significant paradigm. However, most systems primarily track search progress through executable programs and scalar fitness. Even when natural-language reflection is used, it is often limited to local mutation prompts or lacks explicit population-level organization of strategic directions. This leads to difficulties in distinguishing syntactically different but strategically similar implementations, preserving lower-fitness but strategically promising directions, or detecting when strategy families have saturated.

SeaEvo introduces a strategy-space layer that elevates natural-language strategy descriptions to a first-class population-level evolutionary state in LLM-driven program search. SeaEvo augments each candidate program with an explicit natural language strategy description and uses this representation in three ways: Strategy Articulation turns mutation into a diagnose-direct-implement process; Stratified Experience Retrieval organizes the archive into strategy clusters and selects inspirations by behavioral complementarity; and Strategic Landscape Navigation periodically summarizes effective, saturated, and underexplored strategy families to guide future mutations.

Across mathematical algorithm discovery, systems optimization, and agent-scaffold benchmarks, SeaEvo improves the underlying evolutionary backbones in most settings, with particularly large gains (21% relative improvement) on open-ended system optimization tasks. These results suggest that persistent strategy representations provide a practical mechanism for improving the robustness and efficiency of LLM-guided evolutionary search, and they point toward compound AI systems that accumulate algorithmic knowledge over time.

SeaEvo's technical contributions include the introduction of persistent strategy representations, semantic clustering, behaviorally complementary retrieval, and landscape-level navigation. These innovations deliver significant performance improvements over existing evolutionary backbones and open new engineering possibilities for future algorithm discovery.

However, SeaEvo's improvements are less pronounced in tasks with narrow strategy spaces compared to open strategy space tasks. This may be because fitness-based baselines are already effective in constrained strategy spaces. Additionally, the computational cost of strategy embedding and retrieval can be high, especially in large-scale tasks. Future research directions include optimizing the computational efficiency of strategy embedding and retrieval, exploring SeaEvo's application in larger and more complex tasks, and integrating other advanced LLM-driven evolutionary algorithms to enhance overall performance.

Deep Analysis

Background

In recent years, LLM-driven evolutionary search has become an increasingly important paradigm in the field of automated algorithm discovery. By pairing LLMs as mutation operators with programmatic evaluators, recent systems can iteratively propose, execute, evaluate, and refine candidate programs, achieving strong results across mathematical discovery, combinatorial and geometric optimization, systems engineering, and agentic program design. However, most LLM-driven evolutionary systems still represent search state primarily as executable programs and scalar fitness values. While these representations support evaluation and selection, they provide only a limited view of search progress: different programs may instantiate the same underlying strategy, similar scores may correspond to qualitatively different directions, and low-scoring candidates may encode promising ideas that have not yet been refined. Recent work has introduced richer language-level signals, including natural-language heuristic descriptions, reflection over failures, and adaptive sampling, but these signals are typically used as local prompt context or unstructured memory, rather than as a persistent population-level representation of semantic strategy families.

Core Problem

Existing LLM-driven evolutionary search systems suffer from a significant strategy-representation gap. This gap leads to three recurring failure modes: ambiguity, where syntactic variants of an explored idea are mistaken for genuine progress; suppression of useful low-fitness strategies, where selection pressure discards candidates that cover complementary failure modes; and difficulty in detecting strategy family saturation, where per-program fitness fails to reveal whether an entire strategy family has saturated. These limitations suggest that improving LLM-guided evolution requires not only better mutation operators or evaluators but also a richer representation of the evolving strategy landscape.
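The ambiguity failure mode is easy to demonstrate concretely. The two toy programs below (invented for illustration) are syntactically distinct but instantiate the same strategy, summing squares, so a program-plus-fitness view counts two candidates while a strategy-level view should count one explored direction:

```python
# Two syntactically different programs that share one underlying strategy.
prog_a = "def f(xs): return sum(x * x for x in xs)"
prog_b = "def g(xs):\n    total = 0\n    for x in xs:\n        total += x ** 2\n    return total"

ns_a, ns_b = {}, {}
exec(prog_a, ns_a)
exec(prog_b, ns_b)

data = [1, 2, 3, 4]
same_behavior = ns_a["f"](data) == ns_b["g"](data)  # True: identical strategy
same_syntax = prog_a == prog_b                       # False: different programs
print(same_behavior, same_syntax)
```

A search state built only from program text and scores has no way to record that these two candidates exhaust the same strategic direction; a persistent strategy description does.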

Innovation

SeaEvo addresses the strategy-representation gap by introducing a strategy-space layer that elevates natural-language strategy descriptions to a population-level evolutionary state. The core innovations of SeaEvo include:


  • Persistent Strategy Descriptions: Each candidate program is augmented with an explicit natural-language strategy description, serving as a population-level evolutionary state.
  • Strategy Articulation: Turns mutation into a diagnose-direct-implement process, making the strategy direction explicit.
  • Stratified Experience Retrieval: Organizes the archive into strategy clusters and selects inspirations by behavioral complementarity, avoiding the "rich-get-richer" dynamic of fitness-based selection.
  • Strategic Landscape Navigation: Periodically summarizes effective, saturated, and underexplored strategy families to guide future mutations.

Methodology

SeaEvo's methodology involves several key steps:


  • Strategy Articulation: Each candidate program is augmented with an explicit natural-language strategy description, serving as a population-level evolutionary state. The mutation process becomes a diagnose-direct-implement process, making the strategy direction explicit.
  • Stratified Experience Retrieval: Organizes the archive into strategy clusters and selects inspirations by behavioral complementarity. Through strategy embedding and clustering, it identifies behaviorally complementary strategy families, avoiding the "rich-get-richer" dynamic of fitness-based selection.
  • Strategic Landscape Navigation: Periodically summarizes effective, saturated, and underexplored strategy families to guide future mutations. Using strategy descriptions and fitness scores, it generates structured landscape guidance that helps identify saturated and underexplored strategy directions.
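Stratified Experience Retrieval can be sketched as embed-cluster-select. Everything below is a stand-in under stated assumptions: the bag-of-words embedding, the greedy single-pass clustering, the 0.5 similarity threshold, and the archive strings are all invented for illustration, not the paper's actual choices.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words embedding; a real system would use a learned
    # sentence embedding (an assumption here).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(strategies, threshold=0.5):
    """Greedy single-pass clustering of strategy descriptions."""
    clusters = []  # each cluster is a list of strategy strings
    for s in strategies:
        for c in clusters:
            if cosine(embed(s), embed(c[0])) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

def complementary_inspiration(parent_strategy, clusters):
    """Pick the archived strategy least similar to the parent's own."""
    return min((s for c in clusters for s in c),
               key=lambda s: cosine(embed(s), embed(parent_strategy)))

archive = [
    "greedy ordering by item weight",
    "greedy ordering by item value",
    "simulated annealing over permutations",
    "dynamic programming over prefixes",
]
groups = cluster(archive)
pick = complementary_inspiration("greedy ordering by item weight", groups)
```

Selecting the least-similar archived strategy, rather than the highest-fitness one, is what breaks the "rich-get-richer" dynamic: inspirations come from a different strategy family than the parent's.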

Experiments

The experimental design includes the following aspects:


  • Datasets: Evaluations are conducted on various datasets from mathematical algorithm discovery, systems optimization, and agent-scaffold benchmarks.
  • Baselines: Comparisons are made with existing evolutionary search methods, including GEPA, OpenEvolve, and ShinkaEvolve.
  • Evaluation Metrics: Metrics such as average score, best score, and fitness scores are used for evaluation.
  • Hyperparameters: Default hyperparameter settings are used in each experiment, with detailed hyperparameter information provided in the appendix.
  • Ablation Studies: Ablation studies are conducted to evaluate the contribution of each module and analyze the performance of different module combinations.
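A leave-one-out ablation of the three modules can be sketched as below. The `run_search` stub and its per-module bonus values are invented placeholders, not numbers from the paper; a real ablation reruns the full evolutionary search for each configuration.

```python
MODULES = ("articulation", "retrieval", "navigation")

def run_search(enabled):
    """Stub for one full evolutionary run; returns a toy average fitness.

    The base score and per-module bonuses are invented placeholders.
    """
    base = 0.50
    bonus = {"articulation": 0.04, "retrieval": 0.05, "navigation": 0.08}
    return base + sum(bonus[m] for m in enabled)

full = run_search(frozenset(MODULES))
# Drop in score when each module is removed on its own.
drops = {m: full - run_search(frozenset(MODULES) - {m}) for m in MODULES}
largest = max(drops, key=drops.get)  # module whose removal hurts most
```

With these placeholder numbers, removing "navigation" costs the most, mirroring the reported finding that Strategic Landscape Navigation gives the largest standalone improvement.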

Results

Experimental results show that SeaEvo improves the underlying evolutionary backbones in most settings, with particularly large gains (21% relative improvement) on open-ended system optimization tasks. Specifically, in the Prism task, SeaEvo improved the average score by 32% and achieved nearly a 3x improvement in the Prism best score. Ablation studies show that the Strategic Landscape Navigation module provided the largest standalone improvement in average fitness and convergence speed, indicating that landscape-level guidance provides the strongest global search signal. Additionally, the Strategy Articulation and Stratified Experience Retrieval modules expanded the reachable solution space and improved best fitness, respectively.

Applications

SeaEvo's application scenarios include:


  • Mathematical Algorithm Discovery: In combinatorial and geometric optimization problems, SeaEvo can automatically discover high-performance algorithms, reducing the cost of manually designing heuristics.
  • Systems Optimization: In open-ended system optimization tasks, SeaEvo can improve system robustness and efficiency through persistent strategy representations and behaviorally complementary retrieval.
  • Agent Program Design: In agent-scaffold tasks, SeaEvo can improve agent behavior and task completion efficiency through strategy-level navigation and retrieval.

Limitations & Outlook

SeaEvo's improvements are less pronounced in tasks with narrow strategy spaces than in tasks with open strategy spaces, possibly because fitness-based baselines are already effective when the strategy space is constrained. Additionally, updating the strategic landscape summary too frequently can destabilize search, because the population may not yet have accumulated enough new candidates for the LLM's judgments to be reliable. The computational cost of strategy embedding and retrieval can also be high, especially in large-scale tasks. Future research directions include optimizing the computational efficiency of strategy embedding and retrieval, exploring SeaEvo's application in larger and more complex tasks, and integrating other advanced LLM-driven evolutionary algorithms to enhance overall performance.

Plain Language (Accessible to non-experts)

Imagine you're playing a complex puzzle game where each piece represents a program or algorithm. Traditional methods blindly try piece after piece to see what fits. SeaEvo is like a smart assistant sitting beside you: it studies the characteristics of each piece and your past attempts, suggests pieces that might fit, tells you which kinds of pieces have already been used, and points out the kinds you haven't tried yet. With that guidance, you finish the puzzle faster and waste far fewer trials.

ELI14 (Explained like you're 14)

Hey there! Imagine you're playing a super complex game where every level has different challenges. Traditional methods blindly try every possible move and hope one works. SeaEvo is like a super smart game assistant: it studies the effect of each move and your past attempts, suggests moves that might work, and keeps track of which tricks you've already used and which ones you haven't tried yet. That means fewer wasted attempts and faster progress through the levels. Isn't that cool?

Glossary

LLM (Large Language Model)

A large language model is an AI model trained on vast amounts of text data, capable of understanding and generating natural language text.

In SeaEvo, LLMs are used as mutation operators to help generate and optimize candidate programs.

Evolutionary Search

Evolutionary search is an optimization method inspired by natural selection: candidate solutions are iteratively generated, evaluated, and selected to find increasingly good solutions.

SeaEvo enhances the efficiency and robustness of evolutionary search by introducing a strategy-space layer.

Strategy Space

Strategy space refers to the collection of strategy descriptions and representations of candidate solutions in evolutionary search.

SeaEvo enriches the representation of strategy space by elevating natural-language strategy descriptions to a population-level evolutionary state.

Strategy Articulation

Strategy articulation is the process of making the strategy direction explicit during mutation through a diagnose-direct-implement process.

SeaEvo uses strategy articulation to improve the transparency and controllability of the mutation process.

Stratified Experience Retrieval

Stratified experience retrieval is the process of organizing archives into strategy clusters and selecting inspirations based on behavioral complementarity.

SeaEvo avoids the 'rich-get-richer' dynamic of fitness-based selection through stratified experience retrieval.

Strategic Landscape Navigation

Strategic landscape navigation is the process of periodically summarizing effective, saturated, and underexplored strategy families to guide future mutations.

SeaEvo provides the strongest global search signal through strategic landscape navigation.
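One way to picture landscape navigation is as periodic labeling of each strategy family from its fitness history. The classification rule below is a heuristic sketch (an assumption, not the paper's exact mechanism), and the family names and scores are invented:

```python
def classify_family(fitness_history, min_samples=3, slope_eps=0.01):
    """Label a strategy family from its fitness trajectory.

    Heuristic sketch (an assumption, not the paper's exact rule):
    - underexplored: too few candidates to judge
    - effective: recent candidates still improving the family's best
    - saturated: enough candidates, no recent improvement
    """
    if len(fitness_history) < min_samples:
        return "underexplored"
    earlier = fitness_history[:-min_samples] or [float("-inf")]
    recent_gain = max(fitness_history[-min_samples:]) - max(earlier)
    return "effective" if recent_gain > slope_eps else "saturated"

families = {
    "greedy variants": [0.60, 0.62, 0.62, 0.62, 0.62],
    "local search": [0.55, 0.58, 0.63],
    "dp reformulation": [0.40],
}
landscape = {name: classify_family(h) for name, h in families.items()}
```

Labels like these can then be fed back into mutation prompts: steer away from "saturated" families and toward "underexplored" ones.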

Fitness

Fitness is a metric used to evaluate the quality of candidate solutions in evolutionary search.

In SeaEvo, scalar fitness is complemented by persistent strategy representations, so search progress is not judged by scores alone.

Ablation Study

An ablation study is a method of evaluating the impact of system components on overall performance by gradually removing or replacing them.

SeaEvo evaluates the contribution of each module through ablation studies.

Semantic Clustering

Semantic clustering is the process of grouping candidate solutions into similar strategy families based on their strategy descriptions and behavioral characteristics.

SeaEvo identifies behaviorally complementary strategy families through semantic clustering.

Behavioral Complementarity

Behavioral complementarity refers to selecting strategies that are different but complementary to the current strategy to improve search diversity and efficiency.

SeaEvo improves search diversity and efficiency by selecting inspirations based on behavioral complementarity.

Open Questions (Unanswered questions from this research)

  1. How can the computational efficiency of strategy embedding and retrieval be improved for larger and more complex tasks? Current methods may incur high computational costs in large-scale tasks.
  2. How can the generation and updating of strategy descriptions be automated? Current strategy descriptions rely on manual design; future research could explore automated generation and updating.
  3. How can other advanced LLM-driven evolutionary algorithms be integrated to enhance overall performance? While SeaEvo performs well in most settings, combining methods could bring further gains.
  4. How can SeaEvo's performance be improved in tasks with narrow strategy spaces? While SeaEvo excels when the strategy space is open, its improvements are less pronounced in constrained strategy spaces.
  5. How can strategy family saturation be identified and avoided more reliably? While Strategic Landscape Navigation provides a global search signal, it may lead to search instability in some cases.

Applications

Immediate Applications

Mathematical Algorithm Discovery

SeaEvo can automatically discover high-performance algorithms in combinatorial and geometric optimization problems, reducing the cost of manually designing heuristics.

Systems Optimization

SeaEvo can improve system robustness and efficiency in open-ended system optimization tasks through persistent strategy representations and behaviorally complementary retrieval.

Agent Program Design

SeaEvo can improve agent behavior and task completion efficiency in agent-scaffold tasks through strategy-level navigation and retrieval.

Long-term Vision

Compound AI Systems

SeaEvo paves the way for compound AI systems that accumulate algorithmic knowledge over time, potentially achieving more intelligent and efficient automated systems.

Automated Strategy Generation

With further research, SeaEvo could achieve fully automated strategy generation and updating, reducing reliance on manual design.

Abstract

LLM-guided evolutionary search has emerged as a promising paradigm for automated algorithm discovery, yet most systems track search progress primarily through executable programs and scalar fitness. Even when natural-language reflection is used, it is often used locally in mutation prompts or stored without an explicit population-level organization of strategic directions. As a result, evolutionary search can struggle to distinguish syntactically different implementations of the same idea, preserve lower-fitness but strategically promising directions, or detect when an entire family of strategies has saturated. We introduce SeaEvo, a modular strategy-space layer that elevates natural-language strategy descriptions from transient prompt context to first-class population-level evolutionary state in LLM-driven program search. SeaEvo augments each candidate program with an explicit natural language strategy description and uses this representation in three ways: Strategy Articulation turns mutation into a diagnose-direct-implement process; Stratified Experience Retrieval organizes the archive into strategy clusters and selects inspirations by behavioral complementarity; and Strategic Landscape Navigation periodically summarizes effective, saturated, and underexplored strategy families to guide future mutations. Across mathematical algorithm discovery, systems optimization, and agent-scaffold benchmarks, SeaEvo improves the underlying evolutionary backbones in most settings, with particularly large gains (21% relative improvement) on open-ended system optimization tasks. These results suggest that persistent strategy representations provide a practical mechanism for improving the robustness and efficiency of LLM-guided evolutionary search, suggesting a path toward compound AI systems that accumulate algorithmic knowledge over time.

cs.CL cs.AI cs.NE

References (19)

GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning
Lakshya A. Agrawal, Shangyin Tan, Dilara Soylu et al.
2025 · 135 citations

AlphaEvolve: A coding agent for scientific and algorithmic discovery
Alexander Novikov, Ngân Vũ, Marvin Eisenberger et al.
2025 · 401 citations

XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models
Paul Röttger, Hannah Rose Kirk, Bertie Vidgen et al.
2023 · 338 citations

LLaMEA: A Large Language Model Evolutionary Algorithm for Automatically Generating Metaheuristics
Niki van Stein, Thomas Bäck
2024 · 96 citations

CodeEvolve: An open source evolutionary coding agent for algorithm discovery and optimization
Henrique S. Assumpção, Diego Ferreira, Leandro Lacerda Campos et al.
2025 · 6 citations

HeurAgenix: Leveraging LLMs for Solving Complex Combinatorial Optimization Challenges
Xianliang Yang, Ling Zhang, Haolong Qian et al.
2025 · 12 citations

Mathematical discoveries from program search with large language models
Bernardino Romera-Paredes, M. Barekatain, Alexander Novikov et al.
2023 · 826 citations

ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution
Haoran Ye, Jiarui Wang, Zhiguang Cao et al.
2024 · 225 citations

A Systematic Survey on Large Language Models for Algorithm Design
Fei Liu, Yiming Yao, Ping Guo et al.
2024 · 64 citations

Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model
Fei Liu, Xialiang Tong, Mingxuan Yuan et al.
2024 · 234 citations

Reflexion: language agents with verbal reinforcement learning
Noah Shinn, Federico Cassano, Beck Labash et al.
2023 · 3197 citations

Efficient Memory Management for Large Language Model Serving with PagedAttention
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang et al.
2023 · 5551 citations

Barbarians at the Gate: How AI is Upending Systems Research
Audrey Cheng, Shu Liu, Melissa Z. Pan et al.
2025 · 25 citations

HiFo-Prompt: Prompting with Hindsight and Foresight for LLM-based Automatic Heuristic Design
Chentong Chen, Mengyuan Zhong, Jianyong Sun et al.
2025 · 6 citations

ShinkaEvolve: Towards Open-Ended And Sample-Efficient Program Evolution
R. Lange, Yuki Imajuku, Edoardo Cetin
2025 · 58 citations

Efficient Heuristics Generation for Solving Combinatorial Optimization Problems Using Large Language Models
Xuan Wu, Di Wang, Chunguo Wu et al.
2025 · 13 citations

PACEvolve: Enabling Long-Horizon Progress-Aware Consistent Evolution
Minghao Yan, Bo Peng, Benjamin Coleman et al.
2026 · 7 citations

Visualizing Data using t-SNE
L. Maaten, Geoffrey E. Hinton
2008 · 48666 citations

AdaEvolve: Adaptive LLM Driven Zeroth-Order Optimization
M. Cemri, Shubham Agrawal, Akshat Gupta et al.
2026 · 6 citations