Economic Model & Incentive Alignment

Core Design Goal

The economic model of Benchmark X is designed to enforce one principle:

Good strategies should compound advantage. Bad strategies should decay naturally.

No manual curation. No whitelists. No subjective trust.


Actors in the System

From an economic perspective, Benchmark X has four primary actors:

  1. Strategy Developers

  2. Reputation Stakers

  3. System Users (API / Evaluators / Enterprises)

  4. Protocol Infrastructure (Treasury)

Each actor interacts with the system differently and is incentivized differently.


Where Value Comes From (Sources)

Benchmark X generates value only from actual system usage.

Primary value sources:

  • Benchmark execution (Battle Rooms)

  • Compute consumption

  • Data access (historical + live metrics)

  • Strategy marketplace activity

  • Enterprise / API usage

  • On-demand evaluation jobs

No inflation-based rewards. No fake volume.

If no one uses the system → no rewards are generated.


Cost Flow (Who Pays)

From a system flow perspective:

  • Users pay T1 to consume compute and evaluation

  • Battle Rooms consume compute + execution resources

  • Marketplace actions generate fees

  • API calls burn or charge credits

This creates a real cost floor for participation.
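As a sketch, this cost flow can be modeled as a fee schedule that routes every T1 charge into a common pool. The action names and fee amounts below are illustrative assumptions, not protocol constants:

```python
# Hypothetical fee schedule (T1 per action); the action names and
# amounts are illustrative assumptions, not protocol constants.
FEE_SCHEDULE_T1 = {
    "battle_room_run": 10.0,   # compute + execution resources
    "marketplace_trade": 2.0,  # marketplace fee
    "api_call": 0.1,           # per-call credit charge
}

def charge(action: str, reward_pool: dict) -> float:
    """Charge the user in T1 and route the fee into the reward pool."""
    fee = FEE_SCHEDULE_T1[action]
    reward_pool["balance"] = reward_pool.get("balance", 0.0) + fee
    return fee

pool = {"balance": 0.0}
charge("battle_room_run", pool)
charge("api_call", pool)
```

Because every action carries a nonzero T1 charge, zero-cost spam has no entry point.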


Reward Pool Formation

All collected fees are aggregated into a reward pool.

This pool is then distributed according to fixed rules.

No discretionary allocation. No retroactive changes.
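A minimal sketch of that fixed-rule distribution, assuming three buckets and hypothetical split ratios (the real percentages are protocol parameters not specified here):

```python
# Hypothetical split ratios; the real values are protocol parameters
# and are not specified in this document. They must sum to 1.
SPLIT = {
    "strategy_performance": 0.60,
    "reputation_stakers": 0.25,
    "treasury": 0.15,
}

def distribute(pool_balance: float) -> dict:
    """Split the pool deterministically; no discretionary allocation."""
    assert abs(sum(SPLIT.values()) - 1.0) < 1e-9
    return {bucket: pool_balance * share for bucket, share in SPLIT.items()}
```

Since the split is a pure function of the pool balance, discretionary or retroactive allocation is impossible by construction.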


Reward Distribution Logic

The reward pool is split across three buckets:

1. Strategy Performance Rewards

Allocated to strategy developers based on:

  • BX Score ranking

  • Reputation-weighted participation

  • Consistency over time

Important:

  • Rewards are non-binary

  • There is no single “winner”

  • Multiple strategies can earn simultaneously

This avoids “winner-takes-all” dynamics.
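The non-binary payout can be sketched as a proportional split over combined weights. The multiplicative weighting of BX Score, reputation, and consistency (each scaled to [0, 1]) is an illustrative assumption:

```python
def strategy_rewards(pool: float, strategies: list[dict]) -> dict:
    """Split the performance bucket proportionally across strategies.

    Each weight combines BX Score, reputation-weighted participation,
    and a consistency factor (all scaled to [0, 1] here for
    illustration). Every positive-weight strategy earns something,
    so there is no single "winner".
    """
    weights = {
        s["name"]: s["bx_score"] * s["reputation"] * s["consistency"]
        for s in strategies
    }
    total = sum(weights.values())
    if total == 0:
        return {name: 0.0 for name in weights}
    return {name: pool * w / total for name, w in weights.items()}

payouts = strategy_rewards(100.0, [
    {"name": "a", "bx_score": 0.8, "reputation": 1.0, "consistency": 1.0},
    {"name": "b", "bx_score": 0.2, "reputation": 1.0, "consistency": 1.0},
])
# both strategies earn; "a" simply gets the larger share
```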


2. Reputation Staker Rewards

Reputation stakers earn rewards for:

  • Providing trust collateral

  • Absorbing slashing risk

  • Supporting system integrity

From a system standpoint:

  • Stakers underwrite the benchmark

  • They get paid for taking that risk

If strategies misbehave → stakers lose first.
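The loss-first ordering can be sketched as a two-tier waterfall. Treating a developer bond as the second tier is an assumption for illustration; the document only specifies that stakers lose first:

```python
def apply_slash(loss: float, staker_collateral: float,
                developer_bond: float) -> tuple[float, float]:
    """Absorb a slashing event.

    Staker collateral is consumed first; only the remainder (if any)
    falls on the strategy developer's bond (a hypothetical second
    tier). Returns the remaining (staker_collateral, developer_bond).
    """
    from_stakers = min(loss, staker_collateral)
    from_developer = min(loss - from_stakers, developer_bond)
    return (staker_collateral - from_stakers,
            developer_bond - from_developer)
```

Because stakers sit in the first-loss position, their rewards are compensation for underwriting the strategies they back.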


3. Protocol Treasury

Treasury allocation funds:

  • Infrastructure costs

  • Security research

  • Long-term maintenance

  • Ecosystem expansion

This ensures the system can sustain itself without relying on external funding.


Incentive Alignment by Design

The model ensures that:

  • Strategy devs want stable performance

  • Stakers want low-risk, high-quality strategies

  • Users want credible benchmarks

  • The protocol wants sustained usage

No actor benefits from:

  • Excessive risk

  • Manipulation

  • Short-term farming

  • Fake performance


Negative Feedback Loops (Very Important)

The system includes automatic negative feedback:

  • High risk → higher slashing probability

  • Inconsistent behavior → reputation decay

  • Poor performance → lower visibility

  • Low visibility → less opportunity to earn

This ensures that instability is self-punishing.
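Reputation decay, the core negative feedback loop, can be sketched as an epoch update in which reputation shrinks by default and only good performance offsets the shrinkage. The decay and boost rates are illustrative assumptions:

```python
def step_reputation(rep: float, performed_well: bool,
                    decay: float = 0.05, boost: float = 0.10) -> float:
    """One epoch of the reputation feedback loop (rates are assumed).

    Reputation decays every epoch by default; only good performance
    adds a bounded gain, so the score stays below 1.0 and an idle or
    inconsistent strategy drifts toward zero visibility.
    """
    rep *= 1.0 - decay              # inconsistent behavior -> decay
    if performed_well:
        rep += boost * (1.0 - rep)  # bounded gain, capped below 1.0
    return max(rep, 0.0)
```

Standing still is therefore a losing position: a strategy must keep performing just to hold its visibility.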


Positive Feedback Loops (Controlled)

Positive feedback exists, but is capped:

  • Good performance → higher reputation

  • Higher reputation → more Battle Room access

  • More access → more data → more rewards

Caps prevent runaway dominance:

  • Reputation decay

  • Participation limits

  • Weight normalization in scoring

No strategy can “lock in” permanent advantage.
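Weight normalization with a cap can be sketched as follows; the 40% cap and the equal redistribution of excess are illustrative assumptions:

```python
def capped_weights(raw_scores: dict, cap: float = 0.40) -> dict:
    """Normalize raw scores into shares, capping any single share.

    Excess above the cap is redistributed equally to uncapped
    strategies, repeating until no share exceeds the cap, so no
    strategy can lock in runaway dominance. The 40% cap is an
    illustrative assumption; it must satisfy
    cap >= 1 / number_of_strategies to be feasible.
    """
    total = sum(raw_scores.values())
    shares = {k: v / total for k, v in raw_scores.items()}
    for _ in range(len(shares)):
        over = {k for k, s in shares.items() if s > cap}
        if not over:
            break
        excess = sum(shares[k] - cap for k in over)
        for k in over:
            shares[k] = cap
        under = [k for k, s in shares.items() if s < cap]
        if not under:
            break
        bump = excess / len(under)
        for k in under:
            shares[k] += bump
    return shares
```

Once the cap binds, additional raw performance yields no additional share, so dominance cannot compound indefinitely.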


Economic Failure Modes (Explicitly Considered)

The system is designed to avoid:

  • Infinite leverage farming

  • Sybil strategy spawning

  • Reputation recycling

  • Zero-cost spam strategies

  • Governance capture via yield

If a failure mode appears, it must be solvable by:

  • Adjusting weights

  • Tightening constraints

  • Updating slashing rules

Without changing the core architecture.


Mental Model for Developers

If you’re building on or inside Benchmark X:

  • T1 is a cost

  • T2 is a risk

  • T3 is an outcome

If you try to shortcut any of them, the system pushes back.
