ABM or MFG? The Cases Where the Answer Is Not Clear

Two Frameworks, One Question

When you face a system of interacting agents — traders, firms, particles, cells — you eventually have to choose a modeling language. Agent-based models (ABM) simulate each individual directly: give every agent rules, let them interact, observe what emerges. Mean field games (MFG) take the opposite approach: assume the population is large enough that each agent’s influence on any other is negligible, replace the crowd with a probability distribution, and solve for optimal behavior against that distribution.

The standard advice is clean: use ABM when your population is small or heterogeneous, use MFG when it is large and approximately homogeneous. In practice, a significant fraction of real problems fall between these poles — and recognizing that you are in this grey zone is the first step toward handling it correctly.

Where the Boundary Blurs

Intermediate population size

MFG is formally justified in the limit $N \to \infty$. The approximation error for a finite population of $N$ agents decays on the order of $1/\sqrt{N}$ — slowly. For a market with a few hundred active participants, or an epidemiological model with a few thousand individuals, that error may be non-negligible. But the population is also too large for a fully resolved ABM to be computationally tractable at scale.
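The $1/\sqrt{N}$ rate is easy to see empirically. The sketch below is a minimal stand-in, not a full MFG: it measures how far the empirical mean of a finite population of i.i.d. states sits from its limit, which decays at exactly this rate. The helper `mean_field_error` is a hypothetical name introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_field_error(n_agents, n_trials=2000):
    """Average |empirical mean - limit mean| over many populations of
    n_agents iid standard-normal states. A toy proxy for the mean field
    approximation error, which decays like O(1/sqrt(N))."""
    samples = rng.normal(0.0, 1.0, size=(n_trials, n_agents))
    return float(np.abs(samples.mean(axis=1)).mean())

# Quadrupling N should roughly halve the error.
for n in (100, 400, 1600):
    print(n, round(mean_field_error(n), 4))
```

Quadrupling the population only halves the error, which is why "a few hundred participants" can leave a residual gap that matters.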

The question then becomes whether the mean field error is smaller or larger than the modeling error introduced by the simplifications ABM requires: discrete time, simplified interaction rules, limited agent memory. Neither framework is clean here, and neither should be applied without justification.

Near-homogeneous but not quite

MFG assumes agents are statistically identical — they differ only in their current state, not in their preferences or cost structure. Many real populations are almost homogeneous: traders with similar but not identical risk aversions, firms with the same technology but different capital stocks.

Multi-population MFG can accommodate a finite number of agent types, but modeling complexity grows quickly. ABM handles heterogeneity naturally but sacrifices analytical tractability. If the heterogeneity is genuinely small, an MFG with a perturbed cost function may be a defensible approximation — but quantifying “small” requires problem-specific judgment that is not always straightforward.
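One crude, problem-agnostic way to operationalize “small” is the coefficient of variation of the parameter that differs across agents. This is a sketch of that heuristic only, not a substitute for the problem-specific judgment mentioned above; the helper `heterogeneity_cv` and the threshold are assumptions introduced here.

```python
import numpy as np

def heterogeneity_cv(params):
    """Coefficient of variation (std / mean) of an agent-level parameter,
    e.g. risk aversion. A rough gauge of distance from the homogeneous
    population that a standard MFG assumes."""
    params = np.asarray(params, dtype=float)
    return float(params.std() / params.mean())

# Traders with similar but not identical risk aversions.
risk_aversions = np.random.default_rng(1).normal(2.0, 0.1, size=500)
cv = heterogeneity_cv(risk_aversions)
# A CV of a few percent is one (debatable) reading of "small enough
# to perturb around a homogeneous MFG".
print(f"CV = {cv:.3f}")
```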

Common noise

When all agents face a shared external shock — a macro signal, a market-wide event, a weather system — the standard MFG framework breaks down. The mean field is no longer a deterministic flow: it becomes a random measure driven by the common noise, coupling a Hamilton-Jacobi-Bellman equation with a stochastic Fokker-Planck equation.

At this level of complexity, the conceptual distance between MFG and a correlated ABM narrows considerably. Both now track the joint evolution of individual and collective behavior under shared uncertainty. The choice between them becomes partly a question of purpose: MFG gives you equilibrium structure, ABM gives you path-level trajectories.
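A toy simulation makes the “random measure” point concrete. In the sketch below (an assumed Ornstein-Uhlenbeck-style crowd, not any specific model from the literature; `simulate_pop_mean` is a hypothetical helper), the population mean concentrates as usual without common noise, but stays genuinely random across realizations once a shared shock is added.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_pop_mean(n_agents, beta, n_steps=200, dt=0.01, sigma=1.0):
    """Terminal population mean of a mean-reverting crowd with
    idiosyncratic noise (sigma, one shock per agent) and a common
    shock (beta, one shock shared by all agents)."""
    x = np.zeros(n_agents)
    for _ in range(n_steps):
        common = rng.normal(0.0, np.sqrt(dt))          # shared by all
        idio = rng.normal(0.0, np.sqrt(dt), n_agents)  # one per agent
        x += -x * dt + sigma * idio + beta * common
    return float(x.mean())

# Without common noise the mean field is (nearly) deterministic;
# with it, the mean field is itself a stochastic process.
means_no_cn = [simulate_pop_mean(2000, beta=0.0) for _ in range(40)]
means_cn = [simulate_pop_mean(2000, beta=1.0) for _ in range(40)]
print(np.std(means_no_cn), np.std(means_cn))
```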

Network and spatial structure

Standard MFG assumes agents interact through the aggregate distribution of the entire population — a fully connected, uniform mixing assumption. Many real systems are sparse: each agent interacts with $k$ neighbors, not with everyone.

For dense networks, where $k/N$ approaches 1, the mean field description is recovered. For sparse networks, ABM on the graph is more natural. In between, graphon mean field games offer a mathematically rigorous interpolation — but they are technically demanding and not yet standard in applied work. If your system has a network structure that is neither sparse nor dense, you are almost certainly in the grey zone.
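The degree-dependence is easy to probe numerically. The sketch below (with a hypothetical helper `mixing_gap`, and neighbors drawn uniformly at random as a simplifying assumption) measures how far each agent's local neighbor average sits from the global mean: the gap shrinks like $1/\sqrt{k}$, so few neighbors means a poor mean field.

```python
import numpy as np

rng = np.random.default_rng(3)

def mixing_gap(n, k):
    """Mean |local neighbor average - global average| when each of n
    agents observes k uniformly chosen neighbors instead of the whole
    population. Small gap = mean field is a good summary."""
    states = rng.normal(size=n)
    global_mean = states.mean()
    gaps = []
    for i in range(n):
        nbrs = rng.choice(n, size=k, replace=False)
        gaps.append(abs(states[nbrs].mean() - global_mean))
    return float(np.mean(gaps))

# Sparse (k = 5) vs dense (k = 500) interaction on n = 1000 agents.
g_sparse = mixing_gap(1000, k=5)
g_dense = mixing_gap(1000, k=500)
print(g_sparse, g_dense)
```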

Calibration mismatch

Sometimes the grey zone is not about model structure at all. You have ABM simulation output — perhaps from a legacy system or a collaborator — and you want an analytical model that approximates it for fast evaluation or sensitivity analysis. Or you have an MFG solution and need to validate it against agent-level trajectories.

The translation is non-trivial in both directions. ABM output does not map cleanly onto the PDEs of MFG, and MFG equilibria do not always have intuitive agent-level interpretations. If you are working at this interface, you are doing model risk analysis, not just model selection.
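One concrete yardstick for this interface is a distributional distance between agent-level ABM output and samples from the MFG equilibrium law. The sketch below uses the one-dimensional Wasserstein-1 distance, computable for equal-size samples as the mean absolute difference of sorted values; the helper `wasserstein_1d` and the two synthetic populations are assumptions introduced for illustration.

```python
import numpy as np

def wasserstein_1d(samples_a, samples_b):
    """Wasserstein-1 distance between two equal-size 1-D samples:
    for sorted samples, the optimal coupling pairs order statistics,
    so W1 is the mean absolute difference of sorted values."""
    a = np.sort(np.asarray(samples_a, dtype=float))
    b = np.sort(np.asarray(samples_b, dtype=float))
    assert a.shape == b.shape, "equal sample sizes assumed"
    return float(np.mean(np.abs(a - b)))

rng = np.random.default_rng(4)
abm_states = rng.normal(0.05, 1.1, size=10_000)  # stand-in ABM output
mfg_samples = rng.normal(0.0, 1.0, size=10_000)  # stand-in MFG law
print(wasserstein_1d(abm_states, mfg_samples))
```

A distance that stays flat as $N$ grows in the ABM is a warning sign that the mismatch is structural, not a finite-size effect.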

A Decision Heuristic

No single framework handles the grey zone cleanly, but the following procedure helps structure the choice:

ABM vs. MFG Selection Algorithm

Input: N, heterogeneity level, interaction structure, data type

1. N < 50 and heterogeneity is high            → ABM
2. N > 5000 and agents are near-homogeneous    → MFG
3. Common noise is dominant                     → MFG with common noise
4. Interaction graph is sparse                  → ABM on graph
5. Interaction graph is dense, N is large       → Graphon MFG
6. None of the above (grey zone):
     a. Build MFG for tractability and equilibrium insight
     b. Validate with targeted ABM simulations at representative N
     c. Quantify mean field approximation error explicitly
     d. Report model uncertainty as part of the result
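The heuristic above can be transcribed directly into code. This is a one-to-one sketch of the six rules, checked in the listed order; the function name `select_framework` and the categorical encodings of heterogeneity and graph density are assumptions introduced here.

```python
def select_framework(n, heterogeneity, common_noise, graph_density):
    """Transcription of the selection heuristic, rules checked in order.

    heterogeneity: 'low' | 'high'
    common_noise:  True if a shared shock dominates the dynamics
    graph_density: 'full' | 'dense' | 'sparse'
    """
    if n < 50 and heterogeneity == "high":          # rule 1
        return "ABM"
    if n > 5000 and heterogeneity == "low":         # rule 2
        return "MFG"
    if common_noise:                                # rule 3
        return "MFG with common noise"
    if graph_density == "sparse":                   # rule 4
        return "ABM on graph"
    if graph_density == "dense" and n > 5000:       # rule 5
        return "Graphon MFG"
    # rule 6: MFG for insight, ABM for validation, explicit error bounds
    return "grey zone: MFG + targeted ABM validation + reported uncertainty"
```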

The grey zone does not call for picking a winner. It calls for running both and being honest about what each captures and what each does not.

Takeaway

The ABM vs. MFG question is often framed as binary. It is not. The most interesting applied problems — financial markets, systemic risk, epidemic dynamics, energy systems — frequently sit in a region where neither framework is dominant. The right response is not to force the problem into one camp, but to use both as complementary lenses: MFG for equilibrium structure and tractability, ABM for trajectory-level behavior and robustness checks. The grey zone is not a weakness of the theory. It is where the theory is still being built.


Building a simulation or calibration pipeline that touches this boundary? I’d be glad to help.