An interactive model of AI agent economics. Agent cost per successful outcome grows faster than linearly with task length, while human cost scales linearly. The shape of that scaling depends on the survival model: exponential (Ord's constant hazard rate) or Weibull with κ<1 (Hamilton's declining hazard rate). Based on Ord's half-life analysis, METR's empirical data, and Hamilton's Weibull reanalysis.
The original model assumes a constant hazard rate (exponential decay). Hamilton's reanalysis of the METR data suggests the hazard rate may decrease over time, following a Weibull distribution with κ≈0.6-0.9 for SOTA models (and ≈0.37 for humans). Crucially, κ does not correlate with model size: scaling reduces the base hazard rate (λ) but does not change the shape of the decay. The two models fit the available data about equally well, but diverge dramatically at the tails.
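The two survival curves can be sketched directly. This is a minimal illustration, not the calculator's implementation; the parameter values are placeholders, and the Weibull scale is chosen so both curves share the same median (S = 0.5 at t = T50):

```python
import math

def survival_exponential(t, t50):
    """Constant-hazard survival: S(t) = 0.5^(t / T50)."""
    return 0.5 ** (t / t50)

def survival_weibull(t, scale, kappa):
    """Weibull survival: S(t) = exp(-(t / scale)^kappa).
    kappa < 1 means the hazard rate declines as the task goes on."""
    return math.exp(-((t / scale) ** kappa))

# Match both curves at the median: (T50 / scale)^kappa = ln 2,
# so scale = T50 / (ln 2)^(1 / kappa).
t50, kappa = 1.0, 0.6  # illustrative values, not fitted ones
scale = t50 / math.log(2) ** (1 / kappa)

# Same median, very different tails: with kappa < 1 the Weibull
# curve sits well above the exponential one for long tasks.
for t in (1, 4, 16):
    print(t, survival_exponential(t, t50), survival_weibull(t, scale, kappa))
```

This is the "diverge dramatically at the tails" point in miniature: both curves pass through 0.5 at the half-life, but at 16 half-lives the exponential survival is around 1.5e-5 while the median-matched Weibull is still above 2%.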
S(t) = 0.5^(t/T50). Expected attempts per success are the reciprocal, E[attempts] = 1/S(t) = 2^(t/T50), so they grow exponentially with task length. This is the worst case for long tasks.
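The doubling behavior is easy to see numerically. A minimal sketch, with an illustrative half-life rather than a fitted one:

```python
def expected_attempts(t, t50):
    """E[attempts] = 1 / S(t) = 2^(t / T50): doubles every additional half-life."""
    return 2 ** (t / t50)

t50 = 0.5  # half-life in hours; illustrative value only
for hours in (0.5, 1, 2, 4, 8):
    print(f"{hours:>4} h task -> {expected_attempts(hours, t50):,.0f} expected attempts")
```

An 8-hour task at a 30-minute half-life spans 16 half-lives, so expected attempts reach 2^16 = 65,536: each extra half-life of task length doubles the bill.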
Adjust these to match your scenario. The model recomputes instantly.
The highlighted row marks where agent cost per success overtakes human cost. Note how the shape of the divergence depends on the survival model.
| Task length | Steps | $/attempt | P(success) | E[attempts] | Agent cost | Human cost | Ratio |
|---|---|---|---|---|---|---|---|
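One way rows like these could be computed under the exponential model (a sketch only; the half-life, per-attempt cost, and hourly rate below are placeholders, not the calculator's defaults):

```python
def agent_cost_per_success(t, t50, cost_per_attempt):
    """Expected agent cost = E[attempts] * cost per attempt = 2^(t/T50) * c."""
    return (2 ** (t / t50)) * cost_per_attempt

def human_cost(t, hourly_rate):
    """Human cost scales linearly with task length."""
    return t * hourly_rate

# Illustrative parameters only
t50, cost_per_attempt, hourly_rate = 1.0, 2.0, 100.0

for t in (1, 2, 4, 8, 16):
    a = agent_cost_per_success(t, t50, cost_per_attempt)
    h = human_cost(t, hourly_rate)
    marker = "  <-- agent overtakes human" if a > h else ""
    print(f"{t:>3} h | agent ${a:>12,.2f} | human ${h:>10,.2f} | ratio {a / h:.3f}{marker}")
```

The crossover row is wherever the ratio first exceeds 1: the exponential term eventually overwhelms any linear hourly rate, no matter how the constants are set.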
For a fixed task length, how does the agent-to-human cost ratio change as you vary half-life and cost per step? Under the exponential model the half-life dominates because it acts on the exponent. Under Weibull (κ<1), cost per step matters relatively more.
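That asymmetry can be demonstrated directly. A minimal sketch with illustrative numbers (not the calculator's parameters), using a Weibull scale matched so both models share the same half-life:

```python
import math

def agent_cost(t, t50, cost_per_attempt, kappa=1.0):
    """Expected agent cost per success. kappa = 1.0 reproduces the
    exponential model (attempts = 2^(t/T50)); kappa < 1 is Weibull,
    with the scale chosen so both share the same half-life T50."""
    scale = t50 / math.log(2) ** (1 / kappa)
    attempts = math.exp((t / scale) ** kappa)
    return attempts * cost_per_attempt

t = 8.0  # fixed task length; all numbers here are illustrative

# Halving cost per attempt halves cost, under either model.
cost_gain = agent_cost(t, 1.0, 2.0) / agent_cost(t, 1.0, 1.0)

# Doubling the half-life: huge under exponential, modest under Weibull.
exp_gain = agent_cost(t, 1.0, 2.0) / agent_cost(t, 2.0, 2.0)
wbl_gain = agent_cost(t, 1.0, 2.0, 0.6) / agent_cost(t, 2.0, 2.0, 0.6)

print(cost_gain, exp_gain, wbl_gain)
```

At 8 half-lives, doubling T50 cuts exponential cost 16-fold (it takes a root of the attempt count) but cuts the κ=0.6 Weibull cost only by roughly 2.3x, comparable to the flat 2x from halving per-attempt cost. Hence cost per step matters relatively more under Weibull.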
An agent that poses existential-level risk needs compute to operate. Compute costs money. Someone has to pay. The companion calculator traces the economic chain that would need to hold for a dangerous autonomous agent to actually run, and shows what breaks at each link.