How The Model Works
No black boxes. Here's what powers every probability, recommendation, and bracket pick on this site.
Last updated: March 16, 2026 at 04:00 UTC
Win Probability Formula
P(win) = 1 / (1 + e^(-diff / 15))

where diff is the difference in adjusted efficiency ratings between the two teams. This is a logistic function with a denominator of 15 inside the exponent, calibrated so probabilities are spread across a realistic range for tournament matchups.
Quick reference: a rating edge of 0 is a coin flip (50%), +5 is roughly 58%, +10 roughly 66%, +15 roughly 73%, and +30 roughly 88%.
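A minimal sketch of this step in Python, assuming the plain-logistic reading of the formula above (the ratings in the example are made up):

```python
import math

def win_probability(rating_a: float, rating_b: float, scale: float = 15.0) -> float:
    """Logistic win probability for team A given the rating difference.

    A diff of 0 is a coin flip; the scale of 15 controls how quickly
    larger rating edges translate into higher probabilities.
    """
    diff = rating_a - rating_b
    return 1.0 / (1.0 + math.exp(-diff / scale))

# Example: a team rated 10 points better wins about 66% of the time.
print(round(win_probability(118.0, 108.0), 3))  # 0.661
```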
Bayesian Seed Prior Blending
Raw efficiency ratings alone can overfit to regular-season performance. We blend the model's win probability with historical seed upset rates using a Bayesian prior with 30% weight.
This ensures our model respects empirical tournament patterns. For example, 12-seeds beat 5-seeds 35% of the time historically — even when ratings disagree, the prior pulls the probability closer to that base rate.
| Matchup | Historical Favorite Win% |
|---|---|
| 1 vs 16 | 99.5% |
| 2 vs 15 | 94.0% |
| 3 vs 14 | 85.0% |
| 4 vs 13 | 79.0% |
| 5 vs 12 | 65.0% |
| 6 vs 11 | 63.0% |
| 7 vs 10 | 61.0% |
| 8 vs 9 | 52.0% |
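A minimal sketch of the blend, assuming a straightforward linear mixture at the 30% weight described above (function and constant names are ours, not production code):

```python
# Historical favorite win rates by seed matchup (from the table above).
SEED_PRIORS = {
    (1, 16): 0.995, (2, 15): 0.940, (3, 14): 0.850, (4, 13): 0.790,
    (5, 12): 0.650, (6, 11): 0.630, (7, 10): 0.610, (8, 9): 0.520,
}

PRIOR_WEIGHT = 0.30  # 30% weight on the historical seed prior

def blended_probability(model_prob: float, favorite_seed: int, underdog_seed: int) -> float:
    """Pull the model's win probability for the favorite toward the base rate."""
    prior = SEED_PRIORS.get((favorite_seed, underdog_seed))
    if prior is None:
        return model_prob  # no first-round prior for this pairing
    return (1.0 - PRIOR_WEIGHT) * model_prob + PRIOR_WEIGHT * prior

# Example: the model likes a 5-seed at 80%, but the 65% prior pulls it to 75.5%.
print(blended_probability(0.80, 5, 12))  # 0.755
```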
Bracket Probability Engine
Our primary tournament model is a bracket-path probability engine, not a raw simulation count. Once we compute every pairwise matchup probability, we propagate those probabilities through the bracket tree to estimate each team's chances of reaching every round.
That means championship odds and round advancement rates come from the bracket structure itself, not just noisy sampling. Path difficulty is still captured naturally: a strong team in a brutal region gets penalized because their route to the title is objectively harder.
Probability Pipeline
- Load adjusted efficiency ratings for all 64 teams
- Apply injury adjustments (rating penalties for missing key players)
- For each matchup, compute logistic win probability from rating difference
- Blend with historical seed upset rates (30% Bayesian prior weight)
- Propagate winner distributions through each region and Final Four node (see the sketch after this list)
- Estimate advancement %, title-game %, and championship % for every team
- Use simulation secondarily for variance checks and pool-outcome scenarios
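Here is a minimal sketch of the propagation step (steps 5 and 6 above), assuming teams are indexed in standard bracket order and that `win_prob(i, j)` returns the blended pairwise probability from the previous sections:

```python
import math
from typing import Callable, List

def advancement_probabilities(
    n_teams: int, win_prob: Callable[[int, int], float]
) -> List[List[float]]:
    """Survival probability per team after each round of a single-elim bracket.

    Teams are indexed in bracket order, so in round r a team's possible
    opponents are the teams in the other half of its block of size 2**(r+1).
    """
    n_rounds = n_teams.bit_length() - 1  # log2(n_teams)
    reach = [1.0] * n_teams              # P(alive entering round 0) = 1
    history = []
    for r in range(n_rounds):
        block = 2 ** (r + 1)
        nxt = [0.0] * n_teams
        for i in range(n_teams):
            start = (i // block) * block
            half = block // 2
            if (i - start) < half:
                opponents = range(start + half, start + block)
            else:
                opponents = range(start, start + half)
            # P(i wins round r) = P(i alive) * sum over opponents of
            # P(opponent alive) * P(i beats opponent).
            nxt[i] = reach[i] * sum(reach[j] * win_prob(i, j) for j in opponents)
        reach = nxt
        history.append(reach[:])
    return history

# Toy example with 4 teams and the logistic win_prob from earlier.
ratings = [120.0, 105.0, 110.0, 112.0]
wp = lambda i, j: 1.0 / (1.0 + math.exp(-(ratings[i] - ratings[j]) / 15.0))
rounds = advancement_probabilities(4, wp)
print([round(p, 3) for p in rounds[-1]])  # championship probability per team
```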
Bracket Recommendation Layer
We keep team win probabilities objective, then build bracket picks on top of them. That recommendation layer weighs projected round value, bracket-path strength, and round-by-round ownership pressure.
In other words: the probability model estimates who is most likely to advance. The recommendation engine decides who is smartest to pick for your pool.
Recommendation Inputs
- Objective matchup and advancement probabilities
- Scoring-system weights by round
- Estimated ownership pressure by round, not just champion pick rate
- Pool-size aggression (small pools vs massive pools)
- User preference nudges from likes, fades, champion choices, and chat feedback
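As a hedged illustration of how these inputs might combine (the function, weights, and parameter names below are ours for exposition, not the production engine's):

```python
def pick_value(
    advance_prob: float,     # objective probability the team reaches this round
    round_points: float,     # your pool's scoring weight for this round
    ownership: float,        # estimated share of the pool picking the same team
    aggression: float = 0.5, # 0 = pure chalk, 1 = max contrarian
) -> float:
    """Score a pick by expected points, discounted by how crowded it is.

    A pick shared with most of the pool gains you little ground even when
    it hits, so expected points are shaded by ownership, scaled by how
    much aggression the pool size calls for.
    """
    expected_points = advance_prob * round_points
    leverage = 1.0 - aggression * ownership
    return expected_points * leverage

# The same 60% favorite worth 8 points, picked by 70% of the pool, scores
# far lower in a massive pool (aggressive) than in a small one.
print(round(pick_value(0.60, 8.0, 0.70, aggression=0.9), 2))  # 1.78
print(round(pick_value(0.60, 8.0, 0.70, aggression=0.2), 2))  # 4.13
```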
Monte Carlo Validation
Monte Carlo is still useful, but it's no longer the headline claim. We use large simulation batches to validate that the bracket-path model and stochastic tournament outcomes agree, and to stress-test pool-level variance where "what your opponents pick" matters.
In plain English: simulation is great for scenario analysis. It's weaker as the sole proof that the underlying team probabilities are well estimated, so we use it as a check, not as the entire methodology.
That means our recommendations are driven by the probability model and recommendation layer first. Simulation helps us understand how those recommendations behave under realistic tournament variance.
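A minimal sketch of that check, assuming the `advancement_probabilities` helper sketched earlier and simulating games as independent Bernoulli draws:

```python
import math
import random

def simulate_champion(n_teams: int, win_prob, rng: random.Random) -> int:
    """Play out one single-elimination bracket with random game outcomes."""
    alive = list(range(n_teams))
    while len(alive) > 1:
        alive = [
            a if rng.random() < win_prob(a, b) else b
            for a, b in zip(alive[::2], alive[1::2])
        ]
    return alive[0]

# Simulated championship frequencies should converge on the analytic
# bracket-path probabilities to within sampling noise.
ratings = [120.0, 105.0, 110.0, 112.0]
wp = lambda i, j: 1.0 / (1.0 + math.exp(-(ratings[i] - ratings[j]) / 15.0))
rng = random.Random(42)
counts = [0] * 4
N = 100_000
for _ in range(N):
    counts[simulate_champion(4, wp, rng)] += 1
print([c / N for c in counts])  # compare with advancement_probabilities(4, wp)[-1]
```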
Model Inputs
Efficiency Ratings
KenPom-style adjusted offensive and defensive efficiency. Measures points scored/allowed per 100 possessions, adjusted for opponent strength and pace.
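For intuition, a hedged sketch of the raw building block (production adjusted ratings come from an iterative opponent-strength fit, which this one-step version only gestures at):

```python
def raw_efficiency(points: float, possessions: float) -> float:
    """Points per 100 possessions, before any opponent adjustment."""
    return 100.0 * points / possessions

def adjusted_offense(raw_off: float, opp_avg_def: float, league_avg_def: float) -> float:
    """One-step adjustment: credit offenses that scored on stingy defenses."""
    return raw_off + (league_avg_def - opp_avg_def)

# Example: 78 points on 68 possessions is a raw 114.7; facing defenses 3
# points better than average bumps the adjusted figure to about 117.7.
print(round(adjusted_offense(raw_efficiency(78, 68), 101.0, 104.0), 1))  # 117.7
```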
Historical Seed Priors
Bayesian blend with historical upset rates by seed matchup. A 12-seed beats a 5-seed 35% of the time historically. The prior keeps the model grounded when raw ratings get too confident.
Injury Adjustments
Player impact estimates based on minutes share, usage rate, and on/off splits. When a key player is OUT, we apply a measurable rating penalty.
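A hedged sketch of what such a penalty could look like (the coefficients and the example numbers are illustrative placeholders, not our fitted values):

```python
def injury_penalty(minutes_share: float, usage_rate: float, on_off_net: float) -> float:
    """Illustrative rating penalty for a key player ruled OUT.

    minutes_share: fraction of team minutes the player normally plays (0-1)
    usage_rate:    fraction of possessions the player uses on the floor (0-1)
    on_off_net:    team net rating swing (per 100 poss.) with him on vs. off
    """
    # Weight the on/off signal by playing time, plus a usage term for the
    # shot creation that a replacement rarely covers.
    return 0.5 * minutes_share * on_off_net + 4.0 * minutes_share * usage_rate

# Example: a heavy-minutes star (80% of minutes, 28% usage, +9.0 on/off)
# costs his team roughly 4.5 rating points when out.
print(round(injury_penalty(0.80, 0.28, 9.0), 1))  # 4.5
```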
Calibration Targets
Honest disclaimer: This is our 2026 tournament model (pre-tournament). Real calibration requires multiple tournaments of verified predictions. The targets below represent what a well-calibrated model should achieve — we'll publish verified results after the 2026 tournament.
When we say a team has a 70% chance in a matchup or round, that estimate should hold up over time.
| Predicted Range | Target Actual Win% | Status |
|---|---|---|
| 0-10% | ~5-10% | Pending — 2026 |
| 10-20% | ~12-18% | Pending — 2026 |
| 20-30% | ~22-28% | Pending — 2026 |
| 30-40% | ~32-38% | Pending — 2026 |
| 40-50% | ~42-48% | Pending — 2026 |
| 50-60% | ~52-58% | Pending — 2026 |
| 60-70% | ~62-68% | Pending — 2026 |
| 70-80% | ~72-78% | Pending — 2026 |
| 80-90% | ~82-88% | Pending — 2026 |
| 90-100% | ~90-95% | Pending — 2026 |
A well-calibrated model should maintain strong Brier and log-loss performance across rounds. We'll publish measured results after the 2026 tournament completes.
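The check itself is simple once results exist; a minimal sketch of the Brier score and the bin-level comparison we'll run (the predictions and outcomes here are hypothetical):

```python
from collections import defaultdict

def brier_score(preds: list, outcomes: list) -> float:
    """Mean squared error between predicted probabilities and 0/1 results."""
    return sum((p - o) ** 2 for p, o in zip(preds, outcomes)) / len(preds)

def calibration_report(preds: list, outcomes: list, width: float = 0.10) -> dict:
    """Bucket predictions by range and compare predicted vs. actual win rates."""
    bins = defaultdict(list)
    for p, o in zip(preds, outcomes):
        bins[min(int(p / width), int(1 / width) - 1)].append((p, o))
    report = {}
    for b in sorted(bins):
        games = bins[b]
        avg_pred = sum(p for p, _ in games) / len(games)
        actual = sum(o for _, o in games) / len(games)
        report[f"{b * width:.0%}-{(b + 1) * width:.0%}"] = (round(avg_pred, 3), round(actual, 3))
    return report

# Hypothetical round of results: lower Brier is better (0.25 = coin flips).
preds = [0.72, 0.65, 0.15, 0.55]
outcomes = [1, 1, 0, 0]
print(round(brier_score(preds, outcomes), 3))  # 0.131
print(calibration_report(preds, outcomes))
```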
Who Built This
Bracket Agent is built by a small team of sports analytics engineers who got tired of bracket advice that was either too generic ("pick the higher seed") or too opaque ("trust our AI").
We believe bracket intelligence should be transparent, testable, and pool-aware. Every number on this site traces back to the formulas on this page. If our model is wrong, you'll know exactly why.
