2025 Bracketology Year in Review

Bracketology is now a numbers game

2025 was not the best year for my bracketology. I selected 66/68 teams correctly and seeded 44 teams correctly for a Paymon score of 350. To an outsider, those may seem like impressive results, but they put me slightly below average amongst bracketologists this year.

I started publishing my bracketology in 2011 back on my old blog. Over my 15-year run as a bracketologist, lots of things have changed: the NET replaced the RPI, quads were introduced, and predictive metrics became part of the selection criteria. While you can quibble with some details of the new way of doing things, it is inarguably better than the old way. I’m not even going to try to convince you of this- I can’t imagine any sane college basketball fan would want to go back to the RPI.

There were a few changes to the selection criteria for the 2025 season: T-Rank and WAB were added as official metrics. This means that there are now 6 metrics on each team’s resume- the 3 “resume metrics” (KPI, SOR and WAB) and the 3 “predictive metrics” (KenPom, BPI and T-Rank). This is simply a continuation of a decade-long trend of making the selection process increasingly quantifiable.

The genesis of this article was this tweet by Kerry Miller.

Kerry has been doing bracketology for even longer than I have, and he is one of the people in the community I have a lot of respect for. I was amazed that such a crude model would have performed so well. However, I profoundly disagree with his take on what that fact means.

I think it’s a good thing, not a bad thing, that the committee is moving to a more numbers-based approach. It’s kind of silly to me that the tournament is selected by a group of humans full of conflicts of interest, as this year’s drama surrounding North Carolina AD Bubba Cunningham showed. If I ruled the world, the tournament would be selected entirely by WAB, a case Seth Burn lays out well here. The more meritocratic we can make the system, and the less room we leave for human meddling, the better.

I also think it was foreseeable that we were moving in this direction. This was the exact rationale behind putting North Carolina into my final bracket projection of the year, which was quite an unpopular opinion at the time: the metrics said they should get in, and the committee is much more metrics-driven than it used to be.

The natural follow-up is to try to model this change in committee behavior to understand how to improve my bracketology going forwards. I’ll take a stab at that below.

Performance of our crude model

Suppose, as mentioned above, we created a crude model that simply ranks the teams by the average of their six metrics and used it to seed and select the field. Let’s get into the details of how such a model would have performed. In the screenshots below, the “actual rank” column is a team’s placement on the selection committee’s seed list, and the “predicted rank” column is a team’s placement under our model.
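
Before getting into the results, here is roughly what the model amounts to in code. This is a minimal sketch, assuming the “average” is taken over each team’s national rank in the six metrics; the team names and numbers are placeholders rather than real 2025 data, and automatic bids and bracketing rules are ignored.

```python
from statistics import mean

METRICS = ["KPI", "SOR", "WAB", "KenPom", "BPI", "T-Rank"]

# Hypothetical input: each team's national rank in the six metrics.
metric_ranks = {
    "Team A": {"KPI": 12, "SOR": 15, "WAB": 10, "KenPom": 22, "BPI": 25, "T-Rank": 21},
    "Team B": {"KPI": 30, "SOR": 28, "WAB": 33, "KenPom": 14, "BPI": 11, "T-Rank": 13},
}

# Average the six ranks for each team, then sort ascending.
avg_rank = {team: mean(ranks[m] for m in METRICS) for team, ranks in metric_ranks.items()}
predicted_seed_list = sorted(avg_rank, key=avg_rank.get)

# The first 68 teams on this list would form the predicted field, seeded in
# order (ignoring automatic bids and bracketing rules for simplicity).
for position, team in enumerate(predicted_seed_list[:68], start=1):
    print(position, team, round(avg_rank[team], 1))
```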

As you can see, this model would have gotten 66/68 teams correct- it would have had West Virginia and Ohio State in the field as opposed to Texas and San Diego State. However, it performed much better than my own bracketology in terms of seeding, placing 53 teams correctly as opposed to my 44. (Note that these results are actually slightly better than what Kerry said they’d be.)

This is a pretty good performance. It does quite well across all parts of the seed list. If you score it based on seed and not position on the seed list (as is customary), it is nearly perfect on the top 4 seed lines (switching only Purdue and Clemson), decent in the middle, and perfect on the bottom 4 seed lines.


However, it is easy to spot some patterns here. If you look at the teams that the model underseeds (Memphis, Oregon, Drake, etc.), they generally have better resume metrics than predictive metrics. If you look at the teams that the model overseeds (Gonzaga, VCU, North Carolina, etc.), they generally have better predictive metrics than resume metrics. Here are the teams that have much better predictive metrics than resume metrics:

You can see that our model is too high on almost all of these teams, most notably Gonzaga.

Now let’s look at the teams that have much better resume metrics than predictive metrics:

Our model is too low on nearly all of these teams. The notable exception is Louisville. They are the true outlier of the year to me- I have no idea how the committee decided they were an 8 seed.
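
To make the pattern concrete, here is a small follow-on sketch (reusing the hypothetical metric_ranks dictionary from the earlier snippet) that splits the six metrics into their resume and predictive groups and computes, for each team, the gap between the two averages.

```python
from statistics import mean

# Reuses the hypothetical metric_ranks dictionary from the sketch above.
RESUME = ["KPI", "SOR", "WAB"]
PREDICTIVE = ["KenPom", "BPI", "T-Rank"]

for team, ranks in metric_ranks.items():
    resume_avg = mean(ranks[m] for m in RESUME)
    predictive_avg = mean(ranks[m] for m in PREDICTIVE)
    gap = predictive_avg - resume_avg
    # gap > 0: resume metrics are stronger (lower ranks), the profile the
    # crude model tends to underseed (Memphis, Oregon, Drake).
    # gap < 0: predictive metrics are stronger, the profile the crude model
    # tends to overseed (Gonzaga, VCU, North Carolina).
    print(team, round(gap, 1))
```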

My bracketology philosophy going forward

I think that I, along with other bracketologists, have made bracketology too complicated a problem. There is so much analysis and over-analysis of lots of different factors, when we have every indication that the committee is getting more mathematical and streamlined in its decision making.

Going forward, the starting point for my bracketology is going to be ranking the teams by the average of their six metrics. I will then bump up the teams with relatively strong resume metrics and bump down the teams with relatively weak resume metrics. This is going to form the backbone of my bracketology (a rough code sketch follows the list below), and I will make relatively few deviations from it. The deviations I do make will be for reasons such as:

  1. Injuries. The committee’s shocking decision to leave West Virginia out shows that this can still matter.

  2. Extreme values in other key bracketology data, which I will define as Q1 wins and NCSOS. These numbers do have some explanatory power (albeit less than others seem to think), and if a team has abnormally good or bad results in them (e.g. North Carolina), I will move it a few spots accordingly.

  3. Ignoring the end of Champ Week. We have a lot of evidence (2025 Michigan, 2022 Texas A&M, etc.) that the committee largely finalizes the seed list by Friday of conference tournament week and does not consider results beyond that. I will probably not move my seed list based on any results from the final Saturday or Sunday of conference tournament play.
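
Here is a rough sketch of what that starting point could look like in code, building on the earlier snippets. The adjustment weight is an arbitrary placeholder rather than a calibrated value, and the deviations above (injuries, extreme Q1/NCSOS values, and the Champ Week cutoff) would still be applied by hand on top of this ranking.

```python
from statistics import mean

ADJUSTMENT_WEIGHT = 0.25  # hypothetical weight, not a calibrated value

def adjusted_rank(ranks: dict) -> float:
    """Average of the six metric ranks, nudged by the resume-vs-predictive gap."""
    overall = mean(ranks.values())
    resume_avg = mean(ranks[m] for m in ("KPI", "SOR", "WAB"))
    predictive_avg = mean(ranks[m] for m in ("KenPom", "BPI", "T-Rank"))
    gap = predictive_avg - resume_avg
    # gap > 0 (stronger resume metrics): move the team up the seed list;
    # gap < 0 (stronger predictive metrics): move the team down.
    return overall - ADJUSTMENT_WEIGHT * gap

# Reuses the hypothetical metric_ranks dictionary from the first sketch;
# injuries and extreme Q1/NCSOS cases would still be handled manually.
starting_seed_list = sorted(metric_ranks, key=lambda t: adjusted_rank(metric_ranks[t]))
```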

I am excited that bracketology has become a more quantitative process over the years. My aim is to lean into that trend and adopt a more quantitative approach myself, with the goal of improving my bracketology going forward.
