Just looking at matchups. First one of note is CA round 2: Jim (assuming he beats Gerald) vs. Escher. (At least) One of them isn’t making the final 8.
Yeah, um. These predictions are cool, but they're only as accurate as the data provided by Matchplay, which at least in our case (Alabama) is highly suspect. Our defending champ is only given a 21% chance to win. I was a 3-time winner in the past and was given only a 2% chance to win. MP ratings are just not a very accurate representation of the real world.
You know what I mean. But Raymond’s chances are 10 times as good as Jim’s? Nope. I don’t buy calculating winners’ odds based on Matchplay ratings. Never have. But we’ll see. His rating is better than Karl’s, eh? Riiiight. They’ve been in the same events 54 times; Karl finished higher in 38 of those. They’re both friends of mine, but I know who I’d pick.
You need to keep in mind that this analysis takes into account not only MP Rating, but also seed position in the bracket. Neither Jim nor Karl has a bye, whereas Raymond does. Even if they are heavily favored in the first round, that extra match hurts their cumulative chances of winning it all. Moreover, the California bracket is fairly lopsided: Andrei, Escher, Jim, and Karl are all on one side. Only one of those players can advance to the finals, and then he still has to win the finals to take the whole thing. Mathematically, this prevents any one of them from having high odds even though they are all top-tier players. These predictions should not be interpreted as a statement about “who is the best player in this field?”, but rather “who is likely to win this particular tournament, given past performance and the path of competitors ahead of them?”.
Just to drive home the point, I reran the simulation but swapped Jim and Raymond’s seed positions. Jim’s win chances jump from 2% → 10%, Karl’s from 9% → 11%, while Raymond drops from 23% → 8%. Same players, same ratings, but different predictions.
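In case it helps anyone picture what a simulation like this does, here’s a minimal Monte Carlo sketch. To be clear, this is not the actual model behind these predictions: the ratings, the Elo-style win-probability curve with its 400-point scale, and the toy 8-player bracket layout are all assumptions of mine for illustration. It only demonstrates the mechanism described above: keep every rating fixed, move two players’ seeds, and the simulated title chances shift.

```python
import random

# Toy example, NOT Matchplay's model: ratings, the Elo-style 400-point scale,
# and the 8-player bracket layout are all invented for illustration.
RATINGS = {"Raymond": 1650, "Escher": 1640, "Jim": 1635, "Karl": 1630,
           "Andrei": 1620, "Gerald": 1450, "P7": 1500}

def win_prob(ra, rb, scale=400):
    """Chance player A beats player B, assuming an Elo-like logistic curve."""
    return 1 / (1 + 10 ** ((rb - ra) / scale))

def run_bracket(seeds):
    """Play out one single-elimination bracket; None means a bye."""
    field = list(seeds)
    while len(field) > 1:
        nxt = []
        for a, b in zip(field[0::2], field[1::2]):
            if b is None:
                nxt.append(a)
            elif a is None:
                nxt.append(b)
            else:
                nxt.append(a if random.random() < win_prob(RATINGS[a], RATINGS[b]) else b)
        field = nxt
    return field[0]

def win_rates(seeds, trials=20000):
    """Estimate each player's chance of winning the whole bracket."""
    wins = {}
    for _ in range(trials):
        champ = run_bracket(seeds)
        wins[champ] = wins.get(champ, 0) + 1
    return {p: round(w / trials, 3) for p, w in sorted(wins.items(), key=lambda kv: -kv[1])}

# Jim opens against Gerald with Escher and Andrei on his side; Raymond gets the bye.
seeds = ["Escher", "Andrei", "Jim", "Gerald", "Karl", "P7", "Raymond", None]
print(win_rates(seeds))

# Swap Jim's and Raymond's slots: same players, same ratings, different chances.
swapped = ["Escher", "Andrei", "Raymond", "Gerald", "Karl", "P7", "Jim", None]
print(win_rates(swapped))
```

The two printouts differ only because the paths through the bracket differ, which is exactly the seed-swap effect described above.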
48 rating points lower (within large error bars). Not sure exactly what definition Matchplay uses, but that sounds about right on the math side, assuming the scaling is Elo-like.
(Rating accuracy is another question, but Matchplay doesn’t seem that confident about those numbers itself…)
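For anyone who wants to check that kind of number themselves, here’s the back-of-the-envelope version of “Elo-like” I have in mind. The logistic form is the standard Elo curve; the 400-point scale is the classic Elo default and only an assumption here, since I don’t know Matchplay’s exact definition. The 48-point gap and the 38-of-54 head-to-head record are the figures quoted earlier in the thread.

```python
from math import log10

def win_prob(rating_gap, scale=400):
    """P(higher-rated player wins), assuming an Elo-like logistic curve.
    The scale of 400 is the classic Elo value, not necessarily Matchplay's."""
    return 1 / (1 + 10 ** (-rating_gap / scale))

def implied_gap(p, scale=400):
    """Rating gap implied by a head-to-head win rate p, under the same assumption."""
    return scale * log10(p / (1 - p))

print(win_prob(48))          # ~0.57: what a 48-point gap predicts on a 400 scale
print(implied_gap(38 / 54))  # ~150: the gap a 38-of-54 record would suggest
```

The spread between those two numbers is basically what the “large error bars” are covering, and it all depends on a scale parameter we don’t actually know.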
Here’s a scoresheet that may be helpful for keeping track of matches in the formats used by NACS/WNACS championship tournaments. Kudos to Kim M, who recently posted the original sheet. I made a few updates to make it clearer that the left column is always for the higher-seeded player’s score (not necessarily player 1, which was my default assumption when using a similar sheet in the past).
Anyone is welcome to use this one for their upcoming NACS/WNACS tournaments (or other Bo7 events) if they prefer the layout. It includes a version with OLD/MID/NEW selections as well as one without that distinction, for states that aren’t splitting games into eras. There’s also an example sheet with everything filled out and a few notes about things that may come up.
That article just links to the spreadsheet at the beginning of this thread.
I’ve added a tab to the spreadsheet that lists all winners and runners-up in an easy-to-reference format. Here’s the link: IFPA NACS & WNACS 2023 - Google Sheets