Pinburgh is a match play tournament… which I assume is the origin of the 3-2-1-0 scoring format. Simply put… it’s how many people you beat. Coincidence, or design? @bkerins?

But as we’ve seen year after year… the scoring results in incredibly TIGHT bands between players and division cutoffs. The entire B division sits within a 3 point spread (ignoring those dragged up by restrictions)… and the difference between a C qualifier and an A qualifier is only 4 points out of a possible 60. Put another way… 1-2 games out of 20 played can be the difference of almost 200 places in the rankings. So in practice, just 4 of 60 points (~7%) separates A from C… and just 10 points (~17%) spans 58% of the field (467 players).

The question I pose is… is this ‘massing at the middle’ the best outcome?

When you look at the point total distribution… it’s largely the shape you would expect around the average, minus the top 10 or so players pulling away. (Half points were rounded down here for easier viewing.)

Would it be better… to create more separation in the player totals by making scoring less linear? Without easier access to the match data, this is not worth the manual effort to simulate… but I assume the math geeks may have done this already?

Would scoring matches something like 0-1-3-5 or 0-1-2-4 significantly change the spreads… and would it meaningfully spread out the player outcomes?
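Lacking the real match data, here’s a minimal Monte Carlo sketch of the idea. It assumes random fixed skills, purely random groups of 4, and noisy finishes (none of which match Pinburgh’s actual reseeded draw), and just compares the spread of point totals under 3-2-1-0 versus a 0-1-3-5 style scheme:

```python
import random
import statistics

def simulate_spreads(schemes, n_players=640, games=20, seed=1):
    """Spread (population stdev) of point totals per scoring scheme.

    Toy model only: each player has a fixed skill, every game is a random
    group of 4, and finish order is skill plus luck. The same draws are
    reused for every scheme, so only the point values differ.
    """
    rng = random.Random(seed)
    skills = [rng.gauss(0, 1) for _ in range(n_players)]
    totals = {name: [0] * n_players for name in schemes}
    for _ in range(games):
        order = list(range(n_players))
        rng.shuffle(order)
        for g in range(0, n_players, 4):
            group = order[g:g + 4]
            # best (skill + luck) finishes 1st
            finish = sorted(group, key=lambda p: skills[p] + rng.gauss(0, 1.5),
                            reverse=True)
            for place, p in enumerate(finish):
                for name, pts in totals.items():
                    pts[p] += schemes[name][place]
    return {name: statistics.pstdev(t) for name, t in totals.items()}

spreads = simulate_spreads({"3-2-1-0": (3, 2, 1, 0),
                            "0-1-3-5": (5, 3, 1, 0)})
```

Under these assumptions the 0-1-3-5 totals do spread out more, though much of that is just the bigger point range… the interesting question is whether the *ordering* of players changes, which would need the real data.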

Thoughts?

Edit: note this analysis was done initially based on the open qualifying. It may be more interesting to compare results once you are in the divisions too… but computing those match totals is less convenient with the data on-hand.

Edit Note 2: This is done with almost zero checking of the data for errors… so be gentle if I screwed up

#slowerconvergenceftw


In what way Josh?

Does this amplify the initial seeding impacts?

Not exactly on the topic of scoring, but was there an official decision on how the groupings for session 10 will work this year?


Rules aren’t up yet, to my knowledge. I do feel like there was plenty of discussion on it last year, though.

Slower convergence lets the lead horses run a bit on the strength of their easier groups, so you would end up with “elite players” getting more points per round because they wouldn’t be facing other “elite players” earlier in the tournament. I’m sure someone smarter than myself can pull the average seed that the #1 seed has to play on Day 1 versus the #50 seed, the #100 seed, etc.

[this does not account for the random draw of Jorian in Session #2, because he had a bad Session #1 and is now the 649th seed]
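For the “average seed faced” question, here’s a quick sketch. It assumes round 1 groups are simply adjacent seeds (1-4 together, 5-8, and so on), which may not be how Pinburgh actually draws them, so treat it as a template to swap the real grouping rule into:

```python
def avg_opponent_seed(seed, group_size=4):
    """Mean seed of a player's opponents, assuming groups of adjacent seeds.

    ASSUMPTION: groups are seeds 1-4, 5-8, ... which is a stand-in for
    whatever draw Pinburgh really uses in round 1.
    """
    first = (seed - 1) // group_size * group_size + 1   # top seed in the group
    opponents = [s for s in range(first, first + group_size) if s != seed]
    return sum(opponents) / len(opponents)
```

Under that (assumed) draw, the #1 seed averages opponents seeded 3.0 and the #100 seed averages 98.0; plugging in a different grouping function would answer the actual question.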


I covered this a couple years ago. First off, changing the scoring would not have a significant impact. Second, the reason for the bunching up is the reseeding of players each round. That makes it harder for the top end to pull away and also less likely that the bottom end will fall down completely. In round 1, the top seeds could all get 9-12 and the bottom ones 0-3, but once they’ve converged, they’re all getting on average 6 per round. The reseeding is done gradually to allow some dispersion, but it’s a smoothed and therefore tempered process. Third, that’s the whole point of using reseeding, to make more people still “in the hunt” later in the event.
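The reseeding effect shows up even in a toy simulation. This sketch assumes fixed skills, one 4-player game per round scored 3-2-1-0, and compares groups drawn at random against groups of adjacent current standings (a crude stand-in for Pinburgh’s gradual reseeding):

```python
import random
import statistics

def season_spread(grouping, n=256, rounds=10, seed=7):
    """Stdev of 3-2-1-0 point totals after a season of 4-player games.

    grouping: "random", or "reseeded" (groups of adjacent current standings).
    Toy model only: fixed skills, noisy finishes, one game per round.
    """
    rng = random.Random(seed)
    skills = [rng.gauss(0, 1) for _ in range(n)]
    totals = [0] * n
    for _ in range(rounds):
        if grouping == "reseeded":
            # group players with their current peers, like reseeding does
            order = sorted(range(n), key=lambda p: -totals[p])
        else:
            order = list(range(n))
            rng.shuffle(order)
        for g in range(0, n, 4):
            finish = sorted(order[g:g + 4],
                            key=lambda p: skills[p] + rng.gauss(0, 1.0),
                            reverse=True)
            for place, p in enumerate(finish):
                totals[p] += (3, 2, 1, 0)[place]
    return statistics.pstdev(totals)
```

In this toy model the reseeded field ends up more bunched than the randomly drawn one, matching the “once converged, everyone averages the middle score per round” intuition described above.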


I guess that’s what was spurring my thought… I was trying to do away with going 0.500 and getting the same points no matter what your finishes were.

I don’t think reseeding entirely mutes this, because -all- players are being reseeded. The guy that moved up for winning still does… as do his peers. The difference is that players can put more distance between themselves and the losers. The peer winners are doing the same.
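That “going 0.500 gets the same points” property is just the linearity of 3-2-1-0. A quick illustration with hypothetical finish sequences: a steady 2nd/3rd player and a swingy 1st/4th player have the same average finish, and they tie under 3-2-1-0 but not under 0-1-3-5:

```python
SCHEMES = {"3-2-1-0": (3, 2, 1, 0), "0-1-3-5": (5, 3, 1, 0)}

def total(scheme, finishes):
    """Total points for a list of finishing places (1 = first)."""
    return sum(SCHEMES[scheme][place - 1] for place in finishes)

steady = [2, 3] * 5   # ten games of 2nds and 3rds, average finish 2.5
swingy = [1, 4] * 5   # ten games of 1sts and 4ths, average finish 2.5
```

Under 3-2-1-0 both come out to 15, and any two players with the same average finish always tie. Under 0-1-3-5 the swingy player gets 25 to the steady player’s 20, because the 3rd-to-4th gap (1 point) is smaller than the 1st-to-2nd gap (2 points)… so the nonlinear scheme really does break up the 0.500 logjam.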