8 Signs That You’re Good At Fantasy Football – #5 will shock you!

Football season is here at last.

More importantly, fantasy football season is here: that blessed ~5-month period when I get to obsessively check for updates about my team, agonize over start/sit decisions and, occasionally, actually watch a real-life televised football game.  (I take the “Moneyball” approach to team management; sports fandom takes a distant backseat.)

Like many, I’ve participated in the same fantasy football league with my group of college friends for years; last year marked our 10-year anniversary.  It’s a competitive league.  Everyone tries hard, does their research, and pays attention.  And everyone thinks they are the best.

Trash talk and gloating are the traditional pastimes of fantasy football leagues.  Not so in mine.  We’re actually pretty polite.  But we do share the same irrational belief that animates those habits: that we each possess some secret sauce which is going to get us that championship.  This year, this time, finally, this is the one.  But of course, if such a secret sauce existed, someone would be getting rich.

Still, being the data nerd that I am, I wanted to see if I could answer the question of “who is the best at fantasy football?” with some degree of objectivity.  I started by collecting core statistics for each season: win percentage, season finish, points scored, etc.  My goal was to diversify my approach, collecting many different metrics which, considered together longitudinally, would create a composite fantasy football score.  As I neared the finish line, I began to experience doubt, not unlike Eve before she ate forbidden fruit from the tree of knowledge.  Part of the fun of fantasy is that each year presents a clean slate of new opportunity; knowing where you truly stack up would, for many, be a bitter fruit indeed.  Also, everyone knows fantasy football is a crap-shoot; what confidence could I have in a ranking system anyway?

Well, I suppose the difference is: Eve ate from the tree of knowledge, and I built the tree.

A detailed explanation of my composite fantasy score methodology will follow.  I encourage you to download my work so that you can review and follow along.  Helpful note: orange tabs contain the final numbers which comprise my scoring system; red tabs contain visualizations.

Composite Fantasy Metrics

I first developed my composite metric system for fantasy football leagues in 2016 and I’ve iterated since that time.  My latest system uses 8 metrics, which are weighted according to their relative importance at discerning fantasy “skill”.  These are the metrics:

  • Draft Efficiency (weight 5): Efficient usage of draft dollars to acquire a maximal team; a measure of predictive skill.
  • In-Season Pickup Savvy (weight 5): How effective were you at picking up important players from waivers and trades?
  • Weighted Average Season Rank (weight 18): How have you performed across all seasons?  It’s not enough to be stellar in one.
  • Playoff Win Frequency (weight 8): Gotta be clutch to close out the season in style.
  • Average Win Percentage (weight 8): Across all seasons, have you won more than you lost?
  • In-Season Consistency (weight 8): 125, 125, 125 is better than 150, 100, 125.
  • Average Z-Score Points For (weight 40): The most important metric.  A measure of an owner’s performance relative to their peers, by the most reliable method: in-season points accumulated.
  • Average Z-Score Points Against (weight 8): We cannot control the points scored against us.  How unlucky were you?  Did you succeed in spite of it?  If you had a lot of points scored against you, then you get a bump; if you didn’t, then it counts against you.

100 possible points across 8 metrics, each attempting to capture an aspect of what characterizes a “good” fantasy owner.  And crucially, I am taking averages across all fantasy seasons for the league.  Grading fantasy owners on their full career rather than a single season allows the highest performers to emerge.

Here are the metric weights visualized:

The weight given to a metric is largely based upon my discretion and judgement.  In deciding weights I think about: A) To what extent will strong performance in this metric be indicative of fantasy skill?  B) How much noise is likely to be present in the data amidst the signal?
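To make the roll-up concrete, here is a minimal Python sketch of how the weighted sub-scores could combine into the 100-point composite.  The metric key names are mine, and the sketch assumes each metric has already been rescaled to a 0–1 range, which may differ from how my spreadsheet normalizes things internally.

```python
# A minimal sketch of the composite calculation, assuming each metric has
# already been rescaled to a 0-1 range (the real spreadsheet may normalize
# each metric differently before weighting).
METRIC_WEIGHTS = {
    "draft_efficiency": 5,
    "pickup_savvy": 5,
    "weighted_avg_season_rank": 18,
    "playoff_win_frequency": 8,
    "avg_win_percentage": 8,
    "in_season_consistency": 8,
    "avg_zscore_points_for": 40,
    "avg_zscore_points_against": 8,
}

def composite_score(scaled_metrics: dict) -> float:
    """Combine 0-1 scaled metric values into a 0-100 composite score."""
    return sum(weight * scaled_metrics[name]
               for name, weight in METRIC_WEIGHTS.items())

# Hypothetical owner whose per-metric scores have already been scaled to 0-1:
print(composite_score({
    "draft_efficiency": 0.8, "pickup_savvy": 0.5,
    "weighted_avg_season_rank": 0.7, "playoff_win_frequency": 0.4,
    "avg_win_percentage": 0.6, "in_season_consistency": 0.5,
    "avg_zscore_points_for": 0.65, "avg_zscore_points_against": 0.5,
}))  # -> 61.1
```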

Enough already.  WHO’S THE BEST?

Results

Patrick.  Patrick is the best, dammit.  These are your sentences Patrick where I say how good you are at fantasy football on my blog.  I hope you like them.  Ok, you’re done.

The above graph shows the latest composite scores for all 10 owners in my fantasy league.  Here’s how those scores have changed over the past 3 years:

Check out my meteoric rise!

To offer greater visibility into the scores, I’ve provided some dynamic visualization tools in the Excel file.  In tab “Graph Owner Over Time” I show how each metric changed for an owner each year, allowing the impact of individual seasons to be assessed.  One of my graphs is below.  2016 and 2017 were both great seasons for me, which is why you see my average rank and points for scores go up.  But with this system, it’s not enough to be stellar in an individual metric.  To perform well, you need to demonstrate excellence across the board.

Tab “Graph Metric Over Time” shows how the owners changed relative to one another on a specific metric.  Below is the graph for “Average Z-Score Points For”, the most important metric.

Now I want to devote some time to methodological commentary.  My hope is that someone out there is reading this and thinking about adapting my system for their own fantasy league.  To facilitate that end, I want to talk about each metric and highlight challenges, solutions, and caveats.

First, some useful definitions:

Z-Score: This is the number of standard deviations above or below the mean where a raw value resides.  It’s useful here because it allows standardization of scale.  More on this later.

Standard deviation: This is a measure of the spread of a set of numbers.  Very useful for determining the relative differences among fantasy owners on a given measure.
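If you want to sanity-check the spreadsheet’s numbers outside of Excel, here is a minimal Python sketch of a z-score.  The season totals below are invented purely for illustration.

```python
from statistics import mean, pstdev

def z_score(value: float, population: list[float]) -> float:
    """How many standard deviations a value sits above (+) or below (-) the mean."""
    return (value - mean(population)) / pstdev(population)

# Made-up season point totals for a 10-owner league:
season_totals = [1450, 1320, 1390, 1510, 1275, 1405, 1350, 1480, 1300, 1360]
print(round(z_score(1450, season_totals), 2))  # about 0.89 above the league mean
```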

Now, on to the metrics!

Draft Efficiency

Relevant context: my league uses an auction draft.  Each owner gets $200 with which to bid on players, with a minimum bid of $1.  We love it.  Auction drafts introduce much more depth than the standard snake draft, because you not only need to pay attention to player values for your own team; you also need to make sure other owners don’t get a high-value player for cheap.  But this metric would still work for snake drafts; you would use draft position instead of player price.

For example, here is my draft from this year:

Time will tell how efficient I was

The logic here is that we want to credit owners for nabbing a player at a price/position below their true value, where we are defining a player’s value according to that player’s end of season position rank (e.g. WR5).  Likewise, we penalize owners for taking a player above value.

Some challenges:

  • Consider this scenario: Owner 1 drafts Frank Gore (mid-level RB) well below actual value, Owner 2 drafts Todd Gurley (top tier RB) at value.  Is it right to credit Owner 1 more than Owner 2?

I say no, probably not.  To correct this problem, I added a “top player bonus” on top of the raw draft efficiency score.  The size of the bonus is defined by: [# of owners in league] * [# of roster spots for the corresponding position].  For example, Todd Gurley being the #1 RB in a 10-person league with 2 RB roster spots would yield 20 bonus points.

  • This one is really only relevant for auction drafts: what if an owner does not draft a full roster + bench?  Does the math account for this?

I had to feel this one out based on my gut.  From the metric’s perspective, owners who drafted incomplete rosters were benefiting from taking fewer risks; there were fewer opportunities to get caught in a low-value pick.  For example, my above draft leaves 2 roster spots empty (because I ran out of money).  To correct this, I ultimately penalized draft efficiency scores 0.3 points per empty roster spot for the given season.

When all is said and done, the calculation is:

(Average: ( [drafted player position price order] – [drafted player end of season actual position rank] + [top player bonus] )) – ( [# empty roster spots] * 0.3)
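Here is a rough Python sketch of that calculation.  The picks, the helper names, and the flat two-roster-spot bonus are my own illustrative assumptions; the real system uses the roster spots for each player’s specific position, and since how far down the position ranks the top player bonus extends isn’t spelled out above, this sketch only awards it to the #1 player at a position.

```python
def draft_efficiency(picks: list[dict], empty_spots: int,
                     league_size: int = 10, roster_spots: int = 2) -> float:
    """Average (price rank - final rank + top player bonus), minus 0.3 per empty roster spot.

    Each pick is a dict such as {"price_order": 3, "final_rank": 1, "top_at_position": True},
    with both ranks taken within the player's position (e.g. RB).
    """
    scores = []
    for pick in picks:
        bonus = league_size * roster_spots if pick["top_at_position"] else 0
        scores.append(pick["price_order"] - pick["final_rank"] + bonus)
    return sum(scores) / len(scores) - empty_spots * 0.3

# Hypothetical two-pick draft: the eventual #1 RB bought at the 3rd-highest RB price,
# plus a RB bought at the 5th-highest price who finished RB12, with one empty spot.
picks = [
    {"price_order": 3, "final_rank": 1, "top_at_position": True},
    {"price_order": 5, "final_rank": 12, "top_at_position": False},
]
print(draft_efficiency(picks, empty_spots=1))  # ((22 - 7) / 2) - 0.3 = 7.2
```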

In-Season Pickup Savvy

This is a necessary counter-balance to draft efficiency, as it wouldn’t be fair if we only credited owners on their draft day choices.  Many seasons are won based on waivers and trades.  This metric grades owners on their in-season pickup choices, based on the end of season position ranks for players.

In order to do this, I had to collect the final season roster for all owners and compare it to their draft-day lineup.  This allows me to isolate any players who were undrafted, and who therefore must have been picked up via waivers, free agency, or trade.

One challenge and one caveat:

  • Player name normalization was a problem for me, because we often used short-hand when recording player names during a draft.  I solved this by creating a player lookup table containing all name permutations for the same player, allowing me to normalize all player references (see the sketch after this list).  A bit of work, but not complicated.
  • As I did not have access to a slice in time for each season so as to determine player position ranks in 2012, 2013, etc., I made an assumption that the combination of all teams’ final rosters together reflected the best available players at the end of the season.  I think this approach is reasonable, but it therefore excludes any high-value players who may have been injured prior to playoffs, or otherwise dropped for some reason.
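For the name-normalization step mentioned above, a simple lookup table is enough.  The names and abbreviations below are just illustrative placeholders.

```python
# Map every short-hand or misspelling used at the draft table to one canonical name.
NAME_LOOKUP = {
    "T. Gurley": "Todd Gurley",
    "Gurley": "Todd Gurley",
    "F. Gore": "Frank Gore",
    "Gore": "Frank Gore",
}

def normalize(name: str) -> str:
    """Return the canonical player name, falling back to the raw value if unknown."""
    return NAME_LOOKUP.get(name.strip(), name.strip())

print(normalize("Gurley "))  # -> "Todd Gurley"
```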

The formula ends up being:

(Average: [Position ranks of undrafted players] ) / [number of owners in league]
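A minimal sketch of that formula, assuming you already have each owner’s drafted roster, final roster, and end-of-season position ranks; the rosters and ranks below are made up for illustration.

```python
def pickup_savvy(final_roster: set[str], drafted: set[str],
                 position_rank: dict[str, int], league_size: int = 10) -> float:
    """Average end-of-season position rank of undrafted pickups, scaled by league size."""
    pickups = final_roster - drafted  # anyone on the final roster who was not drafted
    ranks = [position_rank[player] for player in pickups]
    return (sum(ranks) / len(ranks)) / league_size

drafted = {"Todd Gurley", "Frank Gore"}
final_roster = {"Todd Gurley", "Waiver Hero", "Trade Target"}
position_rank = {"Waiver Hero": 8, "Trade Target": 14}
print(pickup_savvy(final_roster, drafted, position_rank))  # (8 + 14) / 2 / 10 = 1.1
```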

Weighted Average Season Rank

In earlier versions of my system, I used an unweighted average rank.  It was very simple: I took the season finish of all owners for all seasons, normalized for the number of owners in the season, and took the average.  But after reflection I realized that we should be weighting season finishes more heavily when an owner performs very well.  First place is WAY better than fourth place, and an improvement of just +3 rank places seems like a pathetic recognition of that achievement.

We therefore instituted weighted ranks.  The following table shows the current weights for a 10 person league.  I use “rank points” for ease of tweaking weights.

Season Finish – Rank Points
  • 1st – 20
  • 2nd – 15
  • 3rd – 12
  • 4th – 9
  • 5th – 7
  • 6th – 6
  • 7th – 3
  • 8th – 2
  • 9th – 1
  • 10th – 0
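In code, this metric is just a lookup plus an average.  The rank points come straight from the table above; the example career is invented.

```python
# Rank points for a 10-person league, straight from the table above.
RANK_POINTS = {1: 20, 2: 15, 3: 12, 4: 9, 5: 7, 6: 6, 7: 3, 8: 2, 9: 1, 10: 0}

def weighted_avg_season_rank(season_finishes: list[int]) -> float:
    """Average rank points earned per season."""
    return sum(RANK_POINTS[finish] for finish in season_finishes) / len(season_finishes)

# Hypothetical career: two titles, a 4th, a 6th, and a last-place finish.
print(weighted_avg_season_rank([1, 1, 4, 6, 10]))  # (20 + 20 + 9 + 6 + 0) / 5 = 11.0
```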

Another advantage of heavily weighting a top season finish (along with an impactful metric weight of 18%) is that each year’s update of the composite scores will more closely reflect the result of the most recent season.  We used to have an issue where the first-place finisher would be frustrated to discover that their average-rank score had only marginally improved.  The composite metric system is philosophically opposed to giving extra consideration to a recent season, as it’s designed to score a fantasy owner’s full career.  Still, this fluidity makes it a little more fun.

A note regarding certain rank weights: I gave a bump to the rank points for 6th place (because this is the cutoff to make it into the playoffs) and 3rd place (because this is the cutoff to make it into the money).  FYI: My league’s payout structure is currently: 1st place – 50% of buy-ins; 2nd place – 30% of buy-ins; 3rd place – 10% of buy-ins; Regular season champion – 10% of buy-ins.

Playoff Win Frequency

This is an important metric because it’s the only one focused on what occurs during the playoffs.  Most of the other metrics describe what occurs during the regular season.  Playoff win frequency is about being clutch; we value a critical win at a critical time.  To measure this, I collected the playoff seed for all owners and compared it to their final rank; with these two data points, we can determine how many playoff wins they had.

A couple things to note:

  • My league is currently a 10 person league, with a playoff cutoff at 6th.  This means that 1st and 2nd seeds receive byes, which do NOT count as playoff wins.
  • I do count consolation bracket wins as playoff wins.  Though it is the consolation bracket, these wins are still clutch, and these ranks are receiving considerably fewer points via the weighted average rank metric anyway.

To illustrate, let’s use the example of 2014 from my league:

Take TURN DOWN 4 WATT (Krista) for example: She was 4th seed and finished 2nd.  From these two pieces of information we can determine that she won 2 playoff games.  You can see that above: she beat BeadsMeOff (ahem, me), beat IbeSpeakinSqaghili (Patrick), then lost in the final.  This season also has two notable occurrences: DEZstiny (David) got 3 playoff wins, which is very impressive.  And Patrick, despite being the 1st seed, got 0 playoff wins and finished 4th.  Haha, what a loser.

You might think that there’s a collinearity problem between weighted average season rank and playoff win frequency, but consider that a result of 1 playoff win could correspond to any of the following finishes, depending on circumstance: 9th, 8th, 5th, 4th, 3rd, or 2nd.  Additionally, a significant majority of playoff win counts are 0 or 1, as seen below:

After recording playoff wins, the formula is simple: SUM: [owner playoff wins] / [number of seasons for owner].
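The final step in code is trivially simple; counting the wins themselves requires knowing each season’s bracket, so here the per-season win counts are just inputs, and the numbers are invented.

```python
def playoff_win_frequency(playoff_wins_by_season: list[int]) -> float:
    """Total playoff wins (consolation bracket included) divided by seasons played."""
    return sum(playoff_wins_by_season) / len(playoff_wins_by_season)

# Hypothetical owner: 2 playoff wins in 2014, none in 2015, 1 in 2016.
print(playoff_win_frequency([2, 0, 1]))  # -> 1.0
```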

Average Win Percentage

This is probably the most straightforward of all the metrics: it is simply the share of games won across all seasons.  There are only a few things I’d like to note:

  • The win percentage is calculated using regular seasons only.  This is driven largely by the inconvenience of combining regular-season and playoff results, but it could be done.  I kind of like keeping these separate, however, as playoff wins seem categorically different, carrying such implicit weight.
  • Win percentage is an important counter-balance to the average z-score points against metric, and vice versa.  This is because, naturally, if an owner receives below average competition, they are more likely to win.  More on this later.

The formula is simply: [total wins] / [total games played] across all seasons.

In-Season Consistency

This is the only metric which requires access to your league’s weekly data; all of the others are possible using season-end data.  This is a measure of an owner’s point-scoring consistency across weeks within a season.  As experienced fantasy owners know, it is better (all things being equal) to have a team which performs consistently than one which swings wildly from week to week.

To calculate the consistency score, we take the standard deviation of weekly points scored for a given season to create a variance score.  As different seasons will have different scoring systems, we need to take a z-score of these standard deviations for the season to normalize.

There’s one tricky question which I will use an example to illustrate: Who do you think had a more consistent performance in 2016: Mark or Ben?

Their weekly scores actually had nearly identical standard deviations: 20.72 and 20.91 respectively.  But in my opinion, Mark’s consistency is more impressive than those raw numbers suggest.  Mark performed well in 2016, scoring fantasy points 0.73 standard deviations above the mean for that season; Ben, by comparison, scored 0.58 standard deviations below the mean.  When you have a high-powered team like Mark’s, you will have weeks when your team “blows up”.  In my view, fantasy players tend to have more variable “ceilings” than “floors”.  This tendency acts as a penalty to a strong owner in the context of a consistency score.  To correct for this, I decided to give a small bonus to the variance score for owners with above-average points scored, and a small penalty for below-average owners.  To bring this back to our example, Mark would receive a bonus resulting in a variance score of 19.45, and Ben would receive a penalty resulting in a score of 22.99.  (Lower numbers are better for this metric.)

This all comes together as follows:

Do this for all owners, all seasons: [Weekly scores standard deviation] – ([z-score points for] * 1.5)

Then, for each year, take the mean and standard deviation of the above values across owners and use them to convert each owner’s value into a z-score.
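Here is a rough sketch of the first step, with invented weekly scores; the 1.5 multiplier is the one from the formula above, and the final per-year z-score step is noted as a comment.

```python
from statistics import pstdev

def adjusted_variance_score(weekly_scores: list[float],
                            z_score_points_for: float) -> float:
    """Standard deviation of weekly scores, nudged down for high-scoring owners
    and up for low-scoring owners (lower is better)."""
    return pstdev(weekly_scores) - z_score_points_for * 1.5

# Hypothetical owner who scored 0.73 standard deviations above the league mean:
weeks = [125, 150, 100, 140, 115, 135, 120, 145, 110, 130, 128, 132, 118]
print(round(adjusted_variance_score(weeks, 0.73), 2))

# In the full system you would then take the z-score of these adjusted values
# within each season, so that seasons with different scoring rules are comparable.
```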

Average Z-Score Points For

You may be sick of seeing the term “z-score” by now.  But it is so critical for doing this kind of analysis!  Maybe you’re wondering why I can’t simply take an owner’s average points scored across all seasons and call it a day.  To illustrate why not, here are my own actual fantasy points scored over time, alongside my z-score points for over time:

Tells a much different story, doesn’t it?  Z-scores are necessary because what really matters is an owner’s performance relative to their peers in a given season.  They also matter because each season will have different point behavior, especially when the league has instituted rule changes (which happens frequently in our league).

The calculation is a z-score, and 3 is added to ensure the values are positive:

(( [points scored] – [mean points scored for season] ) / [standard deviation points scored for season] ) + 3
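The same thing in Python, re-using invented season totals; the +3 shift matches the formula above.

```python
from statistics import mean, pstdev

def z_score_points_for(points: float, season_totals: list[float]) -> float:
    """Season points scored as a z-score, shifted by +3 to keep the value positive."""
    return (points - mean(season_totals)) / pstdev(season_totals) + 3

season_totals = [1450, 1320, 1390, 1510, 1275, 1405, 1350, 1480, 1300, 1360]
print(round(z_score_points_for(1450, season_totals), 2))  # roughly 3.89
```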

Average Z-Score Points Against

Most metrics are attempting to assess the skill of the fantasy owner with their team management.  But in the cases of weighted average season rank and average win percentage, there is noise obfuscating the truth in the form of points scored against the owner.  You cannot control what your opponents score against you, and this can have a huge impact on your fantasy success.

To illustrate this, see the graph below which plots each fantasy owner by their win percentage rank and their points against rank:

As you can see, win percentage and points against are inversely correlated.  Krista and KJ have the two worst win records, and the two highest points-against totals.  I have the best win record, but I’ve also had the easiest time of it, receiving the fewest points against across all seasons.  The points against metric is the only one which can either increase or decrease an owner’s composite score, according to whether that owner has had more points scored against them than average, or fewer.

This calculation is just a straightforward z-score, leaving the values negative or positive to represent the penalty or bonus:

( [points against] – [mean points against for season] ) / [standard deviation points against for season]

Conclusion

This data work took many hours of my time to develop and produce, but hopefully what I’ve laid out here will expedite the process for those who want to duplicate or adapt.  I encourage you to set up your spreadsheet in such a way that it makes it easy to load in a new season of data, so that you don’t need to write your formulas anew each time.

By its nature, not everyone will be thrilled at the concept of a scoring system which grades fantasy owners on their performance across seasons.  But I suspect there are many out there who, like me, will be unable to resist the temptation to take from the tree of knowledge and discover who in their league is really the best at fantasy football.

I hope for your sake that your overall winner isn’t as obnoxious as Patrick.