Blue-Chip Recruit Migration 2020 and THE Florida Gators.

With the 2020 recruiting cycle almost in the books, I reflected upon the migratory patterns of the elusive Blue-Chip Recruit (BCR) with a focus on how it relates to the state of Florida and more specifically, the University of Florida.

First up, I wanted to see how the committed BCRs were distributed as of Christmas 2019 (recruits still uncommitted as of January 10, 2020, are not included). Here is the geographic breakdown according to the Composite ratings:

[Map: overall blue-chip recruit count by state, 2020]

Ok, cool. Florida is doing its thing here. Next, I wanted to see how many of these commits were staying in-state or migrating elsewhere:

[Map: exported BCRs by state]

So, out of the 57 BCRs from the state of Florida, 31 of them have been exported (54%). How well does that compare to the other states? Let’s look:

[Map: percentage of BCRs exported by state]

The above map shows the percentage of BCRs exported from each state. (Of note, states with no information didn’t produce any BCRs, whereas states showing 0.00 percent produced at least one BCR who didn’t leave the state.) Here is a table of how each state that exported a BCR breaks down:

[Table: BCR exports by state]
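For anyone who wants to poke at the math themselves, here is a minimal sketch of the export calculation in Python. The file name (recruits.csv) and columns (home_state, school_state) are hypothetical stand-ins for the real Composite data:

```python
# A sketch of the export math; recruits.csv and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("recruits.csv")  # assumed columns: home_state, school_state

# a BCR is "exported" when he commits to a school outside his home state
df["exported"] = df["home_state"] != df["school_state"]

exports = df.groupby("home_state")["exported"].agg(total="count", exported="sum")
exports["pct_exported"] = (exports["exported"] / exports["total"] * 100).round(2)

# e.g., Florida: 31 of 57 BCRs exported -> 54.39%
print(exports.sort_values("pct_exported", ascending=False))
```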

Importing

Now, I wanted to look at how each state is importing BCRs, as just looking at exporting doesn’t provide a good understanding of the overall migratory patterns. The map below shows how many BCRs were imported by state:

[Map: total BCRs imported by state, 2020]

And the net difference between imports and exports:

[Map: net BCRs (imports minus exports) by state]

Florida had the largest number of BCRs migrate away from the state. South Carolina, Alabama, and Ohio had the largest influx of BCRs (not coincidentally the homes of Clemson, Alabama, Auburn, and Ohio State).

It is plain to see that the state of Florida is getting poached heavily by schools in other states. But does this mean UF is not handling its business, or is it simply that FSU and Miami suck, and once Florida fills its class the rest of the quality BCRs avoid the Seminoles and ‘Canes? I took a quick look to see if UF is taking care of business in the state:

[Chart: in-state BCRs retained, by school]

Out of the 26 BCRs who remained in the state, UF signed the most, and its signees were the highest rated on average, as the chart above shows. This is good. Now, I wanted to see if UF was holding its own against other states when it comes to recruiting the state of Florida.

[Table: Florida BCRs signed, by school]

All in all, UF is doing pretty well here. They are retaining a lot of highly-rated recruits. The above table shows, however, that Alabama, LSU, Clemson, Georgia, and, surprisingly, Nebraska are all having some success picking up good players from Florida (thanks, Bowman).

UF is also importing players at a good clip. Here is a comparison of the Big 3:

[Chart: BCR imports by Florida’s Big 3]

[Table: Florida’s Big 3 imports breakdown]

Ultimately, it looks as if UF recruiting is going fine. The state of Florida produces a significant amount of football talent, as is well-documented. Schools from all over are going to get in on that. However, UF is doing its fair share to retain talent and they are also picking up good talent from out of state.

As always, if you see any errors, just let me know. I’ve also broken all of this down by position and ratings, but that will have to wait for another post. Until then, Go Gators.

2019 SEC Offensive and Defensive Performance vs Expectations

I recently looked at how teams did relative to their overall roster talent in terms of winning percentage. You can check that out here: https://thefaircatch.com/2019/12/29/2019-expectations-vs-performance-based-on-overall-roster-talent-levels/

In this analysis, I drilled down a level and looked at offensive points for (PF) and defensive points against (PA) and compared the season average for each SEC team relative to their offensive and defensive talent rating. The findings kinda confirm what could be seen in watching the games play out. But there were some surprising finds as well.

Offensive PF and Roster Talent (Offense Only):

[Scatterplot: offensive talent vs points for]

This linear regression model was statistically significant and met all assumptions. However, the goal wasn’t to form a predictive model here, but instead to see where each team’s performance fell relative to their peers and talent level. From the chart above, we can see that LSU way surpassed expectations. Georgia had the biggest negative disparity between points expected (the top number next to each team logo) and actual PPG (the bottom number). That doesn’t mean they had the worst offense (that was Vandy, at 16.5 PPG); it just means they were further below expectations than any other team. Florida and Auburn performed right at expected levels.
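For the curious, here is roughly how a fit like this can be run in Python. This is a sketch, not my exact code; the file and column names are hypothetical:

```python
# A sketch of the simple linear fit; file and column names are hypothetical.
import pandas as pd
from scipy import stats

sec = pd.read_csv("sec_2019.csv")  # assumed columns: team, off_talent, ppg

fit = stats.linregress(sec["off_talent"], sec["ppg"])

# expected PPG given talent, and the gap between actual and expected
sec["expected_ppg"] = fit.intercept + fit.slope * sec["off_talent"]
sec["diff"] = sec["ppg"] - sec["expected_ppg"]  # positive = beat expectations

# LSU should show a big positive diff, Georgia a big negative one
print(sec.sort_values("diff", ascending=False))
print(f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
```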

Defensive PA and Roster Talent (Defense Only):

[Scatterplot: defensive talent vs points against]

The above graph is the inverse of the offensive one: here, a good performance is below the line. For example, Florida was expected to allow 22.59 PPG but only allowed 15.46. LSU and Alabama, the Blue-Ribbon winners on offense, allowed a few more points than one would expect given their defensive talent ratings. Arkansas just had a bad season overall. Interesting to me, Missouri outperformed expectations in both metrics.

Limitations

The small sample size is highly subject to variance. PF and PA as stand-alone metrics are not likely sufficient to determine the overall quality of an offense or defense. And using overall roster talent lets players who didn’t play (redshirts, transfers, injured, etc.) influence the expectations but not the performance.

Some Details

Both regression models were fairly strong. The correlation between defensive talent and points allowed was 57%, with 32% of the variance in points against attributable to the model. Offensive talent was correlated with points scored at 69%, with 47% of the variance in points scored attributable to the model. What this means, in general, is that relative to this sample, 68% of what goes into points allowed is attributable to variables other than overall defensive roster talent, and 53% of what goes into points scored is attributable to variables other than overall offensive roster talent.
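If the correlation-to-variance step is unfamiliar, the variance explained is just the correlation squared. A quick sketch using the figures above (any small differences from my stated numbers are rounding):

```python
# r vs r-squared, using the article's reported correlations (not recomputed)
from scipy import stats  # with the raw data: r, p = stats.pearsonr(x, y)

r_def, r_off = 0.57, 0.69

# squaring r gives variance explained: ~32% (defense) and ~47% (offense)
print(f"defense: r^2 = {r_def ** 2:.3f}, unexplained = {1 - r_def ** 2:.0%}")
print(f"offense: r^2 = {r_off ** 2:.3f}, unexplained = {1 - r_off ** 2:.0%}")
```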

2020 Thankfulness: A Statistical Look at Mac vs Mullen

I’ve seen some threads on a few Gators forums drumming up the idea that former Florida coach Jim McElwain’s first two years were pretty similar to those of current coach Dan Mullen. The eye test certainly doesn’t make it seem that way to me. However, I wanted to take a look at some of the metrics and see if maybe there is some justification for this view. There aren’t many people on board with it from what I’ve seen, but it did get me curious as to how some of the stats might differ between the two coaches.

The general thinking of those opining that Mac’s and Mullen’s first 2 seasons are similar is that both had 10-win seasons (assuming Mac would’ve gotten the 10th win in a canceled game). I broke down each of their first two seasons along several dimensions and charted these below:

Wins/Losses:

[Bar graph: wins and losses, Mac vs Mullen]

As we can see in this bar graph, Mullen has 2 more wins, 3 fewer losses and enjoys a win percentage advantage of 11%.

Margin of Victory (MOV) and Margin of Defeat (MOD):

Here I took a look at how the teams did in victory and defeat under the respective coaches:

[Chart: margin of victory (MOV) and margin of defeat (MOD), Mac vs Mullen]

This chart shows that Mullen is enjoying a larger MOV and smaller MOD than Mac did in his first 2 years. Mullen is averaging 6 more points per victory and 6 fewer points in defeat.

Points For (PF) and Points Against (PA):

Both Mac and Mullen were hired, at least in part, for their offensive acumen. Both have benefited from having good defensive coordinators on staff, with Mac having Geoff Collins and Mullen with Todd Grantham. The main thing I wanted to look at here was the offensive production, but since offenses and defenses do not exist in a vacuum, and how one performs impacts the other, I included the PA data.

[Chart: points for and points against, Mac vs Mullen]

Out of all of the comparisons, the PF in Mullen’s favor seems to be the strongest. Because the disparity in PF is so large (almost 11 PPG), I wanted to see if the difference was statistically significant. It was, with a large effect size (independent t-test, p = 0.005; Cohen’s d = 0.810; the assumptions of normality (Shapiro-Wilk) and equality of variances (Levene’s test) were met).
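Here is a sketch of that testing pipeline in Python with scipy. The per-game scores below are randomly generated stand-ins centered near each coach’s scoring average, not the real box scores:

```python
# A sketch of the t-test pipeline; the data here are dummy stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mac_pf = rng.normal(23.6, 9, 26)     # dummy per-game PF, ~Mac's average
mullen_pf = rng.normal(34.1, 9, 26)  # dummy per-game PF, ~Mullen's average

# assumption checks: normality (Shapiro-Wilk) and equal variances (Levene)
print(stats.shapiro(mac_pf).pvalue, stats.shapiro(mullen_pf).pvalue)
print(stats.levene(mac_pf, mullen_pf).pvalue)

# independent t-test for a difference in mean points scored
t_stat, p_val = stats.ttest_ind(mac_pf, mullen_pf)

# Cohen's d from the pooled standard deviation
pooled_sd = np.sqrt((mac_pf.var(ddof=1) + mullen_pf.var(ddof=1)) / 2)
d = abs(mullen_pf.mean() - mac_pf.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_val:.4f}, Cohen's d = {d:.2f}")
```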

[Descriptive plot: mean PF with confidence intervals, Mac vs Mullen]

As we can see from this descriptive plot, Mullen’s offensive production is far superior. In this plot, the blue lines around each mean are the confidence intervals (CI). Mac’s interval runs from 18.73 to 28.38 and Mullen’s from 29.01 to 39.22. The top end of Mac’s CI is below the lower end of Mullen’s.
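The CIs themselves are straightforward to compute. A sketch, reusing the hypothetical arrays from the previous snippet:

```python
# t-based 95% confidence interval for a mean (mac_pf/mullen_pf as above)
import numpy as np
from scipy import stats

def mean_ci(x, level=0.95):
    se = stats.sem(x)  # standard error of the mean
    return stats.t.interval(level, df=len(x) - 1, loc=np.mean(x), scale=se)

# with the real data, these would reproduce 18.73-28.38 and 29.01-39.22
print(mean_ci(mac_pf))
print(mean_ci(mullen_pf))
```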

Limitations

This analysis does not account for strength of schedule, injuries, suspensions, etc. Any of those things could impact the overall outcome for each coach. The point here was to look a little closer at the two coaches and their performances at UF two years in. Though Mac had a decent run in his first two years, his teams never looked as good (to me) as Mullen’s have. There’s nothing in this analysis that convinced me I was wrong about that, and it puts the disparity between the two into a bit clearer picture. Mullen is superior in win percentage, has, on average, a larger margin of victory and a smaller margin of defeat, scores significantly more points, and allows ever so slightly more points (17.73 vs 17.56).

2019 Expectations vs Performance Based on Overall Roster Talent Levels

In 2019, overall talent levels as listed in the Composite 247 ratings were highly correlated with the overall win percentage for the SEC. I ran a more sophisticated model earlier in the year and found a correlation of 51%. At the end of the year, however, a simple linear regression model has shown a very high 82% correlation. (I also looked at expectations offensively and defensively here: https://thefaircatch.com/2020/01/04/2019-sec-offensive-and-defensive-performance-vs-expectations/ )

Here is a look at how SEC teams did relative to their roster talent in 2019:

The Stats:

[Table: regression output]
The yellow highlights the correlation. The green highlights the effect size (see below).

The tables above are our regression outputs. As stated in the caption, the Multiple R is the correlation between the variables. The R Square is the percentage of the variance in the outcome (win percentage) attributable to the overall roster talent level, and it is used as a measure of effect size; 68% is considered a large effect by convention. Additionally, the Significance F is less than 0.05, which achieves statistical significance. This means there is less than a 5% chance that the results of the model were due to randomness.
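For readers who want to reproduce this kind of output, here is a sketch of where those fields come from in a statsmodels fit (file and column names are hypothetical):

```python
# A sketch of the regression output fields; file/columns are hypothetical.
import pandas as pd
import statsmodels.api as sm

sec = pd.read_csv("sec_2019.csv")  # assumed columns: team, talent, win_pct

X = sm.add_constant(sec["talent"])
model = sm.OLS(sec["win_pct"], X).fit()

print(f"Multiple R:     {model.rsquared ** 0.5:.2f}")  # the ~82% correlation
print(f"R Square:       {model.rsquared:.2f}")         # effect size, ~0.68
print(f"Significance F: {model.f_pvalue:.4f}")         # < 0.05 = significant
```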

Scatterplot:

[Scatterplot: actual win percentage vs overall roster talent]

This scatterplot shows each team’s actual win percentage relative to what would be expected given their overall roster talent. Vanderbilt, Mississippi State, Tennessee, Auburn, and Georgia all performed right as expected. Missouri, Kentucky, Florida, and LSU all outperformed model expectations. Arkansas, Ole Miss, South Carolina, Texas A&M, and Alabama all underperformed relative to model expectations.

Ranking Achievement:

The distance above (or below) the line indicates how much a team over- or underperformed relative to model expectations. Here is how each team’s numbers worked out:

[Table: expected vs actual win percentage by team]

As we can see, LSU was the highest achiever (duh), winning 100% of their games with the model expecting them to win 78.6%. Arkansas had the worst season relative to model expectations, winning only 16.7% of their games while expected to win 40%.
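Mechanically, this ranking is just the residuals of the fit, sorted. Continuing the hypothetical statsmodels sketch from above:

```python
# over/underachievement = actual minus model-expected win percentage
sec["expected"] = model.predict(X)
sec["diff"] = sec["win_pct"] - sec["expected"]

# LSU should top the list (1.000 actual vs ~0.786 expected);
# Arkansas should sit at the bottom (0.167 actual vs ~0.400 expected)
print(sec.sort_values("diff", ascending=False)[["team", "expected", "win_pct", "diff"]])
```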

Limitations

The model does not take strength of schedule into consideration and is not intended to be predictive, as the margin of error would be about 4 games. It is solely intended to examine the relationship between two variables: talent and winning. The goal here isn’t to build a model that accounts for every worldly possibility, but instead to look back at the season and see who generally overachieved and who didn’t. All that being said, this simple model with one predictor variable actually performs fairly well, though the sample size of one season is far too small to draw any hard conclusions beyond the stated scope.

Composite Recruit Ratings, Stars, and the NFL Draft, Part 2

This post is a continuation of a prior analysis I did on the relationship between Composite 247 rating level and the NFL draft:

https://thefaircatch.com/2019/12/22/recruit-star-rankings-and-the-nfl-draft-a-quantitative-perspective/

In this part, I looked at the average rating and star count for each position group by the round in which they were drafted. Again, only roughly the top 1000 recruits from each class were used, tracked through the 2012 to 2019 NFL drafts. This way, all of the 5-stars, 4-stars, and elite 3-stars were included. If a drafted player was not ranked in the top 1000 out of high school, he is categorized as “unrated” for this study.

There were 2035 players drafted over this time according to Pro Football Reference (I assume this includes supplemental draft picks). I removed 36 specialists from the list, as they are not rated by the same standards as other players coming out of high school. Of those remaining, 961 were “Rated” (i.e., in the top 1000 of their class) and 1038 were “Unrated,” for a total of 1999 players included in the analysis.
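Here is a sketch of how that Rated/Unrated split can be tagged in pandas. The file names, columns, and name-based matching are all simplifying assumptions (real matching needs more care with duplicate names):

```python
# A sketch of the Rated/Unrated labeling; files and columns are hypothetical.
import pandas as pd

draft = pd.read_csv("draft_2012_2019.csv")      # assumed: player, pos, round
recruits = pd.read_csv("top1000_recruits.csv")  # assumed: player, rating, stars

# drop specialists, who aren't rated on the same scale out of high school
draft = draft[~draft["pos"].isin(["K", "P", "LS"])]

draft["status"] = draft["player"].isin(recruits["player"]).map(
    {True: "Rated", False: "Unrated"}
)
print(draft["status"].value_counts())  # per the article: 961 / 1038
```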

[Table 1: players drafted by position group and round]

Table 1 shows each position group and the number of players drafted by round. It includes rated and unrated players. RBs and TEs are generally of lower draft value, while QBs are most likely to be taken in the first round.

[Table 2: average Composite Rating of drafted players, by position and round]

Table 2 shows the average Composite Rating for players drafted, by round and by position. From the heat-mapping, we can see that the higher a player was rated, the earlier, on average, he was drafted. An interesting note about this table: virtually every position group except RB and TE averages about a .9300 rating among first-round picks. I will look more into this in the future, but this may be a potential threshold to look for when evaluating college football rosters. Beyond that, the NFL is apparently only comfortable drafting running backs early if they are exceptional. Tight ends are rarely picked in the first round.
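Tables like this fall out of a pandas pivot table plus the built-in Styler heat-mapping. A sketch, continuing the hypothetical frames from the snippet above:

```python
# average rating by position and round, heat-mapped (hypothetical frames)
import pandas as pd

merged = draft.merge(recruits, on="player")  # Rated players only

table2 = pd.pivot_table(
    merged, values="rating", index="pos", columns="round", aggfunc="mean"
)

# background_gradient renders the heat-mapping in a notebook / HTML export
styled = table2.style.background_gradient(cmap="RdYlGn").format("{:.4f}")
```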

[Table 3: average star count of drafted players, by position and round]

Table 3 is the same as Table 2, but by the average star count.

[Table 4: count of Rated players drafted, by position and round]

Table 4 is the total count of Rated players drafted. Looks like DBs are in high demand for the NFL…

[Table 5: count of Unrated players drafted, by position and round]

Table 5 is the count of Unrated players by position and round drafted. It is apparent that drafted players who were not in the top 1000 of their class tend to go in the later rounds.

Ugly Bonus Graph: Draft Data by Conference (kickers and punters removed)

The bigger the bubble, the higher the count.

[Bubble chart: draft counts by position and conference]


The purple bubbles (SEC) are larger for just about every position group among the P5 conferences.

Ugly Bonus Graph: Draft Data by Team (SEC Only)

The Gators are in 3rd place. Expect that to climb under Dan Mullen. Go Gators.

[Bubble chart: draft counts by position, SEC teams only]

As always, if you see any data errors, let me know so I can fix them.

Recruit Star Rankings and the NFL Draft: A Quantitative Perspective.

In the never-ending war between the ‘Star Gazers’ and ‘Stars Don’t Matter’ crowds, one weapon often used by the Gazers is the NFL draft. The general claim is that the more stars a recruit has coming out of high school, the more likely he is to be drafted. Given that the draft is a validation of a player’s talent (and development), this argument is logical. There may be skepticism about the scouting services’ ability to actually evaluate talent. There is, however, considerably less skepticism about an NFL scout’s ability to scout talent (yes, there are misses, but for argument’s sake, we should agree that of all those evaluating NFL-level talent, NFL scouts are the best at it).

In this analysis, I looked at 7 recent years of roughly the top 1000 recruits from consecutive classes and tracked their path to the NFL. Obviously, not all of them made it. However, many did. Here is a breakdown of how the data shook out. (Of note, part 2 is done and located here: https://thefaircatch.com/2019/12/28/composite-recruit-ratings-stars-and-the-nfl-draft-part-2/ )

Recruit Data

I collected roughly the top 1000 recruits from the Composite Ratings for each class from 2009 to 2015. I started with 2009 because there was a sharp incline in average recruit ratings for the top 1000 at about that time. I wrote about that here:

https://thefaircatch.com/2019/07/04/the-changing-baseline-of-composite-college-football-recruits/

I capped the classes at 2015, as the majority of those recruits have had the opportunity to get drafted at this point (those seniors were eligible for the 2019 NFL draft). Of course, there are going to be exceptions, such as 2015 recruits who were granted redshirts and haven’t declared for the draft yet, but these are assumed to be very low in number and they wouldn’t move the needle much at all.

NFL Data

This was slightly tricky. The data used was the NFL draft data obtained from Pro Football Reference (excellent site) at https://www.pro-football-reference.com/draft/

I used the draft data from 2012* through 2019. *The 2012 draft data included only the juniors who were drafted from the 2009 class of recruits. By doing so, I was able to accurately capture the number of top 1000 recruits who were drafted from the 2009 class.

The number of players drafted over the 7-year span was 1568 (32 teams x 7 rounds x 7 years).

Data Analysis

There was a total of 6877 top recruits included in the analysis. This represents 98% of the top recruits for each cycle over the 7-year period (some records didn’t scrape accurately from the web, something I will go back and look at, but the missing data is not at all proportionately significant). An overview of the top 1000, broken down:

[Table: star breakdown of the top 1000 recruits]

Here is how the draft data worked out:

[Table: drafted counts by star rating]

In terms of percentages:

[Table: draft percentages by star rating]

The above table shows that 14% of the total recruits from the top 1000 were drafted: 58% of the 5-stars, 21% of the 4-stars, and 9% of the 3-stars.

[Table: star breakdown of drafted players]

The small table above shows that, of the players drafted over the analyzed period, 61% were among the top 1000 recruits (964/1568 = 61.47%). 9% were 5-stars, 31% were 4-stars, and 25% were 3-stars.

It’s easy to see that, in terms of percentages, 5-stars are overrepresented in the draft among the top 1000 recruits: they make up 3.4% of the top 1000 recruits but 9% of those recruits drafted. 4-stars and 3-stars are underrepresented. 4-stars make up 41% of the top 1000 but only 31% of those players drafted; 3-stars are at 66% and 25%.
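One compact way to see the over/underrepresentation is the ratio of each group’s share of drafted players to its share of the top 1000, using the percentages above:

```python
# representation ratio = share of drafted players / share of the top 1000
shares = {  # (pct of top 1000 recruits, pct of drafted players), per above
    "5-star": (3.4, 9),
    "4-star": (41, 31),
    "3-star": (66, 25),
}
for stars, (recruit_pct, draft_pct) in shares.items():
    ratio = draft_pct / recruit_pct
    label = "over" if ratio > 1 else "under"
    print(f"{stars}: {ratio:.2f}x ({label}represented)")  # 5-stars: ~2.6x
```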

Further Analysis

To nail down the value of these findings, I conducted an analysis to see if, categorically, star ranking was related to the round in which a player was drafted.

The first contingency table shows the theoretically expected proportions of each star ranking drafted by round:

[Table: expected proportions by star rating and round]

The next table shows the actual proportions:

[Table: actual proportions by star rating and round]

And this 3rd table shows the values of actual proportions relative to expected with heat-mapping:

[Table: actual minus expected counts, heat-mapped]

The table clearly shows that the difference between the expected count of round-one 5-stars and the actual count is significant (of note, a chi-square test of independence confirmed this, with p < .001; Cramer’s V of 0.164 indicates a relatively small effect size). The numbers in the boxes are the actual counts minus the expected counts. Interestingly, each star rank appears to scale down according to the round drafted: 5-stars are overrepresented in the early rounds, 4-stars in the middle rounds, and 3-stars in the late rounds, with the inverse holding fairly constant as well. Essentially, it makes sense that if the 5-stars are getting drafted in the early rounds, they will be underrepresented in the late rounds.
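For completeness, here is a sketch of how that test runs in scipy. The contingency table below is made-up dummy counts, not the real stars-by-round data:

```python
# chi-square test of independence on a dummy stars-by-round count table
import numpy as np
from scipy import stats

counts = np.array([  # rows: 5/4/3-star; columns: rounds 1-7 (dummy numbers)
    [45, 20, 10,  5,  4,  3,  2],
    [80, 55, 45, 40, 35, 25, 20],
    [25, 30, 35, 40, 45, 30, 35],
])

chi2, p, dof, expected = stats.chi2_contingency(counts)

# Cramer's V: sqrt(chi2 / (n * (min(rows, cols) - 1))) as the effect size
n = counts.sum()
v = np.sqrt(chi2 / (n * (min(counts.shape) - 1)))

print(f"chi2 = {chi2:.1f}, p = {p:.4g}, Cramer's V = {v:.3f}")
print(np.round(counts - expected, 1))  # actual minus expected, per the table
```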

I’m still working on this, so if you see any errors, please let me know. I don’t expect this to end the arguments in the Gazers vs Stars Don’t Matter war. But to me, it is undeniable that being elite in high school ultimately improves a player’s chances of getting drafted, and of getting drafted earlier.