In this analysis, I looked at how SEC QBs performed in 2019 while controlling for the disparity in games played. As the table below shows, among SEC QBs with a minimum of 250 pass attempts, the number of games played varies quite a bit.
Joe Burrow obviously had a fantastic year. But to get a sense of each QB's year on a common scale, I took the average number of games played for these 10 qualifying QBs (12.1) and projected each of their statistical performances over that number of games. For my analysis, I only used the categories in gray in the above table, to avoid redundancy: completions and attempts are already captured by completion percentage ("Pct"). Y/Comp uses the completion data, so I kept that.
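The projection step above amounts to a simple per-game scaling. Here is a minimal sketch with a hypothetical stat line (the numbers are illustrative, not from the table):

```python
def project_over_games(stat_total, games_played, avg_games=12.1):
    """Scale a season total to the group's average games played (12.1)."""
    per_game = stat_total / games_played
    return round(per_game * avg_games, 1)

# Hypothetical QB: 3,000 passing yards in 10 games,
# projected to a 12.1-game pace.
print(project_over_games(3000, 10))  # 3630.0
```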
Below is a look at how each QB's projected numbers would look:
I then standardized each of the statistical categories except completions, as I no longer needed that. Of note, standardization worked here because each category's data were approximately normally distributed. Now, the only categories I was interested in were completion percentage, yards, TDs, interceptions, and yards per completion. The standardized scores with color-scaling are below:
To see how each of the QBs did relative to their peers, I simply summed each of the standardized scores to achieve an aggregate score. I then graphed each of these to give a sense of proportion to each performance. Burrow and Tua were on a completely different level overall:
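The standardize-and-sum step can be sketched as follows, using made-up numbers for three hypothetical QBs and two categories (the real analysis used five):

```python
from statistics import mean, stdev

def zscores(values):
    """Standardize a list of values: (x - mean) / sample standard deviation."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def aggregate_scores(stat_table):
    """Sum each player's z-scores across all categories.

    stat_table: dict mapping category name -> list of values,
    with players in the same order in every list.
    """
    z_by_category = [zscores(vals) for vals in stat_table.values()]
    return [sum(player_scores) for player_scores in zip(*z_by_category)]

# Hypothetical three-QB example:
table = {"yards": [4000, 3500, 3000], "tds": [40, 30, 20]}
print(aggregate_scores(table))  # [2.0, 0.0, -2.0]
```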
Other than Tua and Burrow, only Kyle Trask and Jake Fromm had a net positive rating. Kudos to both. Below are the rankings and aggregate score for each QB:
This analysis serves to highlight the magnitude of the year Burrow had and Tua would likely have had. Furthermore, it shows that Kyle Trask, who started the year as a backup, really did have an outstanding season. To play that well with such limited experience indicates to me that he will potentially have a great year next season. Trask should be considered the SEC’s leading QB going into 2020 in my opinion. Of course, no QB performs in a vacuum, but looking at the 2019 performances from a statistical standpoint is certainly encouraging for Florida fans and possibly the Cincinnati Bengals (Burrow) and Miami Dolphins (Tua).
This shows us again how impressive Trask was in 2019. He ranked 16th nationally among P5 QBs. Of note, the aggregate score changed because it is based upon the relative national scores instead of the relative SEC scores as in the previous section. Furthermore, this table shows how Tua and Burrow were both dominant at the national level as well. Other takeaways for me were Sam Howell of North Carolina performing so well as a true freshman and Trevor Lawrence being *only* at number 10.
As always, let me know if there are any errors. Go Gators.
Using a naive Bayes machine learning model that I constructed on historical data with 71% accuracy (https://thefaircatch.com/2020/02/01/reviewing-the-recruiting-services-how-do-they-stack-up/), I put together predictions for the top-150-ranked players by recruiting services (ESPN, Rivals, 247, and the Composite). The table below shows the model's probability percentage for each player to be drafted. Of note, I'll update missing/incorrect college teams once NSD is over and everyone is settled in.
I decided to look at how each of the services and the Composite have done in predicting which players will get drafted.
I used the ratings for each of the services previously mentioned. I took the top 150 players from each service between the years 2012 and 2015. This time frame was selected because it was modern and included all draft-eligible players (of note, a few 2015 recruits, such as Gators receiver Van Jefferson, are draft-eligible this year, but that number is likely to be very low and won't impact this study).
Players that were listed among the top 150 recruits by all four services (ESPN, Rivals, 247, and the Composite) were given 4 "votes". Players listed by 3 services were given 3 votes, and so on. After that, the analytics began. Of note, the blue arrow line next to each table indicates the direction in which the heatmapping flows. So, in figure 1, the color scale is interpreted per column, as the arrow is vertical. For tables that have a horizontal arrow bar, the color scale is applied to rows.
Breaking Down Outcomes by Recruiting Service
Figure 1 shows how many players in each position group were ranked by the individual services in the data set. The math there adds up (150 players x 4 recruiting classes = 600 per service; 4 services x 600 = 2,400). Though there were 2,400 data points, many of these players were included in multiple services (more on that later). This left me with 908 different individual players overall. Figure 2 shows how many players out of each position group and service were drafted. There were 816 drafted players across the overall data set, for a group accuracy of 34%. Figure 3 below depicts the accuracy percentage.
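The accuracy figure is just drafted slots over ranked slots; a quick check of the group number:

```python
def draft_accuracy(ranked_slots, drafted_slots):
    """Share of ranked top-150 slots that produced a drafted player."""
    return round(drafted_slots / ranked_slots, 2)

# Group figures from the study: 816 drafted out of 2,400 ranked slots.
print(draft_accuracy(2400, 816))  # 0.34
```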
As we can see in figure 3, ESPN has the lowest overall accuracy rate and 247 has the highest. They are all fairly close, but ESPN is behind here. To further explore this, I created a quick chart to map out the differences in the mean (average) rate at which each service's top 150 went on to get drafted:
Put to scale, we can see that ESPN has done a relatively poor job in including players in their top 150 who would go on to get drafted. ESPN did tie with 247 for the most accuracy in predicting DBs drafted, so gotta give them that.
Breaking Down Outcomes by “Votes”
I was curious as to the variance among the services in putting different players in their top 150. I created a ‘vote’ count by simply tallying up which players were included in which service ranking. A player that was in the top 150 for all 4 services got 4 votes, a player in 3 of the 4 got 3 votes and so on. It became very clear to me upon charting the data that players that were consensus top 150 players (those with 4 votes) got drafted at a much higher rate.
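The vote tally itself can be sketched like this, with a hypothetical three-service mini-example (player names are placeholders, not real recruits):

```python
from collections import Counter

def vote_counts(service_rankings):
    """Tally how many services' top-150 lists include each player.

    service_rankings: one set of player names per service.
    """
    votes = Counter()
    for top150 in service_rankings:
        votes.update(top150)
    return dict(votes)

# Hypothetical mini-example with three services:
espn = {"Smith", "Jones"}
rivals = {"Smith", "Brown"}
s247 = {"Smith", "Jones"}
tally = vote_counts([espn, rivals, s247])
print(tally["Smith"], tally["Jones"], tally["Brown"])  # 3 2 1
```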
Figure 5 shows that if a player had 4 votes, they were drafted 45% of the time, up significantly from the group average of 34%. The more votes a player received, the more likely they were to be drafted. Interestingly enough, out of the 908 unique players in the study, 272 (30%) received only 1 vote. So, there is definitely some variance in how each service evaluates prospects.
Figures 6 and 6a above show the draft count data for each position group by the number of votes. In terms of pure numbers, defensive linemen were the most-drafted position group with 163, and tight ends had the fewest with 29. Of note, the red arrow line on the right indicates how you should interpret the color scale for the total column on the right side of the graph only.
Figure 7 above shows the relative proportions of drafted players by the number of votes and positions. It is clear that having 4 votes was a much stronger indicator of draft potential for each position group.
Charting Vote Count Accuracy
When I initially started the analysis, I charted the data and utilized a classification tree to try to see if there was any delineation in the data that would bring to light any significant groups (clusters). An initial scatter plot was pretty busy but did show some clustering in the lower left quadrant:
This scatterplot showed me that there was probably something going on with the association between being highly ranked and drafted early.
Once it was clear something was going on in terms of the vote count, as set forth above, I looked a little more into this. A scatterplot of the data as in Figure 8, but parsed out by vote count, shows just how impactful this metric is:
It is easy to see in Figure 9 that those players with 4 votes bunch up toward the higher draft picks and have overall higher counts, as we’ve seen. The lower the vote count, the further to the right the data drifts.
The horizontal ("x") axis, labeled 'Avg Rank', is the recruit's rank averaged across all of the services in which he was ranked within the top 150. The vertical ("y") axis, labeled 'Pick', is where that recruit was ultimately drafted overall (e.g., 35th pick in the draft). The different colored numbers show how many votes that recruit received coming out of high school.
The Florida Gators
Of the current UF commits in any service's top 150, here is how many votes each has and from which services:
Figure 10 shows that Gervon Dexter, Xzavier Henderson, and Jahari Rogers are each ranked in the top 150 by all four services. Derek Wingo has 3 votes. Ethan Pouncey and Issiah Walker have 2 votes and Antwaun Powell and Jaquavion Fraziars each have one vote.
One Step Further
I have continued to play around with the data. I built a machine-learning algorithm to see if the data is helpful in predicting which players will get drafted and which players will not. I used a naive Bayes classifier on binary outcomes (UD = 'Undrafted', Drafted = well, drafted). The model was impressive at predicting who doesn't get drafted but less helpful at predicting who will. However, it was an overall accurate model at 71%, which is pretty cool.
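The actual model was built on the full feature set (see the linked post), but the idea can be sketched with a tiny categorical naive Bayes on the vote count alone. The training data below is a made-up toy set, not the study data:

```python
from collections import Counter

def train_nb(samples):
    """Fit a tiny categorical naive Bayes: P(class) and P(votes | class).

    samples: list of (votes, label) pairs, label in {"Drafted", "UD"}.
    """
    class_counts = Counter(label for _, label in samples)
    feat_counts = Counter(samples)  # counts (votes, label) pairs
    priors = {c: n / len(samples) for c, n in class_counts.items()}

    def predict(votes):
        scores = {}
        for c in priors:
            # Laplace smoothing over the 4 possible vote values (1-4)
            likelihood = (feat_counts[(votes, c)] + 1) / (class_counts[c] + 4)
            scores[c] = priors[c] * likelihood
        total = sum(scores.values())
        return {c: s / total for c, s in scores.items()}

    return predict

# Toy data: 4-vote players drafted more often than 1-vote players.
data = [(4, "Drafted"), (4, "Drafted"), (4, "UD"),
        (1, "UD"), (1, "UD"), (1, "Drafted")]
predict = train_nb(data)
probs = predict(4)
print(probs["Drafted"] > probs["UD"])  # True
```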
The resulting confusion matrix was:
A confusion matrix is read along the diagonal: correct predictions sit on it. The model predicted 'Drafted' correctly 22 times and incorrectly 38 times. It predicted 'UD' correctly 113 times and was wrong 18 times. This makes sense when you look at the corresponding graphs. There is a lot of overlap among the variables contributing to 'Drafted' status.
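The 71% overall figure follows directly from those four cells (diagonal counts over all predictions):

```python
def accuracy_from_confusion(drafted_right, drafted_wrong, ud_right, ud_wrong):
    """Overall accuracy: correct (diagonal) counts over all predictions."""
    correct = drafted_right + ud_right
    total = drafted_right + drafted_wrong + ud_right + ud_wrong
    return correct / total

# Cells from the matrix above: 22 + 113 correct out of 191 predictions.
print(round(accuracy_from_confusion(22, 38, 113, 18), 2))  # 0.71
```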
In Figure 12 above, it is easy to see that the probability of getting drafted is much higher when 4 votes are obtained. The drafted vs undrafted lines really start to separate at about 3.5 or so. However, a large chunk of drafted players are still under 3 votes, but the disparity at the tails is marked.
With the 2020 recruiting cycle almost in the books, I reflected upon the migratory patterns of the elusive Blue-Chip Recruit (BCR) with a focus on how it relates to the state of Florida and more specifically, the University of Florida.
First up, I wanted to see how committed BCRs are distributed (recruits still uncommitted as of January 10, 2020 are excluded; data as of Christmas 2019). Here is the geographic breakdown according to the Composite ratings:
Ok, cool. Florida is doing its thing here. Next, I wanted to see how many of these commits were staying in-state or migrating elsewhere:
So, out of the 57 BCRs from the state of Florida, 31 of them have been exported (54%). How well does that compare to the other states? Let’s look:
The above map shows the percentage of BCRs exported from each state. (Of note, states with no information on them didn't produce any BCRs, whereas states with a 0.00 percent produced at least one BCR, but that recruit didn't leave the state.) Here is a table of how each state that exported a BCR breaks down:
Now, I wanted to look at how each state is importing BCRs, as just looking at exporting doesn’t provide a good understanding of the overall migratory patterns. The map below shows how many BCRs were imported by state:
And the net difference between imports and exports:
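The net figure is just a per-state subtraction of exports from imports. Florida's export count (31) is from the article; the import count here is a made-up placeholder for illustration:

```python
def net_migration(imports, exports):
    """Net BCR flow per state: positive means a net importer."""
    states = set(imports) | set(exports)
    return {s: imports.get(s, 0) - exports.get(s, 0) for s in states}

# Florida exported 31 BCRs (per the article); the import figure of 10
# is hypothetical.
print(net_migration({"Florida": 10}, {"Florida": 31}))  # {'Florida': -21}
```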
Florida had the largest number of BCRs migrate away from the state. South Carolina, Alabama, and Ohio had the largest influx of BCRs (not coincidentally the homes of Clemson, Alabama, Auburn, and Ohio State).
It is plain to see that the state of Florida is getting poached heavily from schools in other states. But does this mean UF is not handling their business or is it simply because FSU and Miami suck and once Florida fills its class, the rest of the quality BCRs are simply avoiding the Seminoles and ‘Canes? I took a quick look to see if UF is taking care of business in the state:
Out of the 26 BCRs that remained in the state, UF had the most and the highest rated on average, as the chart above shows. This is good. Now, I wanted to see if UF was holding its own against other states when it comes to recruiting the state of Florida.
All in all, UF is doing pretty well here. They are retaining a lot of highly-rated recruits. The above table shows, however, that Alabama, LSU, Clemson, Georgia, and surprisingly Nebraska are all having some success picking up good players from Florida (thanks, Bowman).
UF is also importing players at a good clip. Here is a comparison of the big 3:
Ultimately, it looks as if UF recruiting is going fine. The state of Florida produces a significant amount of football talent, as is well-documented. Schools from all over are going to get in on that. However, UF is doing its fair share to retain talent and they are also picking up good talent from out of state.
As always, if you see any errors, just let me know. I’ve also broken all of this down by position and ratings, but that will have to wait for another post. Until then, Go Gators.
In this analysis, I drilled down a level and looked at offensive points for (PF) and defensive points against (PA) and compared the season average for each SEC team relative to their offensive and defensive talent rating. The findings kinda confirm what could be seen in watching the games play out. But there were some surprising finds as well.
Offensive PF and Roster Talent (Offense Only):
This linear regression model was statistically significant and met all assumptions. However, the goal wasn’t to form a predictive model here, but instead to see where each team’s performance fell relative to their peers and talent level. From the chart above, we can see that LSU way surpassed expectations. Georgia had the biggest negative disparity in points expected (the top number next to each team logo) vs actual PPG (bottom number). Doesn’t mean they had the worst offense- that was Vandy with 16.5 PPG. It just means they were further below expectations than any other team. Florida and Auburn performed right at expected levels.
Defensive PA and Roster Talent (Defense Only):
In the above graph, we have the inverse of the offensive chart: here, a good performance is below the line. For example, Florida was expected to allow 22.59 PPG but only allowed 15.46. LSU and Alabama, the Blue-Ribbon winners on offense, allowed a few more points than one would expect given their defensive talent ratings. Arkansas just had a bad season overall. Interesting to me, Missouri outperformed expectations in both metrics.
The small sample size is highly subject to variance. PF and PA as a stand-alone metric are not likely to be sufficient to determine the overall quality of the offense or defense. Overall roster talent allows for the inclusion of players that didn’t play (redshirts, transfers, injured, etc.) to influence the expectations but not the performance.
Both regression models were fairly strong. The correlation between defensive talent and points allowed was 57%, with 32% of the variance in points against attributable to the model. Offensive talent was correlated with points scored at 69%, with 47% of the variance in points scored attributable to the model. What this means, in general, is that relative to this sample, 68% of what goes into points allowed is attributable to variables other than overall defensive roster talent, and 53% of what goes into points scored is attributable to variables other than overall offensive roster talent.
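The variance-explained numbers follow from squaring the correlation coefficient (R-squared = r^2); a quick check of both models:

```python
def variance_explained(r):
    """R-squared from a correlation coefficient r."""
    return round(r ** 2, 2)

print(variance_explained(0.57))  # 0.32 -> 32% of variance in points against
print(variance_explained(0.69))  # 0.48 -> the article's ~47%, within rounding
```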
I’ve seen some threads on a few Gators forums drumming up the idea that former Florida coach Jim McElwain’s first two years were pretty similar to current coach Dan Mullen’s. The eye test certainly doesn’t make it seem so to me. However, I wanted to take a look at some of the metrics and see if maybe there is some justification for this view. There aren’t many people on board with it from what I’ve seen, but it did get me curious as to how some of the stats may differ between the two coaches.
The general thinking of those opining that Mac’s and Mullen’s first 2 seasons are similar is that both had 10*-win seasons (assuming Mac would’ve gotten the 10th win in a canceled game). I broke down each of their first two seasons along several dimensions and charted these below:
As we can see in this bar graph, Mullen has 2 more wins, 3 fewer losses and enjoys a win percentage advantage of 11%.
Margin of Victory (MOV) and Margin of Defeat (MOD):
Here I took a look at how the teams did in victory and defeat under the respective coaches:
This chart shows that Mullen is enjoying a larger MOV and smaller MOD than Mac did in his first 2 years. Mullen is averaging 6 more points per victory and 6 fewer points in defeat.
Points For (PF) and Points Against (PA):
Both Mac and Mullen were hired, at least in part, for their offensive acumen. Both have benefited from having good defensive coordinators on staff, with Mac having Geoff Collins and Mullen with Todd Grantham. The main thing I wanted to look at here was the offensive production, but since offenses and defenses do not exist in a vacuum, and how one performs impacts the other, I included the PA data.
Out of all of the comparisons, the PF in Mullen’s favor seems to be the strongest. I wanted to see if there was a statistically significant difference here because of the disparity in PF being so large (almost 11 PPG). I found there was a significant difference with a large effect size (independent T-Test with p = 0.005, Cohen’s D with .810, assumptions of normality- Shapiro Wilk- and equality of variances- Levene’s Test- were met).
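Cohen's d is the difference in means divided by a pooled standard deviation. A minimal sketch with made-up per-game scores (illustrative only, not the actual game logs):

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d using a pooled sample standard deviation."""
    na, nb = len(a), len(b)
    pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                  / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

# Hypothetical points-for samples (illustrative only):
mullen = [35, 30, 38, 33, 36]
mac = [24, 20, 27, 22, 25]
print(cohens_d(mullen, mac) > 0.8)  # True -> conventionally a "large" effect
```

By convention, d values around 0.2, 0.5, and 0.8 are read as small, medium, and large effects, which is why the article's 0.810 counts as large.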
As we can see from this descriptive plot, Mullen’s offensive production is far superior. In this plot, the blue lines around the mean are the confidence intervals (CI). Mac’s interval runs from 18.73 to 28.38 and Mullen’s from 29.01 to 39.22. The top end of Mac’s CI is less than the lower end of Mullen’s.
This analysis does not account for strength of schedule, injuries, suspensions, etc. Any of those things could impact the overall outcome for each coach. The point here was to look a little closer at the two coaches and their performances at UF two years in. Though Mac had a decent run his first two years, his teams never looked as good (to me) as Mullen’s have. There’s nothing in this analysis that convinced me I was wrong about that, and it puts the disparity between the two into a bit clearer picture. Mullen is superior in win percentage, has, on average, a larger margin of victory, a smaller margin of defeat, significantly more points for, and ever so slightly more points against (17.73 vs 17.56).