Florida Gators Blog

10-year review: A look at the Big 3.

To kick off the 150th season of college football, Florida played a week 0 game against old rival the Miami Hurricanes. Florida won 24-20, which is great. This season is a rarity for Florida; they get to face both FSU and Miami, just like the old days. To commemorate it, I decided to look at the state of each program over the last decade. I was curious how differently each team has performed, both in recruiting and on the field, relative to the others. Of course, they’ve all had their ups and downs since 2009. But have they been that much different overall? Let’s see what the numbers show us…

Recruiting

I first wanted to see how each of the teams fared in recruiting since 2009. I picked 2009 as a starting point because there was a sharp change in recruiting at that time. I don’t know exactly what changed, but I do know that the average rating for the top 1000 recruits went up significantly that year and has maintained a fairly steady climb. Check out https://thefaircatch.com/2019/07/04/the-changing-baseline-of-composite-college-football-recruits/ for a detailed outline.

Anyway, the chart below shows the 4-year moving average of each team’s composite recruiting rating, with heat mapping applied.

[Chart: 4-year moving averages of composite recruiting ratings for the Big 3, heat-mapped]

A quick glance at the colors shows that Florida was the strongest up until 2014 when FSU surpassed them. Miami has been bringing up the rear for the entire cycle but is improving. I also looked at the data with the numbers standardized to get an idea of what degree of separation there was between the teams.

[Table: standardized 4-year recruiting averages for the Big 3]

Standardized scores set the group average to zero, so each value shows how far above or below the overall mean (across all schools and all years) a given 4-year average sits, in standard deviations. For instance, in 2009, the Gators were 0.49 standard deviations above the group mean, while FSU (-0.34) and Miami (-0.21) were slightly below it. This makes it easy to see that the 2012 Florida team held the best 4-year average of any squad included, at 1.63 standard deviations above the mean. Miami’s 2013 team took the honors (?) for the worst class at -1.67. FSU’s 2018 team had their best 4-year average of the span (and were rewarded with a 5-7 record…).
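For readers who want to reproduce this kind of table, here is a minimal sketch of the rolling-average-plus-standardization step in pandas. The ratings below are made up for illustration; only the method (a 4-year moving average, then z-scores against the pooled mean) reflects what was done above.

```python
import pandas as pd

# Made-up composite class ratings by year; only the method mirrors the
# charts above (4-year moving average, then pooled z-scores)
ratings = pd.DataFrame({
    "Florida": [0.9100, 0.9050, 0.9120, 0.9200, 0.9150, 0.9010],
    "FSU":     [0.8900, 0.9000, 0.9150, 0.9250, 0.9300, 0.9280],
    "Miami":   [0.8800, 0.8850, 0.8700, 0.8900, 0.8950, 0.9000],
}, index=range(2009, 2015))

# 4-year moving average of each school's class rating
rolling = ratings.rolling(window=4).mean()

# Standardize against the pooled mean/std of every school-year, so zero
# is the group average and each value is in standard deviations from it
pooled = rolling.stack()
standardized = (rolling - pooled.mean()) / pooled.std()
print(standardized.round(2))
```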

Winning

I then looked at how each team did in terms of win percentage over the same period. 2009 was Tim Tebow’s final year at Florida, and UF’s decline after that is apparent. The chart below shows each school’s win percentage for each season.

[Chart: win percentage by season for the Big 3, heat-mapped]

Again, the heat mapping helps show the degree of separation. The 2013 ‘Noles had the best season of the group, going undefeated and picking up a Natty. Painfully, UF had the worst season of the group that same year, winning just 33% of their games (4-8). Here is a look at that data standardized as before:

[Chart: standardized win percentages for the Big 3]

Recruiting and Winning

Next, I wanted to see whether each team’s ups and downs in recruiting tracked with its ups and downs in winning. Intuitively, you would expect them to. However, each of the teams fared differently in how its wins related to its recruiting.

Miami Hurricanes:

[Chart: Miami’s standardized win percentage and recruiting rating, 2009-2018]

In this graph (and the ones that follow), the horizontal axis with the numbers 1 through 10 (one for each year) sits just below zero, which again marks the standardized group average for both wins and recruiting rating. Miami’s recruiting (blue line) was consistently below that average, without much fluctuation in their win percentage. If you look at the 5th season (2013), their recruiting was at its lowest, but wins were actually up. Miami’s winning rate is also on the climb.

Florida State Seminoles:

The ‘Noles started the period with average winning and below-average recruiting. However, their recruiting has sky-rocketed since 2011. Florida State also experienced a rise in winning percentage while they had Jameis Winston, but they haven’t been able to maintain winning at a rate commensurate with their recruiting levels.

[Chart: FSU’s standardized win percentage and recruiting rating, 2009-2018]

The disparity between FSU’s 4-year moving average in recruit rating and win percentage in 2018 is significant.

Florida Gators:

Florida has had a precipitous and well-documented drop in recruiting since the days when Will Muschamp was landing top-5 classes. Those classes, however, did not perform well in terms of winning. Florida has shown some recovery in both recruiting and winning in the first year under Dan Mullen.

[Chart: Florida’s standardized win percentage and recruiting rating, 2009-2018]

Florida is the only team of the three trending upward in both recruiting and winning as of 2018. This is encouraging; if it continues, it will put them in the top position among the Big 3. There is still work to do beyond that, but that is for another analysis. For now, let’s hope Mullen can keep things going in the right direction. Go Gators.

People owe Feleipe Franks an apology: An analysis of week 1 QB performances

Florida had the benefit, and the curse, of playing in the first game of the year in week 0. After Florida’s win over the Miami Hurricanes, Gators QB Feleipe Franks was widely criticized. Some of the criticism was valid, but a lot of it was excessive and personal. With everyone else’s week one in the books (except the Labor Day Monday game), I decided to see how Franks’ performance actually compared to that of other QBs who faced Power 5 opponents.

Taking the passer-rating rankings from https://www.sports-reference.com/cfb/years/2019-passing.html, I used passer rating as the baseline metric for each quarterback’s week one performance. Franks ranked 44th overall.

Here are the top 50 performers:

[Table: top 50 week one passer ratings]

As we know, however, a QB’s performance is heavily influenced by the opposing defense. To account for this variable, I used the 2018 team defensive rankings from https://www.sports-reference.com/cfb/years/2018-team-defense.html

Here are the top 20 teams from last year (Power 5):

[Table: top 20 Power 5 defenses from 2018]

The P5_Re-Rank column is the new ranking once non-P5 teams were removed. These teams needed to be re-ranked because non-P5 teams play easier schedules, so their defensive rankings are not a reliable indicator of how good the defense actually is. Since the quality of the defense is a key measure in this analysis, I could not have this metric skewed.
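For the curious, here is a minimal sketch of that filter-and-re-rank step in pandas. The rows and the column names (including the P5 flag) are assumptions for illustration, not the actual sports-reference export.

```python
import pandas as pd

# Made-up slice of the 2018 defensive rankings; column names and the
# P5 flag are assumptions for illustration
defense = pd.DataFrame({
    "Team": ["Clemson", "Fresno State", "Mississippi State", "Miami (FL)", "Michigan"],
    "Overall_Rank": [1, 2, 3, 4, 5],
    "P5": [True, False, True, True, True],
})

# Drop non-P5 teams, then re-rank the survivors so the best remaining
# defense is 1, the next is 2, and so on
p5 = defense[defense["P5"]].copy()
p5["P5_Re_Rank"] = p5["Overall_Rank"].rank(method="first").astype(int)
print(p5)
```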

Then, each QB who played a P5 opponent in week one was extracted from the overall list, and his week one ranking was re-computed within just that group. An initial analysis made it clear that QBs who faced non-P5 opponents did better than those who faced P5 opponents.

[Chart: week one passer rating vs. P5 and non-P5 opponents]

That point is easy to see in the chart above. The group to the left on the horizontal (x) axis faced non-P5 opponents, and they averaged a higher passer rating than the QBs who faced P5 opponents. Ironically, the worst day of all belonged to a 5-star recruit (Hunter Johnson of Northwestern, against Stanford).

I also wanted to see if a QB’s recruiting rating coming out of high school was predictive of success in week one, so this metric was included as well. With that, the variables were set: I was looking at how well each QB did in week one while controlling for the quality of the defense he faced, with recruit rating thrown in to see if it had any predictive power.

There were two important assumptions: that Power 5 defenses are generally better than non-P5 defenses, and that last year’s defensive rankings are indicative of this year’s defensive strength. Of course there is fluctuation, but these assumptions are necessary to quantify the level of opposition QBs faced in week one. On to the findings…

Opponent Strength

The first look was at whether opponent defensive strength (stored as the variable P5_order) was correlated with QB performance. In the subsequent regression analysis, it was statistically significant (p = 0.04).

[Chart: 2018 defensive rank vs. week one passer rating]

As the chart shows, the higher (worse) a defense was ranked in 2018, the better QBs performed against it. Though far from a perfect correlation, the relationship was strong enough, and statistically supported, to apply to the analysis.
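For anyone wanting to check a relationship like this themselves, an ordinary least-squares fit does the job; here is a minimal sketch with made-up points (the p-value above came from the real data, not from this toy example).

```python
import numpy as np
from scipy import stats

# Made-up points: opponent's 2018 defensive rank (higher = worse) and
# the QB's week-one passer rating against that defense
def_rank = np.array([5, 12, 20, 33, 41, 48, 55, 60])
passer_rating = np.array([110.0, 125.4, 131.2, 150.8, 148.3, 166.1, 172.9, 158.4])

# Simple linear regression: slope, correlation, and p-value in one call
result = stats.linregress(def_rank, passer_rating)
print(f"slope={result.slope:.2f}, r={result.rvalue:.2f}, p={result.pvalue:.4f}")
```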

Recruit Rating (RR)

As stated, this one was more of a curiosity. What I found was that a QB’s composite rating was significantly correlated with performance against P5 defenses in week one (p = 0.04).

[Chart: composite recruit rating vs. week one passer rating]

As this scatterplot shows, the higher a QB was rated, the more likely he was to have success in week one. That little dot in the lower right-hand corner is the aforementioned Hunter Johnson. Of note, if a player was unrated, I assigned him a .7900 rating and 2 stars; that is why you see the vertical line of players on the left.
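That imputation rule is a one-liner in pandas. Here is a sketch with made-up rows; the .7900 rating and 2 stars are the actual values used above, while the column names are assumptions.

```python
import numpy as np
import pandas as pd

# Made-up QB rows; NaN marks a player with no composite rating
qbs = pd.DataFrame({
    "Player": ["QB A", "QB B", "QB C"],
    "rating": [0.9912, np.nan, 0.8534],
    "stars":  [5, np.nan, 4],
})

# Unrated players get a floor of .7900 and 2 stars, per the note above
qbs["rating"] = qbs["rating"].fillna(0.7900)
qbs["stars"] = qbs["stars"].fillna(2).astype(int)
print(qbs)
```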

Feleipe Franks

After all was said and done, Franks ranked number one overall when controlling for the strength of the opposing defense faced in week one. When adjusting for P5 opponents, Franks had the 8th-best performance while facing the 13th-best defense. I simply added these two rank scores together, giving a score of 21 points (fewer points are better, since a numerically lower rank is a better rank).
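Here is a minimal sketch of that scoring, using the top three rows of the table below; the column names match the table, and everything else is just pandas housekeeping.

```python
import pandas as pd

# Two ranks per QB: the strength of the defense he faced and his
# P5-adjusted performance rank (lower is better on both)
qbs = pd.DataFrame({
    "Player": ["Feleipe Franks", "Jarren Williams", "Levi Lewis"],
    "Opp_def_Rank": [13, 14, 2],
    "QB_Rank_Adj": [8, 11, 27],
})

# A smaller sum means the QB played well against a tough defense
qbs["Score"] = qbs["Opp_def_Rank"] + qbs["QB_Rank_Adj"]
qbs["Final_Rank"] = qbs["Score"].rank(method="min").astype(int)
print(qbs.sort_values("Score"))
```

Here are the final standings: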

| Player | School | Opponent | Opp_def_Rank | QB_Rank_Adj | Score | Final Rank |
| --- | --- | --- | --- | --- | --- | --- |
| Feleipe Franks | Florida | Miami (FL) | 13 | 8 | 21 | 1 |
| Jarren Williams | Miami (FL) | Florida | 14 | 11 | 25 | 2 |
| Levi Lewis | Louisiana | Mississippi State | 2 | 27 | 29 | 3 |
| Tyler Huntley | Utah | BYU | 18 | 13 | 31 | 4 |
| K.J. Costello | Stanford | Northwestern | 24 | 7 | 31 | 4 |
| Zach Smith | Tulsa | Michigan State | 5 | 30 | 35 | 6 |
| Sam Howell | North Carolina | South Carolina | 38 | 3 | 41 | 7 |
| Tua Tagovailoa | Alabama | Duke | 41 | 1 | 42 | 8 |
| Ryan Willis | Virginia Tech | Boston College | 32 | 12 | 44 | 9 |
| Quentin Harris | Duke | Alabama | 7 | 40 | 47 | 10 |
| Riley Neal | Vanderbilt | Georgia | 10 | 38 | 48 | 11 |
| J’mar Smith | Louisiana Tech | Texas | 33 | 16 | 49 | 12 |
| Jake Fromm | Georgia | Vanderbilt | 35 | 14 | 49 | 12 |
| Colin Hill | Colorado State | Colorado | 40 | 10 | 50 | 14 |
| Kenny Pickett | Pitt | Virginia | 15 | 36 | 51 | 15 |
| Anthony Brown | Boston College | Virginia Tech | 48 | 4 | 52 | 16 |
| Woody Barrett | Kent State | Arizona State | 29 | 25 | 54 | 17 |
| Chris Robison | Florida Atlantic | Ohio State | 31 | 26 | 57 | 18 |
| Josh Adkins | New Mexico State | Washington State | 25 | 32 | 57 | 18 |
| Gresch Jensen | Texas State | Texas A&M | 26 | 31 | 57 | 18 |
| Drew Plitt | Ball State | Indiana | 45 | 15 | 60 | 21 |
| Cole McDonald | Hawaii | Arizona | 55 | 6 | 61 | 22 |
| Bryce Perkins | Virginia | Pitt | 42 | 21 | 63 | 23 |
| Desmond Ridder | Cincinnati | UCLA | 58 | 5 | 63 | 23 |
| Hunter Johnson | Northwestern | Stanford | 22 | 42 | 64 | 26 |
| Jorge Reyna | Fresno State | USC | 37 | 28 | 65 | 27 |
| Spencer Sanders | Oklahoma State | Oregon State | 63 | 2 | 65 | 27 |
| Dan Ellington | Georgia State | Tennessee | 43 | 23 | 66 | 29 |
| Jordan Love | Utah State | Wake Forest | 57 | 9 | 66 | 29 |
| Carson Strong | Nevada | Purdue | 47 | 20 | 67 | 31 |
| Sean Chambers | Wyoming | Missouri | 30 | 37 | 67 | 31 |
| Tyler Vitt | Texas State | Texas A&M | 26 | 41 | 67 | 31 |
| Hank Bachmeier | Boise State | Florida State | 52 | 18 | 70 | 34 |
| Stephen Calvert | Liberty | Syracuse | 36 | 34 | 70 | 34 |
| Jake Luton | Oregon State | Oklahoma State | 54 | 17 | 71 | 36 |
| Cephus Johnson | South Alabama | Nebraska | 50 | 24 | 74 | 37 |
| D’Eriq King | Houston | Oklahoma | 56 | 19 | 75 | 38 |
| Randall West | Massachusetts | Rutgers | 51 | 33 | 84 | 39 |
| Brady White | Memphis | Ole Miss | 61 | 29 | 90 | 40 |
| Jake Bentley | South Carolina | North Carolina | 59 | 35 | 94 | 41 |
| Kato Nelson | Akron | Illinois | 62 | 39 | 101 | 42 |

Conclusion

This analysis does not prove, or even claim, that Franks is better than any other QB, or that we can draw conclusions from one game. The aim was to investigate the validity of the prevailing narrative regarding Franks’ allegedly “horrendous” opening performance. The findings strongly contradict that narrative and, at a minimum, offer a caution against drawing hard-and-fast conclusions from small sample sizes.

Methodology Note:

Each player was assigned a random ID number, and the analysis was initially conducted using only ID numbers to avoid potentially biasing the outcome. Correlational analyses were conducted to validate the inclusion of the predictor variables but did not influence ranking or scoring. The intention is to conduct this analysis weekly. As such, going forward, raw sums of rankings will not be used; standardized scores will be used instead to ensure equal weighting of the variables.
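Here is a minimal sketch of that planned change, with made-up numbers: z-score each variable before summing, so that neither rank dominates the composite.

```python
import pandas as pd

# Made-up rank columns for five QBs
df = pd.DataFrame({
    "Opp_def_Rank": [13, 14, 2, 18, 24],
    "QB_Rank_Adj":  [8, 11, 27, 13, 7],
})

# Standardize each column, then sum: an equal-weight composite in which
# a one-standard-deviation move counts the same for either variable
z = (df - df.mean()) / df.std()
df["Composite"] = z.sum(axis=1)
print(df.sort_values("Composite"))
```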

The Changing Baseline of Composite College Football Recruits

The recruiting rankings, led by the Composite rating service, are likely evolving their analytical techniques. If so, there should be identifiable changes in their output over time. I took the top 1000 recruits from each year, 2005 through 2020, and averaged their ratings. I then standardized those averages to see if any significant movement has occurred over time.

[Chart: raw average rating of the top 1000 recruits by year]

The above chart shows the raw scores for each year. It is easy to see there was a fairly sharp upward trend in average ratings from 2005 to 2009. From there, it looks like things have leveled off for the most part.

[Chart: standardized average ratings by year]

When the scores are standardized, we can see just how sharp the rise was. It also looks like things have generally trended upward since 2017. Because there is some rise, it may be wise to take these baseline changes into consideration when comparing classes over time. That said, the effect does not look too dramatic.

[Table: raw and standardized average ratings, with group means and standard deviations highlighted]

The above table shows the raw and standardized scores. The numbers in the bright green boxes are the group average and standard deviations. All in all, the top 1000 average out fairly consistently, though there is definitely an overall upward trend.

When we look more deeply and parse the data out by position, the trends stay the same.

[Table: average ratings by position]

In this table, we can easily see that, with the exception of center and fullback, all position groups have trended toward higher average ratings since 2009. Something must have happened in 2009 that led to a change in how recruits were rated; every look at the data so far shows a sharp climb from that point on. The trend continues when the positions are combined into position groups.

The table below shows the average rating for top 1000 recruits by position group. The offensive skill group consists of QB, RB, APB, WR, and TE. The others are self-explanatory.

[Table: raw average ratings by position group]

When we standardize these scores and apply heat mapping, the contrast is again clear: from 2009 on, there was a sharp rise.

[Table: standardized average ratings by position group, heat-mapped]

Since there was a clear delineation in ratings from 2009 on, I charted the data without the years prior. There is a bit of variance in that window: 2018 and 2019 were both well above the group average, and 2011 was far below. So far, 2020 appears to be reverting to the mean.

[Chart: standardized average ratings, 2009-2020]

The key takeaway is that when comparing class ratings over time, it might be a good idea to control for rating inflation. This is easy to do: standardize each year’s data and work with the standardized values rather than the raw ones.
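A minimal sketch of that adjustment in pandas, with made-up rows; the point is the groupby-transform, which scores each recruit against his own class year.

```python
import pandas as pd

# Made-up long-format data: one row per recruit, with class year and rating
recruits = pd.DataFrame({
    "year":   [2009, 2009, 2009, 2019, 2019, 2019],
    "rating": [0.8900, 0.9100, 0.9600, 0.9000, 0.9250, 0.9700],
})

# Standardize within each class year, so a recruit's z-score reflects
# his standing relative to his own class rather than the raw scale
recruits["z"] = recruits.groupby("year")["rating"].transform(
    lambda s: (s - s.mean()) / s.std()
)
print(recruits)
```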

 

Revisiting Stars and All American Achievement

I’ve heard/read that 5-star recruits are much more likely to become All-Americans in college football than recruits with lower star designations. I don’t doubt that has been historically true, but I was curious whether it still holds today. Short answer: it does.

I took roughly the top 1000 rated recruits (Composite) from 2006 through 2017 and compiled their recruit rating and star designation. I then pulled the All-American teams from 2010 through 2018 from https://www.sports-reference.com/cfb/.

Out of 11,843 recruits, here is what I got:

| Stars | 5 | 4 | 3 |
| --- | --- | --- | --- |
| Count | 397 | 3432 | 8014 |
| Percent | 3% | 29% | 68% |

So we see that 5-stars make up 3% of this sample, 4-stars 29%, and 3-stars 68%.

Then I tallied up the counts for All-Americans. Kickers and Punters were removed, as they can’t be ranked higher than 3-stars in the composite rankings for some silly reason.

| Stars | 5 | 4 | 3 | 2 | Total |
| --- | --- | --- | --- | --- | --- |
| Count | 36 | 65 | 68 | 48 | 217 |
| Percent | 17% | 30% | 31% | 22% | 100% |

We can see here that, relative to their share of the sample, 5-stars are over-represented, 4-stars are about proportional, and 3-stars are under-represented. 2-stars are probably under-represented as well, but since my comparison population didn’t include 2-stars, I don’t know what percentage of all players they represent.

Here is how the All-Americans break down by position and stars:

[Table: All-Americans by position and star rating]

When looking at relative success rates, 5-stars come out ahead: of the 397 5-stars in the sample, 36 were All-Americans. But wait! What if a player was an All-American more than once? That happened 15 times. One was a punter (Tom Hackett), so that record was removed. Of the 14 remaining 2x All-Americans, 2 were 5-stars, 4 were 4-stars, 7 were 3-stars, and one was a 2-star*. By position, there were 2 WR, 3 LB, 2 RB, 3 OL, 3 DL, and 1 DB.

Adjusted for duplicates, the table now looks like this:

[Table: All-Americans by position and star rating, duplicates removed]

| Stars | 5 | 4 | 3 | 2 | Total |
| --- | --- | --- | --- | --- | --- |
| Count | 34 | 61 | 61 | 47 | 203 |
| Percent | 17% | 30% | 30% | 23% | 100% |

 

As we can see, removing duplicates doesn’t change the takeaway. If you break down the relative success rates for players from the sample, here is what you get:

| Stars | 5 | 4 | 3 |
| --- | --- | --- | --- |
| Representation | 9% | 2% | 1% |

9% of the 5-stars from the sample went on to be All-Americans, by far the highest rate. So, yeah: 5-stars are generally more successful, if you agree that being an All-American is a metric for success.
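Those rates are straightforward to reproduce from the two tables above; here is a quick check in Python using the duplicate-adjusted counts.

```python
# Success rate = duplicate-adjusted All-Americans at a star level
# divided by recruits at that star level, from the tables above
sample = {"5-star": 397, "4-star": 3432, "3-star": 8014}
all_americans = {"5-star": 34, "4-star": 61, "3-star": 61}

for stars, n in sample.items():
    print(f"{stars}: {all_americans[stars] / n:.1%}")
# 5-star: 8.6%, 4-star: 1.8%, 3-star: 0.8%, rounding to the
# 9% / 2% / 1% shown in the representation table
```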

Breaking down the size differences among college football recruits by star rankings

As part of the data gathered for one of the previous analyses, I ended up with some information on the size differences among the three star-ranking categories of college football recruits. The sample size is 4,964 and covers the top 1000 or so recruits from each year, 2015 to 2020.

[Table: recruit counts by position group and star rating]

[Table: average height by position group and star rating]

[Table: average weight by position group and star rating]

The grey columns on the right of the height and weight tables are the standard deviations. The height and weight scores are heat-mapped: darker green is a higher score, yellow is lower. All in all, it is easy to see that higher-ranked kids are usually bigger in almost every position group.

State Matters: College Football Recruit ratings

I analyzed the Composite recruiting data from 2015 through the current 2020 rankings across several dimensions: position, height, weight, home state, and star ranking. Punters, kickers, and fullbacks were excluded; punters and kickers cannot have more than 3 stars, and there are very few fullbacks, so why bother with them. Records with incomplete or missing data were removed; the sample (N = 4,964) was large enough not to be impacted by the removals. Also, each year was analyzed for the top 1000 recruits, so not all of the 3-stars were included. *Data from the 2019 class was corrupted, so only the top 50 players from that class were included. I will update once I fix the issue.
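A minimal sketch of that cleaning pass is below; the file name and column names are assumptions for illustration, not the actual dataset.

```python
import pandas as pd

# Hypothetical file and column names for the 2015-2020 Composite pull
df = pd.read_csv("composite_2015_2020.csv")

# Drop punters, kickers, and fullbacks, then any incomplete records
df = df[~df["position"].isin(["P", "K", "FB"])]
df = df.dropna(subset=["position", "height", "weight", "state", "stars", "rating"])
print(len(df))  # N = 4964 in the actual sample
```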

Here are some insights I found.

First, the talent, if you go by the Composite rankings, is certainly not evenly distributed among the states. I took some states with varying total numbers of recruits and applied heat mapping to allow for contrast, then mapped each segment by state.

[Table: recruit counts by state]

[Table: share of 3-stars by state]

[Table: share of all recruits by state]

[Table: share of 4-stars by state]

[Table: share of 5-stars by state]

We can easily see that, in this sample, California and Georgia are overrepresented in 5-star players. Cali has 10% of the overall number of top players but 14% of the 5-stars; Georgia is the same. Georgia, however, has only 9% of the 4-stars, while Cali has 11%. This is even more marked given Georgia’s overall population (credit: Wikipedia):

[Table: state populations]

So, are Cali and Georgia kids bigger than other states’ ‘croots? Nope.

[Tables: standardized average weight by position group and state]

When the average weight for each position is standardized by state, Cali and Jawja aren’t necessarily putting out heavier players. (I was too lazy at this point to do the same for height; maybe I’ll add that later.) The red squares are cases where the average for that group is lower than the overall group average (all states combined); yellow is above average. Out of the 17 position groups, Cali kids were lighter than average in 13 of them; Georgia kids were lighter in 7. Yes, this is just weight, and weight certainly isn’t a measure of ability, but it can be considered a measure of physical development to some degree.
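A minimal sketch of that standardization, with made-up rows: the pivot averages weight per state within each position, and each position row is then z-scored across states.

```python
import pandas as pd

# Made-up recruit records: home state, position group, weight (lbs)
recruits = pd.DataFrame({
    "state":    ["CA", "CA", "GA", "GA", "TX", "TX"],
    "position": ["WR", "OT", "WR", "OT", "WR", "OT"],
    "weight":   [185, 295, 192, 305, 190, 310],
})

# Average weight per state within each position, then standardize each
# position row across states: negative cells read "lighter than average"
avg = recruits.pivot_table(index="position", columns="state", values="weight")
z = avg.sub(avg.mean(axis=1), axis=0).div(avg.std(axis=1), axis=0)
print(z.round(2))
```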

Is there a bias toward these states’ recruits? I don’t know, but there is certainly some level of disproportionality, for some reason.

So, kid… you want to be a 5-star recruit?

 

Great. Then don’t play these positions:

Safety (7)

Athlete (6)

Dual-threat QB (6)

Inside Linebacker (5)

Guard (4)

All-Purpose Back (3)

Center (2)

Tight End (1)

Spanning the 2015 through 2020 classes, I looked at 4,964 recruits, generally the top 1000 from each year (some were dropped for being kickers, punters, or fullbacks, or because a player’s data was incomplete or missing). The positions above produced only the number of 5-stars shown in parentheses.

Do play these positions:

Offensive Tackle (26)

Wide Receiver (23)

Defensive Tackle (21)

Cornerback (20)

Running Back (19)

Strong-side Defensive End (15)

Outside Linebacker (14)

Weak-side Defensive End (14)

Pro-Style QB (10)

Also, if you play:

RB, DT, ILB, OLB, SDE, WR, OT, CB, ATH, WDE, PRO, OG, or APB

Then be taller and heavier than everyone else. At each of these positions, 5-stars averaged greater height and weight than 4-stars and 3-stars.

If you are a CB or OC, you need to be significantly (greater than 1 standard deviation) heavier than your peers. If you’re a tight end, you need to be significantly taller and heavier (though this may not hold, since there is only 1 tight end in the 5-star sample, but whatever).