Florida Gators Blog

Charting success rate by downs and quarters: UF vs UGA edition

I’ve been charting Florida’s play-by-play success all season. I’m using this data for an end-of-season analysis but decided to throw the UGA game on here as a preview.

These quick charts are visual indicators of how well Florida performed by down and by quarter in terms of run and pass for both offense and defense.

Success is defined as follows:

1st down- Achieve 40% of the needed yards to convert or score.

2nd down- Achieve 60% of the needed yards to convert or score.

3rd & 4th down- Achieve 100% of the yards needed to convert or score.

*Kneel-down plays are omitted. Penalties, sacks, and turnovers are not counted as either pass or run plays (these are all being charted as ‘fail’ plays in general, and not presented here, but will be presented in the final analysis at the end of the season). The point here is to chart how often an executed play achieved its objective.
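For anyone who wants to replicate the charting, here is a minimal sketch of that success rule in Python. It only illustrates the definition above, not my actual scraping/charting code, and the argument names (down, distance, yards_gained) are placeholders:

def is_success(down, distance, yards_gained):
    # Success thresholds by down: 40% of the needed yards on 1st,
    # 60% on 2nd, 100% on 3rd and 4th (a score always converts).
    thresholds = {1: 0.40, 2: 0.60, 3: 1.00, 4: 1.00}
    return yards_gained >= thresholds[down] * distance

# A 4-yard gain on 1st-and-10 clears the 4.0-yard threshold, so it counts as a success.
print(is_success(1, 10, 4))   # True
print(is_success(3, 5, 4))    # False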

If you spot any errors, please let me know. I use code to scrape the web and analyze the data, and I QA the output as best I can, but I would like to know about any errors so I can fix my code.

Florida Offense

[Chart: Florida offense success rate by down]

Rushing Success Table by Down-Florida

Down | # Rush Att. | Success | Fail | Run success rate
1 | 6 | 2 | 4 | 33%
2 | 6 | 2 | 4 | 33%
3 | 4 | 1 | 3 | 25%
4 | 0 | 0 | 0 | N/A

 

Passing Success Table by Down-Florida

Down | # Pass Att. | Success | Fail | Pass success rate
1 | 18 | 12 | 6 | 67%
2 | 8 | 3 | 5 | 38%
3 | 7 | 2 | 5 | 29%
4 | 3 | 2 | 1 | 67%

[Chart: Florida offense success rate by quarter]

Rushing Success Table by Quarter-Florida

Quarter | # Rush Att. | Success | Fail | Run success rate
1 | 4 | 1 | 3 | 25%
2 | 3 | 1 | 2 | 33%
3 | 2 | 0 | 2 | 0%
4 | 7 | 3 | 4 | 43%

Passing Success Table by Quarter-Florida

Quarter | # Pass Att. | Success | Fail | Pass success rate
1 | 6 | 3 | 3 | 50%
2 | 5 | 2 | 3 | 40%
3 | 10 | 7 | 3 | 70%
4 | 15 | 7 | 8 | 47%

 

Georgia Offense

[Chart: Georgia offense success rate by down]

 

Rushing Success Table by Down-Georgia

Down | # Rush Att. | Success | Fail | Run success rate
1 | 16 | 6 | 10 | 38%
2 | 16 | 6 | 10 | 38%
3 | 5 | 3 | 2 | 60%
4 | 0 | 0 | 0 | N/A

 

Passing Success Table by Down-Georgia

Down | # Pass Att. | Success | Fail | Pass success rate
1 | 11 | 8 | 3 | 73%
2 | 7 | 3 | 4 | 43%
3 | 13 | 9 | 4 | 69%
4 | 0 | 0 | 0 | N/A

[Chart: Georgia offense success rate by quarter]

Rushing Success Table by Quarter-Georgia

Quarter | # Rush Att. | Success | Fail | Run success rate
1 | 9 | 2 | 7 | 22%
2 | 11 | 6 | 5 | 55%
3 | 7 | 2 | 5 | 29%
4 | 10 | 5 | 5 | 50%

Passing Success Table by Quarter-Georgia

Quarter | # Pass Att. | Success | Fail | Pass success rate
1 | 9 | 5 | 4 | 56%
2 | 11 | 7 | 4 | 64%
3 | 8 | 5 | 3 | 63%
4 | 3 | 3 | 0 | 100%

 

Roster Talent and Win Percentage: The Non-Linear Relationship Between Recruiting Success and On-Field Performance in College Football

The recruitment of highly rated high school football players by college programs is big business. Major college football programs spend considerable budget in their efforts to recruit the most talented high school players. Blue-chip players, those rated by scouting services as 4- or 5-star prospects, are relatively rare. As of October 10th, 2019, there were 347 blue-chip prospects according to the Composite 247 ratings, the established industry standard for prospect scouting. There are approximately 1,006,013 high school football players in the United States (https://www.statista.com/statistics/267955/participation-in-us-high-school-football/).

Some basic math tells us that blue-chip players make up far less than 1% of all players (347 out of roughly 1,006,013, or about 0.03%). While validation for the rating methodology appears to be scant, the fact that so many big-time programs recruit these players offers logical support for the conclusion that these are indeed the better prospects. Coaches obviously want the best players, as that makes winning easier. The goal of this paper is to explore the degree to which recruiting success plays into winning on the field.

Among college football fans, it is generally presumed that there is a direct (i.e., linear) relationship between recruiting blue-chip players and winning. In popular college football blogs and multiple articles, there are countless “analyses” of how “it is all about the Jimmys and Joes and not the X’s and O’s.” Many die-hard fans appear to treat their team’s recruiting rankings as a sure-fire indicator of whether the team is good or bad, or even of whether the coach should be fired. The reality is that recruiting does not have a linear relationship to winning. But it does have a significant impact on winning, usually. This analysis found that in 3 of the 5 Power 5 (the most powerful) conferences in college football, on-roster talent was not statistically significantly correlated with winning percentage.

Overview

To conduct this study, roster talent ratings were obtained from the Composite 247 website for the top 50 most talented teams from 2016 to 2019 (as of October 25th, 2019). Each team’s corresponding win percentage was then recorded. Each team was coded according to year and talent ranking. For example, the highest-rated team from 2016 was assigned a code of 2016_1; the 30th most talented team in 2018 had a code of 2018_30. And so on. Descriptive statistics and exploratory data analysis showed some interesting things at the conference level.
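For illustration, building those codes from a table of team-seasons is a one-liner in pandas (the column names year and talent_rank are placeholders, not necessarily what my actual script uses):

import pandas as pd

# Hypothetical rows: one per team-season in the top-50 talent sample.
teams = pd.DataFrame({
    "year": [2016, 2018],
    "talent_rank": [1, 30],
})

# Codify each team as year_rank, e.g. "2016_1" or "2018_30".
teams["code"] = teams["year"].astype(str) + "_" + teams["talent_rank"].astype(str)
print(teams)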

[Table: Conference descriptive statistics, representation and mean roster talent]

What the above table shows us is the number of teams represented from each conference and their mean (average) roster talent score. So, over the last four years, the ACC, with 14 teams, has a possible 56 representatives in the data set (top 50 most talented teams over 4 years). It is represented 41 times (73%). The Big 12, with 10 teams (what does the ’12’ stand for again?), has 26 out of 40 possible representatives (65%). The Big 10 (14 teams) has 39 out of 56 (70%), and the PAC 12 (which actually has 12 teams; crazy) has 35 out of 48 (73%). The SEC has 55 out of 56 (98%). So, quantity-wise, the SEC simply has more talented teams.

The mean roster talent ratings are next. The Big 12 was the least talented group at 86.2, but virtually in a 3-way tie with the ACC and Big 10. Again, the SEC leads the way. Knowing there are some very talented teams in the other conferences (non-SEC), I wanted to look at how that talent is distributed among the conferences.

[Figure: Roster talent histograms by conference]

The above graph is a histogram for each conference. Not surprisingly, all of the other conferences are multi-modal, whereas the SEC is approximately normally distributed. What this means is that the other conferences have more than one peak and valley, whereas the SEC is generally more bell-shaped (though quite a bit flatter here; low kurtosis but decent skewness). While there are certainly ‘haves’ (Alabama, Georgia, LSU) and ‘have nots’ (Vanderbilt, Arkansas) in the SEC, the separation between the ‘haves’ and ‘have nots’ appears to be greater in the other conferences.

Regression Analyses

A regression analysis was then conducted for the entire sample. A simple linear regression found a statistically significant correlation between roster talent and win percentage (p < 0.001, R² = 0.178).

[Figure: Linear regression of roster talent vs. win percentage, full sample]

While the statistical significance indicates the correlation between roster talent and winning is unlikely to be a result of chance, the low R-squared value (a measure of effect size) indicates roster talent has a weak impact on winning percentage. It also suggests that the relationship may not be exactly linear. This was suspected to be the case, as roster talent may not have the same effect in each conference. Therefore, a polynomial regression analysis of the same data was conducted and found to be an improvement over the linear regression (p < 0.001, R² = 0.206).
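As a rough sketch of the two fits (placeholder arrays standing in for the roster-talent and win-percentage columns; this is not the exact script used here), numpy’s polyfit covers both the linear and the quadratic case. Note that polyfit only returns the fitted coefficients; the p-values quoted here came from a full regression package.

import numpy as np

# Placeholder data standing in for roster talent scores and win percentages.
talent = np.array([820.0, 860.0, 900.0, 950.0, 980.0])
win_pct = np.array([0.42, 0.55, 0.58, 0.77, 0.90])

def r_squared(y, y_hat):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Degree 1 is the simple linear regression; degree 2 is the polynomial fit.
for degree in (1, 2):
    coefs = np.polyfit(talent, win_pct, degree)
    fitted = np.polyval(coefs, talent)
    print(f"degree {degree}: R^2 = {r_squared(win_pct, fitted):.3f}")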

[Figure: Polynomial regression of roster talent vs. win percentage, full sample]

The polynomial regression was an improvement over the linear regression; however, the findings still indicate that roster talent has a weak correlation to winning on the field. At this point, it was necessary to look at the conferences individually to explore the relationship more granularly.

Conference by Conference

The below scatterplots show the same graph as above but filtered to only one conference at a time. The statistical output (significance and effect size) for each is below the graph:

[Figure: ACC roster talent vs. win percentage]

(ACC: p = 0.08, R² = 0.164)

[Figure: Big 12 roster talent vs. win percentage]

(Big 12: p = 0.308, R² = 0.147)

[Figure: Big 10 roster talent vs. win percentage]

(Big 10: p = 0.017, R² = 0.247)

[Figure: PAC 12 roster talent vs. win percentage]

(PAC 12: p = 0.386, R² = 0.092)

[Figure: SEC roster talent vs. win percentage]

(SEC: p < 0.001, R² = 0.508)

Out of the 5 conferences, roster talent was statistically significant (at the standard threshold of 0.05) only in the Big 10 (0.017) and the SEC (<0.001). The effect size for the Big 10 is 0.247, or 24.7%. For the SEC, it is 0.508, or 50.8%. This means that on-roster talent accounts for 24.7% of the variance in win percentage in the Big 10 and 50.8% in the SEC from 2016 to present (October 25th, 2019). The key takeaway is that SEC fans are justified in fretting over recruiting success much more than fans of the other conferences.

A Deeper Look at the Southeastern Conference

The findings led me to want to look deeper into what is going on in the SEC. The plot below is the same as the SEC plot above but drilled down to the team level:

[Figure: SEC roster talent vs. win percentage, by team]

What this chart shows is that SEC teams generally perform to the relative level of their talent, except for Tennessee and Florida. Tennessee generally underperforms, while Florida generally overperforms, except for 2017, when the Gators went 4-7. Kentucky’s 2018 season was a notable overachievement. An analysis was then performed to explore how each team performed relative to its roster talent level in each of the years.

[Figures: Team achievement relative to roster talent, by year]

When summing the overall achievement for each team over the study period, Florida is the most over-achieving team in the sample.

[Table: Overall achievement relative to roster talent, by team]

If there is any doubt about Florida Head Coach Dan Mullen’s ability to maximize the talent on hand, it should be erased. The Gators are winning the most relative to roster talent even though they went 4-7 in 2017. Furthermore, Mississippi State is 3rd. A quick look at the SEC scatterplot shows us that MSU was overperforming when Mullen was the coach there but has since dropped off. Florida has done the opposite.

There is no doubt that the best place to be is where Alabama is: maxed out on talent and wins, with no room left to really overachieve. After all, winning is what it is all about. And, in the SEC at least, recruiting success plays a big role in that. If you can’t recruit at an elite level, you’d better have a coach like Dan Mullen to give you a chance.

Additional:

As requested by Reddit user stevejust, I have added the team name and year to each of the conference scatterplots:

[Figure: ACC scatterplot with team names and years]

[Figure: Big 10 scatterplot with team names and years]

[Figure: Big 12 scatterplot with team names and years]

[Figure: PAC 12 scatterplot with team names and years]

*Added disclaimer:

The purpose of this study was to examine the linearity, or lack thereof, of the relationship between talent rating and winning percentage. The purpose was not to find the best possible model for predicting winning; that is a worthy and separate endeavor! The primary curiosity was whether college football fans overstate, understate, or state just perfectly their concerns about the recruiting performance of their favorite teams. The interesting finding concerned the general lack of linearity between quantified talent level and winning. I have received considerable feedback (virtually all of it positive, which is awesome) and several suggestions regarding other potential variables to consider, as well as models and methods that may be superior to or an improvement over the polynomial regression. Each of those that I have read has been, to varying degrees, perfectly legitimate and well thought out. However, it is important to emphasize that this study was not about the model of best fit; it was about the supposed linearity between talent ratings and winning. Thank you to all who have commented and provided excellent ideas for future approaches. I am learning from you and gathering great ideas. Cheers!

10-year review: A look at the Big 3.

To kick off the 150th season of college football, Florida played a week 0 game against its old rival, the Miami Hurricanes. Florida won 24-20, which is great. This season is a rarity for Florida; they get to face both FSU and Miami, just like the old days. To commemorate it, I decided to look at the state of each program over the last decade. I was curious as to how each team has done, both in recruiting and on the field, relative to the others. Of course, they’ve all had their ups and downs since 2009. But have they really been that much different overall? Let’s see what the numbers show us…

Recruiting

I first wanted to see how each of the teams fared in recruiting since 2009. I picked 2009 as a starting point because there was a sharp change in recruiting at that time. I don’t know exactly what changed, but I do know that the average rating for the top 1000 recruits went up significantly that year and has maintained a fairly steady climb. Check out https://thefaircatch.com/2019/07/04/the-changing-baseline-of-composite-college-football-recruits/ for a detailed outline.

Anyway, the chart below shows the 4-year moving average of each team’s composite recruiting ratings, with heat mapping applied.
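Before getting to the chart, the moving average itself is simple to reproduce; a minimal pandas sketch (the years and ratings values here are placeholders, not the real composite numbers):

import pandas as pd

# Placeholder yearly composite class ratings for one school, indexed by class year.
ratings = pd.Series(
    [290.0, 285.0, 300.0, 295.0, 280.0, 305.0],
    index=[2009, 2010, 2011, 2012, 2013, 2014],
)

# Trailing 4-year moving average, as used in the chart below.
print(ratings.rolling(window=4).mean())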

[Table: Big 3 recruiting ratings, 4-year moving averages with heat mapping]

A quick glance at the colors shows that Florida was the strongest up until 2014 when FSU surpassed them. Miami has been bringing up the rear for the entire cycle but is improving. I also looked at the data with the numbers standardized to get an idea of what degree of separation there was between the teams.

[Table: Big 3 recruiting ratings, standardized]

What these standardized numbers mean is that the average for the group is zero. Relative to zero, we can see how good or bad each score was, relative to all of the years for each of the schools. For instance, in 2009, the Gators were 0.49 standard deviations above the group mean, while FSU (-0.34) was slightly below it and Miami (0.21) slightly above. It makes it easy to see that the 2012 Florida team held the best 4-year average of any squad included, at 1.63 standard deviations above the mean. Miami’s 2013 team took the honors (?) for the worst class at -1.67. FSU’s 2018 team had their best 4-year average over the span (and were rewarded with a 5-7 record…).
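For reference, “standardized” here just means a z-score against the whole Big 3 group of team-seasons, along these lines (placeholder values, and I am assuming a sample standard deviation):

import numpy as np

# Placeholder 4-year moving averages for all team-seasons in the comparison group.
values = np.array([278.0, 290.0, 301.0, 265.0, 284.0, 297.0])

# z-score: how many standard deviations each value sits from the group mean.
z_scores = (values - values.mean()) / values.std(ddof=1)
print(np.round(z_scores, 2))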

Winning

I then looked at how each team did in terms of win percentage over the same period of time. 2009 was Tim Tebow’s final year at Florida, and UF’s decline after that is apparent. The chart below shows each school’s win percentage for each season.

[Table: Big 3 win percentage by season, with heat mapping]

Again, the heat mapping helps show the degree of separation. The 2013 ‘Noles had the best season of the group, going undefeated and picking up a Natty. Terribly, UF had the worst season that year as well, winning 33% of their games (4-8). Here is a look at that data standardized as before:

[Chart: Big 3 win percentage, standardized]

Recruiting and Winning

Next, I wanted to see whether each team’s ups and downs in recruiting tracked its own ups and downs in winning. Intuitively, it would make sense. However, each of the teams fared differently in regard to its wins-to-recruiting ratio.

Miami Hurricanes:

[Chart: Miami recruiting vs. winning, standardized]

In this graph (and the others to follow), the horizontal line with the numbers (1 through 10, one for each year) sits just below zero, which, again, is the standardized average for wins and recruiting rating. For Miami, recruiting (the blue line) was consistently below the group average, without much fluctuation in their win percentage. If you look at the 5th season (2013), their recruiting was at its lowest, but wins were actually up. Miami’s winning rate is also on the climb.

Florida State Seminoles:

The ‘Noles started the period with average winning and below-average recruiting. However, their recruiting has skyrocketed over the last ten years, starting in 2011. Florida State also experienced a rise in winning percentage while they had Jameis Winston. However, they haven’t been able to maintain winning at a rate commensurate with their recruiting levels.

[Chart: Florida State recruiting vs. winning, standardized]

The disparity between FSU’s 4-year moving average in recruit rating and win percentage in 2018 is significant.

Florida Gators:

Florida has had a precipitous and well-documented drop in recruiting since Will Muschamp was landing top-5 classes. However, those classes have not performed well in terms of winning. Florida has shown some recovery in both recruiting and winning in the first year under Dan Mullen.

[Chart: Florida recruiting vs. winning, standardized]

Florida is the only team of the 3 to trend upward in recruiting and winning as of 2018. This is encouraging, as it will put them in the top position among the Big 3. Beyond that, however, there is still work to do, but that is for another analysis. For now, let’s hope Mullen can keep things going in the right direction. Go Gators.

People owe an apology to Feleipe Franks. An analysis of week 1 QB performances

Florida had the benefit and curse of playing in the first game of the year in week 0. After Florida’s win over the Miami Hurricanes, Gators’ QB Feleipe Franks was widely criticized. Some of the criticism was valid, but a lot of it was excessive and personal. With everyone else’s week one in the books (except the Labor Day Monday game), I decided to see how Franks’ performance actually compared to those of other QBs who faced Power 5 opponents.

Taking the passer-rating rankings from https://www.sports-reference.com/cfb/years/2019-passing.html, I used that metric as a baseline for overall quarterback performance in week one. Franks ranked 44th overall.

Here are the top 50 performers:

[Table: Top 50 week-one passer performances]

As we know, however, a QB’s performance is highly influenced by the opposing defense. To account for this variable, I utilized the 2018 team defensive rankings taken from https://www.sports-reference.com/cfb/years/2018-team-defense.html.

Here are the top 20 teams from last year (Power 5):

[Table: Top 20 Power 5 defenses from 2018]

The P5_Re-Rank is the new ranking once non-P5 teams were removed. These teams needed to be re-ranked: since non-P5 teams play easier schedules, their defensive rankings are not an indicator of how good the defense actually is, and since the quality of the defense is a key measure in this analysis, I could not have this metric skewed.

Then, each QB who played in week one against a P5 opponent was extracted from the overall list, and his week-one ranking was re-ranked within just that group. It was clear upon running an initial analysis that QBs who faced non-P5 opponents did better than those who faced P5 opponents.
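For illustration, the filter-and-re-rank step looks roughly like this in pandas (the frames and the column names such as is_p5, faced_p5, and passer_rating are hypothetical stand-ins, not my actual field names):

import pandas as pd

# Hypothetical 2018 defensive rankings: drop non-P5 teams, then re-rank the remainder.
defenses = pd.DataFrame({
    "team": ["Team A", "Team B", "Team C"],
    "is_p5": [True, False, True],
    "def_rank_2018": [1, 2, 3],
})
p5_def = defenses[defenses["is_p5"]].copy()
p5_def["p5_rerank"] = p5_def["def_rank_2018"].rank(method="min").astype(int)

# Same idea for the quarterbacks: keep only those who faced a P5 opponent,
# then re-rank their week-one passer ratings within that group.
qbs = pd.DataFrame({
    "player": ["QB A", "QB B", "QB C"],
    "faced_p5": [True, True, False],
    "passer_rating": [160.2, 145.7, 180.1],
})
p5_qbs = qbs[qbs["faced_p5"]].copy()
p5_qbs["qb_rank_adj"] = p5_qbs["passer_rating"].rank(ascending=False, method="min").astype(int)

print(p5_def)
print(p5_qbs)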

[Chart: Week-one passer rating vs. P5 and non-P5 opponents]

It is easy to see that point in the chart above. The group on the horizontal (x) axis to the left were those who faced non-P5 opponents. They averaged a higher passer rating than QBs who did face P5 opponents. Ironically, a 5-star recruit playing against a non-P5 opponent had the worst day of all (Hunter Johnson of Northwestern).

I also wanted to see if the QBs’ recruiting ratings coming out of high school were predictive of success in week one, so this metric was included as well. Now the variables were set: I was looking to see how well each QB did in week one while controlling for the quality of the defense faced. I just threw the recruit rating in to see if it had any predictive power for how the player performed.

There were two important assumptions made: Power 5 defenses are generally better than non-P5 defenses, and that last year’s defensive rankings are indicative of this year’s defensive strength. Of course, there is fluctuation, but this assumption is necessary to quantify the level of opposition QBs faced in week one. On to the findings…

Opponent Strength

The first look was to see if opponent defensive strength (stored as the variable P5_order) was correlated with QB performance. In the subsequent regression analysis, it was statistically significant (p = 0.04).

[Chart: 2018 P5 defensive rank vs. week-one passer rating]

As the chart shows us, the higher (worse) a defense was ranked in 2018, the better QBs performed against it. Ok, great. Though far from a perfect correlation, it was strong enough, and statistically supported, to apply to the analysis.

Recruit Rating (RR)

As stated, this was more of a curiosity. What I found was that a QB’s composite recruit rating was statistically significantly correlated with performance against P5 defenses in week one (p = 0.04).

[Chart: Recruit rating vs. week-one passer rating]

As this scatterplot shows, the higher a QB was rated, the more likely he was to have success in week one. That little dot in the lower right-hand corner is the aforementioned Hunter Johnson. Of note, if a player was unrated, I assigned a 0.7900 rating and 2 stars; that is why you see the vertical line of players to the left.

Feleipe Franks

After all was said and done, Franks ranked number one overall when controlling for the strength of the opposing defense faced in week one. When adjusting for P5 opponents, Franks had the 8th-best performance while facing the 13th-best defense. I simply added these two rank scores together, giving a score of 21 points (fewer points are better, because the lower something is ranked, the better it is). Here are the final standings:

Player | School | Opponent | Opp_def_Rank | QB_Rank_Adj | Score | Final Rank
Feleipe Franks | Florida | Miami (FL) | 13 | 8 | 21 | 1
Jarren Williams | Miami (FL) | Florida | 14 | 11 | 25 | 2
Levi Lewis | Louisiana | Mississippi State | 2 | 27 | 29 | 3
Tyler Huntley | Utah | BYU | 18 | 13 | 31 | 4
K.J. Costello | Stanford | Northwestern | 24 | 7 | 31 | 4
Zach Smith | Tulsa | Michigan State | 5 | 30 | 35 | 6
Sam Howell | North Carolina | South Carolina | 38 | 3 | 41 | 7
Tua Tagovailoa | Alabama | Duke | 41 | 1 | 42 | 8
Ryan Willis | Virginia Tech | Boston College | 32 | 12 | 44 | 9
Quentin Harris | Duke | Alabama | 7 | 40 | 47 | 10
Riley Neal | Vanderbilt | Georgia | 10 | 38 | 48 | 11
J’mar Smith | Louisiana Tech | Texas | 33 | 16 | 49 | 12
Jake Fromm | Georgia | Vanderbilt | 35 | 14 | 49 | 12
Colin Hill | Colorado State | Colorado | 40 | 10 | 50 | 14
Kenny Pickett | Pitt | Virginia | 15 | 36 | 51 | 15
Anthony Brown | Boston College | Virginia Tech | 48 | 4 | 52 | 16
Woody Barrett | Kent State | Arizona State | 29 | 25 | 54 | 17
Chris Robison | Florida Atlantic | Ohio State | 31 | 26 | 57 | 18
Josh Adkins | New Mexico State | Washington State | 25 | 32 | 57 | 18
Gresch Jensen | Texas State | Texas A&M | 26 | 31 | 57 | 18
Drew Plitt | Ball State | Indiana | 45 | 15 | 60 | 21
Cole McDonald | Hawaii | Arizona | 55 | 6 | 61 | 22
Bryce Perkins | Virginia | Pitt | 42 | 21 | 63 | 23
Desmond Ridder | Cincinnati | UCLA | 58 | 5 | 63 | 23
Bryce Perkins | Virginia | Pitt | 42 | 21 | 63 | 23
Hunter Johnson | Northwestern | Stanford | 22 | 42 | 64 | 26
Jorge Reyna | Fresno State | USC | 37 | 28 | 65 | 27
Spencer Sanders | Oklahoma State | Oregon State | 63 | 2 | 65 | 27
Dan Ellington | Georgia State | Tennessee | 43 | 23 | 66 | 29
Jordan Love | Utah State | Wake Forest | 57 | 9 | 66 | 29
Carson Strong | Nevada | Purdue | 47 | 20 | 67 | 31
Sean Chambers | Wyoming | Missouri | 30 | 37 | 67 | 31
Tyler Vitt | Texas State | Texas A&M | 26 | 41 | 67 | 31
Hank Bachmeier | Boise State | Florida State | 52 | 18 | 70 | 34
Stephen Calvert | Liberty | Syracuse | 36 | 34 | 70 | 34
Jake Luton | Oregon State | Oklahoma State | 54 | 17 | 71 | 36
Cephus Johnson | South Alabama | Nebraska | 50 | 24 | 74 | 37
D’Eriq King | Houston | Oklahoma | 56 | 19 | 75 | 38
Randall West | Massachusetts | Rutgers | 51 | 33 | 84 | 39
Brady White | Memphis | Ole Miss | 61 | 29 | 90 | 40
Jake Bentley | South Carolina | North Carolina | 59 | 35 | 94 | 41
Kato Nelson | Akron | Illinois | 62 | 39 | 101 | 42
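For reproducibility, the scoring step behind the table is just a rank sum; here is a minimal sketch using the top two rows as example data (the column names are mine, not necessarily those in my working file):

import pandas as pd

# Each row: the opponent's P5-adjusted defensive rank and the QB's P5-adjusted performance rank.
ranks = pd.DataFrame({
    "player": ["Feleipe Franks", "Jarren Williams"],
    "opp_def_rank": [13, 14],
    "qb_rank_adj": [8, 11],
})

# Lower is better for both inputs, so the sum is the score and the lowest sum ranks first.
ranks["score"] = ranks["opp_def_rank"] + ranks["qb_rank_adj"]
ranks["final_rank"] = ranks["score"].rank(method="min").astype(int)
print(ranks)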

Conclusion

This analysis does not prove, or even claim, that Franks is better than any other QB, or that we can draw conclusions based upon one game. The aim was to investigate the validity of the prevailing narrative regarding Franks’ allegedly “horrendous” opening performance. The findings of this analysis strongly contradict that narrative and, at a minimum, offer some context against drawing hard-and-fast conclusions from small sample sizes.

Methodology Note:

Each player was assigned a random ID number, and the analysis was initially conducted using only ID numbers in order to avoid potentially biasing the outcome. Correlational analyses were conducted to validate the inclusion of the predictor variables but did not influence ranking or scoring. The intention is to conduct this analysis weekly. As such, going forward, raw sums of rankings will not be used; standardized scores will be used instead to ensure equal weighting of the variables, as sketched below.
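A rough sketch of that standardized-score version (column names assumed; the idea is simply to z-score each variable before summing so neither dominates):

import pandas as pd

# Placeholder ranks for three QBs; in the weekly version these would be the real columns.
ranks = pd.DataFrame({
    "opp_def_rank": [13, 14, 2],
    "qb_rank_adj": [8, 11, 27],
})

# Standardize each column to mean 0 and standard deviation 1, then sum the z-scores.
# Lower totals still indicate a better combined standing.
z = (ranks - ranks.mean()) / ranks.std(ddof=1)
ranks["combined_z"] = z.sum(axis=1)
print(ranks.sort_values("combined_z"))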

The Changing Baseline of Composite College Football Recruits

The recruiting rankings, which are led by the Composite rating service, are likely evolving their analytical techniques. If this is true, then there should be some identifiable changes in output over time. I took a look at the top 1000 recruits from 2005 through 2020 and averaged their ratings. I then standardized the ratings to see if any significant movement is occurring over time.

[Chart: Average raw rating of the top 1000 recruits, by year]

The above chart shows the raw scores for each year. It is easy to see there was a fairly sharp upward trend in average ratings from 2005 to 2009. From there, it looks like things have leveled off for the most part.

[Chart: Standardized average ratings, by year]

When the scores are standardized, we can see just how sharp the rise was. It also looks like things have generally trended upward since 2017. Because there is some rise, it may be wise to take the baseline changes into consideration when comparing classes over time. That being said, the effect does not look too dramatic.

[Table: Raw and standardized average ratings, by year]

The above table shows the raw and standardized scores. The numbers in the bright green boxes are the group average and standard deviations. All in all, the top 1000 average out fairly consistently, though there is definitely an overall upward trend.

When we look at this more deeply and parse out by position, the trends stay the same.

[Table: Average rating of the top 1000 recruits, by position]

In this table, we can easily see that, with the exception of center and fullback, all position groups have trended toward higher average ratings since 2009. Something must’ve happened in 2009 that led to a change in how recruits were rated; every look at the data so far shows a sharp climb from that point on. Looking at the positions combined into position groups, we see the trend continue.

The table below shows the average rating for top 1000 recruits by position group. The offensive skill group consists of QB, RB, APB, WR, and TE. The others are self-explanatory.

[Table: Average rating of the top 1000 recruits, by position group (raw)]

When we standardize these scores and apply heat mapping, the contrast is again clear: from 2009 on, there was a sharp rise.

[Table: Standardized ratings by position group, with heat mapping]

Since there was a clear delineation in ratings from 2009 on, I charted the data without the years prior. There is a bit of variance over that time: 2018 and 2019 were both well above average for the group, and 2011 was far below. So far, 2020 appears to be reverting to the mean.

[Chart: Standardized ratings, 2009 onward]

The key takeaway is that when comparing class ratings over time, it might be a good idea to control for rating inflation. This is easy to do by simply standardizing each year’s data and moving forward with standardized values rather than raw values.
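A minimal sketch of that per-year standardization in pandas (the frame and the column names year and rating are placeholders):

import pandas as pd

# Placeholder: one row per top-1000 recruit, with class year and composite rating.
recruits = pd.DataFrame({
    "year": [2009, 2009, 2009, 2019, 2019, 2019],
    "rating": [0.8900, 0.9100, 0.9500, 0.9050, 0.9300, 0.9700],
})

# Standardize within each class year so ratings are comparable across years
# despite the overall upward drift in the raw numbers.
recruits["rating_z"] = recruits.groupby("year")["rating"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=1)
)
print(recruits)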

 

Revisiting Stars and All American Achievement

I’ve heard/read that 5-star recruits are much more likely to become All-Americans in college football than recruits of lower star designation. I don’t doubt that has historically been true, but I was curious as to whether it still stands today. Short answer: it does.

I took roughly the top 1000 rated recruits (Composite) from 2006 through 2017 and compiled their recruit rating and star designation. I then pulled the All-American teams from 2010 through 2018 from https://www.sports-reference.com/cfb/.

Out of 11,843 recruits, here is what I got:

Stars | 5 | 4 | 3
Count | 397 | 3432 | 8014
Percent | 3% | 29% | 68%

So we see that 5-stars make up 3% of this sample, 4-stars 29%, and 3-Stars 68%.

Then I tallied up the counts for All-Americans. Kickers and Punters were removed, as they can’t be ranked higher than 3-stars in the composite rankings for some silly reason.

Stars | 5 | 4 | 3 | 2 | Total
Count | 36 | 65 | 68 | 48 | 217
Percent | 17% | 30% | 31% | 22% | 100%

We can see here that, relative to their share of the sample, 5-stars are over-represented, 4-stars are about even, and 3-stars are under-represented. 2-stars are probably under-represented too, but since my comparison population didn’t include 2-stars, I don’t know what percentage of all players they represent.
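The over/under-representation check is simple arithmetic; here is a quick sketch using the counts from the two tables above:

# Share of the recruit sample vs. share of All-American selections, by star level.
sample_counts = {"5-star": 397, "4-star": 3432, "3-star": 8014}
aa_counts = {"5-star": 36, "4-star": 65, "3-star": 68}

sample_total = sum(sample_counts.values())  # 11,843 recruits
aa_total = 217  # includes the 2-star All-Americans, who are not in the recruit sample

for stars in sample_counts:
    sample_share = sample_counts[stars] / sample_total
    aa_share = aa_counts[stars] / aa_total
    print(f"{stars}: {sample_share:.0%} of sample, {aa_share:.0%} of All-Americans")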

Here is how the All-Americans break down by position and stars:

[Table: All-Americans by position and star rating]

When looking at relative success rates, 5-stars come out ahead. Out of the 397 5-stars in the sample, 36 were All-Americans. But wait! What if a player was an All-American more than once? Well, that happened 15 times. One was a punter (Tom Hackett), so that record was removed. Of the 14 remaining players who were 2x All-Americans, 2 were 5-stars, 4 were 4-stars, 7 were 3-stars, and one was a 2-star*. Position-wise, there were 2 WR, 3 LB, 2 RB, 3 OL, 3 DL, and 1 DB.

Adjusted for duplicates, the table now looks like this:

[Table: All-Americans by position and star rating, duplicates removed]

Stars | 5 | 4 | 3 | 2 | Total
Count | 34 | 61 | 61 | 47 | 203
Percent | 17% | 30% | 30% | 23% | 100%

 

As we can see, removing duplicates doesn’t change the takeaway. If you break down the relative success rates for players from the sample, here is what you get:

Stars | 5 | 4 | 3
Representation | 9% | 2% | 1%

9% of the 5-stars from the sample went on to be All-Americans, by far the highest rate. So, yea, 5-stars are generally more successful, if you agree that being an All-American is a metric for success.