
Thursday, July 18, 2024

Radar charts and five-tool baseball players

by Jerry Tuttle

      I was looking for an opportunity to practice with radar charts and I came across an article on five-tool baseball players, so this seemed like a perfect application for this kind of chart.

      A radar chart is an alternative to a column chart to display three or more quantitative variables. The chart graphs the values in a circular manner around a center point.

      The five tools in baseball are: (1) hitting for average; (2) hitting for power; (3) defense; (4) throwing; and (5) speed. A five-tool player excels in all five of these.

      Among current players, Mike Trout is considered a five-tool player. The measurement of Trout’s five tools can be displayed in the following radar chart:

      Trout is rated at 80 for hitting for average, 70 for hitting for power, and his lowest scores are 60 for defense, throwing and speed. This is based on a 20-to-80 rating system, where 80 is elite, 70 is plus-plus, and 60 is plus. Sorry - I could not get the points to line up with the concentric pentagons.

      For comparison, here is a display of Aaron Judge's ratings.

      Judge is rated at 80 for hitting for power, 70 for hitting for average, 60 for defense, 70 for throwing, and 50 for speed, where the 50 is average at the major league level.

      The results of several players can be displayed in a single radar chart, but this becomes hard to read. Three players are probably the maximum for readability.

      The alternatives for visualizing several players are to create several individual radar charts, or to use a bar (horizontal) chart or a column (vertical) chart.

      Each of the five tools is generally rated on a 20-to-80 scale, where 50 is average (for a major leaguer), 80 is elite, and every 10 points is supposed to represent one standard deviation. I suspect the standard deviation concept is more judgmental than mathematical. There is not a single rating system; some use traditional baseball statistics, and others use modern motion tracking data.
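
      Just to illustrate that scale (my own sketch, not an official formula), one can think of a grade as 50 plus 10 points per standard deviation, rounded to the nearest 10 and capped at the 20-to-80 limits:

# Illustration only: translate a z-score into a 20-to-80 scouting grade,
# treating 50 as average and 10 points as one standard deviation.
# Real scouting grades are largely judgmental, not computed.
grade <- function(z) {
   raw <- 50 + 10 * z
   pmin(80, pmax(20, round(raw / 10) * 10))   # nearest 10, clamped to 20-80
}
grade(c(-1.2, 0, 1.8, 3.5))   # returns 40 50 70 80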

      The numerical data above was obtained from a 2022 article by Jake Mintz for Fox Sports: https://www.foxsports.com/stories/mlb/trout-betts-rodriguez-the-definition-of-mlbs-five-tool-players . In Mintz's data, all numbers are rounded to the nearest 10. Mintz counts only five current players as five-tool players: Mike Trout, Mookie Betts, Trea Turner, Byron Buxton, and Julio Rodriguez. I tried graphing all five players in a single radar chart, but it was too hard to read.

      Mintz thinks a true five-tool player should have a grade of at least 60 in each of the five categories. By this measure, Aaron Judge is not quite a five-tool player due to a 50 in speed, and a number of elite major leaguers have at least one 50. Note that each category is considered separately. If there were some sort of weighting system, many people would weight hitting for power as most important, followed by hitting for average, although perhaps the weights should vary by position, with higher weights on defense and throwing for catchers, middle infielders, and center fielders. Pitchers have a different grading system.
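
      As a quick check of Mintz's 60-in-every-tool criterion, here is a one-liner (using the players data frame defined in the R code below):

# Mintz's criterion: a true five-tool player grades at least 60 in every tool.
# players is the data frame built in the code later in this post.
is_five_tool <- apply(players, 1, function(g) all(g >= 60))
is_five_tool   # Trout TRUE, Betts TRUE, Judge FALSE (because of the 50 in speed)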

      What about Shohei Ohtani? At the time of his article, Mintz did not have sufficient data on Ohtani.

      Mintz observes that Mike Trout worked one winter to improve his throwing, and Julio Rodriguez worked to increase his speed. This suggests that the ratings probably change over the life of a player and are dependent on when they are measured.

      Other authors suggest that exceptional players have a sixth tool, such as mental makeup and character. Another candidate tool is situational game awareness.

      Modern motion tracking data from Statcast and others did not exist until fairly recently. Willie Mays is generally considered the greatest five-tool player. Using statistical measures, author Herm Krabbenhoft suggests Tris Speaker, Ty Cobb, and Honus Wagner should also be considered five-tool players, although Krabbenhoft measures hitting for power with SLG (slugging percentage) and ISO (isolated power), not home runs: https://sabr.org/journal/article/honus-wagner-baseballs-prototypical-five-tooler/ . A very different measure of hitting for power would be something like home run distance greater than 425 feet, or launch angle and exit velocity.
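
      For readers unfamiliar with ISO, it is simply slugging percentage minus batting average, so it rewards extra-base hits. A quick sketch using Babe Ruth's well-known career numbers:

# ISO (isolated power) = SLG - AVG
iso <- function(slg, avg) slg - avg
iso(0.690, 0.342)   # Ruth's career ISO of 0.348, an extreme power score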

      What about Babe Ruth? We know Babe Ruth's career numbers are a .342 batting average and 714 home runs. I have not read anything about his defense, throwing, or speed. He did steal 123 bases, including stealing home 10 times; maybe he was faster than we realize. He is remembered for getting thrown out stealing second to end the 1926 World Series, but perhaps the hit-and-run play was on and Bob Meusel, the batter, swung and missed the pitch? See https://baseballegg.com/2019/10/30/babe-ruths-failed-stolen-base-attempt-ended-the-1926-world-series-or-is-that-what-really-happened/ . Ruth had 204 assists as an outfielder, which sounds like a lot. I wonder how he would have ranked in defense, throwing, and speed.

      Here is my R code. I do like radar charts for comparing one to three observations over five variables, as a change of pace from column charts. I used the fmsb library for the radar charts. There is also a ggradar library, but I did not like its visualization. One of the quirks of fmsb is that the axis for each variable can have its own scale. Originally I used each variable's max and min values, but the axes were out of sync, so I replaced them with the grand max and min. Also, I could not get the values, which are all multiples of ten, to line up on the concentric pentagons (a possible fix appears after the code).

library(fmsb)
library(scales)

group = c("Hit_avg", "Hit_power", "Defense", "Throwing", "Speed")
player_names = c("Trout","Betts","Judge")
players <- data.frame(
   row.names = player_names,
   Hit_avg = c(80, 70, 70),
   Hit_power = c(70,60,80),
   Defense = c(60,70,60),
   Throwing = c(60,80,70),
   Speed = c(60,70,50))
players

# Row 1 must contain the maximum values for each variable
# Row 2 must contain the minimum values for each variable
# Data for individual players start from row 3
# Use the grand max and min so all five axes share the same scale

max_min <- data.frame(
   Hit_avg = c(max(players), min(players)),
   Hit_power = c(max(players), min(players)),
   Defense = c(max(players), min(players)),
   Throwing = c(max(players), min(players)),
   Speed = c(max(players), min(players)))

rownames(max_min) <- c("Max", "Min") # row 1 has max's, row 2 has min's.
df <- rbind(max_min, players)
df

player1_data <- df[c("Max", "Min", player_names[1]), ]
player2_data <- df[c("Max", "Min", player_names[2]), ]
player3_data <- df[c("Max", "Min", player_names[3]), ]

chart <- function(data, color, title){
     radarchart(data, axistype = 0,
        pcol = color, pfcol = scales::alpha(color, 0.5), plwd = 2, plty = 1,
        vlabels = colnames(data), vlcex = 1.5,
        cglcol = "black", cglty = 1, cglwd = 0.8,
        caxislabels = NULL,
        title = title)
}

# Plot the data for players 1, 2, and 3 separately
chart(data=player1_data, color="#00AFBB", title="MIKE TROUT 5 Tools")
chart(data=player2_data, color="#E7B800", title="MOOKIE BETTS 5 Tools")
chart(data=player3_data, color="#FC4E07", title="AARON JUDGE 5 Tools")

# Plot the data for three players
chart(data=df, color=c("#00AFBB", "#E7B800", "#FC4E07"), # blue-green, gold, orange-red
     title="TROUT, BETTS, JUDGE 5 Tools")
legend(
     x = "bottom", legend = rownames(df[-c(1,2),]), horiz = FALSE,
     bty = "n", pch = 20 , col = c("#00AFBB", "#E7B800", "#FC4E07"),
     text.col = "black", cex = 1.25, pt.cex = 1.5)

###########################################

# column graphs

library(tibble)
library(tidyr)
library(ggplot2)
# Reshape data to long format
players_long <- players %>%
     rownames_to_column("player") %>%
     pivot_longer(cols = -player, names_to = "group", values_to = "value")

# Common theme for graphs
common_theme <- theme(
   legend.position="right",
   plot.title = element_text(size=15, face="bold"),
   axis.title = element_text(size=15, face="bold"),
   axis.text = element_text(size=15, face="bold"),
   legend.title = element_text(size=15, face="bold"),
   legend.text = element_text(size=15, face="bold"))

# Create column graph: Tool Ratings by Player
ggplot(players_long, aes(x = player, y = value, fill = group)) +
   geom_col(position = "dodge") +
   labs(x = "Player", y = "Rating", fill = "Group", title = "Tool Ratings by Player") +
   common_theme

# Create the column graph: Player Ratings for each Tool
ggplot(players_long, aes(x = group, y = value, fill = player)) +
   geom_col(position = "dodge") +
   labs(x = "Group", y = "Rating", fill = "Player", title = "Player Ratings for each Tool") +
   common_theme

### END

##################################################################################

Saturday, September 17, 2022

How do I count thee? Let me count the ways?

by Jerry Tuttle   

In Major League Baseball, a player who hits 50 home runs in a single season has hit a lot of home runs. Suppose I want to count the number of 50-homer seasons by team, and also the number of 50-homer seasons by the New York Yankees. (I will count Maris and Mantle in 1961 as two.) Here is the data, including Aaron Judge's 62 in 2022:

You would think base R would have a count function, such as count(df$Team) or count(df$Team == "NYY"), but these give the error "could not find function 'count'". Base R does not have a count function.

Base R has at least four ways to perform a count:

1. The table function will count items in a vector. table(df$Team) presents results horizontally, and data.frame(table(df$Team)) presents results vertically. table(df$Team == "NYY") displays the results 37 FALSE and 10 TRUE, while table(df$Team == "NYY")[2] displays just the TRUE count, 10.

2. The sum function can be used to count the number of rows meeting a condition. sum(df$Team == "NYY") displays the result 10. Here df$Team == "NYY" creates a logical vector, and sum adds up the TRUE values, with each TRUE counting as 1.

3. Similar to sum, nrow(df[df$Team == "NYY", ]) counts the number of rows meeting the NYY condition.

4. The length function counts the number of elements in an R object. length(which(df$Team == "NYY")), length(df$Team[df$Team == "NYY"]), and length(grep("NYY", df[ , "Team"])) are all ways to count the 10 Yankees.

The more direct solution to counting uses the count function in the dplyr library. Note that dplyr's count function applies to a data frame or tibble, but not to a vector. After loading library(dplyr):

1. df %>% count(Team) lists the count for each team.

2. df %>% filter(Team == "NYY") lists each Yankee, and you can see there are 10.

3. df %>% count(Team == "NYY") displays 37 FALSE and 10 TRUE, while df %>% filter(Team == "NYY") %>% count() displays just the count of the 10 TRUE rows.
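
By the way, if you start from a bare vector rather than a data frame, one workaround is to wrap the vector in a tibble first (a small sketch; tibble is re-exported by dplyr):

# count() needs a data frame or tibble, so wrap a bare vector first
teams <- df$Team
tibble(Team = teams) %>% count(Team, sort = TRUE)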

The following is a bar chart of the results by team, for teams with at least one 50-homer season:

Finally, "How do I count thee? Let me count the ways?" is of course adapted from Elizabeth Barrett Browning's poem "How do I love thee? Let me count the ways?" But in her poem, just how would we count the number of times "love" is mentioned? The tidytext library makes counting words fairly easy, and the answer is ten, the same number of 50 homer Yankee seasons. Coincidence?

The following is all the R code. Happy counting!

library(dplyr)
library(ggplot2)
library(tidytext)

df <- data.frame(
   Player=c('Ruth','Ruth','Ruth','Ruth','Wilson','Foxx','Greenberg','Foxx','Kiner','Mize','Kiner','Mays','Mantle','Maris', 'Mantle','Mays','Foster','Fielder','Belle','McGwire','Anderson','McGwire','Griffey','McGwire','Sosa','Griffey', 'Vaughn','McGwire','Sosa','Sosa','Bonds','Sosa','Gonzalez','Rodriguez','Rodriguez','Thome','Jones','Howard','Ortiz', 'Rodriguez','Fielder','Bautista','Davis','Stanton','Judge','Alonso','Judge'),
   Year=c(1920,1921,1927,1928,1930,1932,1938,1938,1947,1947,1949,1955,1956,1961,1961,1965,1977,1990,1995,1996,1996,1997,1997, 1998,1998,1998,1998,1999,1999,2000,2001,2001,2001,2001,2002,2002,2005,2006,2006,2007,2007,2010,2013,2017,2017,2019,2022),
   Homers=c(54,59,60,54,56,58,58,50,51,51,54,51,52,61,54,52,52,51,50,52,50,58,56,70,66,56,50,65,63,50,73,64,57,52,57,52,51, 58,54,54,50,54,53,59,52,53,62),
   Team=c('NYY','NYY','NYY','NYY','CHC','PHA','DET','BOS','PIT','NYG','PIT','NYG','NYY','NYY','NYY','SF','CIN','DET','CLE', 'OAK','BAL','OAK/STL','SEA','STL','CHC','SEA','SD','STL','CHC','CHC','SF','CHC','ARI','TEX','TEX','CLE','ATL','PHI', 'BOS','NYY','MIL','TOR','BAL','MIA','NYY','NYM','NYY'))

head(df)

# base R ways to count:

table(df$Team)    # shows results horizontally
data.frame(table(df$Team))    # shows results vertically
table(df$Team == "NYY")    # displays 37 false and 10 true
table(df$Team == "NYY")[2]

sum(df$Team == "NYY")    # displays the result 10.

nrow(df[df$Team == "NYY", ])    # counts the number of rows meeting the NYY condition.

length(which(df$Team == "NYY"))     # which returns a vector of indices which are true
length(df$Team[df$Team == "NYY"])
length(grep("NYY", df[ , "Team"]))     # grep returns a vector of indices that match the pattern

# dplyr R ways to count; remember to load library(dplyr):

df %>% count(Team)    # lists the count for each team.

df %>% filter(Team == "NYY")    # lists each Yankee, and you can see there are 10.

df %>% count(Team == "NYY")    # displays 37 false and 10 true, while
df %>% filter(Team == "NYY") %>% count()    # just displays the 10 true.

# barplot of all teams with at least 1 50 homer season; remember to load library(ggplot2)

df %>%
    group_by(Team) %>%
    summarise(count = n()) %>%
    ggplot(aes(x=reorder(Team, count), y=count, fill=Team)) +
    geom_bar(stat = 'identity') +
    ggtitle("Count of 50 Homer Seasons") +
    xlab("Team") +
    scale_y_continuous(breaks=c(1,2,3,4,5,6,7,8,9,10)) +
    coord_flip() +
    theme(plot.title = element_text(face="bold", size=18)) +
    theme(axis.title.y = element_text(face="bold")) +
    theme(axis.title.x = element_blank()) +
    theme(axis.text.x = element_text(size=12, face="bold"),
    axis.text.y = element_text(size=12, face="bold")) +
    theme(legend.position="none")

# count number of times "love" is mentioned in Browning's poem; remember to load library(tidytext)

textfile <- c("How do I love thee? Let me count the ways.",
"I love thee to the depth and breadth and height",
"My soul can reach, when feeling out of sight",
"For the ends of being and ideal grace.",
"I love thee to the level of every day's",
"Most quiet need, by sun and candle-light.",
"I love thee freely, as men strive for right.",
"I love thee purely, as they turn from praise.",
"I love thee with the passion put to use",
"In my old griefs, and with my childhood's faith.",
"I love thee with a love I seemed to lose",
"With my lost saints. I love thee with the breath,",
"Smiles, tears, of all my life; and, if God choose,",
"I shall but love thee better after death.")

df<-data.frame(line=1:length(textfile), text=textfile)
df_words <- df %>% unnest_tokens(word, text)
cleaned_words <- df_words %>% anti_join(get_stopwords())
cleaned_words %>% count(word, sort = TRUE) %>% head(6)
cleaned_words %>% filter(word == "love") %>% count()


Monday, July 6, 2020

Outliers and Domain Knowledge

      I would like to share some thoughts about outliers and domain knowledge.
      One of the common steps during the data exploration stage is the search for outliers. Some analysis methods such as regression are very sensitive to outliers. As an example of sensitivity, in the following data (10,10) is an outlier. Including the outlier produces a regression line y = .26 + .91x, while excluding the outlier produces the very different regression line y = 2.

x <- c(1,1,1,2,2,2,3,3,3,10)
y <- c(1,2,3,1,2,3,1,2,3,10)
df <- data.frame(cbind(x,y))
lm(y ~ x, df)
plot(x,y, pch=16)
abline(lm(y ~ x, df))

      Statistics books sometimes define an outlier as a value below Q1 - 1.5*IQR or above Q3 + 1.5*IQR (or, for extreme outliers, 3*IQR instead of 1.5*IQR), where Q1 is the lower quartile (25th percentile value), Q3 is the upper quartile (75th percentile value), and the interquartile range IQR = Q3 - Q1.
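
      Here is a minimal sketch of those fences in base R, using the y vector from the regression example above:

# 1.5*IQR fences for the y vector above; values outside them are flagged
q1 <- quantile(y, 0.25)
q3 <- quantile(y, 0.75)
iqr <- q3 - q1
y[y < q1 - 1.5 * iqr | y > q3 + 1.5 * iqr]   # flags the outlier 10
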
      What does one do with an outlier? It could be bad data. It is pretty unlikely that there is a graduate student who is age 9, but we don't know whether the value should be 19 (very rare, but possible), or 29 (likely), or 39 or more (not so rare). If we have the opportunity to ask the owner of the data, perhaps we can get the value corrected. More likely, we cannot ask the owner. We can delete the entire observation, or we can impute a replacement value using the mode, the median, or judgment.



      Perhaps the outlier is not bad data but rather just an unusual value. In a portfolio of property or liability insurance claims, the distribution is often positively skewed (mean greater than mode, with a long tail on the positive side of the mode). Most claims are small, but occasionally there is that one enormous claim. What does one do with that outlier value? Some authors consider data science to be the Venn diagram intersection among math/statistics, computer science, and domain knowledge (see, for example, Drew Conway's data science Venn diagram: http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram ). If the data scientist is not the domain expert, he or she should consult with one.
      With insurance claims there are several possibilities. One is that the enormous claim is unlikely to reoccur for any number of reasons. Hopefully there will never be another September 11 type destruction of two World Trade Center buildings owned by a single owner. Another example is when the insurance policy terms are revised to literally prohibit a specific kind of claim in the future. Another possibility is that the specific claim is unlikely to reoccur (the insurance company stopped insuring wheelchairs, so there won't be another wheelchair claim), but that claim is representative of another kind of claim that is likely to occur. In this case, the outlier should not be deleted. One author has said it takes Solomon-like wisdom to discern which possibility to believe.
      An interesting example of outliers occurs with sports data. For many reasons, US major league baseball player statistics have changed over the years. There are more great home run seasons nowadays than decades ago, but there are fewer great batting average seasons. Baseball fanatics know the last .400 hitter (a 40% ratio of hits to at bats over the entire season) was Ted Williams in 1941. If we have 80 years of baseball data and we are predicting the probability of another .400 hitter, we would predict close to zero. It's possible, but extremely unlikely, right?
      Actually, no. Assuming there will still be a shortened season in 2020 (a decision that may change), this author is willing to forecast that there will be a .400 hitter in a shortened season. This is due to the theory that batters need less time in spring training practice to reach full ability than pitchers, and it is easier to achieve .400 in a small number of at bats early in the season when the pitchers are not yet at full ability. This is another example of domain expertise, as a lifetime baseball fan.
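
      To put a rough number on that small-sample intuition, here is a toy binomial model of my own (it ignores streaks, platoons, and pitcher quality, so treat it as an illustration only):

# Toy model: chance a .330 true-talent hitter bats .400 or better in n_ab at bats
p_400 <- function(n_ab, p = 0.33) 1 - pbinom(ceiling(0.4 * n_ab) - 1, n_ab, p)
p_400(200)   # a shortened season: about 2%
p_400(600)   # a full season: well under 0.1%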