Can an actuary / mathematician / data analyst say anything objective and data-oriented about the 2024 US presidential campaign?
Yes, if I confine my remarks to a numerical text analysis of the candidates' words, rather than attempt to comment on the candidates' political and economic views. This analysis is based solely on data that I collected from the two convention speeches.
I performed a sentiment analysis of the candidates' emotional words. Here is a graphical summary, discussion to follow:
Sentiment analysis
Most frequent words
Summary statistics
This is intended as an objective analysis; I am not trying to make either candidate look good or bad. I have used these same text analysis techniques in other projects, such as analyzing Hamlet, analyzing short stories, and analyzing the tweets of a radio talk show host.
For each of Trump and Harris, I started with a transcript of the convention speech. I believe these are transcripts of the speeches as delivered, not as written, based on the very first few sentences. I used R packages such as tm and tidytext to tokenize the documents into individual sentences, words, and characters. I was guided by the works of Silge and Robinson in Text Mining with R and of Williams in Data Science Desktop Survival Guide.
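As a minimal illustration of the tokenization step (using a made-up line of text rather than the transcripts), tidytext's unnest_tokens() splits a data frame of text into one row per sentence or per word:
library(tidytext)
sample_df <- data.frame(line = 1, text = "Thank you. Good evening, everyone.")
unnest_tokens(sample_df, sentence, text, token = "sentences") # two sentence tokens
unnest_tokens(sample_df, word, text, token = "words")         # five lowercase word tokens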
A summary and comparison of the tokenization of the two speeches is the following, repeated from above.
Sentiment analysis is the analysis of text to determine its emotional tone. Linguistics experts have built dictionaries that associate a large list of words with eight emotions (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust) and also with negative and positive sentiments. For example, the word "ache" is associated with sadness. Some words have more than one emotion; for example, "admirable" is associated with both joy and trust. Further, some words carry a negative or a positive association.
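You can look up individual words in this dictionary (the NRC lexicon used in my code below) directly with tidytext; the first call may prompt you to download the lexicon via the textdata package:
library(tidytext)
library(dplyr)
nrc <- get_sentiments("nrc")                      # may prompt to download the NRC lexicon (textdata package)
nrc %>% filter(word %in% c("ache", "admirable"))  # one row per word-sentiment pair, as described above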
There are some limitations in sentiment analysis. The sentiment dictionary does not recognize sarcasm, and because I limit my analysis to single words I am not picking up negation (such as "not expensive") or other cases where the emotion depends on several words; the bigram sketch below shows one way such phrases could be surfaced. A conclusion from the sentiment distribution graph is that the candidates are surprisingly similar in most of these emotions. The biggest differences are that Trump has a greater portion of his words categorized as anger and negative than Harris does.
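Here is that sketch: tokenizing into two-word phrases and keeping those that start with "not", an approach described in Text Mining with R. I did not use it in this analysis, and the sample sentence is made up:
library(tidytext)
library(dplyr)
library(tidyr)
sample_df <- data.frame(text = "The plan is not expensive and not reckless.")
sample_df %>%
  unnest_tokens(bigram, text, token = "ngrams", n = 2) %>%
  separate(bigram, c("word1", "word2"), sep = " ") %>%
  filter(word1 == "not")  # word2 holds the terms a single-word analysis would score without the "not"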
Most frequent positive and negative words
Trump has a larger percentage of negative words, that is, negative divided by (negative plus positive), than Harris: 43% versus 37%. These positive and negative word lists seem consistent with my memory of their speeches.
Most frequent words
Distribution of word sizes
Final thoughts
It is hard to be indifferent about the 2024 US presidential election. You have your opinion, and I have mine.
Much of what the candidates say is their opinion, or their plan if elected, and these are not things we can verify.
Some things the candidates state as if they were facts are really open to interpretation. A good example is "You are better (or worse) off financially today than four years ago." I can choose one measure, collect some data, and show I am better off; or I can choose a very different measure, collect some data, and show I am worse off.
Some things the candidates state as facts ARE verifiable. I am in no position to do such verification myself, but a number of third parties do. Here are a few links; I cannot vouch for their reliability or bias.
The following is my R code:
# Trump convention speech, July 19, 2024
# https://www.nytimes.com/2024/07/19/us/politics/trump-rnc-speech-transcript.html
# Harris convention speech, August 23, 2024
# https://singjupost.com/full-transcript-kamala-harriss-2024-dnc-speech/?singlepage=1

library(tidytext)
library(tm)
library(dplyr)
library(nsyllable)
library(SnowballC)
library(ggplot2)
library(forcats)
library(ggpubr)

# choose the speaker and read that transcript
speaker <- readline(prompt = "Enter Trump, or enter Harris: ")
if (speaker == "Trump" | speaker == "Harris") print(speaker) else print("Invalid input")
trump_file <- "C:/Users/Jerry/Desktop/Harris_Trump/trump_convention_speech.txt"
harris_file <- "C:/Users/Jerry/Desktop/Harris_Trump/harris_convention_speech.txt"
textfile <- ifelse(speaker=="Trump", trump_file, harris_file)
textfile <- readLines(textfile)
text_df <- data.frame(line = 1:length(textfile), text=textfile)
names(text_df)[2] <- "text"

# clean the text with tm: lower case, drop numbers and punctuation, collapse whitespace
docs <- Corpus(VectorSource(text_df$text))
docs <- tm_map(docs, content_transformer(tolower))
docs <- tm_map(docs, removeNumbers)
docs <- tm_map(docs, removePunctuation, ucp=TRUE)
docs <- tm_map(docs, stripWhitespace)
inspect(docs[1:8])

custom_colors <- c("#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd",
                   "#8c564b", "#e377c2", "#7f7f7f", "#bcbd22", "#17becf",
                   "#6a3d9a", "#ff9e1b", "#f6c6c7", "#8dd3c7", "#ffffb3",
                   "#bebada", "#fb8072", "#80b1d3", "#fdb462", "#b3e2cd",
                   "#ccebc5")
common_theme <- theme(
  legend.position="none",
  plot.title = element_text(size=15, face="bold"),
  plot.subtitle = element_text(size=12.5, face="bold"),
  axis.title = element_text(size=15, face="bold"),
  axis.text = element_text(size=15, face="bold"),
  legend.title = element_text(size=15, face="bold"),
  legend.text = element_text(size=15, face="bold"))

# tokenize into words and sentences; word, character, and sentence counts
docs_df <- data.frame(text = sapply(docs, as.character)) # Convert Corpus to data.frame
wordfile <- unnest_tokens(docs_df, word, text, token = "words")
wordfile %>% count(word, sort=TRUE)
wordcountfile <- mutate(wordfile, numbchars = nchar(word)) # characters per word
long1 <- wordcountfile[which(wordcountfile$numbchars == max(wordcountfile$numbchars)),1] # longest word
long2 <- wordcountfile[which(wordcountfile$numbchars == max(wordcountfile$numbchars)),2]
numberchars <- sum(wordcountfile$numbchars)
numberwords <- sum(count(wordcountfile, word, sort=TRUE)$n) # no. words
avgcharperword <- round(numberchars / numberwords, digits=2)
sentencefile <- unnest_tokens(text_df, sentence, text, token = "sentences")
sentencecount <- sum(count(sentencefile, sentence, sort=TRUE)$n)
avgwordpersent <- round(numberwords/sentencecount,2)

# distribution of word size
wordcountdist <- wordcountfile %>% count(numbchars)
wordcountdist$numbchars <- as.factor(wordcountdist$numbchars)
title <- paste(speaker, "- Distribution of Word Size")
subtitle <- paste("Longest word: ", long1, long2, "characters")
ggplot(wordcountdist, aes(numbchars, n, fill=numbchars)) +
  geom_bar(stat="identity", position = "dodge", width=0.5) +
  labs(title=title, subtitle=subtitle) +
  xlab("number of characters per word") + ylab("") +
  scale_fill_manual(values = custom_colors) +
  theme(legend.position = "none") +
  common_theme

# syllables per word
syls <- nsyllable(wordfile$word, language = "en")
syls[which(wordfile$word == "jd")] <- 2 # used because nsyllable generated error here
syls[which(wordfile$word == "nd")] <- 1 # used because nsyllable generated error here
syls[which(wordfile$word == "st")] <- 3 # s/b 21st; used because nsyllable generated error here
syls[which(wordfile$word == "gasolinepowered")] <- 5 # used because nsyllable erred here
long2 <- min(syls[(syls == max(syls, na.rm=TRUE))], na.rm=TRUE)
w <- min(which(syls == long2))
long1 <- wordfile$word[w]
avgsylperword <- round(sum(syls, na.rm=TRUE)/numberwords, digits = 2)
avgsylperword
syls <- data.frame(syls) %>% count(syls)
syls$syls <- as.factor(syls$syls)
colnames(syls) <- c("syllables", "n")
title <- paste(speaker, "- Distribution of No. Syllables per Word")
subtitle <- paste("Most syllables: ", long1, long2, "syllables")
ggplot(syls, aes(syllables, n, fill = syllables)) +
  geom_bar(stat="identity", position = "dodge", width=0.5) +
  labs(title=title, subtitle=subtitle) +
  xlab("number of syllables per word") + ylab("") +
  scale_fill_manual(values = custom_colors) +
  theme(legend.position = "none") +
  common_theme

# Flesch-Kincaid reading ease formula: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
flesch <- round(206.835 - 1.015*(numberwords/sentencecount) - 84.6*avgsylperword, 2) # Flesch reading ease
flesch
flesch_df <- data.frame(score = c(0,30,50,60,70,80,90,100),
                        grade = c("College graduate","College","10th - 12th grade","8th - 9th grade",
                                  "7th grade","6th grade","5th grade","below 5th grade"))
# Function to find the grade based on score; vlookup
find_grade <- function(score, flesch_df) {
  idx <- findInterval(score, flesch_df$score)
  if (idx == 0) {
    return("below 5th grade") # Handle case where score is below the minimum
  } else {
    return(flesch_df$grade[idx])
  }
}
score_to_find <- flesch
flesch_grade <- find_grade(score_to_find, flesch_df)
flesch_grade

# delete stop words
docs_df <- data.frame(text = sapply(docs, as.character)) # Convert Corpus to data.frame
wordfile <- unnest_tokens(docs_df, word, text, token = "words")
stop_words <- data.frame(tidytext::stop_words) # more words than tm
my_stop_words <- data.frame(word = c("theyre", "hes", "dont",
  "didnt","youre","cant", "im","whats", "weve", "theyve", "youve",
  "couldnt", "wont", "youd"))
wordfile <- anti_join(wordfile, stop_words)
wordfile <- anti_join(wordfile, my_stop_words)
wordfreq <- wordfile
wordfreq <- count(wordfreq, word, sort=TRUE) # word frequency excl stop words
wordfreqdf <- data.frame(wordfreq)
unique_words <- nrow(wordfreq)
portion_unique_words <- round(unique_words / numberwords, digits=2)

# most frequent words plot
graphtitle <- paste(speaker, "Word Frequency")
wordfreqdf20 <- wordfreqdf[1:21,] # Think about threshold
wordfreqdf20
wordfreqdf20$word <- fct_reorder(wordfreqdf20$word, wordfreqdf20$n, .desc = FALSE)
ggplot(data=wordfreqdf20, aes(x=word, y=n, fill=word)) +
  geom_bar(stat="identity", position = "dodge", width=0.5) +
  coord_flip() +
  common_theme +
  xlab("") + ylab("Frequency") +
  ggtitle(graphtitle) +
  scale_fill_manual(values = custom_colors) +
  theme(legend.position = "none")

# sentiments; note mother is both positive and negative!
df1 <- data.frame(wordfile)
colnames(df1) <- "word"
df2 <- get_sentiments("nrc")
df3 <- merge(x=df1, y=df2, by="word", all.x=TRUE, stringsAsFactors=FALSE)
df3 <- subset(df3, !is.na(sentiment))
table(df3$sentiment)
w <- data.frame(table(df3$sentiment))
colnames(w) <- c("sentiment", "n")
sentiment_colors <- c(
  "Anger" = "red",
  "Anticipation" = "green",
  "Disgust" = "brown",
  "Fear" = "purple",
  "Joy" = "yellow",
  "Negative" = "gray",
  "Positive" = "lightblue",
  "Sadness" = "blue",
  "Surprise" = "pink",
  "Trust" = "turquoise")
title <- paste(speaker, "- Sentiment Plot")
ggplot(w, aes(sentiment, n)) +
  geom_bar(stat = "identity", position = "dodge", width = 0.5, fill = sentiment_colors) +
  ggtitle(title) +
  ylab("") +
  common_theme +
  theme(axis.text.x = element_text(angle = 45, hjust=1))

# negative word ratio: negative / (negative + positive)
df4 <- df3 %>% filter(sentiment == "positive" | sentiment == "negative")
w <- with(df4, table(sentiment))
neg <- w[1]
pos <- w[2]
neg_ratio <- round(w[1] / (w[1] + w[2]), digits=2)

# most frequent positive and negative words
title <- paste(speaker, "- Most Frequent Positive and Negative Words")
df5 <- df4 %>% group_by(sentiment) %>% count(word, sort=TRUE)
pos_freq <- df5 %>% filter(sentiment=="positive") %>% top_n(10, wt = n) %>% slice_head(n = 10)
neg_freq <- df5 %>% filter(sentiment=="negative") %>% top_n(10, wt = n) %>% slice_head(n = 10) # ties
pos_freq$word <- fct_reorder(pos_freq$word, pos_freq$n, .desc = FALSE)
neg_freq$word <- fct_reorder(neg_freq$word, neg_freq$n, .desc = FALSE)
p1 <- ggplot(pos_freq, aes(word, n)) +
  geom_bar(stat="identity", position = "dodge", width=0.5, fill="darkgreen") +
  ggtitle("Positives") +
  common_theme +
  xlab("") +
  coord_flip()
p2 <- ggplot(neg_freq, aes(word, n)) +
  geom_bar(stat="identity", position = "dodge", width=0.5, fill="red") +
  ggtitle("Negatives") +
  common_theme +
  xlab("") +
  coord_flip()
plot <- ggarrange(p1,p2, ncol=2, nrow=1, legend=NULL)
annotate_figure(plot, top = text_grob(title,
  color = "black", face = "bold", size = 14))

# summary statistics; run the script once for Trump and once for Harris;
# the else branch combines the Trump run (t) with the Harris run (h)
if (speaker == "Trump"){
  t <- data_frame(speaker, numberwords, avgwordpersent, avgcharperword, avgsylperword, flesch, flesch_grade, portion_unique_words, neg_ratio)
  print(t)
} else {h <- data_frame(speaker, numberwords, avgwordpersent, avgcharperword, avgsylperword, flesch, flesch_grade, portion_unique_words, neg_ratio)
  conclusion <- data.frame(rbind(t,h))
  conclusion <- t(conclusion)
  colnames(conclusion) <- c("Trump", "Harris")
  conclusion <- conclusion[-1,]
  print(conclusion)
}

# the results of stemming and lemmatizing were not used in the report
# stemming
wordfile <- wordfile %>%
  mutate(stem = wordStem(word)) %>%
  count(stem, sort = TRUE)
df1 <- wordfile # df1 has col named stem
# lemmatize
url <- "https://raw.githubusercontent.com/michmech/lemmatization-lists/master/lemmatization-en.txt"
df2 <- read.table(url, header = FALSE, sep = "\t", quote = "", stringsAsFactors = FALSE)
names(df2) <- c("stem", "word")
df3 <- merge(x = df1, y = df2, by = "stem", all.x = TRUE, stringsAsFactors=FALSE)
# if word is not in dictionary, then leave word as is; otherwise, use the dictionary word
df3$word <- ifelse(is.na(df3$word), df3$stem, df3$word)
# End
Sunday, February 12, 2023
Some different graph types in R
I wanted a really small dataset to experiment with in R, so I used the number of days in office for the US presidents who were assassinated. Students of American history may want to pause here and think about whether they can name those presidents (and estimate the number of days) before reading on. Kennedy and Lincoln are the most well known, but there were others.
I decided I wanted a column chart with images on the x-axis, a wordcloud with the font size proportional to the number of days, a lollipop chart (a variation of a column chart, with a line instead of a bar and a dot at the end), and a donut chart (a variation of a pie chart, where your eye focuses on the length of the arc rather than the area of the sector).
The first chart requires images. I grabbed the images I needed (hopefully these are either old or federal government works and therefore not subject to copyright restrictions) and saved them as png files so they could be read with readPNG() from the png package.
Of course there are many more chart types that are beyond the scope of this blog post. One reference is Top 50 ggplot2 Visualizations - The Master List (With Full R Code). A very cool chart type is the radar chart, which you can see at How to Create Radar Charts in R (With Examples).
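As a teaser, here is a minimal radar chart sketch using the same days-in-office numbers; it uses the fmsb package, which is my own choice and is not part of the code below:
library(fmsb)
days <- data.frame(Lincoln = 1503, Garfield = 199, McKinley = 1654, Kennedy = 1036)
# radarchart() expects row 1 = axis maximum, row 2 = axis minimum, then the data rows
radar_df <- rbind(rep(1700, 4), rep(0, 4), days)
radarchart(radar_df)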
Here is my output (click to enlarge) and my R code:
setwd("C:/Users/ ... ")
suppressMessages(library(dplyr))
suppressMessages(library(ggplot2))
library(png)
library(ggtext)
df <- data.frame(President = c("Lincoln", "Garfield", "McKinley", "Kennedy"),
Days_in_office = c(1503,199,1654,1036) )
# column chart with images on x-axis
p <- df %>%
ggplot() +
geom_col(mapping=aes(President, y=Days_in_office, fill=President)) +
scale_fill_manual(values=c("blue", "yellow", "red", "black")) +
labs(title="Axis Labels as Images") +
theme(plot.title = element_text(hjust = .5))
garfield <- readPNG("garfield.png")
kennedy <- readPNG("kennedy.png")
lincoln <- readPNG("lincoln.png")
mckinley <- readPNG("mckinley.png")
# x-axis labels are HTML <img> tags, rendered below by ggtext::element_markdown()
labels <-
c("<img src='garfield.png' width='35' />","<img src='kennedy.png' width='35' />","<img src='lincoln.png' width='35' />","<img src='mckinley.png' width='40' height='42' />")
p <- p +
scale_x_discrete(labels = labels) +
theme(axis.text.x = ggtext::element_markdown())
p # takes a moment to draw
# ==============================================================
suppressMessages(library(wordcloud))
df %>% with(wordcloud(words=President, freq=Days_in_office, random.order=FALSE, random.color=FALSE, rot.per = 0,
colors = c("blue","black", "red")))
# =====================================================
# lollipop plot
df %>%
ggplot() +
geom_segment(mapping=aes(x=President, xend=President, y=0, yend=Days_in_office), color=c("blue", "yellow", "red", "black") ) +
geom_point(aes(x=President, y=Days_in_office), size=4, color=c("blue", "yellow", "red", "black") ) +
ylab("Days_in_office") +
theme(axis.text = element_text(face="bold")) +
theme(text = element_text(size =16)) +
labs(title="Lolliplot Plot") +
theme(plot.title = element_text(hjust = .5))
df
# ===========================================================
# donut plot
donut <- df
donut$fraction = donut$Days_in_office / sum(donut$Days_in_office)
# Compute the cumulative percentages (top of each rectangle)
donut$ymax = cumsum(donut$fraction)
# Compute the bottom of each rectangle
donut$ymin = c(0, head(donut$ymax, n=-1))
# Compute label position
donut$labelPosition <- (donut$ymax + donut$ymin) / 2
# Create label
donut$label <- paste0(donut$President, "\n value: ", donut$Days_in_office)
# Make the plot
ggplot(donut, aes(ymax=ymax, ymin=ymin, xmax=4, xmin=3, fill=President)) +
geom_rect() +
geom_label( x=3.5, aes(y=labelPosition, label=label), size=6) +
scale_fill_brewer(palette=4) +
coord_polar(theta="y") +
xlim(c(2, 4)) +
theme_void() +
theme(legend.position = "none") +
theme(text = element_text(size =16)) +
labs(title="Donut Plot") +
theme(plot.title = element_text(hjust = .5))