How do I count thee? Let me count the ways?

Thursday, October 23, 2025

Change is good, so don't change my change

      One of my former insurance colleagues once said, "When a company makes a change, it's probably not going to benefit you."

      Assuming the above photo is legitimate, and of course nowadays who knows, McDonald's says it will be rounding cash change to the nearest five cents. McD is probably not the only one doing it. So if your change has last digit {1, 2, 6, 7} McD will round down and you will lose 1 or 2 cents; if your change has last digit {3, 4, 8, 9} McD will round up and you will gain 1 or 2 cents; if your change has last digit {0, 5} rounding has no effect. So for 40% of the possible last digits you lose, for 40% you gain, and for 20% there is no effect. The issue is whether the ten last digits 0 through 9 are equally likely.
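
      A minimal sketch of the rule in R (my code, not McDonald's): round the cents of change to the nearest multiple of five, and tabulate the customer's gain or loss for each possible last digit. If all ten last digits were equally likely, the losses and gains would cancel exactly.

# Round cents to the nearest multiple of 5
# (safe here: for integer cents, cents/5 never lands exactly on .5)
round_to_5 <- function(cents) 5 * round(cents / 5)

digits <- 0:9
change <- 10 + digits                 # representative change amounts, one per last digit
gain <- round_to_5(change) - change   # positive = customer gains, negative = loses
rbind(last_digit = digits, gain_to_customer = gain)
mean(gain)                            # 0: the rule is fair if last digits are uniform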

      All I need is a nice sample of McD receipts from cash transactions. Well, that's not going to happen. Let's try a different approach.

      As with other retail prices, I assume many meal prices end in 99 cents, such as $5.99, to appear less expensive. (I was never asked to do this as an actuary.) Marketers know there is a left-digit effect, where customers focus on the leftmost number, so a price like $5.99 feels significantly cheaper than $6.00. See Psychological pricing. However, even if a majority of meal prices end in 99 cents, I believe the effect of that majority is neutralized by the price and tax variation described next.

      I have not been to a McDonald's in years. But I do know there are many different menu items; I have no idea of the relative frequency of, say, a Big Mac versus a Chicken McNuggets purchase, let alone the frequencies of all possible combinations of menu items; prices vary by location because franchises set their own pricing; and there is also a sales tax that differs by state and sometimes by city within a state. No doubt a McD data analyst has access to all this data, but I don't. So what follows is not an exact analysis, but rather an approach that is intended to be unbiased.

      At first I thought Benford's Law might be useful. This law applies to the distribution of first digits, not last digits, and says that under certain conditions (such as when values span several orders of magnitude, which is probably not true for a place like McDonald's), P(1) = 30.1%, P(2) = 17.6%, P(3) = 12.5%, and so on. This is interesting, but not useful here. See Benford's Law for more on Benford.
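
      For reference, the Benford probabilities quoted above come from P(d) = log10(1 + 1/d) for first digit d; a one-liner reproduces them:

# Benford's Law: P(first digit = d) = log10(1 + 1/d)
d <- 1:9
round(log10(1 + 1/d), 3)   # 0.301 0.176 0.125 0.097 0.079 0.067 0.058 0.051 0.046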

      In How to Solve It, George Polya suggests, "If you cannot solve the proposed problem, try to solve first some related problem."

      I discovered that if I limit the problem to a single meal item, a McDonald's Big Mac Meal (MBMM), I could get an average (well, more of a representative) price by state. I could also apply an average (again, representative) sales tax by state.

      For each state, I took the MBMM price, added tax, and applied the McD rounding rule to the last digit. For each state I define Cents as the after-tax cents portion of the MBMM price, CentsRounded as the cents portion after the McD rounding rule, and Rounding Difference = CentsRounded - Cents. The unweighted state average Rounding Difference was 0.04 cents. Not 4 cents, but 4% of a penny. This is positive, indicating a slight gain to McDonald's, but barely greater than zero. This 0.04 cents is per transaction; of course, McDonald's does sell a lot of hamburgers.
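
      To make the mechanics concrete, here is the calculation for one state, using the Kentucky price and tax rate from the code below:

# Worked example: Kentucky, $7.79 MBMM at 6.00% sales tax
price <- 7.79; tax <- 0.06
total_cents   <- round(price * (1 + tax) * 100)  # 826 cents
cents         <- total_cents %% 100              # 26: last digit 6, so round down
cents_rounded <- floor(cents / 5) * 5            # 25
cents_rounded - cents                            # -1: the customer loses a penny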

      This study had a lot of limitations, some of which I noted above, so it is certainly not exhaustive. But to the extent I did what I could, my friend in the opening paragraph was right: "When a company makes a change, it's probably not going to benefit you."

      The R code is as follows:


library(dplyr)

# --- 1. Big Mac Meal Prices by State (approx, USD) ---
meal_prices <- tibble::tribble(
  ~State, ~Price,
  "Alabama", 9.49, "Alaska", 11.59, "Arizona", 12.99, "Arkansas", 8.79,
  "California", 10.69, "Colorado", 9.89, "Connecticut", 9.79, "Delaware", 8.99,
  "Florida", 9.39, "Georgia", 9.49, "Hawaii", 10.99, "Idaho", 9.29,
  "Illinois", 9.59, "Indiana", 8.99, "Iowa", 8.89, "Kansas", 9.09,
  "Kentucky", 7.79, "Louisiana", 9.69, "Maine", 9.19, "Maryland", 9.49,
  "Massachusetts", 9.99, "Michigan", 8.59, "Minnesota", 9.19, "Mississippi", 9.29,
  "Missouri", 8.99, "Montana", 9.09, "Nebraska", 8.59, "Nevada", 9.69,
  "New Hampshire", 8.99, "New Jersey", 9.49, "New Mexico", 9.09, "New York", 9.89,
  "North Carolina", 9.29, "North Dakota", 10.59, "Ohio", 8.89, "Oklahoma", 8.99,
  "Oregon", 10.69, "Pennsylvania", 9.19, "Rhode Island", 9.49, "South Carolina", 9.29,
  "South Dakota", 9.09, "Tennessee", 9.79, "Texas", 9.19, "Utah", 9.39,
  "Vermont", 9.19, "Virginia", 8.99, "Washington", 9.69, "West Virginia", 8.99,
  "Wisconsin", 9.19, "Wyoming", 8.99
)

# --- 2. Combined State + Local Sales Tax Rates (approx, fraction) ---
tax_rates <- tibble::tribble(
  ~State, ~TaxRate,
  "Alabama",0.0944,"Alaska",0.0182,"Arizona",0.0837,"Arkansas",0.0948,
  "California",0.0885,"Colorado",0.0780,"Connecticut",0.0635,"Delaware",0.0000,
  "Florida",0.0700,"Georgia",0.0739,"Hawaii",0.0450,"Idaho",0.0602,
  "Illinois",0.0874,"Indiana",0.0700,"Iowa",0.0689,"Kansas",0.0874,
  "Kentucky",0.0600,"Louisiana",0.1011,"Maine",0.0550,"Maryland",0.0600,
  "Massachusetts",0.0625,"Michigan",0.0600,"Minnesota",0.0749,"Mississippi",0.0707,
  "Missouri",0.0813,"Montana",0.0000,"Nebraska",0.0696,"Nevada",0.0849,
  "New Hampshire",0.0000,"New Jersey",0.0660,"New Mexico",0.0777,"New York",0.0852,
  "North Carolina",0.0698,"North Dakota",0.0696,"Ohio",0.0724,"Oklahoma",0.0908,
  "Oregon",0.0000,"Pennsylvania",0.0634,"Rhode Island",0.0700,"South Carolina",0.0744,
  "South Dakota",0.0640,"Tennessee",0.0961,"Texas",0.0819,"Utah",0.0702,
  "Vermont",0.0636,"Virginia",0.0567,"Washington",0.0947,"West Virginia",0.0648,
  "Wisconsin",0.0572,"Wyoming",0.0556
)

# --- 3. Merge datasets ---
df <- inner_join(meal_prices, tax_rates, by = "State")

# --- 4. Compute totals and apply rounding rule ---
options(tibble.width = Inf)   # print all columns of the tibble
df <- df %>%
  mutate(
    # Total in cents, rounded to nearest cent
    Total_cents = round(Price * (1 + TaxRate) * 100),
    Dollars = Total_cents %/% 100,    # whole dollars
    Cents = Total_cents %% 100,       # cents 0-99
    
    # Apply 5-cent rounding rule
    CentsRounded = sapply(Cents, function(x) {
      last_digit <- x %% 10
      if (last_digit %in% c(1,2,6,7)) {
        return(floor(x / 5) * 5)
      } else if (last_digit %in% c(3,4,8,9)) {
        return(ceiling(x / 5) * 5)
      } else {
        return(x)
      }
    }),
    
    # Final total in dollars
    TotalRounded = Dollars + CentsRounded / 100,
    
    # Rounding difference relative to nearest-cent total (in cents)
    RoundingDiff = CentsRounded - Cents
  )
head(df)

# --- 5. Summaries ---
mean_diff <- mean(df$RoundingDiff)   # positive is benefit to company
sd_diff   <- sd(df$RoundingDiff)
avg_abs   <- mean(abs(df$RoundingDiff))

cat("Average rounding difference (¢):", round(mean_diff,3), "\n")
cat("SD of rounding difference (¢):", round(sd_diff,3), "\n")
cat("Average absolute rounding (¢):", round(avg_abs,3), "\n\n")

# Distribution of rounded cents
print(table(df$CentsRounded))

# --- 6. Histogram of rounding differences ---
bin_colors <- c("red", "green", "blue", "yellow", "purple")
hist(df$RoundingDiff,
     breaks = seq(-2.5, 2.5, 0.5),
     col = bin_colors,
     main = "Distribution of Rounding Differences\nPositive is benefit to company",
     xlab = "Rounding Difference (¢)",
     ylab = "Number of States",
     font.lab = 2)

# --- 7. State-by-state table of rounding effects ---
state_table <- as.data.frame(df) %>%
  select(State, Price, TaxRate, Total_cents, TotalRounded, RoundingDiff) %>%
  arrange(desc(RoundingDiff))

print(state_table)
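
      A quick follow-up, assuming the df built above is still in memory: a one-sample t-test of whether the mean Rounding Difference is distinguishable from zero. With only 50 states and differences of at most 2 cents each way, I would not expect significance; the test simply formalizes "barely greater than zero."

# --- 8. Is the mean rounding difference distinguishable from zero? ---
# One-sample t-test against mu = 0 (positive = benefit to company)
t.test(df$RoundingDiff, mu = 0)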
  
  
End
