One year later...

About a year ago, I started recording my sleep schedule in a Google Sheet. Something in the back of my unemployed mind told me that this would be good content for my new website, which I was building mainly in hopes of finding a job – after all, what potential employer wouldn’t want to see the kind of healthy bedtimes I was managing? I didn’t end up using the data because – surprisingly enough – it would be a long time before I had enough of it to amount to anything interesting. But I kept it up, and along the way I started tracking a few other things, like my exercise habits, reading lists, and pain management (for a chronic issue with my hip). Now that it’s been a year, I thought I’d share what I found with some brief reflections and some pretty charts.

First, reading and cleaning the data…

library(tidyverse)
library(googlesheets)
library(lubridate)
# `url` is the Google Sheet's address, defined outside this snippet
sheet_key <- extract_key_from_url(url)
gs <- gs_key(sheet_key)

work <- 
  gs %>% 
  gs_read(ws = "Work") %>% 
  select(-X1) %>% 
  rename_all(fix_name) %>% 
  rename(clock_in = `in`, clock_out = out) %>% 
  mutate(
    date = mdy(date)
  ) %>% 
  filter(!is.na(clock_out))

health <-
  gs %>% 
  gs_read(ws = "Health") %>% 
  select(-X1) %>% 
  rename_all(fix_name) %>% 
  mutate(
    date = mdy(date)
  ) %>% 
  filter(!is.na(bedtime)) %>% 
  filter(date >= mdy("07-01-2018")) 

books <-
  read_csv(file_library) %>% 
  separate(
    authors,
    into = c("author_1", "author_2", "author_3"), 
    sep = ","
  ) %>% 
  arrange(completed_date) %>% 
  filter(!is.na(pages)) %>% 
  mutate(
    month = month(completed_date, label = TRUE),
    year = year(completed_date),
    title_alpha = title,
    title = case_when(
      str_detect(title, ", The") ~ title %>% 
        str_remove(", The") %>% 
        str_c("The ", .),
      str_detect(title, ", A") ~ title %>% 
        str_remove(", A") %>% 
        str_c("A ", .),
      TRUE ~ title
    ),
    tags = case_when(
      is.na(tags) ~ "physical",
      TRUE ~ tags
    ),
    pages = case_when(
      # correcting a data error that bothered me
      title == "Homage to Catalonia" ~ 220,
      TRUE ~ as.double(pages)
    )
  )
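One note on the pipelines above: they call a small column-name helper, `fix_name`, that’s defined elsewhere in my setup code. As a hypothetical sketch of what it might do – lowercasing the headers and snake-casing them – something like this would fit, though the real helper may differ:

```r
library(stringr)

# Hypothetical stand-in for fix_name (the real helper is defined elsewhere):
# lowercase each column name, collapse runs of non-alphanumeric characters
# to underscores, and trim any leading or trailing underscore.
fix_name <- function(x) {
  x <- str_to_lower(x)
  x <- str_replace_all(x, "[^a-z0-9]+", "_")
  str_remove_all(x, "^_|_$")
}

fix_name(c("Date", "Clock In", "Wake Time"))
```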

Here’s what that sleep data looks like – apparently I’ve never been able to stick to a consistent wake time for more than a few weeks.

set.seed(94305)

health %>% 
  filter(!is.na(bedtime)) %>% 
  mutate(wake_time = lead(wake_time)) %>%  # pair each bedtime with the next morning's wake time
  filter(!is.na(wake_time)) %>% 
  mutate(
    noise = rnorm(n(), 0, 1 / 60),  # small shared jitter to reduce overplotting
    bedtime_hour = hour(bedtime) + (minute(bedtime)) / 60 + noise,
    wake_time_hour = hour(wake_time) + (minute(wake_time)) / 60 + noise,
    bedtime_hour = if_else(bedtime_hour > 12, bedtime_hour - 24, bedtime_hour),
    sleep_length = wake_time_hour - bedtime_hour
  ) %>% 
  ggplot(aes(date, wake_time_hour)) + 
  geom_segment(
    aes(yend = bedtime_hour, xend = date, color = sleep_length),
    show.legend = FALSE
  ) + 
  geom_point(color = custom_palette[1], size = .7) + 
  geom_point(aes(y = bedtime_hour), color = custom_palette[8], size = .7) +
  scale_y_continuous(
    breaks = c(-2.5, 0, 2.5, 5, 7.5, 10), 
    labels = c("9:30 PM", "12:00 AM", "2:30 AM", "5:00 AM", "7:30 AM", "10:00 AM")
  ) +
  scale_color_gradient(high = "#8B9BA8", low = "#0A081C") +
  coord_flip() + 
  labs(
    x = NULL,
    y = NULL,
    title = paste0("Sleep from ", min(health$date), " to ", max(health$date))
  ) +
  custom_theme
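For reference, the y-axis in that chart works by converting each clock time to hours relative to midnight, with times after noon wrapped to negative values so that bedtimes and wake times can share one scale. A minimal sketch of that conversion (the helper name here is mine, not from the code above):

```r
library(lubridate)

# Convert a clock time to hours relative to midnight, wrapping times after
# noon to negative values: 9:30 PM becomes -2.5, while 7:30 AM stays 7.5.
to_axis_hour <- function(t) {
  h <- hour(t) + minute(t) / 60
  ifelse(h > 12, h - 24, h)
}

to_axis_hour(hms("21:30:00"))  # -2.5
to_axis_hour(hms("07:30:00"))  # 7.5
```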

The most painful day of the past year happened to be last Tuesday. Fun!

start_label <- 
  tribble(
    ~ wday, ~ week, ~ pain, ~ label,
    "Sat", 0, max(health$pain), "Start →"
  )

health %>% 
  mutate(
    week = week(date) + (year(date) - 2018) * 52 - 25,  # weeks since early July 2018
    wday = wday(date, label = TRUE) %>% 
      fct_relevel("Mon", "Tue", "Wed", "Thu", "Fri", "Sat")
  ) %>% 
  ggplot(aes(wday, week, alpha = pain)) + 
  geom_tile(fill = custom_palette[4]) + 
  geom_text(data = start_label, aes(label = label), color = "#8B9BA8", size = 2.5) +
  scale_y_reverse() +
  coord_fixed(ratio = .2) +
  custom_theme + 
  theme(axis.ticks = element_blank()) + 
  guides(alpha = guide_legend(override.aes = list(label = ""))) + 
  labs(
    y = "Week", 
    x = NULL,
    title = "Pain intensity over time",
    subtitle = "July 2018 - July 2019",
    alpha = "Intensity\n(0-5)"
  )

Working on the campaign in Nevada was a fantastic experience, but there is something to be said for the wonders of a steady 9-5.

work %>% 
  ggplot(aes(date, clock_in)) + 
  geom_segment(aes(xend = date, yend = clock_out), color = custom_palette[8], size = .2) +
  geom_point(color = custom_palette[2], size = .5) + 
  geom_point(aes(y = clock_out), color = custom_palette[2], size = .5) + 
  coord_flip() + 
  custom_theme +
  labs(
    x = NULL,
    y = NULL,
    title = "Work hours",
    subtitle = "August 2018 - July 2019"
  ) 

I used an app called Libib to track my reading – you can scan ISBN barcodes with your phone to build your collection, complete with data about authors, publishers, page numbers, etc. Since settling into life in Seattle, I’ve been taking advantage of my off-hours and reading a lot more.

books %>% 
  filter(!is.na(began_date)) %>% 
  arrange(completed_date) %>% 
  mutate(cum_pages = cumsum(pages)) %>% 
  ggplot(aes(completed_date, cum_pages)) +
  geom_point(color = custom_palette[1]) + 
  geom_line(color = custom_palette[1]) + 
  geom_vline(xintercept = mdy("11/08/2018"), color = custom_palette[2], lty = 2) +
  geom_text(
    data = tribble(
      ~ completed_date, ~ cum_pages, ~ text,
      mdy("12/10/2018"), 15000, "End of campaign,\nmoved to Seattle"
    ),
    aes(label = text),
    color = "#8B9BA8",
    size = 3
  ) +
  scale_y_continuous(labels = scales::comma) +
  custom_theme + 
  labs(
    title = "Cumulative page progression since July 2018",
    x = NULL,
    y = "Cumulative Pages Read"
  )

Here’s a more detailed look at the books I’ve read, in order.

books %>%
  filter(!is.na(began_date)) %>% 
  arrange(desc(completed_date)) %>% 
  mutate(
    title_alpha = if_else(
      str_detect(title_alpha, "\\:"), 
      str_extract(title_alpha, ".*(?=\\:)"), 
      title_alpha
    )
  ) %>% 
  ggplot(
    aes(
      began_date, 
      reorder(title_alpha, completed_date), 
      xend = completed_date, 
      yend = title_alpha,
      size = pages
    )
  ) +
  geom_segment(color = custom_palette[1]) +
  custom_theme +
  theme(axis.text.y = element_text(size = 7)) +
  labs(
    x = NULL,
    y = NULL,
    size = "Pages",
    title = "Book by book progression",
    subtitle = "July 2018 - July 2019"
  )

And here’s everything broken down by author, as well.

books %>% 
  filter(completed_date >= mdy("07/01/2018")) %>% 
  group_by(author_1) %>% 
  summarise(
    tot_pages = sum(pages),
    n_books = n()
  ) %>% 
  ggplot(aes(reorder(author_1, tot_pages), tot_pages, size = n_books)) +
  geom_point(color = custom_palette[1]) + 
  coord_flip() + 
  scale_y_continuous(labels = scales::comma) + 
  scale_size_continuous(range = c(.5, 2)) +
  labs(
    x = NULL, 
    y = "Pages read",
    title = "Most-read authors by page count",
    subtitle = "July 2018 - July 2019",
    size = "# of Books"
  ) + 
  custom_theme + 
  theme(axis.text.y = element_text(size = 7))

This has been a year of transition: I graduated from Stanford, moved to Las Vegas to work on a campaign, and eventually settled in Seattle to work at a biotech company. This data turned out to be interesting because it challenged my assumptions about what’s really changed during that time. Some things are obvious: working in the private sector has given me a much more predictable work schedule, which in turn has left more time to read. At the same time, I’m a little surprised to see that my sleep times haven’t shifted all that much since I switched jobs, and that my pain appears to have gotten a little worse since I moved to Seattle.

Anyway, this project has been fun, and as I’ve written here before, I think personal data is an underrated way to encourage self-reflection. I’m not sure how much longer I’ll be able to keep up the spreadsheet habit, but I’m looking forward to the next year nonetheless.
