rstudio::conf 2020 Tidy Forecasting Workshop (Day 2) (Part 1)

Data Science
R
Time Series
Author

Robert Lankford

Published

February 5, 2020

This post documents my experience in the Tidy Time Series and Forecasting in R workshop at rstudio::conf 2020 that took place in San Francisco, CA during January 27-30, 2020.

Following what was presented at the two-day workshop, this second post (part one of day two) will be "everything that is forecasting". The second day focused on how to use the tidyverts packages in forecasting. Due to the length of the material covered in the second day, it has been split into two posts. This first post begins with the modeling algorithms.


Setup

The following packages are required:
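
The original setup chunk did not survive this rendering; based on the attached packages listed in the session information at the end of this post, the library calls likely looked something like the following (a best guess, not the original code):

library(dplyr)       # general data manipulation
library(tidyr)       # drop_na()
library(lubridate)   # year()
library(stringr)     # str_glue()
library(tsibble)     # tsibble data structures
library(tsibbledata) # aus_retail and global_economy data sets
library(fable)       # MEAN(), NAIVE(), SNAIVE(), RW(), ETS(), ARIMA()
library(feasts)      # STL(), unitroot_kpss(), and other features
library(ggplot2)     # plotting
library(kableExtra)  # kable()/kable_styling() table formatting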

The following data sets are used in this post.

  1. Monthly retail turnover for department stores in Victoria, Australia
  2. Yearly economic indicator values for Australia
victoria_dept_stores_tsbl <- aus_retail %>% 
  filter(
    Industry    == "Department stores",
    State       == "Victoria",
    year(Month) >= 2000
  ) %>% 
  select(Month, Turnover)

victoria_dept_stores_tsbl
# A tsibble: 228 x 2 [1M]
      Month Turnover
      <mth>    <dbl>
 1 2000 Jan     228 
 2 2000 Feb     196.
 3 2000 Mar     224.
 4 2000 Apr     261.
 5 2000 May     265.
 6 2000 Jun     299.
 7 2000 Jul     206.
 8 2000 Aug     240.
 9 2000 Sep     251.
10 2000 Oct     270.
# … with 218 more rows
aus_economy_tsbl <- global_economy %>% 
  filter(Country == "Australia") %>% 
  select(Year, Growth, CPI, Imports, Exports) %>% 
  drop_na() %>% 
  as_tsibble(index = Year)

aus_economy_tsbl
# A tsibble: 57 x 5 [1Y]
    Year Growth   CPI Imports Exports
   <dbl>  <dbl> <dbl>   <dbl>   <dbl>
 1  1961   2.49  8.14    15.0    12.4
 2  1962   1.30  8.12    12.6    13.9
 3  1963   6.21  8.17    13.8    13.0
 4  1964   6.98  8.40    13.8    14.9
 5  1965   5.98  8.69    15.3    13.2
 6  1966   2.38  8.98    15.1    12.9
 7  1967   6.30  9.29    13.9    12.9
 8  1968   5.10  9.52    14.5    12.3
 9  1969   7.04  9.83    13.3    12.0
10  1970   7.17 10.2     13.2    13.0
# … with 47 more rows

Time-Series Decomposition

Many time-series can be broken into three components:

  1. Seasonality: the periodic repeated fluctuations that occur at a known, fixed period (e.g. monthly patterns)
  2. Trend-Cycle: the progression of the series over the long-term (trend) and the non-periodic repeated fluctuations (cycle)
  3. Remainder: the random, irregular “noise” that is leftover after removing the previous two components

The feasts package provides the STL() function to easily decompose a time-series. STL stands for Seasonal and Trend decomposition using Loess.

victoria_dept_stores_stl_mbl <- victoria_dept_stores_tsbl %>% 
  model(STL = STL(Turnover))

victoria_dept_stores_stl_mbl
# A mable: 1 x 1
      STL
  <model>
1   <STL>

After fitting an STL model, passing the resulting mable (model table) into the components() function (from the fabletools package) returns a dable (decomposition table) containing all of the decomposition components.

components(victoria_dept_stores_stl_mbl)
# A dable: 228 x 7 [1M]
# Key:     .model [1]
# :        Turnover = trend + season_year + remainder
   .model    Month Turnover trend season_year remainder season_adjust
   <chr>     <mth>    <dbl> <dbl>       <dbl>     <dbl>         <dbl>
 1 STL    2000 Jan     228   271.      -43.1      0.279          271.
 2 STL    2000 Feb     196.  272.      -84.0      8.10           280.
 3 STL    2000 Mar     224.  273.      -34.0    -14.6            258.
 4 STL    2000 Apr     261.  274.      -19.1      5.82           280.
 5 STL    2000 May     265.  275.      -14.1      4.42           279.
 6 STL    2000 Jun     299.  276.        2.94    19.6            296.
 7 STL    2000 Jul     206.  277.      -28.4    -42.6            234.
 8 STL    2000 Aug     240.  278.      -50.9     13.0            291.
 9 STL    2000 Sep     251.  279.      -39.3     11.2            290.
10 STL    2000 Oct     270.  280.      -12.9      2.73           283.
# … with 218 more rows

The dable can then be passed into the autoplot() function to plot the original series and the decomposed components.

victoria_dept_stores_stl_mbl %>% 
  components() %>% 
  autoplot() +
  theme_bw()
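
The dable also contains a season_adjust column, the original series with the seasonal component removed (Turnover minus season_year). As a minimal sketch (not part of the original workshop code), the seasonally adjusted series can be pulled out and plotted on its own:

# Convert the dable back to a tsibble and plot only the
# seasonally adjusted series
components(victoria_dept_stores_stl_mbl) %>% 
  as_tsibble() %>% 
  autoplot(season_adjust) +
  theme_bw()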

Note that the trend and seasonality of a decomposition do not have to be fixed; they are allowed to change over time. The trend() and season() helper functions can be added to the STL() formula to specify the window, which is the number of consecutive observations (time periods) used when estimating the trend-cycle or seasonal component. The shorter the window, the more quickly the trend or seasonality is allowed to change. At the extreme, setting the seasonal window to infinity essentially assumes that the seasonal pattern is identical in every period.

victoria_dept_stores_tsbl %>% 
  model(STL = STL(Turnover ~ trend(window = 24) + season(window = 48))) %>% 
  components() %>% 
  autoplot() +
  theme_bw()

Baseline Methods

The fable package contains several "basic" forecasting methods. These methods can at times prove quite useful and even produce good forecasts; more often, however, they are used as baselines against which to compare other, often more complicated, forecasting methods. These methods include:

  • MEAN() (mean): the mean of the entire series is projected forward as the forecast
  • NAIVE() (naive): the last value of the series is projected forward as the forecast
  • SNAIVE() (seasonal naive): the last value of each season of the series (e.g. the quarter or month last year) is projected forward as the forecast
  • RW() (random walk): the last value of the series, plus the average change across the whole series, is projected forward as the forecast

All of these methods can be fit to the data at the same time within the model() function. Notice that the random walk model requires the drift() helper function; without drift, a random walk is just the naive model.

victoria_dept_stores_baseline_mbl <- victoria_dept_stores_tsbl %>% 
  model(
    mean     = MEAN(Turnover),
    naive    = NAIVE(Turnover),
    snaive   = SNAIVE(Turnover),
    rw_drift = RW(Turnover ~ drift())
  )

victoria_dept_stores_baseline_mbl
# A mable: 1 x 4
     mean   naive   snaive      rw_drift
  <model> <model>  <model>       <model>
1  <MEAN> <NAIVE> <SNAIVE> <RW w/ drift>

Once we have these models, we can forecast across an arbitrary time horizon with the forecast() function, which returns the titular fable (forecast table). The fable contains the model type, the time index, the point forecast, and the parameters of the forecast distribution as a <dist> list-column.

victoria_dept_stores_baseline_fbl <- victoria_dept_stores_baseline_mbl %>%
  forecast(h = "2 years")

victoria_dept_stores_baseline_fbl
# A fable: 96 x 4 [1M]
# Key:     .model [4]
   .model    Month      Turnover .mean
   <chr>     <mth>        <dist> <dbl>
 1 mean   2019 Jan N(348, 10513)  348.
 2 mean   2019 Feb N(348, 10513)  348.
 3 mean   2019 Mar N(348, 10513)  348.
 4 mean   2019 Apr N(348, 10513)  348.
 5 mean   2019 May N(348, 10513)  348.
 6 mean   2019 Jun N(348, 10513)  348.
 7 mean   2019 Jul N(348, 10513)  348.
 8 mean   2019 Aug N(348, 10513)  348.
 9 mean   2019 Sep N(348, 10513)  348.
10 mean   2019 Oct N(348, 10513)  348.
# … with 86 more rows

A fable with multiple models is sorted by model. To compare the different models' forecasts for the same months, we can pull out the first few forecasts from each model.

victoria_dept_stores_baseline_fbl %>% 
  group_by(.model) %>% 
  filter(row_number() %in% 1:3) %>% 
  as_tibble() %>% 
  mutate(
    Month = as.character(Month),
    .mean = as.numeric(.mean)
  ) %>% 
  kable() %>% 
  kable_styling(bootstrap_options = c("striped", "hover"), full_width = FALSE)
.model   Month    Turnover      .mean
mean     2019 Jan N(348, 10513) 348.0303
mean     2019 Feb N(348, 10513) 348.0303
mean     2019 Mar N(348, 10513) 348.0303
naive    2019 Jan N(724, 16030) 723.7000
naive    2019 Feb N(724, 32059) 723.7000
naive    2019 Mar N(724, 48089) 723.7000
snaive   2019 Jan N(354, 335)   354.3000
snaive   2019 Feb N(278, 335)   277.8000
snaive   2019 Mar N(365, 335)   364.9000
rw_drift 2019 Jan N(726, 16167) 725.8837
rw_drift 2019 Feb N(728, 32475) 728.0674
rw_drift 2019 Mar N(730, 48926) 730.2511

A fable can be passed directly into the autoplot() function to plot the forecasts. Note that we also get prediction intervals around each forecast; the point forecast is the mean of the forecast distribution.

victoria_dept_stores_baseline_fbl %>% 
  autoplot(victoria_dept_stores_tsbl) +
  facet_wrap(~ .model, ncol = 1, scales = "free_y") +
  theme_bw() +
  scale_y_continuous(
    labels = scales::label_dollar(accuracy = 0.1, scale = 1e-3, suffix = "B")
  ) +
  labs(
    title    = "Two-Year Forecast of Yearly Retail Turnover for Victoria",
    subtitle = "Models: Mean, Naive, Seasonal Naive, and Random Walk w/ Drift",
    x        = "",
    y        = "Australian Dollars"
  )
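
If the interval bounds themselves are needed rather than a plot, the hilo() function from fabletools can unpack the forecast distributions into explicit prediction intervals. A minimal sketch (not shown in the workshop):

# Extract 80% and 95% prediction intervals from the forecast distributions
victoria_dept_stores_baseline_fbl %>% 
  hilo(level = c(80, 95))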

The plotted prediction intervals can be turned off by setting level = NULL in the autoplot() function.

victoria_dept_stores_baseline_fbl %>% 
  autoplot(victoria_dept_stores_tsbl, level = NULL) +
  facet_wrap(~ .model, ncol = 1) +
  theme_bw() +
  scale_y_continuous(
    labels = scales::label_dollar(accuracy = 0.1, scale = 1e-3, suffix = "B")
  ) +
  labs(
    title    = "Two-Year Forecast of Yearly Retail Turnover for Victoria",
    subtitle = "Models: Mean, Naive, Seasonal Naive, and Random Walk w/ Drift",
    x        = "",
    y        = "Australian Dollars"
  )

Complex Methods

We can now start looking at more complicated algorithms. These models take a bit more computational time and power to fit, and are more difficult to interpret; however, they can often outperform the baseline models. The models introduced in the workshop were:

  • Exponential Smoothing (ETS)
  • Autoregressive Integrated Moving Average (ARIMA)
  • Dynamic Regression/Regression with ARIMA Errors (RegARIMA)

Note that the Dynamic Regression method is covered in the next post, which details the second half of day two.

Exponential Smoothing

ExponenTial Smoothing models, also called Error, Trend, and Seasonality models and abbreviated ETS, attempt to forecast a time-series based on its "Level", "Trend", and "Seasonal" components. Smoothing parameters control how quickly each of those components is allowed to change. An ETS forecast is effectively a weighted average of past observations in which the weights decay exponentially, so the most recent observations carry more weight than the oldest ones (a small illustration of this follows the list below). These components can be:

  • Additive: the error, trend, and seasonality are added together to get the resulting time-series
  • Multiplicative: the error, trend, and seasonality are multiplied together to get the resulting time-series
  • Damped: specifically for the trend component, instead of increasing indefinitely, the long-term forecasts will eventually “flatten out”
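
As a small illustration of the exponentially decaying weights mentioned above (not part of the workshop materials; the value of alpha is purely illustrative):

# Weights implied by simple exponential smoothing with smoothing parameter
# alpha: the most recent observation gets weight alpha, the one before it
# alpha * (1 - alpha), and so on
alpha   <- 0.3
weights <- alpha * (1 - alpha)^(0:9)
round(weights, 3)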

There are 15 combinations of Exponential Smoothing models available through the fable package's ETS() function. Within ETS(), you can specify the error, trend, and season components; if you do not specify them, ETS() will automatically pick the combination that minimizes the AICc (a short sketch of this follows the list below). The options are:

  • Additive: method = "A"
  • Multiplicative: method = "M"
  • Additive Damped: method = "Ad"
  • Multiplicative Damped: method = "Md"
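
A minimal sketch of the automatic selection mentioned above (mirroring the automatic ARIMA() call used later in this post):

# Let ETS() choose the error, trend, and season components by minimizing the AICc
victoria_dept_stores_tsbl %>% 
  model(ets_auto = ETS(Turnover)) %>% 
  report()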

The typical way of expressing an Exponential Smoothing model is ETS(Error, Trend, Seasonal), so an ETS(M,N,A) has multiplicative errors, no trend, and additive seasonality. Note below that we do not fit any models with both additive errors and multiplicative seasonality; Dr. Hyndman recommended against this combination, as it almost never produces good results.

victoria_dept_stores_ets_mbl <- victoria_dept_stores_tsbl %>% 
  model(
    ANN  = ETS(Turnover ~ error("A") + trend("N") + season("N")),
    AAN  = ETS(Turnover ~ error("A") + trend("A") + season("N")),
    AAdN = ETS(Turnover ~ error("A") + trend("Ad") + season("N")),
    ANA  = ETS(Turnover ~ error("A") + trend("N") + season("A")),
    AAA  = ETS(Turnover ~ error("A") + trend("A") + season("A")),
    AAdA = ETS(Turnover ~ error("A") + trend("Ad") + season("A")),
    
    MNN  = ETS(Turnover ~ error("M") + trend("N") + season("N")),
    MAN  = ETS(Turnover ~ error("M") + trend("A") + season("N")),
    MAdN = ETS(Turnover ~ error("M") + trend("Ad") + season("N")),
    MNA  = ETS(Turnover ~ error("M") + trend("N") + season("A")),
    MAA  = ETS(Turnover ~ error("M") + trend("A") + season("A")),
    MAdA = ETS(Turnover ~ error("M") + trend("Ad") + season("A")),
    MNM  = ETS(Turnover ~ error("M") + trend("N") + season("M")),
    MAM  = ETS(Turnover ~ error("M") + trend("A") + season("M")),
    MAdM = ETS(Turnover ~ error("M") + trend("Ad") + season("M"))
  )

glimpse(victoria_dept_stores_ets_mbl)
Rows: 1
Columns: 15
$ ANN  <model> [ETS(A,N,N)]
$ AAN  <model> [ETS(A,A,N)]
$ AAdN <model> [ETS(A,Ad,N)]
$ ANA  <model> [ETS(A,N,A)]
$ AAA  <model> [ETS(A,A,A)]
$ AAdA <model> [ETS(A,Ad,A)]
$ MNN  <model> [ETS(M,N,N)]
$ MAN  <model> [ETS(M,A,N)]
$ MAdN <model> [ETS(M,Ad,N)]
$ MNA  <model> [ETS(M,N,A)]
$ MAA  <model> [ETS(M,A,A)]
$ MAdA <model> [ETS(M,Ad,A)]
$ MNM  <model> [ETS(M,N,M)]
$ MAM  <model> [ETS(M,A,M)]
$ MAdM <model> [ETS(M,Ad,M)]

We can see what the smoothing parameters are by selecting a specific model and using the report() function.

victoria_dept_stores_ets_mbl %>% 
  select(MAdA) %>% 
  report()
Series: Turnover 
Model: ETS(M,Ad,A) 
  Smoothing parameters:
    alpha = 0.1257331 
    beta  = 0.000100141 
    gamma = 0.1437031 
    phi   = 0.9799871 

  Initial states:
     l[0]     b[0]     s[0]  s[-1]     s[-2]     s[-3]     s[-4]     s[-5]
 272.4276 2.041316 297.5088 35.676 -15.14924 -44.51365 -55.06139 -17.59358
     s[-6]     s[-7]    s[-8]     s[-9]    s[-10]    s[-11]
 -3.003334 -18.80118 -17.5564 -31.61395 -90.18424 -39.70783

  sigma^2:  0.0018

     AIC     AICc      BIC 
2471.476 2474.749 2533.204 

You can also use the tidy() function.

victoria_dept_stores_ets_mbl %>% 
  select(MAdA) %>% 
  tidy() %>% 
  mutate(across(where(is.numeric), round, digits = 2)) %>% 
  kable() %>% 
  kable_styling(bootstrap_options = c("striped", "hover"), full_width = FALSE)
.model term   estimate
MAdA   alpha      0.13
MAdA   beta       0.00
MAdA   gamma      0.14
MAdA   phi        0.98
MAdA   l[0]     272.43
MAdA   b[0]       2.04
MAdA   s[0]     297.51
MAdA   s[-1]     35.68
MAdA   s[-2]    -15.15
MAdA   s[-3]    -44.51
MAdA   s[-4]    -55.06
MAdA   s[-5]    -17.59
MAdA   s[-6]     -3.00
MAdA   s[-7]    -18.80
MAdA   s[-8]    -17.56
MAdA   s[-9]    -31.61
MAdA   s[-10]   -90.18
MAdA   s[-11]   -39.71
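
Since report() only shows the AIC, AICc, and BIC of a single model, the glance() function (also from fabletools) can be used to compare these criteria across all 15 candidate models at once. A minimal sketch, assuming the usual glance() columns for ETS models:

# Rank the candidate ETS models by AICc
victoria_dept_stores_ets_mbl %>% 
  glance() %>% 
  select(.model, AICc) %>% 
  arrange(AICc)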

Selecting just a few of these models, we can see how their forecasts over the next two years play out.

victoria_dept_stores_ets_mbl %>% 
  select(ANA, AAdA, MAdA, MAM) %>% 
  forecast(h = "2 years") %>% 
  autoplot(victoria_dept_stores_tsbl) +
  facet_wrap(~ .model, ncol = 1) +
  theme_bw() +
  scale_y_continuous(
    labels = scales::label_dollar(accuracy = 0.1, scale = 1e-3, suffix = "B")
  ) +
  labs(
    title    = "Two-Year Forecast of Yearly Retail Turnover for Victoria",
    subtitle = "Models: ETS(A,N,A), ETS(A,Ad,A), ETS(M,Ad,A), ETS(M,A,M)",
    x        = "",
    y        = "Australian Dollars"
  )

ARIMA

ARIMA models are composed of three terms:

  • Autoregressive (AR): lagged observations of the series
  • Integrated (I): differencing of the series to make it stationary
  • Moving Average (MA): lagged errors

Stationarity is a key assumption of this model, and the reason for the "I" term. Essentially, a series is stationary when its statistical properties do not depend on the time at which it is observed. In other words, its patterns are not predictable in the long run (the past cannot necessarily predict the future) and the series is roughly horizontal with constant variance. If this assumption is not met, the data can be differenced (the observation at time t+1 minus the observation at time t), and the differenced series will often be stationary. According to Dr. Hyndman, data should be differenced at most twice.

The feasts package provides the unitroot_kpss() function to test the stationarity assumption. In the KPSS test, the null hypothesis is essentially that the series is stationary (the alternative being that it contains a unit root). If the test returns a p-value of less than 0.05, we should difference the data and test again.

victoria_dept_stores_tsbl %>% 
  features(Turnover, unitroot_kpss) %>% 
  mutate(across(where(is.numeric), round, digits = 2)) %>% 
  kable() %>% 
  kable_styling(bootstrap_options = c("striped", "hover"), full_width = FALSE)
kpss_stat kpss_pvalue
1.98 0.01

Should we receive a result with a p-value of less than 0.05 (as above), we can use the difference() function within features() to difference the series and then apply the KPSS test again.

victoria_dept_stores_tsbl %>% 
  features(difference(Turnover), unitroot_kpss) %>% 
  mutate(across(where(is.numeric), round, digits = 2)) %>% 
  kable() %>% 
  kable_styling(bootstrap_options = c("striped", "hover"), full_width = FALSE)
kpss_stat kpss_pvalue
0.04 0.1
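
Rather than testing one difference at a time, the feasts package also provides the unitroot_ndiffs() and unitroot_nsdiffs() features, which report how many ordinary and seasonal differences are suggested. A minimal sketch (not covered in the workshop section above):

# Suggested number of ordinary and seasonal differences
victoria_dept_stores_tsbl %>% 
  features(Turnover, list(unitroot_ndiffs, unitroot_nsdiffs))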

Once we have determined whether differencing is required, we can move on to fitting ARIMA models. We can pick the AR, differencing, and MA terms using the pdq() special within the ARIMA() formula. In ARIMA terminology:

  • Parameter ‘p’ is the number of AR terms (lags of the response variable)
  • Parameter ‘d’ is the number of times to difference the series
  • Parameter ‘q’ is the number of MA terms (lags of the error terms)

Note that these lags are linear in time, meaning they are the period(s) directly before the base period, and they do not account for seasonality. For example, with monthly data, a lag of one takes the previous month, a lag of two takes the month two periods before, and so on. To account for seasonality, we need seasonal lags; for monthly data, the 12th lag picks up the same month in the previous year. Again, fable allows us to select the seasonal terms using the PDQ() special within the ARIMA() formula. In ARIMA terminology:

  • Parameter ‘P’ is the number of seasonal AR terms
  • Parameter ‘D’ is the number of seasonal differences to take (for example, the difference between this month and the same month last year)
  • Parameter ‘Q’ is the number of seasonal MA terms

If you do not want to select the p, d, q, P, D, and Q parameters yourself, you can just pass the response variable into the ARIMA() function and the underlying algorithm will select the values that minimize the AICc.

victoria_dept_stores_arima_mbl <- victoria_dept_stores_tsbl %>% 
  model(
    arima_211_000 = ARIMA(Turnover ~ pdq(2,1,1) + PDQ(0,0,0)),
    arima_012_002 = ARIMA(Turnover ~ pdq(0,1,2) + PDQ(0,0,2)),
    arima_200_010 = ARIMA(Turnover ~ pdq(2,0,0) + PDQ(0,1,0)),
    arima_auto    = ARIMA(Turnover)
  )

glimpse(victoria_dept_stores_arima_mbl)
Rows: 1
Columns: 4
$ arima_211_000 <model> [ARIMA(2,1,1)]
$ arima_012_002 <model> [ARIMA(0,1,2)(0,0,2)[12]]
$ arima_200_010 <model> [ARIMA(2,0,0)(0,1,0)[12] w/ drift]
$ arima_auto    <model> [ARIMA(2,1,1)(1,1,2)[12]]

Like before, we can use the report() function to check out a particular model.

victoria_dept_stores_arima_mbl %>% 
  select(arima_auto) %>% 
  report()
Series: Turnover 
Model: ARIMA(2,1,1)(1,1,2)[12] 

Coefficients:
          ar1      ar2      ma1     sar1    sma1     sma2
      -0.2336  -0.1432  -0.7237  -0.8059  0.1900  -0.5586
s.e.   0.0939   0.0852   0.0712   0.2862  0.2785   0.1707

sigma^2 estimated as 193.8:  log likelihood=-872.18
AIC=1758.35   AICc=1758.89   BIC=1781.94

Also like before, the tidy() function can be used.

victoria_dept_stores_arima_mbl %>% 
  select(arima_auto) %>% 
  tidy() %>% 
  mutate(across(where(is.numeric), round, digits = 2)) %>% 
  kable() %>% 
  kable_styling(bootstrap_options = c("striped", "hover"), full_width = FALSE)
.model     term  estimate  std.error  statistic  p.value
arima_auto ar1      -0.23       0.09      -2.49     0.01
arima_auto ar2      -0.14       0.09      -1.68     0.09
arima_auto ma1      -0.72       0.07     -10.16     0.00
arima_auto sar1     -0.81       0.29      -2.82     0.01
arima_auto sma1      0.19       0.28       0.68     0.50
arima_auto sma2     -0.56       0.17      -3.27     0.00

Once we have a model we like, we can forecast ahead and plot the results.

victoria_dept_stores_arima_mbl %>% 
  forecast(h = "2 years") %>% 
  autoplot(victoria_dept_stores_tsbl) +
  facet_wrap(~ .model, ncol = 1) +
  theme_bw() +
  scale_y_continuous(
    labels = scales::label_dollar(accuracy = 0.1, scale = 1e-3, suffix = "B")
  ) +
  labs(
    title    = "Two-Year Forecast of Yearly Retail Turnover for Victoria",
    subtitle = str_glue("Models: ARIMA(2,1,1), ARIMA(0,1,2)(0,0,2)[12],  
                        ARIMA(2,0,0)(0,1,0), ARIMA(2,1,1)(1,1,2)[12]"),
    x        = "",
    y        = "Australian Dollars"
  )

Recap

The second day of the workshop focused on forecasting methods, model algorithms, validating those models, and producing and checking the accuracy of forecasts. This first post on the second day covered:

  1. Building baseline models
  2. Building complex models
  3. Producing forecasts

Notes

This post is based on a presentation that was given on the date listed. It may be updated from time to time to fix errors, detail new functions, and/or remove deprecated functions, so the packages and R version will likely be newer than what was available at the time.

The R session information used for this post:

R version 4.2.1 (2022-06-23)
Platform: aarch64-apple-darwin20 (64-bit)
Running under: macOS 14.1.1

Matrix products: default
BLAS:   /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices datasets  utils     methods   base     

other attached packages:
 [1] kableExtra_1.3.4  ggplot2_3.4.0     fable_0.3.2       feasts_0.3.0     
 [5] fabletools_0.3.2  tsibbledata_0.4.1 tsibble_1.1.3     stringr_1.4.1    
 [9] lubridate_1.9.0   timechange_0.1.1  purrr_0.3.5       tidyr_1.2.1      
[13] dplyr_1.0.10     

loaded via a namespace (and not attached):
 [1] tidyselect_1.2.0     xfun_0.40            urca_1.3-3          
 [4] lattice_0.20-45      colorspace_2.0-3     vctrs_0.6.3         
 [7] generics_0.1.3       viridisLite_0.4.1    htmltools_0.5.3     
[10] yaml_2.3.5           utf8_1.2.2           rlang_1.1.1         
[13] pillar_1.8.1         glue_1.6.2           withr_2.5.0         
[16] rappdirs_0.3.3       distributional_0.3.1 lifecycle_1.0.3     
[19] progressr_0.11.0     munsell_0.5.0        anytime_0.3.9       
[22] gtable_0.3.1         rvest_1.0.3          evaluate_0.16       
[25] labeling_0.4.2       knitr_1.40           fastmap_1.1.0       
[28] fansi_1.0.3          highr_0.9            Rcpp_1.0.9          
[31] renv_0.16.0          scales_1.2.1         webshot_0.5.4       
[34] jsonlite_1.8.0       systemfonts_1.0.4    farver_2.1.1        
[37] digest_0.6.29        stringi_1.7.12       grid_4.2.1          
[40] cli_3.6.1            tools_4.2.1          magrittr_2.0.3      
[43] tibble_3.1.8         pkgconfig_2.0.3      ellipsis_0.3.2      
[46] xml2_1.3.3           rmarkdown_2.16       svglite_2.1.0       
[49] httr_1.4.4           rstudioapi_0.14      R6_2.5.1            
[52] nlme_3.1-157         compiler_4.2.1