Chapter 16 - Latent Change Score Modeling
Overview
This tutorial walks through fitting univariate latent change
score models in the structural equation modeling framework in R
using the lavaan package.
The example follows Chapter 16 of Grimm, Ram, and Estabrook (2017). Please refer to the chapter for further interpretations and insights about the analyses.
Preliminaries
Loading Libraries Used in This Script
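The code below assumes the following packages are available (tidyverse for data wrangling and plotting, psych for descriptives, lavaan for model fitting):

```r
#load packages used throughout this script
library(tidyverse)  #dplyr/tidyr for wrangling, ggplot2 for plotting
library(psych)      #descriptive statistics
library(lavaan)     #structural equation modeling
```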
Reading in Repeated Measures Data
We use data from the NLSY-CYA (Center for Human Resource Research, 2009) that includes repeated measures of children’s math ability (math) from the second through eighth grade.
Reading in the data
#set filepath
filepath <- "https://raw.githubusercontent.com/The-Change-Lab/collaborations/refs/heads/main/GrowthModeling/nlsy_math_wide_R.dat"
#read in the text data file using the url() function
nlsy_data <- read.table(file=url(filepath), na.strings = ".")
#adding names for the columns of the data set
names(nlsy_data) <- c('id', 'female', 'lb_wght', 'anti_k1',
'math2', 'math3', 'math4', 'math5',
'math6', 'math7', 'math8',
'age2', 'age3', 'age4', 'age5',
'age6', 'age7', 'age8',
'men2', 'men3', 'men4', 'men5',
'men6', 'men7', 'men8',
'spring2', 'spring3', 'spring4', 'spring5',
'spring6', 'spring7', 'spring8',
'anti2', 'anti3', 'anti4', 'anti5',
'anti6', 'anti7', 'anti8')
#reduce data down to the id variable and the math variables of interest
nlsy_data <- nlsy_data %>%
select(id, math2, math3, math4, math5, math6, math7, math8)
psych::describe(nlsy_data)
## vars n mean sd median trimmed mad min max
## id 1 933 532334.90 328020.79 506602.0 520130.77 391999.44 201 1256601
## math2 2 335 32.61 10.29 32.0 32.28 10.38 12 60
## math3 3 431 39.88 10.30 41.0 39.88 10.38 13 67
## math4 4 378 46.17 10.17 46.0 46.22 8.90 18 70
## math5 5 372 49.77 9.47 48.0 49.77 8.90 23 71
## math6 6 390 52.72 9.92 50.5 52.38 9.64 24 78
## math7 7 173 55.35 10.63 53.0 55.09 11.86 31 81
## math8 8 142 57.83 11.53 56.0 57.43 12.60 26 81
## range skew kurtosis se
## id 1256400 0.28 -0.91 10738.92
## math2 48 0.27 -0.46 0.56
## math3 54 -0.05 -0.33 0.50
## math4 52 -0.06 -0.08 0.52
## math5 48 0.04 -0.34 0.49
## math6 54 0.25 -0.38 0.50
## math7 50 0.21 -0.97 0.81
## math8 55 0.16 -0.52 0.97
Plotting the Repeated Measures Data
#reshaping wide to long (using tidyverse)
data_long <- nlsy_data %>%
pivot_longer(.,
cols = c(math2, math3, math4, math5, math6, math7, math8),
cols_vary = "fastest", #to keep same-id rows close together
names_to = "grade",
names_prefix = "math",
names_transform = list(grade = as.integer),
values_to = "math")
#looking at the long data
head(data_long, 8)
## # A tibble: 8 × 3
## id grade math
## <int> <int> <int>
## 1 201 2 NA
## 2 201 3 38
## 3 201 4 NA
## 4 201 5 55
## 5 201 6 NA
## 6 201 7 NA
## 7 201 8 NA
## 8 303 2 26
#Plotting intraindividual change MATH
data_long %>%
ggplot(aes(x = grade, y = math, group = id)) +
geom_point(color="blue", alpha=.7) +
geom_line(color="blue", alpha=.7) +
xlab("Grade") +
ylab("PIAT Mathematics") +
scale_x_continuous(limits=c(2,8), breaks=seq(2,8,by=1)) +
scale_y_continuous(limits=c(0,90), breaks=seq(0,90,by=10))
## Warning: Removed 4310 rows containing missing values or values outside the scale range
## (`geom_point()`).
## Warning: Removed 2787 rows containing missing values or values outside the scale range
## (`geom_line()`).
The plot shows individual trajectories of PIAT Mathematics scores across Grades 2 through 8. Note the attrition and the only partial overlap of observations across years, which requires assumptions about the missing data (i.e., missing at random).
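The extent of the overlap can be quantified directly. As a quick sketch (not part of the book's script), the number of children with observed scores on each pair of grades can be counted as:

```r
#count complete pairs of observations for each pair of math variables
pairwise_n <- crossprod(!is.na(nlsy_data[, -1]))  #drop the id column
pairwise_n
```

The diagonal gives the per-grade sample sizes seen in the descriptives above; the off-diagonal cells show how few children were assessed in any two given grades.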
Dual Change Score Model
Model Specification
Following the examples in the book, we specify the dual change score model, which accommodates the nonlinear shape we see in the data. The model invokes latent true scores, builds latent difference scores from them, and incorporates both a constant change factor and proportional change effects. The lines invoking each set of latent variables and paths are indicated in the model specification.
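In equation form (using generic symbols as a paraphrase of the chapter's notation), the dual change score model decomposes each latent change score into a constant change component and a proportional change component:

\[\Delta y_{t} = \alpha \cdot g_{2} + \pi \cdot y_{t-1}\]

where \(g_{2}\) is the constant change factor (with loadings \(\alpha\) fixed to 1) and \(\pi\) is the proportional change parameter (labeled pi_m in the code).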
The model diagram follows this kind of setup …
Fitting the model in lavaan as an SEM requires a model and a wide data file.
Model Specification
dcm_math <- ' #opening quote
#MATHEMATICS
#latent true scores (loadings = 1)
lm1 =~ 1*math2
lm2 =~ 1*math3
lm3 =~ 1*math4
lm4 =~ 1*math5
lm5 =~ 1*math6
lm6 =~ 1*math7
lm7 =~ 1*math8
#latent true score means (initial free, others = 0)
lm1 ~ 1
lm2 ~ 0*1
lm3 ~ 0*1
lm4 ~ 0*1
lm5 ~ 0*1
lm6 ~ 0*1
lm7 ~ 0*1
#latent true score variances (initial free, others = 0)
lm1 ~~ start(15)*lm1
lm2 ~~ 0*lm2
lm3 ~~ 0*lm3
lm4 ~~ 0*lm4
lm5 ~~ 0*lm5
lm6 ~~ 0*lm6
lm7 ~~ 0*lm7
#observed intercepts (fixed to 0)
math2 ~ 0*1
math3 ~ 0*1
math4 ~ 0*1
math5 ~ 0*1
math6 ~ 0*1
math7 ~ 0*1
math8 ~ 0*1
#observed residual variances (constrained to equality)
math2 ~~ sigma2_u*math2
math3 ~~ sigma2_u*math3
math4 ~~ sigma2_u*math4
math5 ~~ sigma2_u*math5
math6 ~~ sigma2_u*math6
math7 ~~ sigma2_u*math7
math8 ~~ sigma2_u*math8
#autoregressions (fixed = 1)
lm2 ~ 1*lm1
lm3 ~ 1*lm2
lm4 ~ 1*lm3
lm5 ~ 1*lm4
lm6 ~ 1*lm5
lm7 ~ 1*lm6
#latent change scores (fixed = 1)
dm2 =~ 1*lm2
dm3 =~ 1*lm3
dm4 =~ 1*lm4
dm5 =~ 1*lm5
dm6 =~ 1*lm6
dm7 =~ 1*lm7
#latent change score means (constrained to 0)
dm2 ~ 0*1
dm3 ~ 0*1
dm4 ~ 0*1
dm5 ~ 0*1
dm6 ~ 0*1
dm7 ~ 0*1
#latent change score variances (constrained to 0)
dm2 ~~ 0*dm2
dm3 ~~ 0*dm3
dm4 ~~ 0*dm4
dm5 ~~ 0*dm5
dm6 ~~ 0*dm6
dm7 ~~ 0*dm7
#constant change factor (loadings = 1)
g2 =~ 1*dm2 +
1*dm3 +
1*dm4 +
1*dm5 +
1*dm6 +
1*dm7
#constant change factor mean
g2 ~ start(15)*1
#constant change factor variance
g2 ~~ g2
#constant change factor covariance with the initial true score
g2 ~~ lm1
#proportional effects (constrained equal)
dm2 ~ start(-.2)*pi_m * lm1
dm3 ~ start(-.2)*pi_m * lm2
dm4 ~ start(-.2)*pi_m * lm3
dm5 ~ start(-.2)*pi_m * lm4
dm6 ~ start(-.2)*pi_m * lm5
dm7 ~ start(-.2)*pi_m * lm6
' #closing quote
Model Estimation and Interpretation
We fit the model using lavaan.
#Model fitting
fit_math <- lavaan(dcm_math,
data = nlsy_data, #note that fitting uses wide data
meanstructure = TRUE,
estimator = "ML",
missing = "fiml",
fixed.x = FALSE,
mimic="mplus",
control=list(iter.max=500),
verbose=FALSE)
## Warning: lavaan->lav_data_full():
## some cases are empty and will be ignored: 741.
## Warning: lavaan->lav_data_full():
## due to missing values, some pairwise combinations have less than 10%
## coverage; use lavInspect(fit, "coverage") to investigate.
## Warning: lavaan->lav_mvnorm_missing_h1_estimate_moments():
## Maximum number of iterations reached when computing the sample moments
## using EM; use the em.h1.iter.max= argument to increase the number of
## iterations
## lavaan 0.6-19 ended normally after 49 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 18
## Number of equality constraints 11
##
## Used Total
## Number of observations 932 933
## Number of missing patterns 60
##
## Model Test User Model:
##
## Test statistic 58.308
## Degrees of freedom 28
## P-value (Chi-square) 0.001
##
## Model Test Baseline Model:
##
## Test statistic 862.334
## Degrees of freedom 21
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.964
## Tucker-Lewis Index (TLI) 0.973
##
## Robust Comparative Fit Index (CFI) 1.000
## Robust Tucker-Lewis Index (TLI) 1.026
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -7895.605
## Loglikelihood unrestricted model (H1) -7866.451
##
## Akaike (AIC) 15805.209
## Bayesian (BIC) 15839.071
## Sample-size adjusted Bayesian (SABIC) 15816.839
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.034
## 90 Percent confidence interval - lower 0.022
## 90 Percent confidence interval - upper 0.046
## P-value H_0: RMSEA <= 0.050 0.985
## P-value H_0: RMSEA >= 0.080 0.000
##
## Robust RMSEA 0.000
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.150
## P-value H_0: Robust RMSEA <= 0.050 0.703
## P-value H_0: Robust RMSEA >= 0.080 0.219
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.136
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Observed
## Observed information based on Hessian
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|)
## lm1 =~
## math2 1.000
## lm2 =~
## math3 1.000
## lm3 =~
## math4 1.000
## lm4 =~
## math5 1.000
## lm5 =~
## math6 1.000
## lm6 =~
## math7 1.000
## lm7 =~
## math8 1.000
## dm2 =~
## lm2 1.000
## dm3 =~
## lm3 1.000
## dm4 =~
## lm4 1.000
## dm5 =~
## lm5 1.000
## dm6 =~
## lm6 1.000
## dm7 =~
## lm7 1.000
## g2 =~
## dm2 1.000
## dm3 1.000
## dm4 1.000
## dm5 1.000
## dm6 1.000
## dm7 1.000
##
## Regressions:
## Estimate Std.Err z-value P(>|z|)
## lm2 ~
## lm1 1.000
## lm3 ~
## lm2 1.000
## lm4 ~
## lm3 1.000
## lm5 ~
## lm4 1.000
## lm6 ~
## lm5 1.000
## lm7 ~
## lm6 1.000
## dm2 ~
## lm1 (pi_m) -0.241 0.018 -13.387 0.000
## dm3 ~
## lm2 (pi_m) -0.241 0.018 -13.387 0.000
## dm4 ~
## lm3 (pi_m) -0.241 0.018 -13.387 0.000
## dm5 ~
## lm4 (pi_m) -0.241 0.018 -13.387 0.000
## dm6 ~
## lm5 (pi_m) -0.241 0.018 -13.387 0.000
## dm7 ~
## lm6 (pi_m) -0.241 0.018 -13.387 0.000
##
## Covariances:
## Estimate Std.Err z-value P(>|z|)
## lm1 ~~
## g2 13.746 1.699 8.092 0.000
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|)
## lm1 32.533 0.434 74.955 0.000
## .lm2 0.000
## .lm3 0.000
## .lm4 0.000
## .lm5 0.000
## .lm6 0.000
## .lm7 0.000
## .math2 0.000
## .math3 0.000
## .math4 0.000
## .math5 0.000
## .math6 0.000
## .math7 0.000
## .math8 0.000
## .dm2 0.000
## .dm3 0.000
## .dm4 0.000
## .dm5 0.000
## .dm6 0.000
## .dm7 0.000
## g2 15.222 0.815 18.687 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|)
## lm1 71.900 6.548 10.980 0.000
## .lm2 0.000
## .lm3 0.000
## .lm4 0.000
## .lm5 0.000
## .lm6 0.000
## .lm7 0.000
## .math2 (sg2_) 30.818 1.700 18.126 0.000
## .math3 (sg2_) 30.818 1.700 18.126 0.000
## .math4 (sg2_) 30.818 1.700 18.126 0.000
## .math5 (sg2_) 30.818 1.700 18.126 0.000
## .math6 (sg2_) 30.818 1.700 18.126 0.000
## .math7 (sg2_) 30.818 1.700 18.126 0.000
## .math8 (sg2_) 30.818 1.700 18.126 0.000
## .dm2 0.000
## .dm3 0.000
## .dm4 0.000
## .dm5 0.000
## .dm6 0.000
## .dm7 0.000
## g2 5.602 0.837 6.695 0.000
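As the earlier warning suggested, the pairwise coverage behind these estimates can be inspected directly (a quick check, not part of the book's script):

```r
#proportion of cases jointly observed for each pair of variables
round(lavInspect(fit_math, "coverage"), 2)
```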
#Model Diagram
# semPaths(fit_math, what="est",
#          sizeLat = 7, sizeMan = 7, edge.label.cex = .75)
#Note that semPaths does not know how to draw this kind of model,
#so it is better to map the estimates onto the diagram from the book.
The change equation based on the output can be written as:
Change in Math:
\[\Delta_{math} = 15.222 - 0.241(math_{t-1})\]
We see that there is both constant change (a linear component) and proportional change (a nonlinear component).
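As a check on this interpretation, the implied mean trajectory can be obtained by iterating the estimated change equation forward from the estimated initial mean (a sketch using the rounded estimates above):

```r
#iterate the estimated change equation across Grades 2 through 8
math_hat <- numeric(7)
math_hat[1] <- 32.533  #estimated mean of the initial true score (lm1)
for (t in 2:7) {
  delta_t <- 15.222 - 0.241 * math_hat[t - 1]  #constant + proportional change
  math_hat[t] <- math_hat[t - 1] + delta_t
}
round(math_hat, 2)
```

The implied means rise quickly and then decelerate, closely tracking the observed grade means in the descriptives above.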
Predicted scores
#obtaining predicted scores
nlsy_predicted <- cbind(nlsy_data$id, as.data.frame(lavPredict(fit_math, type = "yhat")))
names(nlsy_predicted)[1] <- "id"
#looking at data
head(nlsy_predicted)
## id math2 math3 math4 math5 math6 math7 math8
## 1 201 33.14195 40.99641 46.95784 51.48248 54.91663 57.52309 59.50137
## 2 303 24.16182 30.57662 35.44537 39.14069 41.94539 44.07411 45.68979
## 3 2702 49.24606 56.78947 62.51481 66.86027 70.15842 72.66167 74.56160
## 4 4303 37.40837 45.27911 51.25289 55.78691 59.22818 61.84005 63.82242
## 5 5002 33.84478 41.89858 48.01132 52.65080 56.17210 58.84472 60.87321
## 6 5005 35.65097 43.24185 49.00324 53.37605 56.69495 59.21396 61.12585
#reshaping wide to long (using tidyverse)
predicted_long <- nlsy_predicted %>%
pivot_longer(.,
cols = c(math2, math3, math4, math5, math6, math7, math8),
cols_vary = "fastest", #to keep same-id rows close together
names_to = "grade",
names_prefix = "math",
names_transform = list(grade = as.integer),
values_to = "math")
#looking at the long data
head(predicted_long, 14)
## # A tibble: 14 × 3
## id grade math
## <int> <int> <dbl>
## 1 201 2 33.1
## 2 201 3 41.0
## 3 201 4 47.0
## 4 201 5 51.5
## 5 201 6 54.9
## 6 201 7 57.5
## 7 201 8 59.5
## 8 303 2 24.2
## 9 303 3 30.6
## 10 303 4 35.4
## 11 303 5 39.1
## 12 303 6 41.9
## 13 303 7 44.1
## 14 303 8 45.7
#Plotting intraindividual change MATH
predicted_long %>%
ggplot(aes(x = grade, y = math, group = id)) +
geom_line(color="blue", alpha=.4) +
xlab("Grade") +
ylab("Predicted PIAT Mathematics") +
scale_x_continuous(limits=c(2,8), breaks=seq(2,8,by=1)) +
scale_y_continuous(limits=c(0,90), breaks=seq(0,90,by=10))
## Warning: Removed 7 rows containing missing values or values outside the scale range
## (`geom_line()`).
These figures (similar to Figure 16.3 in the book) show the nonlinear, exponential-type shape that is captured by the dual change score model. Notice also that the predicted trajectories are complete even for individuals with missing observations, because the model was estimated with full information maximum likelihood under the missing at random assumption.
See also Ghisletta and McArdle (2012) for another tutorial on estimating latent curve and latent change score models in R.
Conclusion
Latent change score models provide a framework for examining change as an explicit outcome by constructing a series of latent change variables and modeling those. This shifts our thinking from modeling the observed repeated measures outcome \(Y_{it}\) to modeling the change \(\Delta Y_{it}\) directly. This greatly expands how we model dynamics of change!
Change to Change! Go for it!
Citations
Epskamp, S. (2022). semPlot: Path Diagrams and Visual Analysis of Various SEM Packages’ Output (Version 1.1.6). https://CRAN.R-project.org/package=semPlot
Ghisletta, P., & McArdle, J. J. (2012). Teacher’s Corner: Latent Curve Models and Latent Change Score Models Estimated in R. Structural Equation Modeling, 19(4): 651-682. doi:10.1080/10705511.2012.713275. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4259494/pdf/nihms412332.pdf
Grimm, K. J., Ram, N., & Estabrook, R. (2017). Growth Modeling: Structural Equation and Multilevel Modeling Approaches. Guilford Publications.
R Core Team. (2024). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. https://www.R-project.org/
Revelle, W. (2024). psych: Procedures for Psychological, Psychometric, and Personality Research. Northwestern University. https://CRAN.R-project.org/package=psych
Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software, 48, 1–36. https://doi.org/10.18637/jss.v048.i02
Wei, T., & Simko, V. (2024). R package “corrplot”: Visualization of a Correlation Matrix (Version 0.95). https://github.com/taiyun/corrplot
Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L. D., François, R., Grolemund, G., Hayes, A., Henry, L., Hester, J., Kuhn, M., Pedersen, T. L., Miller, E., Bache, S. M., Müller, K., Ooms, J., Robinson, D., Seidel, D. P., Spinu, V., … Yutani, H. (2019). Welcome to the Tidyverse. Journal of Open Source Software, 4(43), 1686. https://doi.org/10.21105/joss.01686