Post-Hoc Power Sensitivity Analysis using the SESOI

Examples from B & L Chapter 10

Overview

This tutorial illustrates how to calculate the Smallest Effect Size of Interest (SESOI) for multilevel models using the simr package in R. The examples extend the presentation in Chapter 10, Statistical Power for Intensive Longitudinal Designs, of Bolger & Laurenceau (2013).

Outline

This script covers:

A. Power Analysis for Time Course

This example fits a simple multilevel model (i.e., growth model) that investigates how variables change over time.

B. Power Analysis for Within-Person Process

This example fits a multilevel process model that investigates causal processes between within-person covariates (rather than time itself) and their outcomes.

C. Power Analysis for Categorical Outcomes

This example fits a multilevel model for categorical data (i.e., a generalized linear mixed-effects model).


In all three cases, we load example data from B & L and conduct post-hoc power analyses for the multilevel models. After loading the data, we fit a multilevel model using the lme4 package. Based on the fitted lme4 object, we then use the simr package to perform simulation-based power analysis for the specified multilevel model. Using the functions in the simr package, we illustrate how statistical power changes as the effect size increases, and determine how large the effect size must be to ensure adequate power, i.e., we find the smallest effect size of interest.

Preliminaries

Load the libraries used in this script.

library(psych)    #for describing data
library(lme4)     #for multilevel models
library(simr)     #for power analysis simulations
library(ggplot2)  #for data visualization
library(dplyr)    #for data manipulation

A. Power Analysis for the Time Course

This example makes use of the Intimacy data set from B & L Chapter 4. The dataset comes from the 50 wives in 50 married heterosexual couples, with each person randomly assigned to a 16-week marital therapy treatment (N = 25) or a control (N = 25) group.

First, we load the Intimacy data (N = 50, T = 16):

#set filepath for data file
filepath <- "https://raw.githubusercontent.com/The-Change-Lab/collaborations/refs/heads/main/Bolger_Laurenceau_2013/BL2013_data_intimacy.csv"
#read in the .csv file using the url() function
data_intimacy <- read.csv(file=url(filepath), header=TRUE)

There are 50 persons with 16 occasions each (coded 0 to 15), and the data are fully complete.
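These descriptives can be verified with a few quick base-R checks (a brief sketch; column names assumed from the model code below):

#number of persons
length(unique(data_intimacy$id))
## [1] 50
#distribution of occasions per person
table(table(data_intimacy$id))
#count of missing values
sum(is.na(data_intimacy))
## [1] 0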

1) Running the Multilevel Model with the Available Data

Next, we fit the multilevel model with the lmer() function from the lme4 package, as the simr power analysis package is built to work with lmer() objects.

#Run linear growth model without the AR(1) errors
fit_lgmodel <- lme4::lmer(intimacy ~ 1 + time01 + treatment + time01:treatment + 
                            (1 + time01 | id),
                          data=data_intimacy,
                          na.action=na.exclude)
#examine model summary
summary(fit_lgmodel)
## Linear mixed model fit by REML ['lmerMod']
## Formula: intimacy ~ 1 + time01 + treatment + time01:treatment + (1 + time01 |  
##     id)
##    Data: data_intimacy
## 
## REML criterion at convergence: 2834.4
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.6242 -0.6798 -0.0190  0.6433  2.4463 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr 
##  id       (Intercept) 0.6858   0.8281        
##           time01      1.8936   1.3761   -0.45
##  Residual             1.6925   1.3010        
## Number of obs: 800, groups:  id, 50
## 
## Fixed effects:
##                  Estimate Std. Error t value
## (Intercept)       2.89897    0.20703  14.003
## time01            0.73520    0.34720   2.118
## treatment        -0.05644    0.29279  -0.193
## time01:treatment  0.92143    0.49101   1.877
## 
## Correlation of Fixed Effects:
##             (Intr) time01 trtmnt
## time01      -0.599              
## treatment   -0.707  0.423       
## tm01:trtmnt  0.423 -0.707 -0.599

The results are very similar to those in the book, even though the model does not include the auto-correlation of residuals; lmer() does not provide for alternative error structures.
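If the book's AR(1) error structure is desired, one option (shown only for comparison; the simr workflow in this tutorial is built around lme4 objects, so we do not use this fit below) is lme() from the nlme package with a corAR1() correlation structure:

#Linear growth model with AR(1) residuals (nlme; for comparison only)
library(nlme)
fit_lgmodel_ar1 <- nlme::lme(fixed=intimacy ~ 1 + time01 + treatment + time01:treatment,
                             random=~1 + time01 | id,
                             correlation=corAR1(), #AR(1) within-person residuals
                             data=data_intimacy,
                             na.action=na.exclude)
summary(fit_lgmodel_ar1)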

The object obtained from lmer(), formally referred to as a “merMod” object, is used in the power analysis. Note that this object must be obtained from the lme4::lmer() function and NOT the lmerTest::lmer() function. If you use the lmerTest version, the subsequent steps will fail.
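A quick way to confirm that the fitted object came from lme4::lmer() is to inspect its class:

#confirm the fitted object is an lme4 "lmerMod" model
class(fit_lgmodel)[1]
## [1] "lmerMod"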

Our main parameter of interest has an effect size of 0.92. We create a variable containing this effect size.

fit_test <- fixef(fit_lgmodel)["time01:treatment"]
fit_test
## time01:treatment 
##        0.9214324

2) Examining Statistical Power

We can now simulate power directly from the fitted model for the time01:treatment interaction effect.

As in the previous tutorial, Post-Hoc Power Analysis for Multilevel Models in R, we supply the function with the fitted model object (fit_lgmodel), the specific parameter we would like to examine ("time01:treatment"), the test to use ("lr"), how many simulations we would like (nsim=100), and what alpha level (i.e., significance threshold) to use in the evaluations of significance (alpha=.05). Typically, power analysis simulations use 1000 or more simulations. To save time here, we only use 100.

#Power for the effect of interest
SimPower_Fixed <- simr::powerSim(fit_lgmodel,
                                 test=simr::fixed("time01:treatment", method="lr"),
                                 seed=1234, #set for replication
                                 nsim=100, #set low for time or high for real
                                 alpha=.05, progress=FALSE) #options
## boundary (singular) fit: see help('isSingular')
## Warning in observedPowerWarning(sim): This appears to be an "observed power"
## calculation
#examine post-hoc power
SimPower_Fixed
## Power for predictor 'time01:treatment', (95% confidence interval):
##       45.00% (35.03, 55.27)
## 
## Test: Likelihood ratio
##       Effect size for time01:treatment is 0.92
## 
## Based on 100 simulations, (1 warning, 0 errors)
## alpha = 0.05, nrow = 800
## 
## Time elapsed: 0 h 0 m 6 s
## 
## nb: result might be an observed power calculation

We see that with an effect size of 0.92, power for the time01:treatment interaction is 45%, which is lower than ideal. Conventionally, we aim for power greater than 80%. Now we examine how power increases as the hypothesized effect size increases.

3) Finding the SESOI

Suppose that we have a target power goal (i.e., 80%). Given that the effect size of 0.92 resulted in an underpowered design (with power 45%), inferences drawn from the data may rest on inflated effect-size estimates (Kumle et al., 2021). Thus, in a SESOI approach, we evaluate the Smallest Effect Size Of Interest: the effect size that the study is able to detect at a predetermined power. By adopting this approach, researchers can aim to design well-powered studies.

  • Kumle, L., Võ, M. L. -H., & Draschkow, D. (2021). Estimating power in (generalized) linear mixed models: An open introduction and tutorial in R. Behavior Research Methods, 53(6), 2528–2543. https://doi.org/10.3758/s13428-021-01546-0

In an attempt to obtain 80% power, we first increase the parameter from 0.92 to 1.0. (Note: 1.0 was chosen arbitrarily, simply as a larger effect size than our current effect size of 0.92.)

#setting the effect size to test
fixef(fit_lgmodel)["time01:treatment"] <- 1.0

Re-run the powerSim() function.

#Power for the effect of interest
SimPower_Fixed <- simr::powerSim(fit_lgmodel,
                                 test=simr::fixed("time01:treatment", method="lr"),
                                 seed=1234, #set for replication
                                 nsim=100, #set low for time or high for real
                                 alpha=.05, progress=FALSE) #options
## boundary (singular) fit: see help('isSingular')
## boundary (singular) fit: see help('isSingular')
#examine post-hoc power
SimPower_Fixed
## Power for predictor 'time01:treatment', (95% confidence interval):
##       53.00% (42.76, 63.06)
## 
## Test: Likelihood ratio
##       Effect size for time01:treatment is 1.0
## 
## Based on 100 simulations, (1 warning, 0 errors)
## alpha = 0.05, nrow = 800
## 
## Time elapsed: 0 h 0 m 6 s

The updated parameter has 53% power, which is still lower than ideal. We will repeat the process until the power reaches 80%.

In this while(){} loop, we start with the smallest effect size to test (SESOI) and its current power (SESOI_Power). We then increase the effect size by 0.1 (you can choose a smaller or larger increment depending on your effect size) and re-evaluate the power using the powerSim() function. This process repeats until the power estimate is greater than or equal to 0.8.

If the effect size of interest is negative (e.g., -1.0 instead of 1.0), set the unit of increase as a negative value (i.e., -0.1 instead of 0.1) so that the effect size keeps decreasing and the absolute value of the effect size keeps increasing until the power reaches 80%.

SESOI <- 1.0 #setting the smallest effect size to test
SESOI_Power <- 0.53 #current power

#repeat until the power reaches 80%
while (SESOI_Power < 0.8){
  #update the effect size
  SESOI <- SESOI + 0.1 #increase by 0.1
  fixef(fit_lgmodel)["time01:treatment"] <- SESOI
  
  #Power for the effect of interest
  SimPower_Fixed <- simr::powerSim(fit_lgmodel,
                                   test=simr::fixed("time01:treatment", method="lr"),
                                   seed=1234, #set for replication
                                   nsim=100, #set low for time or high for real
                                   alpha=.05, progress=FALSE) #options

  #examine post-hoc power
  SESOI_Power <- summary(SimPower_Fixed)$mean #save the power estimate
  cat(SESOI,"with power",SESOI_Power,"\n")
}
## boundary (singular) fit: see help('isSingular')
## 1.1 with power 0.58 
## 1.2 with power 0.62 
## 1.3 with power 0.68
## boundary (singular) fit: see help('isSingular')
## 1.4 with power 0.72 
## 1.5 with power 0.82
#print the smallest effect size with power 80%
SESOI
## [1] 1.5

Cool! In order to have 80% power for the time01:treatment interaction parameter, the current study requires an effect size of at least 1.5.

This may produce several warnings (“boundary (singular) fit: see help(‘isSingular’)”), indicating that the fitted mixed model is nearly singular. Please keep in mind that this code chunk aims to find the smallest effect size of interest (using hypothetical scenarios), rather than to obtain a good model fit!

4) Plotting Power Curves

Power curves are useful for visual interpretation. In this section, we define a function called powerES() that calculates power for a fixed effect (effectName) with a specified effect size (effectSize) in a given model object (modelFit). As before, we use the likelihood ratio test ("lr"), 100 simulations (nsim=100), and a significance level of 0.05 (alpha=.05).

#power analysis for fixed effect
powerES <- function(modelFit, effectName, effectSize){
  #set the effect size to test
  fixef(modelFit)[effectName] <- effectSize
  
  #Power for the effect of interest
  SimPower_Fixed <- simr::powerSim(modelFit,
                                   test=simr::fixed(effectName, method="lr"),
                                   seed=1234, #set for replication
                                   nsim=100, #set low for time or high for real
                                   alpha=.05, progress=FALSE) #options
  
  #examine post-hoc power
  powerdata <- summary(SimPower_Fixed) #save the power estimate
  powerdata$effectSize <- effectSize #save the effect size
  return(powerdata)
}

Using the powerES() function, we iteratively calculate power across a range of effect sizes spanning the SESOI. We first create an empty object (powerdata_all) and a list of effect sizes to be tested (effectSizes). This for(){} loop then iterates the power analysis and binds the results.

#calculate power for specified effect sizes
powerdata_all <- list() #an empty object to save outputs
effectSizes <- c(fit_test, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6) #specify effect sizes to test
for (i in effectSizes){
  powerdata_all <- rbind(powerdata_all,
                         powerES(modelFit=fit_lgmodel, 
                                 effectName="time01:treatment", 
                                 effectSize=i))
}
## boundary (singular) fit: see help('isSingular')
## boundary (singular) fit: see help('isSingular')
## boundary (singular) fit: see help('isSingular')
## boundary (singular) fit: see help('isSingular')
## boundary (singular) fit: see help('isSingular')
## boundary (singular) fit: see help('isSingular')
## boundary (singular) fit: see help('isSingular')
## boundary (singular) fit: see help('isSingular')

Note that each row corresponds to the powerSim() summary for a specified effect size.

powerdata_all
##   successes trials mean     lower     upper effectSize
## 1        45    100 0.45 0.3503202 0.5527198  0.9214324
## 2        41    100 0.41 0.3126200 0.5128558  0.9000000
## 3        53    100 0.53 0.4275815 0.6305948  1.0000000
## 4        58    100 0.58 0.4771192 0.6780145  1.1000000
## 5        62    100 0.62 0.5174607 0.7152325  1.2000000
## 6        68    100 0.68 0.5792331 0.7697801  1.3000000
## 7        72    100 0.72 0.6213330 0.8052064  1.4000000
## 8        82    100 0.82 0.7305229 0.8896888  1.5000000
## 9        88    100 0.88 0.7997643 0.9364311  1.6000000

Plot the power curve using ggplot(); effectSize on the x-axis and power on the y-axis.

#Plotting curves
powerdata_all %>%
  ggplot(aes(y=mean, x=effectSize)) +
  #power goal
  geom_hline(yintercept=0.8, color="gray40",lty=2) +
  #power curve
  geom_line(show.legend=FALSE) +
  geom_errorbar(aes(ymin=lower, ymax=upper,
                    color=(effectSize==fit_test)), show.legend=FALSE) +
  geom_point(aes(color=(effectSize==fit_test)), size=3, show.legend=FALSE) +
  scale_color_manual(values=c("black","red")) +
  scale_x_continuous(breaks=c(.9,.92,1.0,1.1,1.2,1.3,1.4,1.5,1.6),
                     name="Effect Size") +
  scale_y_continuous(limits=c(0,1.0), 
                     breaks=c(0,.2,.4,.6,.8,1.0), 
                     name="Power") +  
  theme_classic() +
  ggtitle("Power Curve for time01:treatment Interaction")

As the hypothesized effect size increases, the power estimate increases as well. The SESOI approach finds the smallest effect size that can satisfy the conventional power threshold 0.8. Here, we see that the fixed effect should be at least 1.5 to produce an adequately-powered estimate.

B. Power Analysis for the Within-Person Process

This example makes use of the Process data set from B & L Chapter 5. The dataset contains daily reports of relationship conflict and relational intimacy from 66 women across 28 days.

We first load the Process data (N = 66, T = 28).

#set filepath for data file
filepath <- "https://raw.githubusercontent.com/The-Change-Lab/collaborations/refs/heads/main/Bolger_Laurenceau_2013/BL2013_data_process.csv"
#read in the .csv file using the url() function
data_process <- read.csv(file=url(filepath), header=TRUE)

There are 66 persons with 28 occasions each (coded 0 to 27). The data are fully complete.

1) Running the Multilevel Model with the Available Data

Next, we run the multilevel model using lmer(). As before, the outcome variable is intimacy scores, and the parameter of interest is relqual:confcw, the interaction between relqual (relationship quality; range = 0 to 1) and confcw (daily conflict; within-person mean-centered). Please see Chapter 5 of B & L for further details.

#Run multilevel model 
fit_cpmodel <- lme4::lmer(intimacy ~ 1 + relqual + confcb + relqual:confcb + confcw +
                            confcw:relqual + time7c +
                            (1 + confcw | id),
                          data=data_process,
                          na.action=na.exclude)
#examine model summary
summary(fit_cpmodel)
## Linear mixed model fit by REML ['lmerMod']
## Formula: 
## intimacy ~ 1 + relqual + confcb + relqual:confcb + confcw + confcw:relqual +  
##     time7c + (1 + confcw | id)
##    Data: data_process
## 
## REML criterion at convergence: 7817.9
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -3.00949 -0.67665 -0.00451  0.67142  2.91605 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr
##  id       (Intercept) 0.8007   0.8948       
##           confcw      2.6990   1.6429   0.27
##  Residual             3.5908   1.8949       
## Number of obs: 1848, groups:  id, 66
## 
## Fixed effects:
##                Estimate Std. Error t value
## (Intercept)     4.53208    0.22131  20.478
## relqual         0.64735    0.28212   2.295
## confcb         -0.84166    1.10431  -0.762
## confcw         -2.02443    0.37061  -5.462
## time7c         -0.02820    0.03865  -0.730
## relqual:confcb  2.52633    1.68192   1.502
## relqual:confcw  1.03381    0.49232   2.100
## 
## Correlation of Fixed Effects:
##             (Intr) relqul confcb confcw time7c rlql:cnfcb
## relqual     -0.784                                       
## confcb      -0.520  0.408                                
## confcw       0.166 -0.131  0.036                         
## time7c       0.001 -0.001 -0.001 -0.008                  
## relql:cnfcb  0.341 -0.038 -0.657 -0.024 -0.002           
## relql:cnfcw -0.125  0.178 -0.027 -0.753  0.005  0.040

The results are very similar to those in the book.

Our main parameter of interest has an effect size of 1.03. We create a variable containing this effect size.

fit_test <- fixef(fit_cpmodel)["relqual:confcw"]
fit_test
## relqual:confcw 
##       1.033811

2) Examining Statistical Power

We can now simulate power directly from the fitted model for the relqual:confcw interaction effect.

#Power for the effect of interest
SimPower_Fixed <- simr::powerSim(fit_cpmodel,
                                 test=simr::fixed("relqual:confcw", method="lr"),
                                 seed=1234, #set for replication
                                 nsim=100, #set low for time or high for real
                                 alpha=.05, progress=FALSE) #options
## Warning in observedPowerWarning(sim): This appears to be an "observed power"
## calculation
#examine post-hoc power
SimPower_Fixed
## Power for predictor 'relqual:confcw', (95% confidence interval):
##       58.00% (47.71, 67.80)
## 
## Test: Likelihood ratio
##       Effect size for relqual:confcw is 1.0
## 
## Based on 100 simulations, (1 warning, 0 errors)
## alpha = 0.05, nrow = 1848
## 
## Time elapsed: 0 h 0 m 11 s
## 
## nb: result might be an observed power calculation

We see that with an effect size of 1.03, power for the relqual:confcw interaction is 58%, as stated in the book.

3) Finding the SESOI

In a SESOI approach, we find the smallest effect size that the study is able to detect at a predetermined power. In an attempt to obtain 80% power, we increase the parameter from 1.03 to 1.2.

#setting the effect size to test
fixef(fit_cpmodel)["relqual:confcw"] <- 1.2

We now re-run the powerSim() function.

#Power for the effect of interest
SimPower_Fixed <- simr::powerSim(fit_cpmodel,
                                 test=simr::fixed("relqual:confcw", method="lr"),
                                 seed=1234, #set for replication
                                 nsim=100, #set low for time or high for real
                                 alpha=.05, progress=FALSE) #options

#examine post-hoc power
SimPower_Fixed
## Power for predictor 'relqual:confcw', (95% confidence interval):
##       72.00% (62.13, 80.52)
## 
## Test: Likelihood ratio
##       Effect size for relqual:confcw is 1.2
## 
## Based on 100 simulations, (0 warnings, 0 errors)
## alpha = 0.05, nrow = 1848
## 
## Time elapsed: 0 h 0 m 11 s

The updated parameter has 72% power, which is still lower than ideal. We will repeat the process until the power reaches 80%.

SESOI <- 1.2 #setting the smallest effect size to test
SESOI_Power <- 0.72 #current power

#repeat until the power reaches 80%
while (SESOI_Power < 0.8){
  #update the effect size
  SESOI <- SESOI + 0.1 #increase by 0.1
  fixef(fit_cpmodel)["relqual:confcw"] <- SESOI
  
  #Power for the effect of interest
  SimPower_Fixed <- simr::powerSim(fit_cpmodel,
                                   test=simr::fixed("relqual:confcw", method="lr"),
                                   seed=1234, #set for replication
                                   nsim=100, #set low for time or high for real
                                   alpha=.05, progress=FALSE) #options

  #examine post-hoc power
  SESOI_Power <- summary(SimPower_Fixed)$mean #save the power estimate
  cat(SESOI,"with power",SESOI_Power,"\n")
}
## 1.3 with power 0.75 
## 1.4 with power 0.83
#print the smallest effect size with power 80%
SESOI
## [1] 1.4

Cool! In order to have 80% power for the relqual:confcw interaction parameter, the current study requires an effect size of at least 1.4.

4) Plotting Power Curves

Power curves are useful for visual interpretation. In the previous example, we defined the powerES() function, which calculates power for a fixed effect (effectName) with a specified effect size (effectSize) in a given model object (modelFit). Using this function, we iteratively calculate power across a range of effect sizes spanning the SESOI.

#calculate power for specified effect sizes
powerdata_all <- list() #an empty object to save outputs
effectSizes <- c(fit_test, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6) #specify effect sizes to test
for (i in effectSizes){
  powerdata_all <- rbind(powerdata_all,
                         powerES(modelFit=fit_cpmodel, 
                                 effectName="relqual:confcw", 
                                 effectSize=i))
}

Note that each row corresponds to the powerSim() summary for a specified effect size.

powerdata_all
##   successes trials mean     lower     upper effectSize
## 1        58    100 0.58 0.4771192 0.6780145   1.033811
## 2        53    100 0.53 0.4275815 0.6305948   0.900000
## 3        57    100 0.57 0.4671337 0.6686090   1.000000
## 4        64    100 0.64 0.5378781 0.7335916   1.100000
## 5        72    100 0.72 0.6213330 0.8052064   1.200000
## 6        75    100 0.75 0.6534475 0.8312203   1.300000
## 7        83    100 0.83 0.7418246 0.8977351   1.400000
## 8        85    100 0.85 0.7646925 0.9135456   1.500000
## 9        91    100 0.91 0.8360177 0.9580164   1.600000

Let’s plot the power curve using ggplot(); effectSize on the x-axis and power on the y-axis.

#Plotting curves
powerdata_all %>%
  ggplot(aes(y=mean, x=effectSize)) +
  #power goal
  geom_hline(yintercept=0.8, color="gray40",lty=2) +
  #power curve
  geom_line(show.legend=FALSE) +
  geom_errorbar(aes(ymin=lower, ymax=upper,
                    color=(effectSize==fit_test)), show.legend=FALSE) +
  geom_point(aes(color=(effectSize==fit_test)), size=3, show.legend=FALSE) +
  scale_color_manual(values=c("black","red")) +
  scale_x_continuous(breaks=c(1.0,1.03,1.1,1.2,1.3,1.4,1.5),
                     name="Effect Size") +
  scale_y_continuous(limits=c(0,1.0), 
                     breaks=c(0,.2,.4,.6,.8,1.0), 
                     name="Power") +  
  theme_classic() +
  ggtitle("Power Curve for relqual:confcw Interaction")

As the hypothesized effect size increases, the power estimate increases as well. The SESOI approach finds the smallest effect size that can satisfy the conventional power threshold 0.8. Here, we see that the fixed effect should be at least 1.4 to produce an adequately-powered estimate.

C. Power Analysis for Categorical Outcomes

This example makes use of the Categorical data set from B & L Chapter 6. The dataset contains daily diary reports from 61 heterosexual couples over a 4-week period (27 days).

We first load the Categorical data (N = 61, T = 27).

#set filepath for data file
filepath <- "https://raw.githubusercontent.com/The-Change-Lab/collaborations/refs/heads/main/Bolger_Laurenceau_2013/BL2013_data_categorical.csv"
#read in the .csv file using the url() function
data_categorical <- read.csv(file=url(filepath), header=TRUE)

We see that there are 61 persons with 27 occasions (coded 2 to 28). Moreover, the data are not fully complete.

1) Running the Multilevel Model with the Available Data

Next, we run the multilevel model using glmer(). The model formula includes the male partner’s lagged report of a conflict the previous day (lpconfc), the female partner’s morning report of anger after within-person (amangcw) and between-person (amangcb) mean-centering, and time7c (a time variable scaled so that a 1-unit difference corresponds to the passage of 1 week). The outcome variable is pconf, the male partner’s evening report of a conflict that day, and the predictor variable of interest is amangcw.

#Run multilevel model 
fit_catmodel <- lme4::glmer(pconf ~ 1 + amangcw + amangcb + lpconfc + time7c +
                              (1 | id),
                            family=binomial,
                            data=data_categorical,
                            na.action=na.exclude)
#examine model summary
summary(fit_catmodel)
## Generalized linear mixed model fit by maximum likelihood (Laplace
##   Approximation) [glmerMod]
##  Family: binomial  ( logit )
## Formula: pconf ~ 1 + amangcw + amangcb + lpconfc + time7c + (1 | id)
##    Data: data_categorical
## 
##      AIC      BIC   logLik deviance df.resid 
##   1084.8   1116.0   -536.4   1072.8     1339 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -0.7981 -0.4224 -0.3533 -0.2893  4.5619 
## 
## Random effects:
##  Groups Name        Variance Std.Dev.
##  id     (Intercept) 0.2475   0.4975  
## Number of obs: 1345, groups:  id, 61
## 
## Fixed effects:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.90161    0.10894 -17.456  < 2e-16 ***
## amangcw      0.21565    0.06738   3.200  0.00137 ** 
## amangcb     -0.22437    0.22398  -1.002  0.31649    
## lpconfc      0.31826    0.20892   1.523  0.12768    
## time7c      -0.19222    0.07073  -2.718  0.00657 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##         (Intr) amngcw amngcb lpcnfc
## amangcw -0.108                     
## amangcb  0.050 -0.103              
## lpconfc  0.006 -0.083  0.052       
## time7c   0.132 -0.004 -0.027  0.102

The results are very similar to the Mplus results reported in the book.

Our main parameter of interest has an effect size of 0.22. We create a variable containing this effect size.

fit_test <- fixef(fit_catmodel)["amangcw"]
fit_test
##   amangcw 
## 0.2156482

2) Examining Statistical Power

We can now simulate power directly from the fitted model for the amangcw effect.

#Power for the effect of interest
SimPower_Direct <- simr::powerSim(fit_catmodel,
                                  test=simr::fixed("amangcw", method="lr"),
                                  seed=1234, #set for replication
                                  nsim=100, #set low for time or high for real
                                  alpha=.05, progress=FALSE) #options
## Warning in observedPowerWarning(sim): This appears to be an "observed power"
## calculation
#examine post-hoc power
SimPower_Direct
## Power for predictor 'amangcw', (95% confidence interval):
##       79.00% (69.71, 86.51)
## 
## Test: Likelihood ratio
##       Effect size for amangcw is 0.22
## 
## Based on 100 simulations, (0 warnings, 0 errors)
## alpha = 0.05, nrow = 1345
## 
## Time elapsed: 0 h 0 m 31 s
## 
## nb: result might be an observed power calculation

We see that with an effect size of 0.22, power for the amangcw effect is 79%. Since the data we are using already contain the missing occasions, the power analysis is based on an average of 22 occasions per person (i.e., we do not need to reduce the data to obtain that analysis, as B & L do in Chapter 10.5).
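That average can be verified from the fitted model (a quick check, assuming id identifies persons in the data file):

#average number of occasions per person actually used in the model
nobs(fit_catmodel) / length(unique(data_categorical$id))
## [1] 22.04918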

3) Finding the SESOI

In a SESOI approach, we find the smallest effect size that the study is able to detect at a predetermined power. In an attempt to obtain 80% power, we increase the parameter from 0.22 to 0.23.

#setting the effect size to test
fixef(fit_catmodel)["amangcw"] <- 0.23

We next re-run the powerSim() function.

#Power for the effect of interest
SimPower_Fixed <- simr::powerSim(fit_catmodel,
                                 test=simr::fixed("amangcw", method="lr"),
                                 seed=1234, #set for replication
                                 nsim=100, #set low for time or high for real
                                 alpha=.05, progress=FALSE) #options

#examine post-hoc power
SimPower_Fixed
## Power for predictor 'amangcw', (95% confidence interval):
##       85.00% (76.47, 91.35)
## 
## Test: Likelihood ratio
##       Effect size for amangcw is 0.23
## 
## Based on 100 simulations, (0 warnings, 0 errors)
## alpha = 0.05, nrow = 1345
## 
## Time elapsed: 0 h 0 m 31 s

Cool! In order to have 80% power for the amangcw parameter, the current study requires an effect size of at least 0.23.
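As in Sections A and B, a power curve could also be constructed for amangcw by reusing the powerES() function defined earlier. Here is a sketch (output not shown; the effect sizes are illustrative, and note that simulations with glmer() models run noticeably slower):

#calculate power for effect sizes around the SESOI
powerdata_all <- list() #an empty object to save outputs
effectSizes <- c(fit_test, 0.20, 0.23, 0.26, 0.29) #specify effect sizes to test
for (i in effectSizes){
  powerdata_all <- rbind(powerdata_all,
                         powerES(modelFit=fit_catmodel, 
                                 effectName="amangcw", 
                                 effectSize=i))
}
powerdata_all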

Conclusion

This tutorial covered some basics for conducting SESOI post-hoc power sensitivity analysis in R using the simr package. We fit models using “pilot” data from Bolger & Laurenceau (2013) and then found the smallest effect size that can achieve adequate power.

In addition to post-hoc power analysis, it is also possible to conduct a power analysis prior to having data for analysis. Please see our next tutorial, “A-Priori Power Analysis for Multilevel Models,” for more info.

Thanks for powering through with us!