Which weights to use with multilevel models? A common question when working with large-scale assessments (LSAs) is which weights to use, and a related issue is how to specify those weights properly.
Software packages such as SAS and Mplus, when weights are specified at two levels, require conditional weights at level 1 if a level-2 weight is specified (or you can use the level-2 weights alone; see Mang et al.).
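As a minimal sketch of what a conditional level-1 weight looks like: in PISA-type data it can be computed by dividing the overall (unconditional) student weight by the school weight. The variable names below follow PISA conventions (w_fstuwt for the final student weight, w_fschwt for the school weight), and the data frame `dat` is a hypothetical placeholder:

```r
# Conditional level-1 weight: overall student weight divided by the
# level-2 (school) weight. Variable names follow PISA conventions;
# the data frame `dat` is a placeholder for your own data.
dat$wt1 <- dat$w_fstuwt / dat$w_fschwt  # conditional student (level-1) weight
dat$wt2 <- dat$w_fschwt                 # school (level-2) weight
```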

This is an update to:
Huang, F. (2024). Using plausible values when fitting multilevel models with large-scale assessment data using R. Large-scale Assessments in Education.
This is an update to mixPV; load it using:

source("https://raw.githubusercontent.com/flh3/pubdata/main/mixPV/mixPVv2.R")

The function has been updated so that it can use parallel processing (multiple cores on your computer) to make computation faster.
Load in the dataset:

data(pisa2012, package = 'MLMusingR')

The usual mixPV function can be used as normal.
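As a sketch of typical usage, a weighted two-level model with five plausible values might be fit as follows. The outcome, weight, and cluster names (pv1math through pv5math, w_fstuwt, w_fschwt, schoolid) follow PISA/MLMusingR conventions but are assumptions here; check names(pisa2012) before running:

```r
# Fit a two-level model for each plausible value and pool the results.
# Variable names are assumed from PISA conventions -- verify them first.
m1 <- mixPV(pv1math + pv2math + pv3math + pv4math + pv5math ~
              escs + (1 | schoolid),
            weights = c('w_fstuwt', 'w_fschwt'),
            data = pisa2012)
summary(m1)  # pooled estimates across the five plausible values
```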

Years ago I wrote a post on using multiple imputation, weights, and accounting for clustering in R. However, the process was quite cumbersome, and now, in 2024, there are more straightforward ways of handling this.
1. Load in the required packages

library(dplyr)        # for basic data management
library(tidyr)        # converting wide to tall
library(estimatr)     # estimating models with robust SEs
library(mitml)        # for imputation and analyzing MI datasets
library(MLMusingR)    # contains the sample dataset
library(mice)         # for carrying out the analysis with MI data
library(modelsummary) # outputting the results nicely
library(survey)       # alternative (classic) way

2.
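To show where these packages fit together, here is a minimal sketch of the kind of workflow described: impute with mice, fit a weighted model with cluster-robust standard errors using estimatr on each imputed dataset, and pool with Rubin's rules. The data frame and variable names (dat, read, escs, schid, wt) are hypothetical placeholders, not the post's actual data, and the exact call details may differ from the post:

```r
# Hypothetical sketch: impute, analyze with cluster-robust SEs, pool.
# `dat` with variables read, escs, schid (cluster id), and wt (weight)
# is a placeholder -- substitute your own data.
imp  <- mice::mice(dat, m = 5, printFlag = FALSE)  # 5 imputed datasets
fits <- with(imp, estimatr::lm_robust(read ~ escs,
                                      weights  = wt,
                                      clusters = schid))
summary(mice::pool(fits))  # combine the m sets of estimates
```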

Syntax to accompany the article:
Huang, F. (2024). Using plausible values when fitting multilevel models with large-scale assessment data using R. Large-scale Assessments in Education.
When fitting multilevel models using large-scale assessments such as PISA or TIMSS, it is important to account for (a) the use of weights at different levels and (b) the presence of multiple plausible values. I am often asked how to run this analysis in R.

Random notes. Regression-based techniques often involve finding a maximum (e.g., the maximum likelihood) or a minimum (e.g., least squares or mean squared error) of a function. Gradient descent is an iterative optimization algorithm used to find the minimum of a function (gradient ascent finds the maximum).
The update rule for \(\theta_j\) looks like:

\[\theta_j = \theta_j - \alpha\frac{\partial}{\partial \theta_j}J(\theta)\]

where \(\alpha\) is the learning rate (a smaller step size takes more iterations).
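As a concrete illustration, gradient descent can minimize a least-squares cost \(J(\theta)\) for a simple regression. A minimal sketch in R, with simulated data and illustrative settings (the learning rate and iteration count are arbitrary choices, not recommendations):

```r
# Gradient descent for least squares: y = theta1 + theta2 * x.
set.seed(123)
x <- cbind(1, rnorm(100))         # design matrix with an intercept column
y <- 2 + 3 * x[, 2] + rnorm(100)  # true theta = c(2, 3), plus noise
theta <- c(0, 0)                  # starting values
alpha <- 0.1                      # learning rate (step size)
for (i in 1:1000) {
  grad  <- t(x) %*% (x %*% theta - y) / nrow(x)  # gradient of J(theta)
  theta <- theta - alpha * grad                  # the update rule above
}
theta                 # should be close to the OLS solution
coef(lm(y ~ x[, 2]))  # compare with lm()
```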
