Random notes. Regression-based techniques often involve finding a maximum (e.g., of the likelihood) or a minimum (e.g., of a least squares or mean squared error criterion). Gradient descent is an iterative optimization algorithm used to find the minimum of a function (or gradient ascent to find the maximum).
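As a quick illustration (a toy sketch on simulated data, not tied to any particular post here), gradient descent can recover the ordinary least squares estimates by repeatedly stepping against the gradient of the mean squared error:

set.seed(1)
x <- rnorm(100)
y <- 2 + 3 * x + rnorm(100)
X <- cbind(1, x)               # design matrix with an intercept
beta <- c(0, 0)                # starting values
lr <- 0.05                     # learning rate (step size), picked by trial
for (i in 1:2000) {
  grad <- -2 * t(X) %*% (y - X %*% beta) / length(y)  # gradient of the MSE
  beta <- beta - lr * grad                            # step downhill
}
drop(beta)                     # compare with coef(lm(y ~ x))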
Logistic regression is a modeling technique that has attracted a lot of attention, especially from folks interested in classification, machine learning, and prediction using binary outcomes. One of the neat things about using R is that users can revisit commonly used procedures and figure out how they work.
(MLM notes). Residuals are often used for model diagnostics or for spotting outliers in the data. For single-level models, these are merely the observed minus the predicted values (i.e., \(e_i = y_i - \hat{y}_i\)). However, for multilevel models, these are a bit more complicated to compute. Since we have two error terms (in this example), we will have two sets of residuals. For example, a multilevel model with a single predictor at level one can be written as:

\[ y_{ij} = \gamma_{00} + \gamma_{10} x_{ij} + u_{0j} + e_{ij}, \]

with a level-two error term \(u_{0j}\) and a level-one error term \(e_{ij}\).
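To make this concrete, here is a minimal sketch using lme4 and its built-in sleepstudy data (my stand-in example, not the post's own data), extracting both sets of residuals:

library(lme4)
m1 <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
head(resid(m1))          # level-one residuals (the e_ij)
head(ranef(m1)$Subject)  # level-two residuals (empirical Bayes estimates of the u_0j)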
I was poking around my old teaching files and found a file I couldn't identify:
dat <- read.table("https://raw.githubusercontent.com/flh3/pubdata/main/Stefanski_2007/mizzo_1_data_yx1x5.txt")
head(dat)
## V1 V2 V3 V4 V5 V6
## 1 -0.224 0.00546 0.3803 0.01351 0.2092 0.14671
## 2 0.844 0.10737 -0.0265 0.04586 0.0130 -0.02719
## 3 1.062 0.09112 0.1813 0.05017 -0.1887 -0.01208
## 4 -1.042 0.44049 0.2460 0.00542 -0.2129 0.10152
## 5 0.157 -0.17051 0.1476 0.08363 -0.0953 -0.00785
## 6 -0.135 0.06160 -0.8041 -0.02595 0.2917 -0.07838
dim(dat)
## [1] 3785 6
Turns out it was an old data file I had used in class when discussing regression diagnostics. We often talk about the assumption of homoskedasticity of the residuals, and we assess it graphically by plotting the fitted values on the x-axis and the residuals on the y-axis. If all is well, we should not see any discernible pattern.
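As a sketch, assuming (from the file name yx1x5) that V1 is the outcome and V2 through V6 are the predictors:

m2 <- lm(V1 ~ V2 + V3 + V4 + V5 + V6, data = dat)
plot(fitted(m2), resid(m2),
  xlab = "Fitted values", ylab = "Residuals")
abline(h = 0, lty = 2)  # residuals should scatter evenly around zero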
A recurring question that I get asked is how to use full information maximum likelihood (FIML) when performing a multiple regression analysis BUT this time, accounting for a nested or clustered data structure. For this example, I use the leadership dataset in the mitml package (Grund et al., 2021). We'll also use lavaan (Rosseel, 2012) to estimate the two-level model. The chapter of Grund et al. (2019) is available here. We'll replicate the Mplus FIML results shown in Table 16.3 of the chapter.
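A minimal sketch of such a two-level FIML model; the variable names (JOBSAT, NEGLEAD, COHES, and the cluster identifier GRPID) come from the mitml documentation, and the substantive model here is my own simple choice, not necessarily the one in Table 16.3:

library(mitml)   # provides the leadership data
library(lavaan)
data(leadership)

model <- '
level: 1
  JOBSAT ~ NEGLEAD
level: 2
  JOBSAT ~ COHES
'
# missing = "fiml" handles the incomplete cases; two-level models with
# missing data require a reasonably recent version of lavaan
fit <- sem(model, data = leadership, cluster = "GRPID", missing = "fiml")
summary(fit)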
In a recent article in Multivariate Behavioral Research, we (Huang, Wiedermann, and Zhang; HWZ; doi: 10.1080/00273171.2022.2077290) discuss a robust standard error that can be used with mixed models and accounts for violations of homogeneity. Note that these robust standard errors have been around for years, though they are not always provided in statistical software. They can be computed using the CR2 package or the clubSandwich package. This page shows how to compute the traditional Liang and Zeger (1986) robust standard errors (CR0) and the CR2 estimator; see Bell and McCaffrey (2002) as well as McCaffrey, Bell, and Botts (2001) (BM and MBB).
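For instance, with clubSandwich and an lme4 model (a sketch using the built-in sleepstudy data rather than the article's examples):

library(lme4)
library(clubSandwich)
m3 <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
coef_test(m3, vcov = "CR0")  # traditional Liang-Zeger robust standard errors
coef_test(m3, vcov = "CR2")  # Bell-McCaffrey small-sample adjusted estimator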
In an earlier post, I had shown this using iteratively reweighted least squares (IRLS). This is just an alternative method using the Newton-Raphson and Fisher scoring algorithms. For further details, you can look here as well.
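A rough sketch of the idea, using mtcars as stand-in data (for logistic regression with the canonical logit link, Newton-Raphson and Fisher scoring coincide):

X <- cbind(1, mtcars$wt)        # design matrix
y <- mtcars$am                  # binary outcome
beta <- rep(0, ncol(X))         # starting values
for (i in 1:10) {
  p <- drop(plogis(X %*% beta)) # fitted probabilities
  W <- diag(p * (1 - p))        # weights from the Fisher information
  beta <- beta + solve(t(X) %*% W %*% X, t(X) %*% (y - p))  # scoring update
}
drop(beta)  # matches coef(glm(am ~ wt, data = mtcars, family = binomial))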
More notes to self… Obtaining estimates of the unknown parameters in multilevel models is often done by optimizing a likelihood function. The estimates are the values that maximize the likelihood function given certain distributional assumptions.
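In miniature (a toy normal sample, not a multilevel model), the same principle can be shown with optim(): write down the log-likelihood and let a numerical optimizer find the parameter values that maximize it.

set.seed(123)
yy <- rnorm(200, mean = 5, sd = 2)
negll <- function(par) {
  # negative log-likelihood; sd is parameterized on the log scale
  -sum(dnorm(yy, mean = par[1], sd = exp(par[2]), log = TRUE))
}
fit2 <- optim(c(0, 0), negll)   # minimizing the negative = maximizing the likelihood
c(mean = fit2$par[1], sd = exp(fit2$par[2]))  # should be close to 5 and 2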