I show this in a recent JEBS article on using generalized estimating equations (GEEs). Some annotated syntax and examples are shown below.
Huang, F. (2021). Analyzing cross-sectionally clustered data using generalized estimating equations. Journal of Educational and Behavioral Statistics. doi: 10.3102/10769986211017480
In the original paper draft, I had a section showing how much more widely used mixed models (i.e., MLMs, HLMs) are compared to GEEs, but I was asked to remove it to save space.

More notes to self… Obtaining estimates of the unknown parameters in multilevel models is often done by optimizing a likelihood function. The estimates are the values that maximize the likelihood function given certain distributional assumptions.
The likelihood function differs depending on whether maximum (ML) or restricted maximum (REML) likelihood is used. For ML, the log likelihood function to be maximized is:
[ \ell_{ML}(\theta) = -0.5n \ln(2\pi) - 0.5 \sum_{i}{\ln(\det(V_i))} - 0.5 \sum_{i}{(y_i - X_i\beta)^T V_i^{-1} (y_i - X_i\beta)} ]
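The ML log-likelihood above can be evaluated directly once each cluster's marginal covariance (V_i) is in hand. A minimal sketch in Python/NumPy (not the paper's code; the function name `ml_loglik` is my own) that sums the three terms over clusters:

```python
import numpy as np

def ml_loglik(y_list, X_list, beta, V_list):
    """ML log-likelihood summed over clusters i, given marginal covariances V_i."""
    ll = 0.0
    for y, X, V in zip(y_list, X_list, V_list):
        r = y - X @ beta                     # residual for cluster i
        sign, logdet = np.linalg.slogdet(V)  # ln(det(V_i)), numerically stable
        ll += (-0.5 * len(y) * np.log(2 * np.pi)
               - 0.5 * logdet
               - 0.5 * r @ np.linalg.solve(V, r))  # quadratic form r' V^{-1} r
    return ll
```

In practice software maximizes this over the variance components inside each (V_i) as well as (\beta); the sketch only shows the evaluation step for fixed values.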

Notes to self (and anyone else who might find this useful). With the general linear mixed models (to simplify, I am just omitting super/subscripts):
[Y = X\beta + Zu + e] where we assume (u \sim MVN(0, G)) and (e \sim MVN(0, R)). (V) is:
[V = ZGZ^T + R]
Software estimates (V) iteratively and maximizes the likelihood (the function of which depends on whether ML or REML is used).
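To make (V = ZGZ^T + R) concrete, here is a minimal sketch for a single cluster under a random-intercept model with hypothetical variance components (the values 0.5 and 1.0 are illustrative, not from the paper). With conditional independence, the result is the familiar compound-symmetry structure:

```python
import numpy as np

n_i = 4                       # observations in cluster i
Z = np.ones((n_i, 1))         # random-intercept design matrix
tau2, sigma2 = 0.5, 1.0       # hypothetical intercept and residual variances
G = np.array([[tau2]])        # Var(u)
R = sigma2 * np.eye(n_i)      # Var(e), conditional independence
V = Z @ G @ Z.T + R           # marginal covariance for the cluster
# Diagonal entries are tau2 + sigma2; off-diagonals are tau2,
# so the implied intraclass correlation is tau2 / (tau2 + sigma2).
```

Iterative estimation amounts to updating the variance components in (G) and (R), rebuilding (V), and re-maximizing the likelihood until convergence.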
