Generalised Linear Mixed Models: Ohio Respiratory Data and Wine Bitterness

Fahrmeir and Tutz (2001) Examples 7.1–7.4

Published

April 15, 2026

Many R packages exist for fitting generalised linear mixed models, using different estimation methods:

  • lme4::glmer (most widely used): Frequentist GLMMs via Laplace approximation (nAGQ = 1) or adaptive Gauss-Hermite quadrature (nAGQ > 1)
  • MASS::glmmPQL: Penalized quasi-likelihood (PQL; approximate, fast, but can be biased for binary/sparse outcomes)
  • MCMCglmm: Bayesian GLMMs fitted by Markov chain Monte Carlo (MCMC)
  • glmmTMB (very popular modern alternative): Frequentist GLMMs via Template Model Builder/Laplace; especially strong for count models, zero inflation, and overdispersion
  • ordinal::clmm: Cumulative link mixed models for ordinal outcomes (an ordinal GLMM family)
  • glmmML: Classical random-intercept GLMMs using adaptive Gauss-Hermite quadrature (more limited model scope)
  • ASREML-R: Commercial software with broad mixed-model/GLMM capability, widely used in biostatistics/agricultural genetics
  • glmmADMB: GLMMs via AD Model Builder/Laplace (historically important, but largely superseded and no longer actively maintained)
  • glmm (from Jim Lindsey’s repeated package): Older AGHQ-based approach for repeated-measures GLMMs (now uncommon)
  • sabreR, mcemGLM, gamlss.mx: Specialized/less commonly used frameworks that can fit mixed or latent/random-effect extensions in narrower settings

Ohio Respiratory Data: Fahrmeir and Tutz (2001) Examples 7.1 and 7.3

Setup

The Ohio respiratory dataset contains repeated respiratory status measurements (yes/no) for children in a longitudinal study, with covariates including age and maternal smoking status. We’ll fit GLMMs with random intercepts to account for within-child correlation. Our goal is to reproduce some of Table 7.2 from Fahrmeir and Tutz (2001).

Load required packages and do the standard preparation of the Ohio respiratory dataset:

Code
library(conflicted)
library(dplyr)
library(lme4)
Loading required package: Matrix
Code
library(geepack)
library(MASS)
library(ordinal)

ohio_tbl <- as_tibble(ohio)
ohio_tbl
# A tibble: 2,148 × 4
    resp    id   age smoke
   <int> <int> <int> <int>
 1     0     0    -2     0
 2     0     0    -1     0
 3     0     0     0     0
 4     0     0     1     0
 5     0     1    -2     0
 6     0     1    -1     0
 7     0     1     0     0
 8     0     1     1     0
 9     0     2    -2     0
10     0     2    -1     0
# ℹ 2,138 more rows
Code
ohio_tbl <- mutate(ohio_tbl, across(c(age, smoke), factor))

# Check default contrasts
contrasts(ohio_tbl$age)
   -1 0 1
-2  0 0 0
-1  1 0 0
0   0 1 0
1   0 0 1
Code
contrasts(ohio_tbl$smoke)
  1
0 0
1 1
Code
options(contrasts = c("contr.sum", "contr.poly"))

contrasts(ohio_tbl$age)
   [,1] [,2] [,3]
-2    1    0    0
-1    0    1    0
0     0    0    1
1    -1   -1   -1
Code
contrasts(ohio_tbl$smoke)
  [,1]
0    1
1   -1
Code
contrasts(ohio_tbl$smoke) <- matrix(c(-1, 1), 2, 1)
contrasts(ohio_tbl$smoke)
  [,1]
0   -1
1    1

lme4::glmer() Fitting with Laplace Approximation

When fitting a generalised linear mixed model (GLMM), the likelihood involves integrating over the random effects, which typically has no closed-form solution. Therefore we need to choose a numerical integration method to approximate the likelihood. The lme4::glmer() function provides two main methods for this: the Laplace approximation and adaptive Gauss-Hermite quadrature (AGHQ). The default lme4::glmer() approach uses the Laplace approximation. Once we have chosen the integration method, we also need to specify the optimization algorithm; the Nelder-Mead optimizer is recommended by Faraway (2016).
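To see the Laplace approximation in isolation before fitting anything, here is a small base-R sketch (independent of lme4’s internals): the integrand is matched by a Gaussian at its mode, and the Gaussian’s known normalising constant approximates the integral. As a toy stand-in for the per-subject integrals, we use the integral of dnorm(u) * plogis(u), whose exact value is 1/2 by symmetry.

```r
# Toy Laplace approximation of I = integral of dnorm(u) * plogis(u) du,
# whose exact value is 1/2 (since plogis(u) + plogis(-u) = 1).
f     <- function(u) dnorm(u) * plogis(u)
log_f <- function(u) dnorm(u, log = TRUE) + plogis(u, log.p = TRUE)

# Locate the mode of the integrand
opt <- optimize(log_f, interval = c(-5, 5), maximum = TRUE)
m <- opt$maximum

# Numerical second derivative of log f at the mode
h <- 1e-4
d2 <- (log_f(m + h) - 2 * log_f(m) + log_f(m - h)) / h^2

# Laplace: value at the mode times the matched Gaussian's normalising constant
laplace <- exp(opt$objective) * sqrt(2 * pi / (-d2))
exact   <- integrate(f, -Inf, Inf)$value

c(laplace = laplace, exact = exact)
```

Even with a markedly skewed integrand, the one-point Laplace approximation lands within about 1% of the truth here; glmer() performs the analogous computation once per subject.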

Code
glmer_fit_nelder_mead <- glmer(
  resp ~ age * smoke + (1 | id),
  data = ohio_tbl,
  family = binomial,
  control = glmerControl(
    optimizer = "Nelder_Mead"
  )
)
Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model failed to converge with max|grad| = 0.0589875 (tol = 0.002, component 1)
  See ?lme4::convergence and ?lme4::troubleshooting.

This specifies a random intercept model: (1 | id) means each subject (id) has its own random intercept drawn from a normal distribution with mean 0 and estimated standard deviation. The 1 represents the intercept term; | id conditions on subject ID. This allows each subject to have a different baseline probability of respiratory status while maintaining a common effect of the fixed covariates.
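To make the random-intercept structure concrete, here is a base-R sketch that simulates data from a model of this form. All parameter values are illustrative, not taken from the Ohio fit.

```r
# Simulate a random-intercept logistic model: each subject draws its own
# intercept b_i ~ N(0, sigma_b^2), shared across its repeated measurements.
set.seed(1)
n_subj  <- 50
sigma_b <- 2                            # illustrative random-intercept SD
b <- rnorm(n_subj, mean = 0, sd = sigma_b)

id  <- rep(seq_len(n_subj), each = 4)   # four visits per subject
age <- rep(-2:1, times = n_subj)        # centred age, as in the Ohio data
eta <- -3 + 0.1 * age + b[id]           # common fixed effects + subject offset
y   <- rbinom(length(eta), size = 1, prob = plogis(eta))

# Event rates for the subjects with the smallest and largest b_i
tapply(y, id, mean)[order(b)[c(1, n_subj)]]
```

The subject-level offset shifts each child’s baseline log-odds while the fixed effects stay common to everyone, which is exactly what `(1 | id)` encodes.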

We can see that there is a warning about convergence. Checking ?lme4::troubleshooting suggests that this type of warning may be fixed by trying a different optimiser. Another optimiser available is Bound Optimization BY Quadratic Approximation (BOBYQA). Let’s try that:

Code
glmer_fit <- glmer(
  resp ~ age * smoke + (1 | id),
  data = ohio_tbl,
  family = binomial,
  control = glmerControl(
    optimizer = "bobyqa"
  )
)

logLik(glmer_fit) - logLik(glmer_fit_nelder_mead)
'log Lik.' 0.004172472 (df=9)
Code
as_tibble(coef(summary(glmer_fit)) - coef(summary(glmer_fit_nelder_mead)))
# A tibble: 8 × 4
   Estimate `Std. Error` `z value` `Pr(>|z|)`
      <dbl>        <dbl>     <dbl>      <dbl>
1  0.00661    -0.00187   -0.0657    -2.79e-35
2 -0.00591     0.0000131 -0.0441     2.78e- 2
3 -0.00347    -0.0000743 -0.0254     3.27e- 3
4  0.00415    -0.000176   0.0321    -1.91e- 2
5 -0.000761   -0.000608   0.000747  -2.14e- 4
6 -0.00516     0.0000285 -0.0381    -1.27e- 2
7 -0.00101    -0.0000643 -0.00724    3.89e- 3
8  0.00413    -0.000178   0.0314    -2.39e- 2

There are no longer convergence issues, and comparing the log-likelihoods, we see that the log-likelihood is slightly higher with the BOBYQA optimizer. The coefficient estimates are slightly different, but the overall pattern of results is the same.

Code
summary(glmer_fit)
Generalized linear mixed model fit by maximum likelihood (Laplace
  Approximation) [glmerMod]
 Family: binomial  ( logit )
Formula: resp ~ age * smoke + (1 | id)
   Data: ohio_tbl
Control: glmerControl(optimizer = "bobyqa")

      AIC       BIC    logLik -2*log(L)  df.resid 
   1601.5    1652.6    -791.8    1583.5      2139 

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-1.4831 -0.1819 -0.1583 -0.1174  2.7911 

Random effects:
 Groups Name        Variance Std.Dev.
 id     (Intercept) 5.628    2.372   
Number of obs: 2148, groups:  id, 537

Fixed effects:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -3.11284    0.25076 -12.414   <2e-16 ***
age1         0.08911    0.13434   0.663   0.5071    
age2         0.24920    0.13140   1.897   0.0579 .  
age3         0.10406    0.13333   0.780   0.4351    
smoke1       0.20808    0.14540   1.431   0.1524    
age1:smoke1 -0.18014    0.13447  -1.340   0.1804    
age2:smoke1  0.11622    0.13112   0.886   0.3754    
age3:smoke1  0.04222    0.13328   0.317   0.7514    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Correlation of Fixed Effects:
            (Intr) age1   age2   age3   smoke1 ag1:s1 ag2:s1
age1        -0.027                                          
age2        -0.084 -0.299                                   
age3        -0.037 -0.314 -0.292                            
smoke1       0.115  0.026 -0.019 -0.008                     
age1:smoke1  0.056  0.277 -0.089 -0.095 -0.014              
age2:smoke1 -0.041 -0.084  0.218 -0.053 -0.041 -0.303       
age3:smoke1 -0.016 -0.093 -0.053  0.228 -0.019 -0.315 -0.294

The output is very similar to that in the linear mixed model fit, with fixed effects estimates, as well as a correlation matrix for the fixed effects, and an estimate of the random effect variance.

The coefficient estimates are in the right ballpark, but ultimately don’t match the book’s very closely. Instead we follow the book’s approach and use adaptive Gauss-Hermite quadrature (AGHQ) to approximate the integral.

Caution: Standard Errors in Table 7.2

Although, strictly speaking, the mixed effects model is not in the exponential family, it is still a likelihood-based model, so in principle we can approximate a Fisher information matrix to compute standard errors. Unfortunately, the lme4::glmer() documentation does not describe its standard error computation in enough detail for us to reproduce the standard errors in the book.
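The general principle is easiest to see for an ordinary GLM, where the standard errors come directly from the inverse of the observed Fisher information. A minimal base-R sketch with simulated data (nothing here is from the Ohio fit):

```r
# Standard errors from the observed Fisher information of a logistic GLM.
set.seed(42)
x <- rnorm(100)
y <- rbinom(100, size = 1, prob = plogis(-1 + 0.5 * x))
fit <- glm(y ~ x, family = binomial)

# vcov() inverts the observed information; its diagonal gives squared SEs,
# which match the standard errors printed by summary().
se_from_info <- sqrt(diag(vcov(fit)))
all.equal(se_from_info, coef(summary(fit))[, "Std. Error"])
```

For a GLMM the same recipe applies to the (approximated) marginal likelihood, which is why the quality of the integral approximation also affects the reported standard errors.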

lme4::glmer() Fitting with Adaptive Gauss-Hermite Quadrature

In order to get more accurate estimates, we can improve the numerical integration by using adaptive Gauss-Hermite quadrature (AGHQ) instead of the Laplace approximation. The nAGQ argument in lme4::glmer() controls the number of quadrature points used for the integration. The default nAGQ = 1 corresponds to the Laplace approximation, while nAGQ > 1 uses multiple quadrature points for a more accurate approximation of the likelihood.
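To see what the quadrature points are doing, here is a base-R sketch that constructs Gauss-Hermite nodes and weights via the Golub-Welsch eigenvalue construction and uses them to approximate an integral with a known closed form. This illustrates generic GHQ; lme4’s adaptive version additionally centres and scales the nodes around each subject’s conditional mode.

```r
# Gauss-Hermite nodes/weights for weight function exp(-x^2), via the
# Golub-Welsch construction: eigendecompose the Jacobi matrix of the
# Hermite three-term recurrence.
gauss_hermite <- function(n) {
  k <- seq_len(n - 1)
  J <- matrix(0, n, n)
  J[cbind(k, k + 1)] <- J[cbind(k + 1, k)] <- sqrt(k / 2)
  e <- eigen(J, symmetric = TRUE)
  list(nodes = e$values, weights = sqrt(pi) * e$vectors[1, ]^2)
}

gh <- gauss_hermite(10)

# Approximate integral of exp(-x^2) * cos(x) dx; exact value is
# sqrt(pi) * exp(-1/4).
c(quadrature = sum(gh$weights * cos(gh$nodes)),
  exact      = sqrt(pi) * exp(-0.25))
```

With only 10 nodes the quadrature result agrees with the exact value to many decimal places, which is why modest nAGQ values already improve noticeably on the single-point Laplace approximation.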

Here we try nAGQ = 10, which is what Fahrmeir and Tutz (2001) claim to use in their analysis.

Code
glmer_fit_n_agq_10 <- glmer(
  resp ~ age * smoke + (1 | id),
  data = ohio_tbl,
  family = binomial,
  control = glmerControl(
    optimizer = "bobyqa"
  ),
  nAGQ = 10
)

summary(glmer_fit_n_agq_10)
Generalized linear mixed model fit by maximum likelihood (Adaptive
  Gauss-Hermite Quadrature, nAGQ = 10) [glmerMod]
 Family: binomial  ( logit )
Formula: resp ~ age * smoke + (1 | id)
   Data: ohio_tbl
Control: glmerControl(optimizer = "bobyqa")

      AIC       BIC    logLik -2*log(L)  df.resid 
   1607.3    1658.3    -794.6    1589.3      2139 

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-1.4428 -0.2051 -0.1800 -0.1340  2.7803 

Random effects:
 Groups Name        Variance Std.Dev.
 id     (Intercept) 4.697    2.167   
Number of obs: 2148, groups:  id, 537

Fixed effects:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.82560    0.18486 -15.285   <2e-16 ***
age1         0.08847    0.13354   0.663   0.5076    
age2         0.24591    0.13086   1.879   0.0602 .  
age3         0.10175    0.13265   0.767   0.4430    
smoke1       0.19938    0.13724   1.453   0.1463    
age1:smoke1 -0.17911    0.13366  -1.340   0.1802    
age2:smoke1  0.11461    0.13064   0.877   0.3803    
age3:smoke1  0.04135    0.13261   0.312   0.7552    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Correlation of Fixed Effects:
            (Intr) age1   age2   age3   smoke1 ag1:s1 ag2:s1
age1        -0.023                                          
age2        -0.077 -0.302                                   
age3        -0.032 -0.315 -0.296                            
smoke1       0.158  0.026 -0.021 -0.008                     
age1:smoke1  0.050  0.270 -0.086 -0.092 -0.013              
age2:smoke1 -0.040 -0.083  0.215 -0.055 -0.042 -0.305       
age3:smoke1 -0.015 -0.091 -0.055  0.224 -0.019 -0.316 -0.298

Comparing to Table 7.2, we are now noticeably closer to the book’s reported estimates.

Random Effects Estimates

As with the LMMs, we can use the lme4::ranef() function to extract the conditional modes of the random effects for each subject:

Code
ranef(glmer_fit)$id |> tail(20)
    (Intercept)
517    2.452273
518    2.452273
519    3.327563
520    2.452273
521    2.452273
522    2.452273
523    2.452273
524    3.327563
525    3.327563
526    3.327563
527    3.327563
528    3.327563
529    3.327563
530    4.357668
531    4.357668
532    4.357668
533    4.357668
534    4.357668
535    4.357668
536    4.357668

Note that we are no longer dealing with a normally distributed response, so the conditional modes are not necessarily the same as the conditional means of the random effects distribution.
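The conditional mode for a single subject can be sketched directly in base R: it maximises the subject’s Bernoulli log-likelihood plus the Gaussian log-prior on the intercept. The subject data and parameter values below are hypothetical, chosen only to illustrate the computation.

```r
# Conditional mode of one subject's random intercept (sketch):
# maximise log p(y | b) + log p(b) over b, for binary outcomes y,
# a fixed-effect linear predictor eta_fixed, and intercept SD sigma_b.
cond_mode <- function(y, eta_fixed, sigma_b) {
  neg_log_post <- function(b) {
    -sum(dbinom(y, size = 1, prob = plogis(eta_fixed + b), log = TRUE)) +
      b^2 / (2 * sigma_b^2)    # Gaussian penalty from the random-effect prior
  }
  optimize(neg_log_post, interval = c(-10, 10))$minimum
}

# Hypothetical subjects: all-positive vs all-negative response histories
cond_mode(y = c(1, 1, 1, 1), eta_fixed = rep(-3, 4), sigma_b = 2.4)
cond_mode(y = c(0, 0, 0, 0), eta_fixed = rep(-3, 4), sigma_b = 2.4)
```

A subject whose responses are all positive gets a large positive mode, while an all-negative subject is shrunk only slightly below zero, mirroring the clustering of values seen in the ranef() output above.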

MASS::glmmPQL() Fitting with Penalized Quasi-Likelihood

Penalized quasi-likelihood (PQL) is an alternative (and much faster) method that doesn’t require numerical integration. See Chapter 13 in Faraway (2016) for more details.

Code
glmm_pql_fit <- glmmPQL(
  resp ~ age * smoke,
  random = ~ 1 | id,
  family = binomial,
  data = ohio_tbl
)
iteration 1
iteration 2
iteration 3
iteration 4
iteration 5
iteration 6
iteration 7
iteration 8
Code
summary(glmm_pql_fit)
Linear mixed-effects model fit by maximum likelihood
  Data: ohio_tbl 
  AIC BIC logLik
   NA  NA     NA

Random effects:
 Formula: ~1 | id
        (Intercept)  Residual
StdDev:     2.07297 0.6341298

Variance function:
 Structure: fixed weights
 Formula: ~invwt 
Fixed effects:  resp ~ age * smoke 
                 Value  Std.Error   DF    t-value p-value
(Intercept) -2.5323392 0.11682431 1605 -21.676475  0.0000
age1         0.0913145 0.08635904 1605   1.057382  0.2905
age2         0.2553234 0.08463119 1605   3.016895  0.0026
age3         0.1054854 0.08581869 1605   1.229166  0.2192
smoke1       0.1621624 0.11682431  535   1.388088  0.1657
age1:smoke1 -0.1858799 0.08635904 1605  -2.152408  0.0315
age2:smoke1  0.1190618 0.08463119 1605   1.406832  0.1597
age3:smoke1  0.0426174 0.08581869 1605   0.496598  0.6195
 Correlation: 
            (Intr) age1   age2   age3   smoke1 ag1:s1 ag2:s1
age1        -0.009                                          
age2        -0.042 -0.304                                   
age3        -0.017 -0.316 -0.297                            
smoke1       0.273  0.025 -0.024 -0.009                     
age1:smoke1  0.025  0.271 -0.086 -0.092 -0.009              
age2:smoke1 -0.024 -0.086  0.218 -0.056 -0.042 -0.304       
age3:smoke1 -0.009 -0.092 -0.056  0.226 -0.017 -0.316 -0.297

Standardized Within-Group Residuals:
       Min         Q1        Med         Q3        Max 
-2.7730944 -0.2836175 -0.2612394 -0.1922996  3.8397997 

Number of Observations: 2148
Number of Groups: 537 
Code
as_tibble(ranef(glmm_pql_fit))
# A tibble: 537 × 1
   `(Intercept)`
           <dbl>
 1         -1.04
 2         -1.04
 3         -1.04
 4         -1.04
 5         -1.04
 6         -1.04
 7         -1.04
 8         -1.04
 9         -1.04
10         -1.04
# ℹ 527 more rows

Wine Bitterness: Fahrmeir and Tutz (2001) Example 7.2 and 7.4

Now we demonstrate ordinal regression with random effects. For this we consider the ordinal::wine dataset, described in Example 7.2 of Fahrmeir and Tutz (2001), which contains ratings of wine bitterness by different judges under different conditions.

As always, we should inspect the data help page:

Code
?wine

The variables are:

  • response: a continuous measure of bitterness
  • rating: an ordered factor rating of bitterness (1–5)
  • temp: temperature (cold/warm)
  • contact: contact with skin (yes/no)
  • judge: judge ID

We will regress the ordinal rating on the covariates temp and contact, while including a random intercept for judge to account for judge-specific differences in rating tendencies, in order to reproduce some of the results in Table 7.4 of Fahrmeir and Tutz (2001).

Data Setup and Contrast Coding

Even though the book says that no contact is the reference level, the analysis appears to be done with contact as the reference level.

Code
wine_tbl <- as_tibble(wine)
wine_tbl
# A tibble: 72 × 6
   response rating temp  contact bottle judge
      <dbl> <ord>  <fct> <fct>   <fct>  <fct>
 1       36 2      cold  no      1      1    
 2       48 3      cold  no      2      1    
 3       47 3      cold  yes     3      1    
 4       67 4      cold  yes     4      1    
 5       77 4      warm  no      5      1    
 6       60 4      warm  no      6      1    
 7       83 5      warm  yes     7      1    
 8       90 5      warm  yes     8      1    
 9       17 1      cold  no      1      2    
10       22 2      cold  no      2      2    
# ℹ 62 more rows
Code
wine_tbl <- wine_tbl |>
  mutate(
    temp = factor(temp, levels = c("warm", "cold")),
    contact = factor(contact, levels = c("yes", "no"))
  )
contrasts(wine_tbl$contact)
    [,1]
yes    1
no    -1
Code
contrasts(wine_tbl$temp)
     [,1]
warm    1
cold   -1

Reset contrasts to dummy coding:

Code
options(contrasts = c("contr.treatment", "contr.poly"))

# Check default contrasts
contrasts(wine_tbl$contact)
    no
yes  0
no   1
Code
contrasts(wine_tbl$temp)
     cold
warm    0
cold    1

Fixed Effects Ordinal Model: MASS::polr()

To replicate the first column of Table 7.4, we fit using MASS::polr():

Code
wine_fixed_fit_polr <- polr(
  rating ~ temp + contact,
  data = wine_tbl,
  Hess = TRUE,
  method = "logistic"
)

summary(wine_fixed_fit_polr)
Call:
polr(formula = rating ~ temp + contact, data = wine_tbl, Hess = TRUE, 
    method = "logistic")

Coefficients:
           Value Std. Error t value
tempcold  -2.503     0.5287  -4.735
contactno -1.528     0.4766  -3.205

Intercepts:
    Value   Std. Error t value
1|2 -5.3753  0.7680    -6.9991
2|3 -2.7801  0.5306    -5.2397
3|4 -0.5640  0.4076    -1.3837
4|5  0.9755  0.4593     2.1238

Residual Deviance: 172.9838 
AIC: 184.9838 

Our estimates for the non-threshold coefficients are relatively close to the book’s (up to sign, as we expect; this is explained in the first ordinal regression script).

The book reports thresholds differently to how we do, but we can easily convert our results to the book’s formulation (see equation (7.5.1) on page 320 of the book for why this is the case). Let’s check:

Code
thresholds <- wine_fixed_fit_polr$zeta
alphas <- c(thresholds[1], log(diff(thresholds)))
alphas
       1|2        2|3        3|4        4|5 
-5.3753078  0.9536691  0.7957376  0.4314940 

This line re-parameterises the ordered thresholds into the book’s unconstrained parameterisation (alphas):

  • thresholds[1]: keeps the first cutpoint unchanged.
  • diff(thresholds): computes successive gaps
    \((\theta_2-\theta_1,\theta_3-\theta_2,\ldots)\).
  • log(diff(thresholds)): log-transforms those positive gaps so they can be modelled on the real line.
  • c(...): concatenates the first cutpoint and the logged gaps into one vector.

So if thresholds = (theta_1, theta_2, theta_3, theta_4), then this creates \[ \alpha = \left(\theta_1,\log(\theta_2-\theta_1),\log(\theta_3-\theta_2),\log(\theta_4-\theta_3)\right). \]

This guarantees ordered thresholds when converting back, because exponentiating the gap terms makes each gap strictly positive.
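The ordering guarantee is easy to verify with the inverse map, sketched here in base R. The numeric thresholds below are illustrative round numbers, not the fitted values.

```r
# Inverse of the reparameterisation: cumulatively summing the first
# cutpoint and the exponentiated gaps recovers ordered thresholds.
alpha_to_thresholds <- function(alpha) {
  cumsum(c(alpha[1], exp(alpha[-1])))
}

# Round trip with illustrative (not fitted) thresholds
theta <- c(-5.4, -2.8, -0.6, 1.0)
alpha <- c(theta[1], log(diff(theta)))
alpha_to_thresholds(alpha)   # recovers theta
```

Because exp() of any real number is strictly positive, every gap in the reconstructed vector is positive, so any alpha vector whatsoever maps back to a strictly increasing set of thresholds.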

These are also close but not exact. For a final sanity check, we can compare our log-likelihood with the book’s:

Code
logLik(wine_fixed_fit_polr)
'log Lik.' -86.49192 (df=6)

The log-likelihood is close to the one reported in Table 7.4, but not exact; in fact, ours is marginally higher than the book’s. The reason for all of these small mismatches is not clear.

In order to make sure that the mismatches aren’t due to the specific implementation of MASS::polr(), we can try a different function ordinal::clm():

Code
wine_fixed_fit_clm <- clm(rating ~ temp + contact, data = wine_tbl)

summary(wine_fixed_fit_clm)
formula: rating ~ temp + contact
data:    wine_tbl

 link  threshold nobs logLik AIC    niter max.grad cond.H 
 logit flexible  72   -86.49 184.98 6(0)  4.01e-12 2.5e+01

Coefficients:
          Estimate Std. Error z value Pr(>|z|)    
tempcold   -2.5031     0.5287  -4.735 2.19e-06 ***
contactno  -1.5278     0.4766  -3.205  0.00135 ** 
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Threshold coefficients:
    Estimate Std. Error z value
1|2  -5.3753     0.7680  -6.999
2|3  -2.7801     0.5306  -5.240
3|4  -0.5640     0.4076  -1.384
4|5   0.9755     0.4593   2.124
Code
coef(wine_fixed_fit_clm)
       1|2        2|3        3|4        4|5   tempcold  contactno 
-5.3752831 -2.7800909 -0.5640127  0.9755045 -2.5031020 -1.5277977 
Code
logLik(wine_fixed_fit_clm)
'log Lik.' -86.49192 (df=6)
Code
thresholds_clm <- wine_fixed_fit_clm$alpha
alphas_clm <- c(thresholds_clm[1], log(diff(thresholds_clm)))
alphas_clm
       1|2        2|3        3|4        4|5 
-5.3752831  0.9536606  0.7957390  0.4314689 

The values retrieved are almost the same as those obtained with MASS::polr().

Random Effects Model: ordinal::clmm()

Now we fit a random effects model using ordinal::clmm(), which is designed for cumulative link mixed models (CLMMs).

Code
wine_mixed_fit <- clmm(
  rating ~ temp + contact + (1 | judge),
  data = wine_tbl,
  nAGQ = 10
)

summary(wine_mixed_fit)
Cumulative Link Mixed Model fitted with the adaptive Gauss-Hermite 
quadrature approximation with 10 quadrature points 

formula: rating ~ temp + contact + (1 | judge)
data:    wine_tbl

 link  threshold nobs logLik AIC    niter    max.grad cond.H 
 logit flexible  72   -81.53 177.06 289(870) 4.39e-05 2.7e+01

Random effects:
 Groups Name        Variance Std.Dev.
 judge  (Intercept) 1.288    1.135   
Number of groups:  judge 9 

Coefficients:
          Estimate Std. Error z value Pr(>|z|)    
tempcold   -3.0619     0.5951  -5.145 2.67e-07 ***
contactno  -1.8334     0.5122  -3.580 0.000344 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Threshold coefficients:
    Estimate Std. Error z value
1|2  -6.5188     1.0330  -6.310
2|3  -3.3825     0.7265  -4.656
3|4  -0.6683     0.5805  -1.151
4|5   1.1908     0.6287   1.894

Again these results are close to, but not exactly the same as, those in Table 7.4.

We can compare threshold estimates again as above:

Code
thresholds_mixed <- wine_mixed_fit$alpha
alphas_mixed <- c(thresholds_mixed[1], log(diff(thresholds_mixed)))
alphas_mixed
       1|2        2|3        3|4        4|5 
-6.5188348  1.1430410  0.9985211  0.6200911 

We can extract fitted values from the fitted model. Note that the fitted values are evaluated at the conditional modes of the random effects distribution (see ?clmm).

Code
# Fitted values for first few observations
fitted(wine_mixed_fit)
 [1] 0.4188893 0.4724074 0.5499018 0.2607457 0.4203979 0.4203979 0.6242554
 [8] 0.6242554 0.2578999 0.6309828 0.0526328 0.3895658 0.2563875 0.5772438
[15] 0.1471160 0.3782859 0.5628160 0.3305670 0.5901682 0.2038518 0.1136763
[22] 0.1136763 0.3922237 0.3922237 0.1583375 0.6550367 0.4857244 0.4026850
[29] 0.5888987 0.1742657 0.2226634 0.3175280 0.6474039 0.1992047 0.0855586
[36] 0.5313730 0.5736197 0.5736197 0.2629314 0.2629314 0.2391484 0.6288978
[43] 0.5613280 0.2911541 0.1107011 0.2619797 0.3289653 0.4298561 0.5720274
[50] 0.5720274 0.6549805 0.6549805 0.5312644 0.3658760 0.1772063 0.5893861
[57] 0.6506202 0.6506202 0.4483256 0.4469287 0.5899550 0.5899550 0.3597395
[64] 0.4096655 0.2557231 0.6320281 0.3918395 0.5063004 0.5780355 0.2543115
[71] 0.3796922 0.3796922

As always, to get the estimated conditional modes of the random effects, we use the generic lme4::ranef() function.

Code
ranef(wine_mixed_fit)
$judge
  (Intercept)
1   1.6984688
2  -0.5665832
3   0.9705262
4  -0.0593933
5   0.2311041
6   0.4779502
7  -1.9136235
8  -0.2730379
9  -0.5551779

Package Citations and Session Information

Code
sessionInfo()
R version 4.4.2 (2024-10-31)
Platform: x86_64-apple-darwin20
Running under: macOS Sequoia 15.7.5

Matrix products: default
BLAS:   /Library/Frameworks/R.framework/Versions/4.4-x86_64/Resources/lib/libRblas.0.dylib 
LAPACK: /Library/Frameworks/R.framework/Versions/4.4-x86_64/Resources/lib/libRlapack.dylib;  LAPACK version 3.12.0

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

time zone: Australia/Melbourne
tzcode source: internal

attached base packages:
[1] stats     graphics  grDevices datasets  utils     methods   base     

other attached packages:
[1] ordinal_2025.12-29 MASS_7.3-61        geepack_1.3.13     lme4_1.1-38       
[5] Matrix_1.7-1       dplyr_1.2.0        conflicted_1.2.0  

loaded via a namespace (and not attached):
 [1] jsonlite_2.0.0      ucminf_1.2.2        compiler_4.4.2     
 [4] renv_1.1.7          tidyselect_1.2.1    Rcpp_1.1.1         
 [7] tidyr_1.3.2         splines_4.4.2       boot_1.3-31        
[10] yaml_2.3.12         fastmap_1.2.0       lattice_0.22-6     
[13] R6_2.6.1            generics_0.1.4      knitr_1.51         
[16] backports_1.5.0     rbibutils_2.4.1     tibble_3.3.1       
[19] nloptr_2.2.1        minqa_1.2.8         pillar_1.11.1      
[22] rlang_1.1.7         utf8_1.2.6          broom_1.0.12       
[25] cachem_1.1.0        xfun_0.56           memoise_2.0.1      
[28] cli_3.6.5           withr_3.0.2         magrittr_2.0.4     
[31] Rdpack_2.6.5        digest_0.6.39       grid_4.4.2         
[34] rstudioapi_0.18.0   lifecycle_1.0.5     nlme_3.1-168       
[37] reformulas_0.4.4    vctrs_0.7.1         evaluate_1.0.5     
[40] glue_1.8.0          numDeriv_2016.8-1.1 purrr_1.2.1        
[43] rmarkdown_2.30      tools_4.4.2         pkgconfig_2.0.3    
[46] htmltools_0.5.9    
Code
packages <- c("conflicted", "dplyr", "geepack", "lme4", "MASS", "ordinal")
for (pkg in packages) {
  cit <- citation(pkg)

  # Check if multiple citations
  if (length(cit) > 1) {
    cat("- **", pkg, "**:\n", sep = "")
    for (i in seq_along(cit)) {
      cit_text <- format(cit[i], style = "text")
      cat("    - ", cit_text, "\n\n", sep = "")
    }
  } else {
    cit_text <- format(cit, style = "text")
    cat("- **", pkg, "**: ", cit_text, "\n\n", sep = "")
  }
}
  • conflicted: Wickham H (2023). conflicted: An Alternative Conflict Resolution Strategy. R package version 1.2.0, https://github.com/r-lib/conflicted, https://conflicted.r-lib.org/.

  • dplyr: Wickham H, François R, Henry L, Müller K, Vaughan D (2026). dplyr: A Grammar of Data Manipulation. R package version 1.2.0, https://github.com/tidyverse/dplyr, https://dplyr.tidyverse.org.

  • geepack:

    • Halekoh U, Højsgaard S, Yan J (2006). “The R Package geepack for Generalized Estimating Equations.” Journal of Statistical Software, 15/2, 1-11.

    • Yan J, Fine JP (2004). “Estimating Equations for Association Structures.” Statistics in Medicine, 23, 859-880.

    • Yan J (2002). “geepack: Yet Another Package for Generalized Estimating Equations.” R-News, 2/3, 12-14.

    • Xu, Z., Fine, P. J, Song, W., Yan, J. (2025). “On GEE for mean-variance-correlation models: Variance estimation and model selection.” Statistics in Medicine, 44, 1-2.

  • lme4: Bates D, Mächler M, Bolker B, Walker S (2015). “Fitting Linear Mixed-Effects Models Using lme4.” Journal of Statistical Software, 67(1), 1-48. doi:10.18637/jss.v067.i01 https://doi.org/10.18637/jss.v067.i01.

  • MASS: Venables WN, Ripley BD (2002). Modern Applied Statistics with S, Fourth edition. Springer, New York. ISBN 0-387-95457-0, https://www.stats.ox.ac.uk/pub/MASS4/.

  • ordinal: Christensen R (2025). ordinal-Regression Models for Ordinal Data. R package version 2025.12-29, https://CRAN.R-project.org/package=ordinal.

References

Fahrmeir, Ludwig, and Gerhard Tutz. 2001. Multivariate Statistical Modelling Based on Generalized Linear Models. Second Edition. Springer Series in Statistics. Springer. https://doi.org/10.1007/978-1-4757-3454-6.
Faraway, Julian J. 2016. Extending the Linear Model with R: Generalized Linear, Mixed Effects and Nonparametric Regression Models. Second edition. Chapman & Hall/CRC Texts in Statistical Science Series. Chapman & Hall/CRC. https://research.ebsco.com/plink/3806c0de-f79d-330d-9c4e-7bb136efaa53.