Here we will use the bayesdfa package to fit dynamic factor analysis (DFA) models to simulated time series data. In addition to working through an example of DFA for multivariate time series, we’ll apply bayesdfa routines for fitting hidden Markov models (HMMs) to the estimated trends to identify latent regimes. Most of the core functions of the package are included here: fit_dfa() and find_dfa_trends() for fitting DFA models; plot_trends(), plot_fitted(), and plot_loadings() for plotting estimates; find_swans() for flagging extremes; fit_regimes() and find_regimes() for fitting HMM models; and plot_regimes() for plotting HMM output.
Let’s load the necessary packages:
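library(bayesdfa) # the original vignette may load additional packages for plotting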
We adopt the same notation used in the MARSS package for dynamic factor analysis models. The DFA model consists of two models, one describing the latent process or states, and an observation or data model linking the process model to observations. Slight variations on this model are described below, but the process model for the basic DFA model is written as a multivariate random walk,
$$x_{t+1} = x_t + w_t$$
where the matrix $x$ is dimensioned as the number of years $N$ by the number of latent trends $K$. The process error is assumed to be multivariate normal, $w_t \sim \mathrm{MVN}(0, Q)$, where $Q$ is generally assumed to be a $K \times K$ identity matrix.
The observation or data model linking $x_t$ to observed data $y_t$ is
$$y_t = Z x_t + B d_t + e_t$$
The matrix $Z$ represents a matrix of estimated loadings, dimensioned as the number of time series $P$ by the number of latent trends $K$. Optional covariates $d_t$ are included in the observation model with estimated coefficients $B$. The residual errors $e_t$ are assumed to be normally distributed, e.g. $e_t \sim \mathrm{MVN}(0, R)$. There are a number of choices for $R$: a diagonal matrix with equal or unequal elements, or an unconstrained covariance matrix.
First, let’s simulate some data. We will use the built-in function sim_dfa(), but normally you would start with your own data.
We will simulate 20 data points from 4 time series, generated from 2
latent processes. For this first dataset, the data won’t include
extremes, and loadings will be randomly assigned (default).
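set.seed(42) # assumed seed, for reproducibility
sim_dat <- sim_dfa(
  num_trends = 2,
  num_years = 20,
  num_ts = 4
)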
Next, we’ll fit 1-trend, 2-trend, and 3-trend DFA models to the simulated time series using the fit_dfa() function.
Starting with the 1-trend model, we’ll estimate the posterior
distributions of the trends and loadings. Note that this example uses 1
MCMC chain and 50 iterations — for real examples, you’ll want to use
more (say 4 chains, 5000 iterations).
chains <- 1 # for real examples, use more chains (e.g. 4)
iter <- 50 # and more iterations (e.g. 5000)
f1 <- fit_dfa(
  y = sim_dat$y_sim, num_trends = 1, scale = "zscore",
  iter = iter, chains = chains, thin = 1
)
Convergence of DFA models can be evaluated with our is_converged() function. This function takes a fitted object and a threshold argument representing the maximum allowed Rhat value (default = 1.05). The convergence test isn’t that useful for a model run for such a small number of iterations, but it is called with
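is_converged(f1, threshold = 1.05) # 1.05 is also the default threshold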
## [1] TRUE
This function evaluates Rhat values for all parameters and log-likelihood values, so be sure to check what’s not converging if the model is not passing this test.
Before we extract the trends from the model, we need to rotate the loadings matrix and trends. By default we use the varimax rotation, implemented in the rotate_trends() function. An optional conf_level argument calculates the specified confidence (credible) interval of the estimates (by default, this is set to 0.95).
The rotated object has several quantities of interest, including the mean values of the trends “trends_mean” and loadings “Z_rot_mean”,
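r1 <- rotate_trends(f1) # r1 is an illustrative object name
names(r1)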
## [1] "Z_rot" "trends" "Z_rot_mean" "Z_rot_median"
## [5] "trends_mean" "trends_median" "trends_lower" "trends_upper"
We can then plot the trends and intervals, with
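plot_trends(r1)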
We can also plot the estimated loadings (we’ll show that plot for the more complicated 2-trend model below, because it’s not as interesting for the 1-trend model) and the fitted values. To plot the fitted values from the 1-trend model, we’ll use the plot_fitted() function (predicted values can also be returned without a plot, with the predicted() function).
The fitted values and intervals are plotted, faceting by time series, with
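plot_fitted(f1)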
Moving to a more complex model, we’ll fit the 2-trend and 3-trend models. All other arguments stay the same as before,
f2 <- fit_dfa(
  y = sim_dat$y_sim, num_trends = 2, scale = "zscore",
  iter = iter, chains = chains, thin = 1
)
r2 <- rotate_trends(f2)
f3 <- fit_dfa(
  y = sim_dat$y_sim, num_trends = 3, scale = "zscore",
  iter = iter, chains = chains, thin = 1
)
r3 <- rotate_trends(f3)
The fits from the 2-trend model look considerably better than those from the 1-trend model,
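plot_fitted(f2)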
The loadings from the 1-trend model aren’t as interesting, because for a 1-trend model the loadings are a 1-dimensional vector. For the 2-trend model, there’s a separate loading of each time series on each trend,
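# one way to inspect the rotated mean loadings; the exact call that
# produced the output below is an assumption
round(r2$Z_rot_mean, 2)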
## [,1] [,2]
## [1,] -90.54 -41.89
## [2,] -19.10 -117.64
## [3,] -7.55 58.95
## [4,] -49.06 15.14
These loadings can also be plotted with the plot_loadings() function. This shows the posterior densities of the loadings as violin plots, with the color of each violin proportional to the probability that the loading is different from 0.
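plot_loadings(r2)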
Finally, we might be interested in comparing some measure of model selection across these models to identify whether the data support the 1-trend, 2-trend, or 3-trend model. The Leave-One-Out Information Criterion can be calculated with the loo() function; for example, the LOOIC for the 1-trend model can be accessed with
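loo1 <- loo(f1) # loo1 is an illustrative object name
loo1$estimates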
## Estimate SE
## elpd_loo -2.973137e+04 6.859515e+03
## p_loo 8.881784e-16 1.919552e-15
## looic 5.946275e+04 1.371903e+04
where $5.946 \times 10^{4}$ is the LOOIC estimate and $1.372 \times 10^{4}$ is its standard error.
As an alternative to fitting each model individually as we did above, we also developed the find_dfa_trends() function to automate fitting a larger number of models. In addition to evaluating different numbers of trends, this function allows the user to optionally evaluate models with normal and Student-t process errors, and alternative variance structures (observation variances of the time series being equal or unequal). For example, to fit models with 1:5 trends, both Student-t and normal errors, and equal and unequal variances, the call would be
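# a sketch of the call; kmin/kmax, compare_normal, and variance are
# assumed to match the find_dfa_trends() argument names
m <- find_dfa_trends(
  y = sim_dat$y_sim, iter = iter,
  kmin = 1, kmax = 5,
  compare_normal = TRUE,
  variance = c("equal", "unequal")
)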
In this example, we’ll simulate data with an extreme anomaly. The biggest difference between this model and the conventional model is that in the DFA process model,
$$x_{t+1} = x_t + w_t$$
instead of $w_t$ being normally distributed, we assume $w_t$ is Student-t distributed. With multiple trends, this becomes a multivariate Student-t,
$$w_t \sim \mathrm{MVT}(\nu, 0, Q)$$
The parameter $\nu$ controls how much the tails of this distribution deviate from the normal, with smaller values ($\nu$ closer to 2) resulting in more extreme anomalies, and larger values ($\nu$ closer to 30) resulting in behavior similar to a normal distribution.
As before, this will be 20 data points from 4 time series, generated from 2 latent processes. The sim_dfa() function’s arguments extreme_value and extreme_loc allow the user to specify the magnitude of the extreme (as an additive term in the random walk) and the location of the extreme (defaults to the midpoint of the time series). Here we’ll include an extreme value of 6,
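set.seed(42) # assumed seed, for reproducibility
sim_dat <- sim_dfa(
  num_trends = 2, num_years = 20, num_ts = 4,
  extreme_value = 6
)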
Plotting the data shows the anomaly occurring between time step 9 and 10,
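# a base-R sketch of the plot; the original figure may differ
matplot(t(sim_dat$y_sim), type = "l", xlab = "Time", ylab = "Observed")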
The plot is a little clearer if we standardize the time series first,
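matplot(scale(t(sim_dat$y_sim)), type = "l", xlab = "Time", ylab = "Standardized")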
Instead of fitting a model with normal process deviations, we may be interested in fitting the model with Student-t deviations. We can turn on estimation of nu with the estimate_nu argument. (Alternatively, nu can be fixed a priori by setting the nu_fixed argument.) Here’s the code for a 2-trend model with Student-t deviations,
t2 <- fit_dfa(
  y = sim_dat$y_sim, num_trends = 2, scale = "zscore",
  iter = iter, chains = chains, thin = 1, estimate_nu = TRUE
)
Again we have to rotate the trends before plotting,
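rt2 <- rotate_trends(t2) # rt2 is an illustrative object name
plot_trends(rt2)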
And the loadings,
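plot_loadings(rt2)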
One way to look for extremes is using the find_swans() function, which evaluates the probability of observing a deviation in the estimated trend (or data) greater than what is expected from a normal distribution. This function takes a threshold argument, which specifies the cutoff. For example, to find extremes greater than 1 in 1000 under a normal distribution, the function call is
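find_swans(rt2, threshold = 1 / 1000, plot = FALSE)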
Setting plot to TRUE also creates a time series plot that flags these values.
We can also look at the estimated nu parameter, which shows some support for using the Student-t distribution (values greater than ~30 lead to behavior similar to a normal distribution),
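# a sketch of one way to summarize the posterior draws of nu; the exact
# call that produced the output below is an assumption
summary(as.data.frame(rstan::extract(t2$model, pars = "nu")))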
## V1
## Min. :2.82
## 1st Qu.:2.82
## Median :2.82
## Mean :2.82
## 3rd Qu.:2.82
## Max. :2.82
We’ve implemented a number of alternative families for cases when the response variable might be non-normally distributed. These alternative families may be specified as a text string with the family argument in the fit_dfa() function, e.g.
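# an illustrative call; the simulated data are exponentiated here so the
# responses are positive, as required by the lognormal family
f_ln <- fit_dfa(
  y = exp(sim_dat$y_sim), num_trends = 2,
  iter = iter, chains = chains, family = "lognormal"
)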
The currently supported families can be specified as any of the following – the link functions are currently hard-coded, and included in the table below.
| Family | link |
|---|---|
| gaussian | identity |
| lognormal | log |
| gamma | log |
| binomial | logit |
| poisson | log |
| nbinom2 | log |
By default, the loadings matrix in a DFA is constrained by zeros in its upper triangle; this constraint is needed for identifiability. For example, a 3-trend model applied to 5 time series would have a loadings matrix that was constrained as
| Trend 1 | Trend 2 | Trend 3 |
|---|---|---|
| z[1,1] | 0 | 0 |
| z[2,1] | z[2,2] | 0 |
| z[3,1] | z[3,2] | z[3,3] |
| z[4,1] | z[4,2] | z[4,3] |
| z[5,1] | z[5,2] | z[5,3] |
As an alternative, we may wish to fit a model where each time series arises as a mixture of the trends. In this case, the loadings matrix would be
| Trend 1 | Trend 2 | Trend 3 |
|---|---|---|
| z[1,1] | z[1,2] | z[1,3] |
| z[2,1] | z[2,2] | z[2,3] |
| z[3,1] | z[3,2] | z[3,3] |
| z[4,1] | z[4,2] | z[4,3] |
| z[5,1] | z[5,2] | z[5,3] |
And the added constraint is that each row sums to 1, e.g.
$$\sum_{j=1}^{3} Z_{1,j} = 1$$

## Including autoregressive (AR) or moving-average (MA) components on trends
For some models, it may be appropriate to include autoregressive or moving-average components to model the latent trends. We’ve implemented first-order components on each, though by default these are not included.
To include the AR(1) component on the trend, you can specify
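# a sketch, assuming the estimate_trend_ar argument of fit_dfa()
f_ar <- fit_dfa(
  y = sim_dat$y_sim, num_trends = 2,
  iter = iter, chains = chains,
  estimate_trend_ar = TRUE
)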
This results in a model where trend $i$ is modeled as
$$x_{i,t+1} = \phi_i x_{i,t} + \delta_{i,t}$$
Each trend is allowed to have a unique AR(1) parameter, $\phi_i$.
In conventional DFA models, the process deviations are assumed to be independent, e.g. $\delta_{i,t} \sim \mathrm{Normal}(0, r)$. By including an MA(1) component on the trends, these terms may be modeled as
$$\delta_{i,t+1} \sim \mathrm{Normal}(\theta_i \delta_{i,t}, q_i)$$
where $\theta_i$ is the trend-specific MA parameter and $q_i$ is the process variance, which is usually not estimated and instead fixed at 1.
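The MA(1) component can be enabled similarly, e.g.

# a sketch, assuming the estimate_trend_ma argument of fit_dfa()
f_ma <- fit_dfa(
  y = sim_dat$y_sim, num_trends = 2,
  iter = iter, chains = chains,
  estimate_trend_ma = TRUE
)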
We allow weights to be used in DFA models in two ways. In the first form, inverse variance weighting is used to adjust observations based on some standard error associated with each observation. Specifically, the weights are included by modifying each observation variance to be $\sigma^2 / w_i$. As a concrete example, we’ll simulate a dataset, add some standard errors on the survey indices, and then perform the DFA.
Our simulated standard errors are the same for all surveys – except time series 2, which is much more precise.
set.seed(1)
sim_dat <- sim_dfa(
  num_trends = 2,
  num_years = 20,
  num_ts = 4
)
df <- data.frame(
  obs = c(sim_dat$y_sim),
  time = sort(rep(1:20, 4)),
  ts = rep(1:4, 20)
)
df$se <- runif(nrow(df), 0.6, 0.8)
df$se[which(df$ts == 2)] <- 0.2
Next we can generate the weights (this is redundant here, and “se” could be used directly in the function call below). Because the weights are used as an offset, $\sigma^2 / w_i$, we don’t want to use the SE itself as a weight, but instead make the weights inversely related to the SE. As a quick note, the scale of the weights may affect estimation, and some additional normalization may be needed (rather than standard errors, it may be more helpful to think about the sample size each data point represents).
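# inverse-variance weights; the exact transformation used in the original
# vignette is an assumption
df$weights <- (1 / df$se)^2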
And fit the model with the weights argument
f2 <- fit_dfa(
  y = df, num_trends = 2, scale = "zscore",
  iter = 500, chains = 1, thin = 1,
  inv_var_weights = "weights", data_shape = "long"
)
##
## SAMPLING FOR MODEL 'dfa' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 4.9e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.49 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 500 [ 0%] (Warmup)
## Chain 1: Iteration: 50 / 500 [ 10%] (Warmup)
## Chain 1: Iteration: 100 / 500 [ 20%] (Warmup)
## Chain 1: Iteration: 150 / 500 [ 30%] (Warmup)
## Chain 1: Iteration: 200 / 500 [ 40%] (Warmup)
## Chain 1: Iteration: 250 / 500 [ 50%] (Warmup)
## Chain 1: Iteration: 251 / 500 [ 50%] (Sampling)
## Chain 1: Iteration: 300 / 500 [ 60%] (Sampling)
## Chain 1: Iteration: 350 / 500 [ 70%] (Sampling)
## Chain 1: Iteration: 400 / 500 [ 80%] (Sampling)
## Chain 1: Iteration: 450 / 500 [ 90%] (Sampling)
## Chain 1: Iteration: 500 / 500 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 21.482 seconds (Warm-up)
## Chain 1: 3.81 seconds (Sampling)
## Chain 1: 25.292 seconds (Total)
## Chain 1:
## Warning: The largest R-hat is NA, indicating chains have not mixed.
## Running the chains for more iterations may help. See
## https://mc-stan.org/misc/warnings.html#r-hat
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## https://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## https://mc-stan.org/misc/warnings.html#tail-ess
## Inference for the input samples (1 chains: each with iter = 250; warmup = 125):
##
## Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
## x[1,1] -0.1 0.4 0.8 0.4 0.3 1.02 62 77
## x[2,1] 1.4 2.0 2.7 2.0 0.4 1.05 46 41
## x[1,2] 0.7 1.2 1.8 1.2 0.3 1.04 48 103
## x[2,2] 1.3 2.2 2.9 2.2 0.5 1.02 40 48
## x[1,3] -1.0 -0.5 -0.1 -0.6 0.3 1.00 173 98
## x[2,3] 0.8 1.2 1.8 1.3 0.3 1.01 45 35
## x[1,4] -2.3 -1.6 -1.0 -1.6 0.4 1.02 49 57
## x[2,4] 0.6 1.6 2.4 1.5 0.5 1.03 30 36
## x[1,5] -0.9 -0.4 0.0 -0.4 0.3 1.00 125 70
## x[2,5] 0.5 0.9 1.4 0.9 0.3 1.01 64 51
## x[1,6] -1.9 -1.3 -0.9 -1.4 0.4 1.01 39 47
## x[2,6] 0.3 1.1 1.6 1.0 0.4 1.05 24 36
## x[1,7] -1.2 -0.6 -0.2 -0.6 0.3 1.02 95 138
## x[2,7] -0.9 -0.5 -0.2 -0.5 0.3 1.00 77 67
## x[1,8] -0.9 -0.4 0.0 -0.4 0.3 0.99 154 68
## x[2,8] -1.8 -1.2 -0.7 -1.2 0.3 1.00 47 83
## x[1,9] 0.3 0.8 1.3 0.8 0.3 1.00 113 141
## x[2,9] -2.5 -1.8 -1.2 -1.8 0.4 1.05 26 17
## x[1,10] 0.4 0.9 1.5 0.9 0.4 0.99 75 58
## x[2,10] -3.3 -2.5 -1.7 -2.5 0.5 1.00 37 35
## x[1,11] 0.6 1.2 1.7 1.1 0.3 1.00 80 143
## x[2,11] -3.7 -2.8 -1.9 -2.8 0.6 1.02 34 15
## x[1,12] -0.3 0.1 0.6 0.1 0.3 1.01 204 55
## x[2,12] -1.8 -1.3 -0.9 -1.3 0.3 1.00 45 68
## x[1,13] -1.7 -1.1 -0.7 -1.1 0.3 0.99 78 106
## x[2,13] -1.2 -0.6 0.0 -0.6 0.4 1.02 47 65
## x[1,14] -2.1 -1.4 -0.9 -1.4 0.4 1.00 51 101
## x[2,14] -1.4 -0.7 0.0 -0.7 0.4 1.03 37 45
## x[1,15] -1.6 -1.0 -0.6 -1.1 0.3 1.00 66 67
## x[2,15] -2.4 -1.7 -1.1 -1.7 0.4 1.01 40 48
## x[1,16] 0.3 0.7 1.2 0.7 0.3 1.01 71 50
## x[2,16] -1.3 -0.8 -0.4 -0.8 0.3 1.01 29 30
## x[1,17] 0.2 0.7 1.1 0.7 0.3 1.00 66 101
## x[2,17] -0.4 -0.1 0.3 -0.1 0.2 1.06 21 59
## x[1,18] 0.3 0.7 1.3 0.8 0.3 1.00 87 142
## x[2,18] 0.5 1.0 1.5 1.0 0.3 1.01 52 95
## x[1,19] 0.3 0.7 1.2 0.7 0.3 1.01 103 68
## x[2,19] 1.1 1.7 2.3 1.7 0.4 1.01 39 36
## x[1,20] 0.7 1.3 2.1 1.3 0.4 0.99 67 85
## x[2,20] 1.5 2.4 3.2 2.4 0.5 1.01 38 59
## Z[1,1] -1.2 -0.9 -0.6 -0.9 0.2 1.02 33 106
## Z[2,1] -0.1 0.2 0.4 0.2 0.2 1.07 16 22
## Z[3,1] -0.6 -0.3 -0.1 -0.3 0.2 1.05 24 22
## Z[4,1] 0.6 0.8 1.1 0.8 0.2 1.05 36 86
## Z[1,2] 0.0 0.0 0.0 0.0 0.0 1.00 125 125
## Z[2,2] 0.5 0.6 0.8 0.6 0.1 1.02 36 68
## Z[3,2] 0.4 0.6 0.8 0.6 0.1 0.99 44 53
## Z[4,2] -0.5 -0.3 -0.2 -0.3 0.1 0.99 69 68
## log_lik[1] -1.0 -0.1 0.1 -0.2 0.4 1.01 97 104
## log_lik[2] -0.1 1.0 1.5 0.9 0.7 0.99 93 134
## log_lik[3] -0.5 -0.1 0.1 -0.1 0.2 1.02 92 79
## log_lik[4] -0.9 0.0 0.2 -0.2 0.3 1.03 54 78
## log_lik[5] -1.3 0.1 0.4 -0.2 0.6 1.03 95 106
## log_lik[6] -0.5 1.1 1.5 0.8 0.7 1.00 109 71
## log_lik[7] -0.5 0.1 0.3 0.0 0.3 0.99 102 92
## log_lik[8] -1.0 0.0 0.3 -0.1 0.4 1.02 118 43
## log_lik[9] -3.3 -0.7 0.2 -1.0 1.1 1.00 216 101
## log_lik[10] -0.3 1.1 1.5 0.9 0.6 0.99 84 106
## log_lik[11] -0.4 0.0 0.2 0.0 0.2 1.02 60 56
## log_lik[12] -2.7 -0.7 0.0 -1.0 1.0 1.00 184 108
## log_lik[13] -1.3 -0.2 0.2 -0.3 0.5 0.99 136 134
## log_lik[14] -0.4 1.1 1.5 0.9 0.7 1.00 98 71
## log_lik[15] -1.0 -0.2 0.1 -0.3 0.3 1.00 64 100
## log_lik[16] -3.9 -1.0 0.1 -1.3 1.2 1.00 207 82
## log_lik[17] -3.0 -0.9 0.1 -1.1 1.0 1.01 195 108
## log_lik[18] -0.9 1.0 1.4 0.7 0.8 1.02 91 88
## log_lik[19] -1.5 -0.3 0.2 -0.4 0.5 1.02 140 117
## log_lik[20] -0.6 -0.1 0.1 -0.2 0.3 1.02 73 58
## log_lik[21] -2.4 -0.6 0.1 -0.8 0.8 0.99 107 106
## log_lik[22] -0.2 1.1 1.5 0.9 0.6 1.02 88 82
## log_lik[23] -0.9 -0.1 0.1 -0.2 0.3 0.99 94 94
## log_lik[24] -0.7 -0.1 0.1 -0.2 0.3 0.99 107 60
## log_lik[25] -1.2 -0.1 0.2 -0.2 0.5 0.99 101 81
## log_lik[26] -1.2 1.1 1.5 0.8 0.8 0.99 143 108
## log_lik[27] -0.5 -0.1 0.1 -0.1 0.2 1.00 80 68
## log_lik[28] -0.7 -0.1 0.2 -0.2 0.3 1.01 99 53
## log_lik[29] -1.7 -0.1 0.3 -0.3 0.7 0.99 154 82
## log_lik[30] -0.7 1.0 1.4 0.8 0.7 1.00 79 129
## log_lik[31] -0.8 0.0 0.3 -0.1 0.4 1.00 103 86
## log_lik[32] -1.1 -0.1 0.1 -0.2 0.4 1.00 97 108
## log_lik[33] -2.7 -0.7 0.0 -1.0 1.0 0.99 146 85
## log_lik[34] -1.0 1.1 1.4 0.8 0.9 1.00 114 102
## log_lik[35] -0.3 0.1 0.4 0.1 0.3 1.02 68 88
## log_lik[36] -1.8 -0.2 0.3 -0.5 0.7 1.00 137 143
## log_lik[37] -2.0 -0.3 0.1 -0.5 0.7 0.99 121 108
## log_lik[38] -0.3 1.1 1.5 0.9 0.6 1.00 103 85
## log_lik[39] -0.6 0.0 0.2 -0.1 0.2 1.00 71 81
## log_lik[40] -2.4 -0.2 0.2 -0.5 0.8 0.99 110 134
## log_lik[41] -1.0 -0.1 0.2 -0.2 0.5 1.00 125 146
## log_lik[42] -0.7 1.1 1.5 0.8 0.7 1.00 106 134
## log_lik[43] -0.7 -0.1 0.1 -0.2 0.3 0.99 79 103
## log_lik[44] -2.5 -0.5 0.3 -0.7 1.0 1.00 137 130
## log_lik[45] -1.9 -0.3 0.2 -0.5 0.8 1.00 204 81
## log_lik[46] -1.9 1.0 1.4 0.6 1.0 0.99 105 101
## log_lik[47] -2.7 -1.1 -0.2 -1.3 0.8 1.02 130 99
## log_lik[48] -0.8 -0.1 0.1 -0.2 0.3 1.08 15 41
## log_lik[49] -1.2 -0.1 0.1 -0.3 0.5 1.00 123 81
## log_lik[50] -1.0 1.1 1.5 0.9 0.7 0.99 81 29
## log_lik[51] -1.0 -0.1 0.2 -0.2 0.4 1.00 123 81
## log_lik[52] -1.0 -0.1 0.1 -0.2 0.4 1.00 126 56
## log_lik[53] -2.0 -0.6 0.1 -0.7 0.8 1.00 140 103
## log_lik[54] -0.8 1.2 1.5 0.9 0.7 1.02 107 55
## log_lik[55] -0.6 -0.1 0.1 -0.2 0.2 0.99 178 101
## log_lik[56] -1.0 -0.1 0.2 -0.2 0.4 1.01 72 85
## log_lik[57] -0.9 0.1 0.4 0.0 0.4 1.00 82 74
## log_lik[58] -0.7 0.9 1.4 0.7 0.9 1.00 105 63
## log_lik[59] -0.8 0.0 0.2 -0.1 0.3 0.99 101 82
## log_lik[60] -0.7 0.1 0.4 0.0 0.3 1.00 109 108
## log_lik[61] -6.9 -3.8 -1.1 -3.8 2.0 0.99 223 101
## log_lik[62] -0.5 1.0 1.5 0.8 0.7 0.99 91 85
## log_lik[63] -1.7 -0.6 -0.2 -0.8 0.5 0.99 141 146
## log_lik[64] -2.5 -0.9 0.0 -1.1 0.9 1.01 127 141
## log_lik[65] -0.7 0.1 0.3 0.0 0.3 1.00 112 116
## log_lik[66] -0.4 1.1 1.5 0.9 0.7 0.99 111 82
## log_lik[67] -0.3 0.0 0.2 0.0 0.1 1.02 72 107
## log_lik[68] -0.5 -0.1 0.2 -0.1 0.2 1.00 111 83
## log_lik[69] -1.3 -0.2 0.1 -0.4 0.5 1.00 147 81
## log_lik[70] -0.7 1.1 1.5 0.9 0.7 1.02 86 69
## log_lik[71] -0.7 -0.1 0.2 -0.1 0.3 1.00 186 87
## log_lik[72] -1.0 0.0 0.2 -0.1 0.4 0.99 108 83
## log_lik[73] -0.5 0.0 0.2 0.0 0.2 1.01 58 66
## log_lik[74] -0.4 1.1 1.4 0.9 0.6 1.04 62 143
## log_lik[75] -0.3 -0.1 0.1 -0.1 0.2 1.01 79 53
## log_lik[76] -0.5 0.1 0.3 0.0 0.3 1.01 90 69
## log_lik[77] -1.6 -0.1 0.2 -0.4 0.6 1.00 130 105
## log_lik[78] -0.7 1.1 1.5 0.9 0.7 1.00 138 146
## log_lik[79] -0.5 -0.1 0.1 -0.1 0.2 1.01 127 114
## log_lik[80] -1.4 0.0 0.3 -0.2 0.5 1.01 124 86
## xstar[1,1] -0.5 1.2 2.9 1.3 1.1 0.99 120 105
## xstar[2,1] 0.7 2.4 4.2 2.5 1.1 1.07 46 68
## sigma[1] 0.4 0.5 0.6 0.5 0.1 1.00 71 56
## lp__ -12.7 0.0 12.6 0.5 7.7 1.00 33 37
##
## For each parameter, Bulk_ESS and Tail_ESS are crude measures of
## effective sample size for bulk and tail quantities respectively (an ESS > 100
## per chain is considered good), and Rhat is the potential scale reduction
## factor on rank normalized split chains (at convergence, Rhat <= 1.05).
As a second type of weighting, we have also implemented weights in the same form used in other widely used packages (glmmTMB, sdmTMB, brms, etc.). In this case, weights are used as multipliers on the log-likelihood of each observation. To specify these kinds of weights, we use the likelihood_weights argument instead; observations with higher weights contribute more to the total log-likelihood.
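For example, a sketch mirroring the inverse-variance call above (the object name f_lw is illustrative):

f_lw <- fit_dfa(
  y = df, num_trends = 2, scale = "zscore",
  iter = 500, chains = 1, thin = 1,
  likelihood_weights = "weights", data_shape = "long"
)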