Mixture models have enjoyed sustained popularity over recent years. Not only are they natural models for adjusting for unobserved or latent heterogeneity, they are also fundamental cornerstones in many areas of statistics, such as smoothing, empirical Bayes, likelihood-based clustering, and latent variable analysis, among others.
They combine parametric structure with flexibility, offering a compromise in the trade-off between imposed model structure and freedom in adapting the model to the observed data. However, mixture models also present a number of difficulties. The likelihood may be unbounded, and, even if it is bounded, the global maximum might not be a good choice. Algorithmic solutions are almost always required, and algorithms such as the EM algorithm face well-known problems such as the choice of initial values and the selection of an adequate stopping rule.
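The practical issues mentioned above can be made concrete with a minimal sketch of EM for a two-component univariate Gaussian mixture. This is an illustrative implementation written for this description, not code from any particular package: `em_gmm` is a hypothetical helper, the randomized initialization stands in for the choice of starting values, the log-likelihood tolerance `tol` stands in for the stopping rule, and the variance floor is one crude guard against the unbounded-likelihood problem.

```python
import math
import random

def em_gmm(data, n_iter=200, tol=1e-6, seed=0):
    """EM for a two-component univariate Gaussian mixture (illustrative sketch).

    Returns (weights, means, std deviations, final log-likelihood).
    """
    rng = random.Random(seed)
    mu = rng.sample(data, 2)        # starting values: two random data points
    sigma = [1.0, 1.0]              # unit starting standard deviations
    pi = [0.5, 0.5]                 # equal starting mixing weights
    prev_ll = -math.inf
    ll = prev_ll
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component for each point
        resp = []
        ll = 0.0
        for x in data:
            dens = [pi[k] / (sigma[k] * math.sqrt(2 * math.pi))
                    * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                    for k in range(2)]
            total = sum(dens)
            ll += math.log(total)
            resp.append([d / total for d in dens])
        # Stopping rule: stop when the log-likelihood gain falls below tol
        if ll - prev_ll < tol:
            break
        prev_ll = ll
        # M-step: update mixing weights, means, and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            # Variance floor: guards against a component collapsing onto a
            # single observation, which is what makes the likelihood unbounded
            sigma[k] = max(math.sqrt(var), 1e-3)
    return pi, mu, sigma, ll
```

Because the result depends on the starting values, a common practical remedy is several random restarts (different `seed` values), keeping the fit with the highest final log-likelihood.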
The track is devoted to applications of continuous as well as discrete mixture models: in Bayes and empirical Bayes approaches to capture–recapture; in model-based clustering, with potentially embedded dimensionality reduction techniques; in modelling mixtures in the presence of covariate information (nonparametric mixed modelling); in modelling multivariate (correlated) data when standard multivariate distributions are not available (e.g. discrete random variables, mixed-type random variables, longitudinal or clustered data); in developing robust and stable estimation approaches via mixtures; and in gene expression modelling for two-class or multiclass experimental conditions, as well as when biclustering of genes and tissues is of interest.