This article details a correction to the article É. Kiss, Katalin & Tamás Zétényi. 2017. Quantifier spreading: children misled by ostensive cues.

The corrections concern the statistical analyses of the article.

Responses were coded as binary data: 1 for “accept”, 0 for “reject”. The two lists of stimulus and picture-type distribution, i.e. whether a given participant saw a given image as a drawing or as a photograph, were also included in the data set. The children’s and the adults’ groups were analysed separately. Binomial generalized linear mixed-effects models with random slopes were run, with response as the dependent variable, picture type (drawing/photograph) and list as fixed effects, and participant and item as random effects. Simplifications of the models that were necessary due to lack of convergence are reported along with the results.
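As a sketch, the full model described above could be specified in lme4 as follows; the data frame `d` and the variable names are hypothetical, not taken from the original analysis scripts:

```r
library(lme4)

# Full random-slope model: picture type and list as fixed effects,
# by-participant and by-item random intercepts and slopes for picture type.
# (d, response, picture_type, list, participant, item are hypothetical names.)
full_model <- glmer(
  response ~ picture_type + list +
    (1 + picture_type | participant) +
    (1 + picture_type | item),
  data = d, family = binomial
)
summary(full_model)  # reports fixed-effect estimates with z- and p-values
```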

Calculations were carried out in R, using the glmer() function from the lme4 package, which reports z-values along with the associated p-values.

The random-slope model that included the estimation of the intercept–slope correlation did not converge. Therefore, the random-slope model was fitted to the data again without the intercept–slope correlation. First, the effect of list (whether participant X saw item Y as a drawing or as a photograph) was tested. This variable had no effect in either group (p > 0.2). Then models without item and without participant as random effects were each compared to the full model using the anova() function. Both random effects contributed significantly to the fit of the full model; thus, results are reported based on the maximal model.
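These two steps might look as follows in lme4, again with hypothetical names; note that the double-bar syntax for suppressing the intercept–slope correlation assumes a numeric or appropriately coded predictor:

```r
library(lme4)

# Random slopes without the intercept-slope correlation
# ("||" drops the correlation parameter).
# (d and the variable names are hypothetical.)
model_uncorr <- glmer(
  response ~ picture_type + list +
    (1 + picture_type || participant) +
    (1 + picture_type || item),
  data = d, family = binomial
)

# Likelihood-ratio test of the by-item random effect: refit without it
# and compare to the fuller model; the by-participant test is analogous.
model_no_item <- glmer(
  response ~ picture_type + list + (1 + picture_type || participant),
  data = d, family = binomial
)
anova(model_no_item, model_uncorr)
```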

For the children’s group, the effect of picture type was highly significant (z = 3.73, p < 0.001). The adults’ data did not show a significant effect of picture type (z = 1.82, p = 0.07), but the trend was similar to that in the children’s responses.

As in Experiment 1, children’s and adults’ data were analysed in separate models. Model simplification was carried out separately but in parallel on the children’s and the adults’ data. The most complex model that converged included random slopes for participants and random intercepts for items. A comparison between models with both picture type and list as fixed effects and models including only picture type showed no significant differences (p > 0.45 for both models). Therefore, the results reported here are based on models with only picture type as a fixed effect.
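The most complex converging model and the fixed-effects comparison could be sketched as follows (hypothetical data frame and names, as before):

```r
library(lme4)

# Random slopes for participants, random intercepts only for items.
model_pt <- glmer(
  response ~ picture_type +
    (1 + picture_type | participant) + (1 | item),
  data = d, family = binomial
)

# Does adding list as a fixed effect improve the fit?
model_pt_list <- update(model_pt, . ~ . + list)
anova(model_pt, model_pt_list)  # if not significant, keep the simpler model
```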

The model described above did not show a main effect of picture type in either age group (children: z = 1.29, p = 0.197; adults: z = –0.37, p = 0.715). Subsequently, the models were further simplified by omitting the random-slope estimates for both participants and items, i.e. estimating only intercepts for both random effects. The two models (with and without random slopes for participants) did not differ significantly according to the anova() function; however, the results changed substantially in the child group. This time, the z-value was slightly above 2, i.e. the p-value fell slightly below the 0.05 significance level (z = 2.10, p = 0.036).
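The intercepts-only simplification can be sketched as below (hypothetical names again); the final line illustrates that a reported p-value of this kind is the two-sided normal-tail probability of the reported z-value:

```r
library(lme4)

# Random intercepts only, no random slopes.
# (d, response, picture_type, participant, item are hypothetical names.)
model_int <- glmer(
  response ~ picture_type + (1 | participant) + (1 | item),
  data = d, family = binomial
)

# Two-sided p-value corresponding to z = 2.10:
2 * pnorm(2.10, lower.tail = FALSE)  # approx. 0.036
```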

The different outcomes in modelling children’s responses raise a methodological question about the choice of the appropriate mixed-effects models. According to Barr et al. (2013), the random-effects structure should be kept maximal as licensed by the design, which favours the random-slope models reported above.

The raw data of the experiments are accessible in the OSF repository.

The authors have no competing interests to declare.