## U.S. Department of Defense

#### Date of this Version

2011

#### Citation

*The Leadership Quarterly* 22 (2011) 259–270; doi:10.1016/j.leaqua.2011.02.002

#### Abstract

Issues regarding workplace spirituality have received increased attention in the organizational sciences. The implications of workplace spirituality for leadership theory, research, and practice make this a fast-growing area of new research and inquiry by scholars. The purpose of this research was to test a dynamic relationship between spiritual leadership, spiritual well-being (i.e., a sense of calling and membership), and key organizational outcomes in a sample of emerging military leaders. Using structural equation modeling (SEM), results revealed a positive and significant relationship between spiritual leadership and several unit-level outcomes, including organizational commitment and four measures of performance. These relationships were explained, or mediated, by spiritual well-being. Implications for research and practice are discussed.

**Notice:** Due to the Retraction, the full text of this article is no longer available here. The download button links to a blank document.

## Comments

This article has been retracted: please see Elsevier Policy on Article Withdrawal (http://www.elsevier.com/locate/withdrawalpolicy).

This article and the associated Corrigendum have been retracted at the request of the Senior Editor of *The Leadership Quarterly*. After concerns were raised about possible problems of reporting in this paper, the Senior Editor consulted with the two previous Senior Editors of *The Leadership Quarterly* and a methodologist (M1) (not the claimant) to assess the seriousness of the allegations and to make a preliminary determination concerning their merits. All concurred that there were serious problems in this paper. The first methodologist (M1) prepared a report outlining the problems, and this report was forwarded to a second methodologist (M2) to confirm the correctness of the methods M1 had used to detect them. M2 attested to the correctness of M1's analyses. The Senior Editor then contacted the authors to inform them of the problems identified in the paper. The authors were asked to respond to the concerns raised and were encouraged to send the original data from this paper to the Senior Editor for reanalysis.

The authors did not provide the original data, but instead sent a letter replying to the methodology report, along with new analytic results. These new results were reviewed by a third methodologist (M3) as well as the author of the original methodology report (M1). Both agreed that the reanalysis failed to address the concerns raised; indeed, the reanalysis results produced by the authors further supported concerns about serious errors and misstatements in the published article.

The Senior Editor has concluded that the article misreported findings, and that both the article and the Corrigendum reported models that failed to fit the data and used incorrect model estimation, thus compromising the scientific review process. Specifically, the initial fit results were reported incorrectly for the model shown in Figure 2 (e.g., CFI = .96; RMSEA = .08). This was subsequently corrected with the Corrigendum (e.g., CFI = .42; RMSEA = .47). However, the Corrigendum did not state that the substantive model reported in Figure 2 failed to fit the data; the reported estimates are therefore untrustworthy (on the basis of the chi-square test or approximate fit indexes). The authors re-analyzed the data and noted that the fit indices reported in the Corrigendum were themselves incorrect and should have been CFI = .91 and RMSEA = .175, which, coupled with a significant chi-square test, indicates a failed model even when relying solely on indexes of fit (Hu & Bentler, 1999).

In addition, the authors stated that they tested for the existence of a higher-order model, which is not possible when the model has only three first-order factors (Rindskopf & Rose, 1988). The authors went on to use this higher-order factor as a predictor, even though such a higher-order factor is not mathematically distinguishable from the three first-order factors. Moreover, the fit of this "higher-order model" was misreported by a large factor: the RMSEA value was reported as .07, when it should have been .17 using the sample indicated in the paper (n = 62). The authors, however, indicated in their response to the editor that the CFA results had been computed at the individual level of analysis using n = 333. This information was never mentioned in the article; the authors had in fact reported that the number of individuals who completed the surveys was n = 248 (on which aggregated results were later based). It is therefore concluded that estimates and data used were misreported.
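The identification point cited above (Rindskopf & Rose, 1988) can be illustrated numerically. With exactly three first-order factors, the second-order structure is just-identified at the factor level: the three second-order loadings can always be solved exactly from the three factor correlations, so the "higher-order" model reproduces any positive factor correlation matrix perfectly and is empirically indistinguishable from the correlated-factors model. A minimal sketch (the correlation values below are hypothetical, not taken from the retracted article):

```python
import math

def second_order_loadings(r12, r13, r23):
    """Solve the three second-order loadings implied by three
    first-order factor correlations (all assumed positive).
    With three factors the solution is exact (just-identified)."""
    l1 = math.sqrt(r12 * r13 / r23)
    l2 = math.sqrt(r12 * r23 / r13)
    l3 = math.sqrt(r13 * r23 / r12)
    return l1, l2, l3

# Hypothetical factor correlations:
l1, l2, l3 = second_order_loadings(0.5, 0.6, 0.7)

# The implied correlations l_i * l_j reproduce the inputs exactly,
# leaving zero degrees of freedom to test the higher-order structure.
print(round(l1 * l2, 6))  # reproduces r12 = 0.5
print(round(l1 * l3, 6))  # reproduces r13 = 0.6
print(round(l2 * l3, 6))  # reproduces r23 = 0.7
```

Because the implied and observed factor correlations match by construction, the higher-order model cannot be tested against, or distinguished from, the first-order model it is built on.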

In response to the comments raised, the authors stated that they had omitted correlating two disturbances of the dependent variables in the analysis carried out for the Corrigendum, and argued that their new model with the correlated disturbances (a) fit, and (b) did not have estimates different from those of the model without the correlated disturbances. Given that the model estimates were similar, the authors argued that faith can be placed in the estimates of the first model. However, the newly estimated model still fit very poorly (e.g., significant chi-square and RMSEA = .175); thus, according to the authors' own re-analysis, the correct values indicate very poor model fit. The authors went on to suggest that some fit indexes cannot be trusted at small sample sizes and should be discounted. However, even if the indexes had been corrected for small-sample bias, using long-established analytical procedures (Bartlett, 1937, 1954), the model would still have failed to fit (see Herzog & Boomsma, 2009; Herzog, Boomsma, & Reinecke, 2007).
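The sensitivity of RMSEA to the analysis-level sample size discussed above follows directly from its standard formula, in which the misfit per degree of freedom is divided by (n − 1). As an illustration (the chi-square, df, and baseline values below are hypothetical, not the article's), the same misfitting chi-square yields a much smaller RMSEA when read against n = 333 than against n = 62, which is why the undeclared level of analysis matters:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation
    (standard formula: excess chi-square per df, scaled by n - 1)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index relative to a baseline (null) model."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, num)
    return 1.0 - num / den

# Hypothetical values: the same chi-square read against two sample sizes.
chi2, df = 100.0, 30.0
print(round(rmsea(chi2, df, 62), 3))   # aggregated level, n = 62  -> 0.196
print(round(rmsea(chi2, df, 333), 3))  # individual level, n = 333 -> 0.084

# CFI against a hypothetical baseline model:
print(round(cfi(chi2, df, 500.0, 45.0), 3))  # -> 0.846
```

The drop from roughly .20 to roughly .08 for identical misfit mirrors the .17 versus .07 discrepancy described in the notice, without implying anything about the retracted article's actual data.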

Therefore, the reanalyzed model failed to fit and had dubious estimates, along with initially misreported results. Finally, the authors estimated the models at the individual level without declaring this fact in the review process, which could substantially bias estimates and model fit statistics: they neither applied a cluster or "sandwich" correction to the chi-square statistic, on which all fit statistics depend, nor corrected the standard errors of the estimates for clustering (cf. Muthén & Satorra, 1995).

As a consequence of the processes outlined above, the scientific trustworthiness of both the original article and its Corrigendum cannot be established. However, intentional wrongdoing should not be inferred.

References:

Bartlett, M. S. 1937. Properties of sufficiency and statistical tests. *Proceedings of the Royal Society of London, Series A: Mathematical and Physical Sciences*, 160(A901): 268–282. doi:10.1098/rspa.1937.0109

Bartlett, M. S. 1954. A note on the multiplying factors for various chi-square approximations. *Journal of the Royal Statistical Society, Series B (Methodological)*, 16(2): 296–298. http://0-www.jstor.org.library.unl.edu/stable/2984057

Herzog, W., Boomsma, A., & Reinecke, S. 2007. The model-size effect on traditional and modified tests of covariance structures. *Structural Equation Modeling*, 14(3): 361. doi:10.1080/10705510701301602

Herzog, W., & Boomsma, A. 2009. Small-sample robust estimators of noncentrality-based and incremental model fit. *Structural Equation Modeling*, 16(1): 1–27. doi:10.1080/10705510802561279

Hu, L., & Bentler, P. M. 1999. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. *Structural Equation Modeling*, 6(1): 1–55. doi:10.1080/10705519909540118

Muthén, B., & Satorra, A. 1995. Complex sample data in structural equation modeling. In P. V. Marsden (Ed.), *Sociological Methodology*: 267–316. Washington, DC: American Sociological Association.

Rindskopf, D., & Rose, T. 1988. Some theory and applications of confirmatory second-order factor analysis. *Multivariate Behavioral Research*, 23: 51–67. doi:10.1207/s15327906mbr2301_3