The International Journal of Selection and Assessment recently included a feature article on the gamification of assessment. While the research methodology in the article was sound, I could not help but think that the article in many ways symbolised what is wrong with much of the assessment literature that emphasises psychometric properties as opposed to practical utility.

The article notes that: “Our findings support the applicability of game elements into a traditional form of assessment built to assess candidates’ soft skills”. However, this claim of utility ignores the real question of whether gamified assessments add anything, over and above traditional assessments, to selection accuracy. Can you imagine any other area where manufacturers make evaluative judgements without demonstrating how their product improves on the status quo? In all areas of life, we aim for improvements on the status quo, not replication, and it is only when a demonstrable change is forthcoming that new technology is adopted. Psychometric tests appear to be the one exception.

The paper shows that a gamified solution shares construct variance with a standard situational judgement assessment and can therefore measure psychological phenomena. Correlations indicate meaningful relationships with other self-report tools measuring flexibility, decision making and resilience.

However, the questions practitioners want answered remain overlooked. Practitioners are likely to use SJTs in conjunction with other measures and are therefore most interested in the incremental gain from using such measures over and above traditional selection methods such as interviews and personality assessments. Not only does the study fail to address incremental validity, but the relationship to real work outcomes is sadly missing altogether. As is too often the case, the paper focuses on psychometric properties and does not address real-world outcomes.

Where the researchers discuss practical implications, these are merely assumptions made in the absence of evidence. A recurring point is the issue of fakeability, with the argument that gamified assessments are harder to fake. However, the researchers did not study this effect and, as is so often the case, comments around fakeability are simply assumptions. As Hammond and Barrett (1996) argued convincingly, assessing whether test takers are trying to game their results is not necessarily difficult, but without a direct measure of faking (e.g. a social desirability scale), the extent to which faking is occurring cannot be detected. The assumption is therefore unverifiable.

Psychometrics has real promise for aiding the science of personnel selection. However, for us to achieve credibility outside of academic circles, we must focus on practitioners’ needs. Our guiding principle is to be the scientist-practitioner. Our academic journals need to start living up to that principle.

References

Georgiou, K., Gouras, A., & Nikolaou, I. (2019). Gamification in employee selection: The development of a gamified assessment. International Journal of Selection and Assessment, 1–13. https://doi.org/10.1111/ijsa.12240

Hammond, S., & Barrett, P. T. (1996). The psychometric and practical implications of the use of ipsative, forced-choice format questionnaires. Proceedings of the BPS Occupational Conference, January, 135–144. Leicester: BPS Press.