The Standards for Educational and Psychological Testing (2014) establish validity, reliability/precision, and fairness in testing as foundational principles guiding the development, administration, interpretation, evaluation, and use of tests.
- Validity: the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests (p. 11).
- Reliability/precision: the consistency of scores across replications of a testing procedure (p. 33).
- Fairness: responsiveness to individual characteristics and testing contexts so that test scores will yield the same meaning for intended uses for all test takers (p. 50).
Validity remains the overarching concept: reliability/precision is a prerequisite for validity, and fairness is a fundamental validity issue (p. 49) that underscores the need to assure the validity of scores for all examinee groups within the intended test population.
Do existing tests live up to these standards? With some prominent exceptions, we think that tests and other assessments used in research and practice have often fallen short. Too often, validity studies focus on psychometric analyses of reliability, precision, and dimensionality rather than laying out a comprehensive, evidence-based argument that justifies the use of scores for particular purposes and for all intended examinees.
This special issue of Educational Assessment is intended to point the way toward improvement by shining a light on exceptional validation work and providing critical insight on feasible models for the field. The special issue also will serve to introduce a new, regular section in Educational Assessment on assessment validation, which will publish comprehensive, peer-reviewed validation efforts that reflect the standards of the field. Through this new section, Educational Assessment hopes to further encourage attention to the Standards by providing an academic home for serious validation work.
We are looking for manuscripts that describe comprehensive validation work to support the use of a particular test or assessment for a specified purpose. Manuscripts should clearly describe the purpose of the assessment, targeted constructs, and intended examinees and lay out a validation argument or framework that supports the intended interpretation of scores for the intended purpose and examinee population.
The manuscript should incorporate multiple sources of evidence, such as evidence based on test content, response processes, internal structure, relations to other variables, and consequences and use, as well as evidence of reliability/precision and fairness, and should clearly indicate the relevance of each analysis to the intended interpretation and use of scores. Given the length restrictions (suggested below), the manuscript might describe individual empirical studies in more general terms than is typically the case in journal articles, with references to more thorough technical descriptions available elsewhere. Manuscripts would not typically focus on a single study but can draw from multiple studies of the same assessment. As a result, much of the manuscript might resemble a focused literature review of the available theory and studies supporting a particular use.
Eligible validation manuscripts can focus on any kind of test or assessment: for example, cognitive tests; assessments of attitudes, predispositions, or other non-cognitive variables; and measures of climate and practice. In any case, the manuscript should provide a clear statement of an interpretation/use and should provide support (backing) for that interpretation/use.
How to Submit your Manuscript
Manuscript length generally should be limited to 25–35 pages, plus tables and figures. We can provide additional space on the journal website for supplementary material if necessary. For example, if the manuscript references more thorough descriptions of specific studies or data that are not readily available, these could be provided on the journal website.
Manuscripts should be submitted by February 28, 2018.
- Special Issue Editor: Michael Kane
- Special Issue Editor: Joan Herman