RTD’s ECD Glossary

This glossary does not aim to cover the entire range of evidence-centered design (ECD) concepts or terminology. Rather, it focuses primarily on the elements of ECD that are of most interest to those working on content development.

  • Additional KSA: Knowledge, skill or ability that may be necessary to complete a task, but is not the intended target of the task. (Contrast with focal KSA.)

  • Artifact: Any product of any process or task within ECD. Lists, charts and tables are artifacts. Written explanations are artifacts. Student work is an artifact. Anything that one can read or examine is an artifact.

  • Assembly model: The requirements for assembling the assessment, including requirements/targets for the distributions of various item features (content, format, time, passage types, etc.). Similar to a test blueprint.
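
    For a concrete picture of what such targets might look like, here is a minimal sketch in Python. Every feature name and number below is invented for illustration; real assembly models are usually far richer.

      # Hypothetical assembly-model targets for one test form.
      # All feature names and counts are invented for this example.
      assembly_targets = {
          "total_items": 40,
          "content": {"algebra": 14, "geometry": 12, "statistics": 14},
          "format": {"multiple_choice": 30, "constructed_response": 10},
          "max_testing_time_minutes": 90,
      }

      def meets_content_targets(form_items):
          """Count a drafted form's items by content area and compare to targets."""
          counts = {}
          for item in form_items:
              counts[item["content"]] = counts.get(item["content"], 0) + 1
          return counts == assembly_targets["content"]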

  • Characteristic feature: A required element, feature or quality of a task, laid out in a task model. These are the traits or qualities of a task thought most necessary to elicit evidence of the targeted skill/focal KSA.

  • Claim: A particular statement about the test taker’s knowledge, skill or ability that a test aims to assess or support. Claims can be very fine-grained, but do not need to be. However, claims should be quite specific as to the precise cognition or KSA they address. A single assessment should support a wide and diverse range of claims.

  • Conceptual assessment framework (CAF): The whole set of models that make up the flow of ideas, relationships and argument in an ECD-based assessment. Everything.

  • Design pattern: The template for the fully laid out assessment argument for the claims made by assessments, including reasoning, KSAs, evidence descriptions and task model.

  • Domain analysis: The large task of attempting to build lists and explanations of all of the knowledge, skills and abilities (KSAs) that make up the domain. This includes KSAs that may not appear on the test. This work is often best done with subject matter experts (SMEs) whose work is guided and facilitated by test development professionals who understand the ECD process. Domain analysis depends on expert judgment, but it also depends on examination of documentation and explanations from various sources. These could include curricular guides, syllabi, published works on the content domain, prior assessments and much, much more. There is no formula or set approach to conducting a domain analysis. Note that many assessments are commissioned for domains that already have domain models (or partial domain models), and therefore do not require a domain analysis.

  • Domain modeling: The large task of organizing the KSAs that were found in the domain analysis. This consists of mapping the connections and links between them. Again, this work often relies on SMEs and expert ECD facilitators. Again, the domain model may well include things that will not (or cannot) be included on the eventual assessment. Again, there is no formula or set approach for creating a domain model or for what a domain model should look like. Note that many assessments are commissioned for domains that already have domain models (or partial domain models), such as state content standards.

  • Domain: The area or broad construct that the test is intended to assess knowledge and/or mastery of.

  • Evidence descriptions: The qualities of test takers’ work products that are needed to support individual claims.

  • Evidence statement: A description of a quality of student work that supports a particular claim.

  • Evidence: That which can be directly observed, as opposed to inferred. Test takers’ work products and elements of those products are evidence, because they can be directly observed. Aspects of student cognition are not evidence, because they must be inferred (i.e., we cannot actually get inside test takers’ heads and know exactly what is going on in there).

  • Focal KSA: The primary knowledge, skill and/or ability targeted by a task model (or claim or item).

  • Evidence model: Instructions that explain how test takers’ work products are turned into scores (e.g., rubrics and other scoring guides) and how scores on individual items or tasks are statistically combined into the final reported score(s) or results.
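
    As a toy illustration of the two halves of this entry (turning work products into item scores, then combining item scores into a reported result), here is a sketch in Python. The scoring rule, weights and values are all invented; a real evidence model would specify rubrics and a proper statistical model.

      # Hypothetical evidence-model sketch: all rules and numbers are invented.

      def score_response(work_product: str, key: str) -> int:
          """Evaluation component: apply a (trivially simple) scoring rule
          to one work product, yielding an item score."""
          return 1 if work_product.strip().lower() == key.strip().lower() else 0

      def combine_scores(item_scores, weights):
          """Accumulation component: combine item scores into one reported
          score. A weighted sum stands in for the test's statistical model."""
          return sum(s * w for s, w in zip(item_scores, weights))

      responses = ["7", "x = 3", "blue"]
      keys = ["7", "x = 3", "red"]
      item_scores = [score_response(r, k) for r, k in zip(responses, keys)]
      print(combine_scores(item_scores, [1.0, 2.0, 1.0]))  # prints 3.0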

  • Presentation material: Anything that is part of what test takers encounter when engaging with an item (e.g., stimulus, instruction, stem, etc.).

  • Presentation model: Instructions or requirements for how items and other presentation materials are presented to test takers. They may include many elements of style guides particular to the form and platform of delivery.

  • Proficiency model: The psychometric model for an assessment that represents test takers’ proficiency. In some cases, it may be a single unidimensional conception of mastery. It may be a multi-dimensional set of traits. It could be a linked set of masteries that depend on and feed into each other. Historically, standardized tests have often been built upon a unidimensional conception, but many tests have more complex representations. (Also called the student model.)
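
    For example, a unidimensional proficiency model is often operationalized with an item response theory (IRT) model. In the two-parameter logistic (2PL) model, the probability that test taker i answers item j correctly depends on a single proficiency \theta_i:

      P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + e^{-a_j(\theta_i - b_j)}}

    where a_j and b_j are the item’s discrimination and difficulty parameters. Multidimensional or linked-mastery models replace the scalar \theta_i with a vector of traits or a network of mastery variables. (The 2PL is offered here only as one common instance; nothing in ECD requires it.)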

  • Task model: A formal explanation of a particular class of task or item. Because task models describe features and variables in this class of tasks or items, they are not specific enough to serve as item templates – though they can be used to generate item templates. They may offer explanations of required features to assess a KSA and/or appropriate ways to vary items. They describe what should be presented to test takers and perhaps the products that test takers will produce. There is no set format for a task model. (Note that the ECD meaning of task is not quite the same as the RTD meaning of the term.)

  • Task model variable: Elements of a task model that describe features of tasks that may intentionally vary from item to item (e.g., to alter difficulty), as in the sketch below.

  • Variable feature: Another term for task model variable.
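
    To make the last three entries concrete, here is a toy sketch of a task model as a Python data structure. Every field and value is invented for illustration, and a real task model need not look anything like this (as noted above, there is no set format).

      # Hypothetical task-model sketch: all fields and values are invented.
      from dataclasses import dataclass
      from itertools import product

      @dataclass
      class TaskModel:
          focal_ksa: str
          characteristic_features: dict  # required of every item in the class
          task_model_variables: dict     # features allowed to vary across items

          def item_specs(self):
              """Yield one item specification per combination of settings
              of the task model variables."""
              names = list(self.task_model_variables)
              for values in product(*self.task_model_variables.values()):
                  yield {**self.characteristic_features,
                         **dict(zip(names, values))}

      tm = TaskModel(
          focal_ksa="interpret a linear graph",
          characteristic_features={"stimulus": "line graph",
                                   "response": "multiple choice"},
          task_model_variables={"num_lines": [1, 2],
                                "grid_shown": [True, False]},
      )
      for spec in tm.item_specs():
          print(spec)  # four item specifications, one per variable setting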

  • Work product: What a test taker produces in response to an item or a task.