Item Logic

Downloads

Item Logic (overview)

Item Logic ELA Deep Dive

Item Logic Mathematics Deep Dive

Item Logic Science Deep Dive


The logic of an item is a rough outline or abstraction of the cognitive path that the item prompts — or is intended to prompt — in test takers. Like outlines and abstractions more generally, the level of detail in item logic can vary greatly.

There is no standard format or list of required elements for an explanation of the logic of an item. In fact, item logic is almost never recorded or documented anywhere. Historically, item logic has often been taken for granted and under-examined. Bits of it have been explained in the rationales that CDPs sometimes write to explain multiple choice items’ answer options — though those rationales are both more detailed and less complete than item logic requires.

An explanation of an item’s logic should make clear how the item elicits evidence of the targeted cognition. Because item logic lacks the details and particulars of an individual item, it can be easier to spot flaws in the design of an item by examining its logic. That is, without distracting details that might call for additional work or obscure test takers’ anticipated cognition, a CDP can focus more clearly on an item’s ability to elicit the targeted cognition.

Item logic is not an item template, because item templates merely explain what an item should look like and what should be presented to test takers. Rather, item logic is about the cognitive path of test takers. Exactly what might be presented to prompt that cognitive path can vary. Item logic is always missing from item specifications, and its inclusion is an important part of RTD task models.

Part of the professional growth of CDPs as they become more expert is their mastery of the growing library they keep in their heads of item logics that are effective in their content area, often tied to the standards those item logics pair well with. This enables them to quickly recognize the logic of most items as soon as they look at them.

Where Item Logic Comes From

CDPs, teachers, and students have vast experience with content and with tests — standardized and otherwise. The adults were once students themselves, learning the content on the test in ways that are rarely very different from how today’s students learn it. Content is often assessed — both in classroom assessment and in formal assessment — in rather conventional ways. Yes, that is how we were assessed on that content, and yes, that is how it seems right to assess it today. Or, as Tevye famously sang, Tradition!

Some of the most valuable innovation in assessment is in item logic. When item writers and CDPs develop a new way to get at important content, they can open doors to more valid items. They may offer novelty in ways that undermine the emphasis on drills and other inappropriate test prep that reduce the cognitive complexity of assessments.

Unfortunately, item logic is often borrowed from other standards or assessment targets. But logic that works to elicit evidence for one standard is usually inappropriate to elicit evidence for another standard, unless stated in such abstract terms as to provide no insight into an item.

Elements of Item Logic

Because of the many ways that standards vary and the fact that there is no singular desired level of detail for explanations of item logic, there is no definitive list of the elements that should be included. But we can offer a list of some of the elements that might be included, when appropriate.

  • What the test taker is expected to know before encountering the item

  • How the test taker might make use of the elements of the stimuli of the item or item set

  • The steps that a test taker might take to get to their response (required, though the level of detail varies)

  • The form of the test taker’s response

  • How the modality of the item influences the test taker’s cognitive path

  • How the targeted cognition fits into the test taker’s cognitive path

  • How the test taker’s response constitutes affirmative and/or negative evidence of their proficiency with the targeted cognition

As with so many aspects of professional practice in any field, less experienced practitioners benefit from being more explicit when thinking through their work, while more experienced practitioners can usually trust their finely honed professional judgment and instincts to work through things less consciously. Of course, when the most expert practitioners go wrong, it is because they took something for granted that they could have recognized had they been more deliberate in their thinking.

Therefore, the best practitioners get a sense for when they need to slow down and be more explicit and deliberate, consciously thinking through more of the potential elements of the logic of the item they are examining.

Some Common Mistakes in Item Logic

Many problems with items exist at the level of their logic. These problems cannot be fixed with tinkering or wordsmithing. Rather, they usually require wholesale reworking of an item. When CDPs review items at initial intake from item writers, they should look at whether each item’s logic actually supports the targeted cognition. If it does not, that item is likely not worth the additional work it would take to make it align properly.

  • One of the most common mistakes is an item logic that is simply too underdeveloped. Yes, it should be an outline or abstraction, but there are limits. It is not enough to say, “The item prompts the test taker to solve a problem that requires — or perhaps merely could be solved with — the targeted cognition.” That is so generic as to apply to virtually any item in a content area. Item logic needs more detail and specificity than that. Perhaps something like, “In order to get to the correct sum of two numbers, the test taker must carry (i.e., the targeted cognition) once.” That would be better, but might still be lacking in other areas.

  • The item logic does not account for the modality and item type. For some reason, the item writer and/or CDP has not recognized that the modality of the item substantially alters the nature of the cognitive path that is prompted. Generally, selected response items require test takers to recognize or pick out an answer, rather than to come to an answer themselves. For example, if an assessment target was, “Students can come up with rhymes for words,” it would be quite different to i) ask them, “What rhymes with hard?” than to ii) ask them, “Which of the following words rhymes with hard? a. harm b. cored c. bard d. far.”

  • The logic is borrowed from a related standard, and so it feels appropriate, but it does not actually lead to evidence of the targeted cognition.

  • Items that use some of the language of a standard might not actually prompt the targeted cognition. Such echoes from standard to item are no substitute for actually thinking through what an item might prompt.

  • Confusing the details and particulars of an item with its logic. That is, mistaking the work of polishing the details (e.g., addressing item hygiene issues) for thinking about the logic of the item. In fact, those details should be abstracted away (i.e., kinda ignored) when thinking about the item’s logic.

  • Without a doubt, the single most common mistake in item logic is a major contributor to most other mistakes: the item’s logic is taken for granted and under-examined. That is, the item writer and/or CDP did not think it through to make sure it was appropriate and effective.

Of course, there are other sorts of mistakes, as well. To learn more about them, see the Item Logic packet, downloadable from the sidebar to the left.

Making use of the Logic of an Item in Item Development

When one is faced with an item in development, the first step in making use of the logic of an item is merely to look for it. Examine the item and think about the cognitive path it prompts in yourself or might prompt in test takers. One might start by looking at the verbs in the directions or the stem of the item. Simply thinking through the plain language of the item and how test takers would respond is the safest starting point.

With some sense of the logic of the item in mind — if not set down on paper — a CDP should examine it in light of the assessment target or standard. Is the logic actually aimed at the targeted cognition? If it is not, the CDP should either abandon the item or — as when the project needs to fill a shortage of items aligned to a particular standard — work at the level of the logic of the item before worrying about the details of the particular item.

Even when the logic of an item is aligned to the targeted cognition, there is still opportunity for an item to go astray. One might think: I see the logic this item is trying to use, but does the item actually stick to it successfully? There are many ways in which an item can offer alternative paths to a successful response and/or construct-irrelevant obstacles, such that the implementation of the item departs from its original logic. In fact, this is quite common. Much of the work of item refinement is reworking the item to hew more closely to its intended logic.

The third major way to make use of item logic in item development is to consider who is likely to follow that item logic. What sorts of test takers might read this item and then do something other than what the item logic lays out? This is the consideration of fairness issues and the application of the RTD pillar practice of radical empathy.