Best Practices for Selected-Response Exams


Choosing the appropriate assessment type for measuring student learning outcomes requires careful consideration. An assessment, test, or exam is classified as selected-response or constructed-response based on the item types used. An exam using multiple-choice, true/false, matching, or any combination of these item types is called a selected-response exam because the student “selects” the correct answer from the available answer choices. A selected-response exam is considered objective because there is no rating of the student’s answer choice – it is either correct or incorrect. Essay and short-answer exams are constructed-response exams because the student has to “construct” the answer. Constructed-response items require the student to answer a question, commonly referred to as a “prompt.” A constructed-response exam is considered subjective because the correctness of the answer is based on a rater’s judgment, typically guided by a rubric.

As part of planning which type of test you will administer, consider how the results will be scored: a student’s answer choice on an objective question is either completely correct or completely incorrect. Answers to essay questions can be partially correct – for example, if an essay answer is half right based on the rubric scale, the student could earn a grade of C for that answer. This article discusses advantages and disadvantages of the different selected-response item formats, as well as best practices.
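To make the scoring distinction concrete, here is a minimal sketch in Python – the function names and rubric values are hypothetical, not drawn from any particular grading policy:

```python
# Sketch: all-or-nothing scoring for selected-response items versus
# partial credit for a constructed-response (essay) item.
# Function names and values below are hypothetical illustrations.

def score_selected(response: str, key: str) -> float:
    """Objective item: the answer is either fully right or fully wrong."""
    return 1.0 if response == key else 0.0

def score_constructed(points_earned: float, points_possible: float) -> float:
    """Subjective item: a rater's rubric score allows partial credit."""
    return points_earned / points_possible

print(score_selected("B", "B"))  # 1.0 -> completely correct
print(score_selected("C", "B"))  # 0.0 -> completely incorrect
print(score_constructed(5, 10))  # 0.5 -> half right, roughly a C
```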

Conventional Multiple-Choice Test Items

The conventional multiple-choice (MC) item format has a stem that poses the problem or question, followed by three or four answer choices (options). One of the choices is the single, undeniably correct answer, and the other options are unquestionably incorrect. Advantages of the MC format are that faculty and students are familiar with it, it is cost-effective, easy to administer, and easy to score, and it can cover a broad range of content. Disadvantages are that student test-taking ability may become part of what is measured, questions may be misinterpreted, the focus may fall on low-level knowledge, guessing is a factor, and constructing high-quality MC items is very difficult. To learn more about writing multiple-choice items, see this Teaching Commons article.
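For readers who build or store items programmatically, a minimal sketch of the format’s anatomy (stem, a few options, exactly one key) may help; the field names below are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass

# Sketch of a conventional MC item: one stem, three or four options,
# exactly one correct key. Field names are illustrative only.
@dataclass
class MultipleChoiceItem:
    stem: str                 # the problem or question
    options: dict[str, str]   # e.g. {"A": "...", "B": "..."}
    key: str                  # the single undeniably correct option

item = MultipleChoiceItem(
    stem="Which planet is closest to the Sun?",
    options={"A": "Venus", "B": "Mercury", "C": "Mars"},
    key="B",
)
assert item.key in item.options  # the key must be one of the options
```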

Matching Items

Matching items resemble MC items in that there are item stems (phrases or statements) and answer choices that must be matched to those stems. There should always be one more answer choice than the number of item stems. Haladyna (1999) suggests, “Matching items are well suited for testing understanding of concepts and principles” (p. 50). Advantages of the matching format are efficiency – one matching item can capture what would take several MC items to cover – and it works well for testing associations such as vocabulary, definitions, descriptions, and examples of concepts. A disadvantage is the tendency to mix the answer choices so that they are not a homogeneous set, such as using both people and places as choices in the same item. The answer choices need to be homogeneous, such as a set of all people or all places (Haladyna, 1999).
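Both construction rules – one extra answer choice, and a homogeneous choice set – are mechanical enough to check automatically. The sketch below is a hypothetical validator, not part of any existing testing tool:

```python
# Sketch: checking the two matching-item rules described above.
# The category tags ("person", "place") are hypothetical labels a
# test author might attach to each answer choice.

def validate_matching(stems: list[str], choices: list[tuple[str, str]]) -> list[str]:
    """Return a list of rule violations; `choices` is (text, category) pairs."""
    problems = []
    # Rule 1: one more answer choice than item stems.
    if len(choices) != len(stems) + 1:
        problems.append("need exactly one more choice than stems")
    # Rule 2: the choices must be a homogeneous set (one category).
    categories = {category for _, category in choices}
    if len(categories) > 1:
        problems.append(f"choices mix categories: {sorted(categories)}")
    return problems

stems = ["Discovered gravity", "Proposed general relativity"]
choices = [("Newton", "person"), ("Einstein", "person"), ("Paris", "place")]
print(validate_matching(stems, choices))  # flags the mixed person/place set
```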

True-False Items

True-false items have several advantages: they are easy to write, more of them can be given in the same amount of time compared to MC items, reading time is minimized, and they are easy to score. Haladyna (1999) cites several studies that recommend true-false items be used with caution or not at all, and others that defend the format. The main argument against it is the large error component due to guessing, particularly when one generic stem is followed by a group of true-false items. Additional disadvantages are that the items tend to reflect trivial information and promote rote learning. Haladyna (1999) concludes, “This format is not recommended for standardized testing programs but may be useful for classroom testing, and is certainly open to experimentation” (p. 57).
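To see why guessing looms so large in this format, compare the expected score from blind guessing on true-false items versus four-option MC items; the simulation below is illustrative arithmetic, not from Haladyna:

```python
import random

# Sketch: expected score from blind guessing. On a true-false item a
# guesser succeeds half the time; on a four-option MC item, a quarter.
random.seed(0)

def simulate_guessing(num_options: int, num_items: int = 100_000) -> float:
    correct = sum(random.randrange(num_options) == 0 for _ in range(num_items))
    return correct / num_items

print(simulate_guessing(2))  # ~0.50 for true-false
print(simulate_guessing(4))  # ~0.25 for four-option MC
```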

Complex Multiple-Choice Items

Complex MC items provide a format in which there can be one or more correct answers. For example, the item question could ask which planets fit a particular description, and three planet names, numbered 1, 2, and 3, would be provided. Following the three planet names could be four answer choices, such as:

A. 1 and 2

B. 2 and 3

C. 1 and 3

D. 1, 2, and 3

In certain disciplines, such as the sciences and medicine, where more than one answer can be right, the complex format is advantageous. There will be one correct answer combination, and the other combinations will be incorrect. Among its limitations: complex MC items are likely to be more difficult than conventional MC items; knowing whether just one of the numbered answers is right or wrong lets the student eliminate answer choices, which facilitates guessing; and the format is often difficult to construct and edit. The complex format also requires more reading and thinking time from the student, so the number of items on the test has to be limited to fit the time allotted for the assessment.
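The scoring and elimination points can be made concrete with a small sketch; the item content and structure below are hypothetical:

```python
# Sketch: a complex MC item keyed to one correct combination of the
# numbered statements. Content and structure are illustrative only.
statements = {1: "Has rings", 2: "Is a gas giant", 3: "Has a solid surface"}
options = {
    "A": {1, 2},
    "B": {2, 3},
    "C": {1, 3},
    "D": {1, 2, 3},
}
key = "A"  # the one correct combination

def score(choice: str) -> float:
    # Scoring is still all-or-nothing: only the exact combination counts.
    return 1.0 if choice == key else 0.0

# The elimination problem: a student who knows only that statement 3 is
# false can discard every option containing 3 -- leaving a free answer.
viable = [label for label, combo in options.items() if 3 not in combo]
print(viable)  # ['A']
```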

Multiple True-False Items

Similar to the complex MC item is the multiple true-false item, where a list of several answer choices follows the item stem (question or statement), and the student indicates which of the choices are true and which are false. The answer choices can be words or phrases, and the list can be lengthy, which allows many points of content to be covered in a short period of time. Frisbie’s (1992) research shows that the format is highly effective in terms of reliability and validity. Limitations are that coverage is restricted to content that can be expressed as lists of terms and phrases illustrating a concept, and that guessing is a factor.
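Unlike the complex MC format, each choice in a multiple true-false item is scored on its own, so a single item yields several scorable responses. A minimal sketch, with hypothetical content:

```python
# Sketch: a multiple true-false item. Each answer choice is judged
# true/false independently, so one stem produces several responses.
stem = "Which of the following describe Jupiter?"
choices = {
    "Is a gas giant": True,
    "Has rings": True,
    "Is closest to the Sun": False,
}

def score(responses: dict[str, bool]) -> float:
    """Fraction of choices the student classified correctly."""
    correct = sum(responses[text] == truth for text, truth in choices.items())
    return correct / len(choices)

print(score({"Is a gas giant": True,
             "Has rings": False,
             "Is closest to the Sun": False}))  # 2 of 3 correct -> ~0.67
```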

Advanced Multiple-Choice Item-Set Format

A “set” of MC items can be used to measure applied knowledge, higher-order thinking and reasoning skills, and comprehension of text. The item set is composed of several items that, as a group, are intended to measure specific student learning outcomes. The number of items in a set is limited by the time allotted for the exam and by the point at which adding more items begins to give clues that help the student answer other items in the set. The stem for an item set can include a vignette, a case study, a graph or table, or a detailed description with multiple elements. It is important to make certain that the MC items in the set all relate to what is in the stem (Haladyna, 1999).

For advanced items, such as an applied-knowledge item, the stem can consist of multiple parts, ending with a lead-in question that tells the respondent how to answer. Anything may be included in the stem as long as a good match between each item and the stem is assured. For example, a medical multiple-choice item’s lead-in question may ask “What is the most accurate diagnosis?” or “What is the most likely cause?” in reference to a case study presented in the stem.
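One way to picture an item set is as a shared stem (vignette plus lead-in questions) with several MC items attached. The sketch below uses hypothetical field names and placeholder content:

```python
from dataclasses import dataclass

# Sketch of an item set: one extended stem shared by several MC items,
# each with its own lead-in question. Names are illustrative only.
@dataclass
class ItemSet:
    vignette: str      # case study, graph description, etc.
    items: list[dict]  # each item: lead-in, options, and key

case = ItemSet(
    vignette="A patient presents with chest pain and shortness of breath.",
    items=[
        {"lead_in": "What is the most likely diagnosis?",
         "options": {"A": "...", "B": "...", "C": "..."},
         "key": "A"},
        {"lead_in": "What is the most appropriate next test?",
         "options": {"A": "...", "B": "...", "C": "..."},
         "key": "C"},
    ],
)
# Every item in the set should be answerable from the shared vignette.
```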

References

Frisbie, D. A. (1992). The status of multiple true-false testing. Educational Measurement: Issues and Practice, 5, 21-26.

Haladyna, T. M. (1999). Developing and validating multiple-choice items. Mahwah, NJ: Lawrence Erlbaum Associates.