8 Easy Steps to Create a Traditional Classroom Achievement Test (TCAT)

Tests are Essential!

If you’ve ever been presented with the opportunity to create a test of any kind, you might be wondering, “Where in the world do I begin?” or “What does a good test look like?” Whether this is you, or you’re a seasoned test creator just looking for a few pointers to make sure you’re on the right track, the next 8 steps will lead you down the path to creating effective, efficient, and reliable assessments.

  • Step 1: Perform an Assessment of the Situation
  • Step 2: Develop Learning Targets and Benchmarks
  • Step 3: Create Test Blueprint
  • Step 4: Focus on Intellectual and Thinking Skills
  • Step 5: Decide on Test Organization
  • Step 6: Develop Item Directions
  • Step 7: Develop a Test Scoring Plan (Rubric)
  • Step 8: Test Reliability and Validity

The Process

The examples contained herein are based on a citizenship review test for learners who would like to brush up on some basic civics topics. While the content is specific to this particular test, the concepts are transferable to any content area and to the creation of any TCAT.

Step 1: Perform an Assessment of the Situation

According to the U.S. Citizenship and Immigration Services (USCIS) website, over the past ten years the United States has “welcomed more than 7.2 million naturalized citizens.” In 2018, more than 750,000 people took the citizenship test, with a 90% pass rate.

Unfortunately, research compiled from USA Today, Newsweek, the Woodrow Wilson National Fellowship Foundation and Lincoln Park Strategies showed that, “only about one-third of Americans would pass a basic multiple-choice U.S. citizenship test based on the 100 civics questions that immigrants need to know how to answer before they can become citizens” (Stynch, 2018). 

Step 2: Develop Learning Targets and Benchmarks

Research by Feldman (2010) explained how benchmarks can serve as an “umbrella structure to support curricular planning, assessment, and feedback” (p. 233). Learning Target 1, Benchmark 1 from Appendix A, Table 1 challenges students to use reasoning, a thinking skill, to achieve mastery of the benchmark. Learning Target 2, Benchmark 2 references an intellectual skill, requiring learners to “recognize and identify” historical figures.
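
If it helps to see the structure laid out explicitly, here is a minimal sketch in Python of how targets and benchmarks might be organized. The actual targets live in Appendix A, Table 1 of the source paper, which isn’t reproduced here; the wording of the targets below is hypothetical, and only the skill labels come from the discussion above.

```python
# Hypothetical wording for the civics test's targets; only the skill
# labels ("reasoning", "recognize and identify") come from the text above.
learning_targets = {
    "LT1": {
        "target": "Explain core principles of American democracy.",
        "benchmarks": {
            "B1": {
                "description": "Use reasoning to explain why a principle applies.",
                "skill_type": "thinking skill (reasoning)",
            },
        },
    },
    "LT2": {
        "target": "Identify key figures in U.S. history.",
        "benchmarks": {
            "B2": {
                "description": "Recognize and identify historical figures.",
                "skill_type": "intellectual skill (recognize and identify)",
            },
        },
    },
}

print(learning_targets["LT2"]["benchmarks"]["B2"]["skill_type"])
```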

Step 3: Create Test Blueprint

A blueprint is composed of objectives, test items, directions, and scoring methods (Smith & Ragan, 2005). Creating a blueprint is essential because it provides consistency across the test by identifying the specific intellectual or thinking skills students must demonstrate to show mastery.
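
To make the blueprint concrete, here is a minimal sketch in Python. The item numbers and cognitive levels follow the breakdown given in Step 8; the benchmark labels, item formats, and point values are assumptions for illustration only.

```python
# A minimal blueprint sketch: each row ties a test item to a benchmark,
# an item format, the cognitive level it targets, and its point value.
blueprint = [
    # (item, benchmark, format, cognitive level, points)
    (1,  "LT1-B1", "multiple-choice",   "comprehension", 1),
    (2,  "LT2-B2", "true-false",        "knowledge",     1),
    (12, "LT2-B2", "extended-response", "synthesis",     4),
]

def items_for(benchmark: str) -> list[int]:
    """Return the item numbers that cover a given benchmark."""
    return [item for item, bm, *_ in blueprint if bm == benchmark]

print(items_for("LT2-B2"))  # -> [2, 12]
```

Even a table this small makes gaps visible at a glance: if a benchmark has no rows, it isn’t being assessed.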


Step 4: Focus on Intellectual and Thinking Skills

An intellectual skill, also known as a cognitive skill, “requires the learner to do some unique cognitive activity” (Dick & Carey, 2005, p. 43). The four best-known intellectual skills are making discriminations, applying rules, forming concepts, and problem-solving. The learner uses these skills to classify information according to its category or characteristics.

Step 5: Decide on Test Organization

While there are many item formats for assessments, multiple-choice, true-false, fill-in-the-blank, short-answer, and extended-response items are the formats addressed here. Supply and select responses both provide an opportunity to demonstrate lower- and higher-order thinking skills. Some may say that multiple-choice and true-false questions can only be used to assess lower-level thinking skills; however, a well-written multiple-choice or true-false item can assess higher-level thinking skills. For example, instead of asking learners to recall how many amendments the Constitution has, an item can describe a scenario and ask which amendment applies, demanding application and analysis rather than simple recall.

Step 6: Develop Item Directions

While test item formats are selected and crafted to assess various cognitive abilities, such as thinking and intellectual skills, the directions for the assessment serve as a guide for examinees on how to navigate the assessment. According to Hale and Astolfi (2014), assessment directions should be written clearly and directly so that they convey the intent of the test items. General item-writing guidelines suggest designing the assessment and directions at the level of the examinee and using correct grammatical and writing conventions. Precise wording is essential when communicating the directions for a set of test items (Hale & Astolfi, 2014).


Step 7: Develop a Test Scoring Plan (Rubric)

In select-response items, the learner recalls information and applies thinking and intellectual skills to choose the correct answer. These types of test items can be used to assess both lower-level and higher-level thinking skills. The scoring plan should explain how each multiple-choice question is evaluated, including which learning target and benchmark it addresses. This approach adds content validity, which is the “degree to which the items in an instrument adequately represent the universe of content” (Sandqvist, Gullberg, Henriksson, & Gerdle, 2008, p. 441).
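
For select-response items, a scoring plan can be as simple as a lookup table that records, for every question, the correct answer, its point value, and the learning target and benchmark it addresses. Here is a minimal sketch in Python; the answers, point values, and benchmark labels are hypothetical.

```python
# A minimal scoring-plan sketch for select-response items. Every entry
# records the benchmark the item addresses, not just the answer key.
scoring_plan = {
    1: {"answer": "C", "points": 1, "benchmark": "LT1-B1"},
    2: {"answer": "A", "points": 1, "benchmark": "LT2-B2"},
}

def score(responses: dict[int, str]) -> int:
    """Total the points for every answer that matches the plan."""
    return sum(plan["points"]
               for item, plan in scoring_plan.items()
               if responses.get(item) == plan["answer"])

print(score({1: "C", 2: "B"}))  # -> 1 (only Item 1 answered correctly)
```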

Step 8: Test Reliability and Validity

After all of your hard work creating learning targets and benchmarks, organizing the items, and writing the directions, you must make sure your assessment is reliable and valid (even if it is just for your classroom).

While there are various forms of reliability and validity, this particular assessment focused only on internal consistency reliability (ICR) and content validity.

ICR is established when multiple items on an assessment adhere to the same benchmark and fulfill the same learning target (Wright, 2006).

A demonstration of ICR in this classroom assessment can be seen in how the items map to the learning targets:

  • Learning Target 1: Items 1 and 7 assess the comprehension level, Items 5 and 6 the application level, and Items 9 and 10 the analysis level.
  • Learning Target 2: Items 2 and 3 assess the knowledge level, Item 4 the application level, Items 8 and 11 the analysis level, and Item 12 the synthesis level.

The item formats varied among multiple-choice, true-false, short-answer, and extended-response types to ensure achievement of the learning targets across multiple levels of the domains of learning.
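
The original paper does not prescribe a particular statistic for ICR, but Cronbach’s alpha is one common way to quantify it. Here is a minimal sketch in Python, assuming item scores have been collected into an examinees-by-items matrix; the sample scores are made up.

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Cronbach's alpha, a common internal-consistency statistic.

    `scores` is an (examinees x items) matrix of item scores.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical scores for 4 examinees on 3 items (1 = correct, 0 = wrong).
print(round(cronbach_alpha([[1, 1, 1],
                            [1, 0, 1],
                            [0, 0, 0],
                            [1, 1, 0]]), 2))  # -> 0.63
```

Values closer to 1 indicate the items hang together; a low value suggests some items may not be measuring the same benchmark.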

Content validity is the demonstration that the items on a test are representative of the content or domain the test is supposed to assess (Popham, 2006). Care was taken to ensure that there was only one correct answer for each select-response item and that there were no apparent clues in the stem to prompt the test-taker to choose one answer over another. The questions were designed to challenge the learner to use both intellectual and thinking skills.

*All of the content herein is an excerpt from a paper written for the MSID program at Saint Leo University by the Author of this article.
