
Appendix C
July 2004 Bookmark Standard-Setting Session with the 1992 NALS Data

As described in the body of the report, the Committee on Performance Levels for Adult Literacy convened two bookmark standard-setting sessions in 2004, one in July to gather panelists’ judgments about cut scores for the 1992 National Adult Literacy Survey (NALS) and another in September to collect judgments about cut scores for the 2003 National Assessment of Adult Literacy (NAAL). This appendix details how the bookmark procedure was implemented and reports results for the July session, and Appendix D presents similar information for the September session. Following the text are the background materials, which include the agenda, participant questionnaires, tables, and figures for the July session. The appendix concludes with technical details about the data files that the committee used for the standard settings; this information is provided to assist the U.S. Department of Education and its contractors with any follow-up analyses that need to be conducted with respect to the cut scores for the performance levels.

BOOKMARK STANDARD SETTING WITH THE 1992 NALS DATA

The July 2004 session was held to obtain panelists’ judgments about cut scores for the 1992 NALS and to collect their feedback about the performance-level descriptions. Several consultants assisted the committee with the standard setting, including Richard Patz, one of the original developers of the bookmark procedure.

A total of 42 panelists participated in the standard setting. Background information on the panelists was collected by means of a questionnaire (a blank questionnaire is included in Background Materials at the end of this appendix). A majority (85 percent, n = 28) had managerial responsibilities for adult education in their states or regional areas, although many panelists were instructors as well as program coordinators or directors. Most panelists worked in adult basic education (66 percent, n = 22), general educational development or GED (54 percent, n = 18), or English language instruction (51 percent, n = 17) settings. Almost half (45 percent, n = 15) reported that they were very familiar with NALS prior to participating in the standard-setting activities; 42 percent (n = 14) reported that they were somewhat familiar with NALS. Only four participants (12 percent) who completed the questionnaire said they were unfamiliar with NALS prior to the standard setting.

Panelists were assigned to tables using a quasi-stratified-random procedure intended to produce groups with comparable mixtures of perspectives and experience. To accomplish this, panelists were assigned to one of nine tables after being sorted on the following criteria: (1) their primary professional responsibilities (instructor, coordinator or director, researcher), (2) the primary population of adults they worked with as indicated on their resumes, and (3) the areas in which they worked as indicated on their resumes. The sorting revealed that panelists brought the following perspectives to the standard-setting exercise: adult basic education (ABE) instructor, English for speakers of other languages (ESOL) instructor, GED instructor, program coordinator or director, or researcher. Panelists in each classification were then randomly assigned to one of the nine tables so that each group included at least one person from each of the classifications. Each table consisted of four or five panelists and had a mixture of perspectives: instructor, director, researcher, ESOL, GED, and ABE.
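For readers who want to see the mechanics of such an assignment, the following sketch (in Python) illustrates one way a quasi-stratified random assignment of this kind could be implemented. The data layout, classification labels, and function names are illustrative assumptions, not the committee's actual procedure or records.

```python
import random
from collections import defaultdict

def assign_tables(panelists, n_tables=9, seed=2004):
    """Illustrative quasi-stratified random assignment (not the committee's code).

    `panelists` is a list of (name, classification) pairs, where classification
    is one of: ABE instructor, ESOL instructor, GED instructor,
    coordinator/director, researcher.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for name, role in panelists:
        by_class[role].append(name)

    tables = defaultdict(list)
    table_order = list(range(n_tables))
    for role, members in by_class.items():
        rng.shuffle(members)
        rng.shuffle(table_order)
        # Deal members of this classification round-robin across the tables,
        # so each table gets one before any table gets a second (when there
        # are enough panelists of that type to go around).
        for i, name in enumerate(members):
            tables[table_order[i % n_tables]].append((name, role))
    return dict(tables)
```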

Once panelists were assigned to tables, each table was then randomly assigned to two of the three literacy areas (prose, document, or quantitative). The sequence in which they worked on the different literacy scales was alternated in an attempt to balance any potential order effects (see Table C-1). Three tables worked with the prose items first (referred to as Occasion 1 bookmark placements) and the document items second (referred to as Occasion 2 bookmark placements); three tables worked with the document items first (Occasion 1) and the quantitative items second (Occasion 2); and three tables worked with the quantitative items first (Occasion 1) and the prose items second (Occasion 2).

Ordered Item Booklets

For each literacy area, an ordered item booklet was prepared that rank-ordered the test questions from least to most difficult according to the responses of NALS examinees. The ordered item booklets consisted of all the available NALS tasks for a given literacy area, even though, with the balanced incomplete block spiraling design used for the assessment, no individual actually responded to all test questions. The tasks were arranged in the ordered item booklets so that the question appeared first (one question per page), followed by the stimulus materials (e.g., a newspaper article, a bus schedule, a graph) and the scoring rubric. Accompanying each ordered item booklet was an item map that listed each item number and a brief description of the item. The number of items in each NALS ordered item booklet was 39 for prose literacy, 71 for document literacy, and 42 for quantitative literacy.

Training Procedures

Two training sessions were held, one just for the table leaders (the individuals assigned to serve as discussion facilitators for the tables of panelists) and one for all panelists. The role of the table leader was to serve as a discussion facilitator but not to dominate the discussion or to try to bring the tablemates to consensus about cut scores. Table leaders also distributed standard-setting materials to each table member, guided the discussions of the content and context characteristics that differentiated NALS test items from each other, led the discussion of the impact data for the final round of bookmark placements, and ensured that security procedures were followed. Table leader training was held the day before the standard setting to familiarize the table leaders with their roles, the NALS materials, and the agenda of activities for the standard-setting weekend. (The agenda for the July session is included in Background Materials at the end of this appendix.) Panelist training was held the morning of the standard setting. Richard Patz facilitated both training sessions and used the same training materials for each, which helped ensure that the table leaders were well acquainted with the bookmark process.

The training began with an overview of NALS (skills assessed by the tasks in the three literacy areas, administrative procedures, etc.), followed by background about the committee’s charge and the timing of its work. Panelists were told that the cut scores that resulted from the bookmark procedure would be the group’s recommendations to the committee but that it would ultimately be up to the committee to determine the final cut scores to recommend to the Department of Education. Panelists then received instruction in the elements and procedures of the bookmark method.

Conducting the Standard Setting

Once the training session was completed, the bookmark process began by having each panelist respond to all the questions in the NALS test booklet for their assigned literacy scale. For this task, the test booklets contained the full complement of NALS items for each literacy scale, arranged in the order that test takers would see them but not rank-ordered as in the ordered item booklets. Afterward, the table leader facilitated discussion of differences among items with respect to the knowledge, skills, and competencies required and familiarized panelists with the scoring rubrics. Panelists were expected to take notes during the discussion for later use in making their judgments.

Panelists then received the ordered item booklets. They discussed each item and noted characteristics they thought made one item more difficult than another. The table leader distributed the performance-level descriptions.1 Each table member then individually reviewed the performance-level descriptions, the items in the ordered item booklet, the scoring rubrics, and their notes about each item and proceeded to independently place bookmarks to represent cut points for basic, intermediate, and advanced literacy; this first bookmark placement constituted Round 1.

On the second day of standard setting, each table received a summary of the Round 1 bookmark placements made by each of its members and was provided with the median bookmark placements (calculated for each table). Table leaders facilitated discussion among table members about their respective bookmark placements, moving from basic to intermediate to advanced literacy, without asking for consensus. Panelists were given just under two hours to deliberate about differences in their bookmark placements before independently making their Round 2 judgments. Throughout the standard setting, staff members, consultants, assistants, and four committee members observed the interactions among the panelists as they discussed the characteristics of the items and their reasons for selecting their bookmark placements.

For Round 3, each table again received a summary of the Round 2 bookmark placements made by each table member as well as the medians for the table. In addition, each table received impact data, that is, the proportion of the 1992 population who would have been categorized at the below basic, basic, intermediate, or advanced literacy level based on the table’s median cut points. After discussion of the variability of Round 2 judgments and the impact of their proposed cut points on the percentages of adults who would be placed into each of the four literacy groups, each panelist made his or her final judgments about bookmark placements for the basic, intermediate, and advanced literacy levels. This final set of judgments concluded Round 3.
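As a rough illustration of how such impact data can be computed, the minimal sketch below bins a weighted distribution of scale scores at a table's three median cut points. The score distribution, weights, and cut scores shown are fabricated placeholders for illustration only, not NALS data or the committee's results.

```python
import numpy as np

def impact_percentages(scale_scores, weights, cuts):
    """Weighted percentage of respondents at below basic, basic,
    intermediate, and advanced, given three cut scores."""
    scale_scores = np.asarray(scale_scores)
    weights = np.asarray(weights, dtype=float)
    # Level 0 = below basic, 1 = basic, 2 = intermediate, 3 = advanced.
    levels = np.searchsorted(sorted(cuts), scale_scores, side="right")
    totals = np.bincount(levels, weights=weights, minlength=len(cuts) + 1)
    return 100 * totals / weights.sum()

# Fabricated example purely to show the calculation:
rng = np.random.default_rng(0)
scores = rng.normal(285, 55, size=10_000)   # placeholder score distribution
weights = np.ones_like(scores)              # placeholder sampling weights
print(impact_percentages(scores, weights, cuts=(210, 265, 335)))
```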

After Round 3, panelists were asked to provide feedback about the performance-level descriptions by reviewing the items that fell between each of their bookmarks and editing the descriptions accordingly. That is, the items in the booklet up to, but not including, the first bookmark described the basic literacy level. Panelists reviewed these items and revised the descriptions to better fit the items that fell within this level. They were asked to do the same for the intermediate and advanced performance-level descriptions.

1. The performance-level descriptions used in July are presented in Table 5-2 of the report.

On the afternoon of the second day, the processes described above were repeated for the second literacy area. Round 1 was completed on the second day; Rounds 2 and 3 were completed on the third day. The standard setting concluded with a group session to obtain feedback from the panelists.

Using Different Response Probability Instructions

In conjunction with the July standard setting, the committee collected information about the impact of varying the instructions given to panelists with regard to the criteria used to judge the probability that an examinee would answer a question correctly (the response probability). The NALS results were reported in 1992 using a response probability of 80 percent, a level commonly associated with mastery tests. Some researchers have questioned the need for such a strict criterion for an assessment like NALS, for which there are no individual results, and recommend instead using a more moderate response probability level of 67 percent (e.g., Kolstad, 2001). The authors of the bookmark method also recommend a 67 percent response probability level (Mitzel et al., 2001). Because the issue of response probability had received so much attention in relation to NALS results, the committee arranged to collect data from panelists about the impact of using different (50, 67, or 80 percent) response probability values. Specifically, we were interested in evaluating (1) the extent to which panelists understand and can make sense of the concept of response probability level when making judgments about cut scores and (2) the extent to which panelists make different choices when faced with different response probability levels. Panelists were told that they would be given different instructions to use in making their judgments and that they should not discuss the instructions with each other.

As described earlier, the panelists were grouped into nine tables of four or five panelists each. Each group was given different instructions and worked with different ordered item booklets. Three tables (approximately 15 panelists) worked with booklets in which the items were ordered with a response probability of 80 percent and received instructions to use 80 percent as the likelihood that the examinee would answer an item correctly. Similarly, three tables used ordered item booklets and instructions consistent with a response probability of 67 percent, and three tables used ordered item booklets and instructions consistent with a response probability of 50 percent.

Panelists received training in small groups about their assigned response probability instructions. This additional training session gave each group detailed instructions for one of the three response probability levels (50, 67, or 80 percent). These specialized instructions are summarized in Background Materials at the end of this appendix. Each table of panelists used the same response probability level for the second content area as it did for the first.

Refining the Performance-Level Descriptions

The performance-level descriptions used at the July standard setting consisted of overall and subject-specific descriptors for the top four performance levels. In accord with typical bookmark procedures, concrete examples of stimulus materials (e.g., newspaper articles, almanac) or types of tasks (e.g., read a bus schedule, fill out an employment application form) had been intentionally omitted from the performance-level descriptions because including specific examples tends to overly influence panelists’ judgments about the bookmark placements. Omission of specific examples allows the panelists to rely on their own expertise in making judgments.

Panelists’ written comments about and edits of the performance-level descriptions were reviewed. Many panelists commented on the lack of concrete examples, saying that a few examples would have helped them. Some were concerned that NALS did not have enough items at the upper end of the spectrum for them to confidently make a distinction between the intermediate and advanced categories. Panelists also suggested edits, such as adding the modifier “consistently” to the levels higher than below basic, and asked for clarification of adjectives such as “dense” versus “commonplace” text and “routine” versus “complex” arithmetic operations. In addition, they raised questions about the scope of the NALS quantitative assessment and the extent to which it was intended to evaluate arithmetic skills versus functional quantitative reasoning. They also pointed out inconsistencies in the wording of the descriptions from one level to the next. The committee used this feedback to rethink and reword the level descriptions in ways that better addressed the prose, document, and quantitative literacy demands suggested by the assessment items.

Revised descriptions were used for the September standard-setting session. The following types of changes were made. The introduction to the descriptions was rewritten to include the phrase, “An individual who scores at this level, independently, and in English …,” reflecting the nature of the NALS and NAAL as tests of literacy in English in which examinees complete the test items with minimal or no help from the interviewer or other family members or individuals. In addition, the subject-area descriptions were revised to better reflect the range of literacy skills needed for the NALS items without specifying the types of NALS items or stimuli used. Four panelists who had participated in the July standard-setting session were invited to review the revised performance-level descriptions prior to the September standard setting, and their feedback was used to further refine the descriptions.2

Panelists’ Evaluation of the Standard Setting

At the end of the July standard-setting session, panelists were asked to complete a satisfaction questionnaire (a blank questionnaire is included in Background Materials at the end of this appendix). Almost all of the participants reported that they were either very satisfied (59 percent, n = 20) or satisfied (35 percent, n = 12) with the standard-setting training; only two participants reported that they were not satisfied with the training they received. Almost three-quarters of the participants (74 percent, n = 25) reported being very satisfied with their table interactions and discussions, and roughly a quarter (26 percent, n = 9) reported that they were satisfied with these interactions. The contributions and guidance of the table leaders were perceived as mainly very satisfactory (53 percent, n = 18) or satisfactory (32 percent, n = 11). Only two participants (6 percent) indicated that their table leaders were not satisfactory; both of these individuals wrote on their evaluations that their table leaders were overly talkative and did not facilitate discussions among the table members. The majority of comments indicated that participants thought their table leaders were well organized, adept at facilitating discussion, and able to keep the table members focused on the standard-setting tasks.

The organization of the standard-setting session was well received: more than two-thirds of the participants (68 percent, n = 23) were very satisfied, and 32 percent (n = 11) reported being satisfied with the session. Participants also reported being satisfied with their work during the standard setting: 94 percent of the participants reported that they were either very satisfied (44 percent, n = 15) or satisfied (50 percent, n = 17) with the cut scores decided by their table, indicating a high level of participant confidence in both the process and the product of the standard-setting session. In addition, 85 percent (n = 29) and 12 percent (n = 4) reported that participation in the standard-setting session was very valuable or valuable to them, respectively.

2. The performance-level descriptions used in September are presented in Table 5-3 of the report.

Besides giving feedback on the standard-setting session, panelists were also very helpful in suggesting ways in which the September standard-setting session could benefit from the perspective of those who had just completed the process. For example, the participants reflected a range of adult education areas, such as ABE, GED, and ESL. While the experiences and perspectives of these individuals were useful and appropriate for the standard-setting task, the July participants asked that the committee consider broadening the array of perspectives for the September gathering by including middle school or high school language arts teachers and professionals familiar with human relations, employment testing, or skills profiling. The July participants commented that the table discussions needed these additional perspectives to better conceptualize the range of literacy skills within the performance levels. In addition, the panelists commented that they would have liked to see a broader representation of community types (e.g., rural, suburban, urban) reflected in the table discussions, because the needs of adult learners and their environments play a role in program availability and access to the various literacy materials represented in NALS. The committee agreed and solicited participation from members of these professional and geographic areas for the September standard setting.

RESULTS OF STANDARD SETTING WITH 1992 DATA

So that the results can be fully understood and replicated, this section reports complete results from the July standard setting separately by literacy area.

Prose Literacy

A complete listing of all judgments made by each panelist who reviewed the prose literacy scale at the July standard-setting session is presented in Tables C-2A, C-2B, and C-2C for basic, intermediate, and advanced literacy, respectively. The tables include each participant’s bookmark placement for each round, as well as the corresponding scale score.3 The table number and response probability (rp) level used by each panelist are provided, as well as an indication of whether a given literacy scale was reviewed by the panelist first (i.e., Occasion 1) or second (i.e., Occasion 2).

3. The item parameters used for the July standard setting were those available in the public data file. The transformation constants used to convert theta estimates to scale scores are as follows: prose, 54.973831 and 284.808948; document, 55.018198 and 279.632461; quantitative, 58.82459 and 284.991949.

Figure C-1 illustrates the bookmark placement results on the scale score metric by round and table. The top three graphs present the results for Occasion 1 (Tables 1, 4, and 7), and the bottom three graphs show the results for Occasion 2 (Tables 2, 5, and 8). The lines are differentiated by performance level to indicate panelists’ cut score recommendations: the upward-facing triangles (Δ) indicate the cut score each panelist recommended for the basic literacy performance standard, the asterisks (*) represent the intermediate literacy performance standard, and the downward-facing triangles (∇) indicate the advanced literacy performance standard. The median Round 3 placement for the table for each cut score is indicated by a standalone symbol (Δ, *, or ∇) on the right-hand side of each graph. The numbers below each graph represent the scale scores corresponding to the median basic, intermediate, and advanced literacy values for the given table.

The graphs in Figure C-1 reflect panelist behavior similar to that observed in other published bookmark standard-setting sessions (Lewis et al., 1998). That is, as the rounds progress, the variability in bookmark placements tends to decrease, resulting in a relative convergence of bookmark locations by the end of the third round. As Figure C-1 illustrates, however, convergence did not always occur, given that bookmark placement reflects individual decisions and biases.

Panelists at Tables 1 and 2 used an 80 percent response probability level (rp80), Tables 4 and 5 were assigned an rp level of 67 percent (rp67), and Tables 7 and 8 were instructed to use a 50 percent response probability level (rp50). Across Tables 1, 4, and 7, there was generally more agreement among panelists on the basic and intermediate cut scores at the conclusion of Round 3, but the final placements of the advanced cut score varied considerably. A somewhat different pattern is seen across Tables 2, 5, and 8: panelists at Tables 5 and 8 appeared to reach consensus on the cut score for the basic performance level, Table 2 participants achieved consensus on the cut score for the intermediate level, and Table 5 achieved consensus on the cut score for the advanced level.

Round 3 data from the two occasions were combined, and descriptive statistics were calculated. This information is reported by rp level for the prose literacy scale in Table C-3. Across performance levels, the standard errors were lowest for the 67 percent response probability level.
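To make the descriptive statistics concrete, the minimal sketch below shows how such a summary (mean, median, and standard error of the mean by rp level) could be computed with pandas. The data frame contents and column names are fabricated placeholders, not the panelists' actual judgments.

```python
import pandas as pd

# Placeholder Round 3 cut scores (Occasions 1 and 2 combined); the values
# and column names are fabricated for illustration only.
round3 = pd.DataFrame({
    "rp":    [80, 80, 67, 67, 50, 50],
    "level": ["basic"] * 6,
    "cut":   [211.0, 219.0, 208.0, 210.0, 215.0, 222.0],
})

summary = (round3.groupby(["level", "rp"])["cut"]
                 .agg(["mean", "median", "std", "count"]))
summary["sem"] = summary["std"] / summary["count"] ** 0.5  # standard error of the mean
print(summary)
```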

Document Literacy

Panelists at six of the nine tables reviewed NALS items from the document literacy scale. A complete listing of all judgments made by each panelist who reviewed the document literacy scale at the July standard-setting session is presented in Tables C-4A, C-4B, and C-4C.

Figure C-2 shows the bookmark placement results on the scale score metric for each of the three Occasion 1 (top three graphs) and Occasion 2 (bottom three graphs) cut scores by round and table. Panelists at Tables 3 and 1 used rp80, panelists at Tables 6 and 4 used rp67, and panelists at Tables 9 and 7 used rp50. Final bookmark placements for Table 9 are taken from Round 2, due to a data processing error in the Round 3 results for that table.

As with prose literacy, the variability of bookmark placements decreased as the rounds progressed. At all of the tables, there appeared to be more agreement on the cut scores for the basic and intermediate performance levels than for the advanced level. Although some convergence in the advanced cut scores was observed as the rounds progressed, the Round 3 bookmark placements remained quite disparate.

Summary statistics for the Occasion 1 and Occasion 2 combined data are presented in Table C-5. Unlike the data for prose literacy, the standard error of the mean for document literacy across rp levels and performance levels was lowest for rp50 and highest for rp80.

Quantitative Literacy

Panelists at six of the nine tables reviewed NALS items from the quantitative literacy scale. A complete listing of all judgments made by each panelist who reviewed the quantitative literacy scale at the July standard-setting session is presented in Tables C-6A, C-6B, and C-6C. The Occasion 1 (top three graphs) and Occasion 2 (bottom three graphs) bookmark locations and corresponding scale scores reported by each panelist by round and rp level are given in Figure C-3. Panelists at Tables 2 and 3 used rp80, panelists at Tables 5 and 6 used rp67, and panelists at Tables 8 and 9 used rp50.

Overall, panelists tended to approach consensus on the cut scores for the basic and intermediate performance levels, although this was not true for Tables 3 and 5. Considerable disparity was evident in the cut scores for the advanced level, and this variability persisted across all three rounds.

Summary statistics on the combined Occasion 1 and Occasion 2 data are given in Table C-7. The standard error was highest in the basic and advanced performance levels for rp67 and in the intermediate performance level for rp80.


Results from Comparison of Different Response Probability Levels

The purpose of using the different response probability instructions was to evaluate the extent to which the different response probability criteria influenced panelists’ judgments about bookmark placements. It would be expected that panelists using the lower probability criteria would place their bookmarks later in the ordered item booklets, and, as the probability criteria increase, the bookmarks would be placed earlier in the booklet.

Bookmark placements are converted to scaled scores in two steps. First the item response theory (IRT) model (here, the two-parameter logistic model, or 2PL) is used to calculate the theta value at which an individual would be expected to answer the item correctly at the specified probability level (see equation 3-1 in the technical note to Chapter 3). Then the theta value is transformed to a scale score value using a linear transformation equation.

Typically, the IRT model equation estimates the value of theta associated with a 50 percent probability of correctly answering an item. As described in the technical note to Chapter 3, the equation can be solved for different probabilities of a correct response. Thus, when the response probability value is 67, the theta estimate is the value at which one would have a 67 percent chance of answering the item correctly. Likewise, when the response probability is 80, the theta estimate is the value at which one would have an 80 percent chance of answering the item correctly. For a given item, the theta values will increase as the response probability moves from 50 to 67 to 80; the scale scores will similarly increase.
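The two-step conversion can be made concrete with a short sketch. Equation 3-1 is not reproduced in this appendix, so the sketch below assumes the standard 2PL form with scaling constant D = 1.7; the item parameters shown are hypothetical, while the default slope and intercept are the prose transformation constants given in footnote 3.

```python
import math

def theta_at_rp(a, b, rp, D=1.7):
    """Theta at which a 2PL item is answered correctly with probability rp.

    Solving rp = 1 / (1 + exp(-D * a * (theta - b))) for theta gives
    theta = b + ln(rp / (1 - rp)) / (D * a).  Whether equation 3-1 includes
    the scaling constant D = 1.7 is an assumption made here.
    """
    return b + math.log(rp / (1.0 - rp)) / (D * a)

def scale_score(theta, slope=54.973831, intercept=284.808948):
    """Linear transformation to the reporting scale; the default constants
    are the prose values from footnote 3."""
    return slope * theta + intercept

# Hypothetical item (a = 1.0, b = 0.3): theta, and hence the scale score,
# rises as the response probability moves from .50 to .67 to .80.
for rp in (0.50, 0.67, 0.80):
    t = theta_at_rp(a=1.0, b=0.3, rp=rp)
    print(f"rp = {rp:.2f}: theta = {t:+.3f}, scale score = {scale_score(t):.1f}")
```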

If panelists apply the different response probabilities correctly, they should shift their bookmark placements in such a way that they compensate exactly for the differences in the way the bookmark placements are translated into thetas and to cut scores. That is, ideally, panelists should compensate for the different response criteria by placing their bookmarks earlier or later in the ordered item booklet. If they are compensating exactly for the different instructions, the theta (and scale score) associated with the bookmark placement should be identical under the three different response probability instructions, even though the bookmark locations would differ. Given these expectations for panelists’ implementation of the response probability criteria, we further examined both the bookmark placements and the resulting scaled cut scores.

In the body of the report, we presented the median results for the Round 3 judgments, as it is these judgments that are typically used in determining the final cut scores. Here we examine the Round 1 judgments, as these would be expected to be more independent than those made after group discussions.

In addition, we look at the results separately by occasion. That is, as shown in the design for the standard setting (see Table C-1), the order in which the literacy areas were assigned to tables of panelists was alternated so that each literacy area was worked on during Occasion 1 by one table and during Occasion 2 by another. The panelists worked with the different areas on different days of the standard setting, with time for interaction with other panelists during the evening. We reasoned that there might be differences in the way the panelists interpreted and implemented the rp instructions on the first occasion, before there was time for conversation with others (despite the instructions that they should not compare their instructions with each other). We therefore examined results for the first occasion alone and for the two occasions combined.

Examination of Bookmark Placements

To examine the extent to which panelists adjusted their judgments based on the different response probability instructions, we first examined the bookmark placements. Tables C-8, C-9, and C-10 present the Round 1 median bookmark placements for the different rp values, for Occasion 1 alone and for the two occasions combined. The median bookmark placements for intermediate and advanced literacy in Table C-8 (prose) demonstrate the expected pattern; that is, the median bookmark placements increased as the rp criteria decreased.

Regression analyses were run to evaluate whether the response probability criteria had a statistically significant effect on bookmark placement. To increase statistical power for detecting differences, the analyses were conducted by combining all of the judgments into a single data set, which resulted in a total of 252 judgments. Because panelists each made multiple judgments, robust standard errors were calculated with clusters at the panelist level for evaluating statistical significance. A series of dummy codes were created to represent each combination of literacy area and performance level. The rp values were maintained in their original numeric form (50, 67, and 80).
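A minimal sketch of this pooled analysis is given below, assuming a data file with one row per judgment and columns named panelist, area, level, rp, and bookmark; the file name and column names are illustrative assumptions, and the committee's actual software and variable names are not documented here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per Round 1 judgment; the file name and column names
# (panelist, area, level, rp, bookmark) are illustrative assumptions.
judgments = pd.read_csv("round1_judgments.csv")

# Dummy-code each literacy-area-by-performance-level combination.
judgments["cell"] = judgments["area"] + "_" + judgments["level"]

# rp stays numeric (50, 67, 80); standard errors are clustered at the
# panelist level because each panelist contributed several judgments.
model = smf.ols("bookmark ~ rp + C(cell)", data=judgments)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": judgments["panelist"]})
print(fit.summary())
```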

This regression resulted in an R² of .91 (p < .001) and a negative coefficient (−.07) for the rp variable, which approached statistical significance (p = .075). This result suggests a tendency toward a negative relationship between the rp criteria and bookmark placement. That is, as the rp criteria increased, bookmark placements tended to decrease (i.e., bookmarks were placed earlier in the ordered item booklet). On average, over the different literacy areas and performance levels, a coefficient of −.07 for the rp variable means that panelists using the rp80 instructions placed their bookmarks roughly two items earlier than did the panelists using the rp50 instructions (−.07 × (80 − 50) ≈ −2.1 items). This is the general pattern that one would expect if panelists were implementing the rp instructions as intended, although the next section shows that the size of the difference is smaller than the ideal adjustment.

Follow-up analyses were run to examine the effect of the rp criteria separately for each combination of literacy area and performance level, which resulted in nine individual regressions (3 literacy areas × 3 performance levels). For these analyses, dummy codes were created to represent the rp50 and rp80 conditions. The coefficients associated with the dummy codes provide an indication of the extent to which the panelists adjusted their judgments according to the response probability instructions. If panelists were appropriately adjusting their judgments, the coefficient associated with rp50 should be positive (bookmark placed later in the ordered item booklet than when rp67 instructions were used), and the coefficient associated with rp80 should be negative (bookmark placed earlier in the ordered item booklet than when rp67 instructions were used).
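Continuing the sketch above (same assumed column names), each of the nine follow-up regressions could be run as follows, with rp67 as the reference category so that the rp50 and rp80 dummy coefficients carry the expected signs; this is an illustrative reconstruction, not the committee's code.

```python
import statsmodels.formula.api as smf

def cell_regression(judgments, area, level):
    """One follow-up regression for a single literacy-area/performance-level
    cell; column names are illustrative assumptions."""
    cell = judgments[(judgments["area"] == area) & (judgments["level"] == level)].copy()
    cell["rp50"] = (cell["rp"] == 50).astype(int)   # dummy: 1 if rp50 condition
    cell["rp80"] = (cell["rp"] == 80).astype(int)   # dummy: 1 if rp80 condition
    fit = smf.ols("bookmark ~ rp50 + rp80", data=cell).fit(
        cov_type="cluster", cov_kwds={"groups": cell["panelist"]})
    # If panelists adjusted as intended, rp50 should be positive and rp80 negative.
    return fit.params[["rp50", "rp80"]], fit.pvalues[["rp50", "rp80"]]
```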

Tables C-11, C-12, and C-13 present the results for the Occasion 1 judgments (first column) and for the Occasion 1 and 2 judgments combined (second column) for prose, document, and quantitative literacy, respectively. For Occasion 1, seven of the nine rp50 coefficients are positive, and five of the nine rp80 coefficients are negative, although very few of the coefficients are statistically significant, even at a significance level of p < .10. Similar results are evident for Occasions 1 and 2 combined: seven of the nine rp50 coefficients are positive, and four of the nine rp80 coefficients are negative. Overall, these results show a statistically weak trend in the direction of the correct adjustment to the different rp conditions.4

Examination of Scaled Cut Scores

Regressions were run in a similar fashion with the scaled cut score as the dependent variable. The resulting coefficient for the rp criteria was 1.33, which was statistically significant (p < .001). The value of this coefficient suggests a positive relationship between the rp criteria and the scaled cut scores; that is, as the rp value increases, so do the cut scores. If the panelists had been insensitive to the rp instructions (making the same bookmark placements on average in all three rp conditions), a positive relationship between the rp condition and the scaled cut scores would result simply from the effect of the rp condition on the equations used to transform the bookmark placements into the corresponding scale scores. The preceding section shows that the panelists were not insensitive to the rp conditions, however, making adjustments that tended in the correct direction with borderline statistical significance. Given the strong relationship between the rp condition and the scaled cut scores, however, it is clear that the size of the adjustment made by the panelists falls short of the ideal.

4. In addition, a follow-up questionnaire asked panelists what adjustments they would have made to their bookmark placements had they been instructed to use different rp criteria. For each of the three rp criteria, panelists were asked whether they would have placed their bookmarks earlier or later in the ordered item booklet had they been assigned a different rp instruction. Of the 37 panelists, 27 (73 percent) indicated adjustments that reflected a correct understanding of the rp instructions.

As before, a series of follow-up regressions was run, one for each combination of literacy area and performance level. Dummy codes were again created to represent the rp50 and rp80 conditions. If panelists were appropriately adjusting their judgments to compensate for the different response probability instructions, the scale score associated with the bookmark placements should, ideally, be identical under the three conditions. For these analyses, the focus is on the statistical significance of the coefficients; that is, ideally, the coefficients associated with the two rp conditions should not be statistically significant.

Tables C-14, C-15, and C-16 present the results for the Occasion 1 judgments (first column) and for the Occasion 1 and 2 judgments combined (second column) for prose, document, and quantitative literacy, respectively. For Occasion 1, four of the nine rp50 coefficients are statistically significant, and five of the nine rp80 coefficients are statistically significant. For Occasions 1 and 2 combined, four of the nine rp50 coefficients are statistically significant, and six of the nine rp80 coefficients are statistically significant. These results suggest a strong relationship between the rp condition and the scaled cut scores.

ACKNOWLEDGMENTS

The committee wishes to acknowledge the assistance and contributions of individuals who served as panelists for the bookmark standard setting and provided valuable input on the performance-level descriptions. The complete list of participants appears at the end of Appendix D.


BACKGROUND MATERIALS

July Standard-Setting Session

Tables

C-1 Design of the Bookmark Standard Setting with NALS Data, July 2004
C-2 Participants’ Bookmark Placements and Associated Scale Scores for Prose Literacy, July 2004
C-3 Summary Statistics for the Round 3 Judgments for Prose by Response Probability (RP) Level, July 2004
C-4 Participants’ Bookmark Placements and Associated Cut Scores for Document Literacy, July 2004
C-5 Summary Statistics for the Round 3 Judgments for Document Literacy by Response Probability (RP) Level, July 2004
C-6 Participants’ Bookmark Placements and Associated Cut Scores for Quantitative Literacy, July 2004
C-7 Summary Statistics for the Round 3 Judgments for Quantitative Literacy by Response Probability (RP) Level, July 2004
C-8 Summary of Round 1 Bookmark Placements and Cut Scores for Prose Literacy by Response Probability (RP) Level and Occasion, July 2004
C-9 Summary of Round 1 Bookmark Placements and Cut Scores for Document Literacy by Response Probability (RP) Level and Occasion, July 2004
C-10 Summary of Round 1 Bookmark Placements and Cut Scores for Quantitative Literacy by Response Probability (RP) Level and Occasion, July 2004
C-11 Regression Results for Bookmark Placements for Prose Literacy, July 2004
C-12 Regression Results for Bookmark Placements for Document Literacy, July 2004
C-13 Regression Results for Bookmark Placements for Quantitative Literacy, July 2004
C-14 Regression Results for Cut Scores for Prose Literacy, July 2004
C-15 Regression Results for Cut Scores for Document Literacy, July 2004
C-16 Regression Results for Cut Scores for Quantitative Literacy, July 2004

Figures

C-1 Prose literacy cut scores by round for participants at each table, July 2004
C-2 Document literacy cut scores by round for participants at each table, July 2004
C-3 Quantitative literacy cut scores by round for participants at each table, July 2004


Agenda

Bookmark Standard-Setting Session for the National Adult Literacy Survey (NALS)

National Research Council, Washington, DC

July 16-19, 2004

Friday, July 16, 2004—The Day Before the Standard-Setting

1:00–2:30 PM

Welcome, Introductions

Stuart Elliott, Judy Koenig, NRC

Rich Patz, Consultant to NRC

Training for Table Leaders

2:30–2:45 PM

Break

2:45–5:00 PM

Training for Table Leaders continued

Saturday, July 17, 2004—Day 1 of Standard-Setting

8:00–8:30 AM

Participant registration

Continental breakfast

8:30–9:00 AM

Welcome, Introductions

Stuart Elliott, Judy Koenig, NRC

Rich Patz, Consultant to NRC

9:00–10:20 AM

Training

10:20–10:30 AM

Break

10:30 AM–Noon

Training continued

Noon–1:00 PM

Lunch

1:00–2:00 PM

Round 1 (1st subject area)

Participants review all items of NALS (1st subject area) individually

2:00–4:00 PM

Participants at each table, as a group, study and discuss items in the ordered item booklets


3:30–4:15 PM

Additional training for bookmark procedure

 

3:30–3:40 PM – Tables 7, 8, 9

3:45–3:55 PM – Tables 4, 5, 6

4:05–4:15 PM – Tables 1, 2, 3

4:00–5:00 PM

Bookmark placement directions given and Round 1 judgments made (judgments are made individually)

5:00 PM

First day adjourned

Sunday, July 18, 2004—Day 2 of Standard-Setting

7:30–8:00 AM

Continental breakfast

8:00–9:45 AM

Round 2 (1st subject area)

Tables receive data from their Round 1 judgments

Bookmark directions given for Round 2

As a group, discussion about Round 1 data

Round 2 judgments made individually

9:45–10:30 AM

Break

10:30 AM–Noon

Round 3 (1st subject area)

Tables receive impact data from their Round 2 judgments

Bookmark directions given for Round 3

As a group, discussion about Round 2 data

Round 3 judgments made individually

Individually, each panelist suggests edits to performance-level descriptions

1:30–2:30 PM

Round 1 (2nd subject area)

Participants review all items of NALS (2nd subject area) individually

2:30–4:30 PM

Participants at each table, as a group, study and discuss items in the ordered item booklets


4:30–5:30 PM

Bookmark placement directions given and Round 1 judgments made (judgments are made individually)

5:30 PM

Second day adjourned

Monday, July 19, 2004—Day 3 of Standard-Setting

7:30–8:00 AM

Breakfast on one’s own; please save receipts

8:00–9:45 AM

Round 2 (2nd subject area)

Tables receive data from their Round 1 judgments

Bookmark directions given for Round 2 bookmark placement

As a group, discussion about Round 1 data

Round 2 judgments made individually

9:45–10:30 AM

Break

10:30 AM–Noon

Round 3 (2nd subject area)

Tables receive impact data from their Round 2 judgments

Bookmark directions given for Round 3 bookmark placement

As a group, discussion about Round 2 data

Round 3 judgments made individually

Individually, each panelist suggests edits to performance-level descriptions

Noon–1:00 PM

Group discussion

1:00 PM

Standard setting meeting adjourned

1:00–1:30 PM

Box lunch

1:30–2:30 PM

Large-group discussion on NALS performance-level descriptions


Professional and Personal Information Questionnaire

Bookmark Standard-Setting Session for NALS

July 17-19, 2004

National Research Council, Washington, DC


Please answer the following questions to help us better understand the characteristics of our group of standard-setting participants.


1. Do your professional responsibilities include direct or managerial responsibilities for the education of adults?


_____ No. Please characterize your professional responsibilities:

_____ Yes. For how many years have you had such responsibilities?


If you answered ‘yes’ to question 1, please answer the following questions:


2. I am involved in adult education in the following roles (please check all that apply):


_____ I am directly involved as an instructor

_____ I am involved in a managerial capacity


3. How would you characterize the educational setting for these adults (check all that apply):

_____ Traditional high school

_____ English language instruction

_____ Vocational high school

_____ Community college

_____ Alternative high school

_____ 4-year college or university

_____ Adult basic education

_____ Graduate or professional school

_____ GED program

 

_____ Other. Please describe: __________________________________


4. How familiar were you with the National Adult Literacy Survey (a.k.a. NALS) before your participation in the standard-setting activities?


_____ Unfamiliar _____ Somewhat familiar _____ Very familiar


Please tell us about yourself (optional):

Gender: _____ Male _____ Female

Age:

_____ 20-29

_____ 30-39

_____ 40-49

 

_____ 50-59

_____ 60-69

_____ 70+

Race/Ethnicity: _____________


Specialized Response Probability Instructions Used for the July Standard-Setting Session

Instructions for RP50

Items in your booklet are ordered from easiest to most difficult. The easiest items can be answered correctly with a probability of .50 (i.e., 50 percent of the time) by the most people. The most difficult items can be answered correctly with a probability of .50 by the least number of people. In careful consideration of each performance-level description and each item’s difficulty, your task is to identify those skills (represented by items) that you expect persons in each literacy performance level to answer correctly with a probability of at least .50.


First, to establish your Basic Literacy performance level:

Place your bookmark to identify those items which you believe adults with Basic Literacy skills should be able to answer correctly at least 50 percent of the time. Items coming before your bookmark will be answered correctly at least 50 percent of the time by adults in your Basic Literacy performance level. Items coming after your bookmark may be answered correctly, but they will be answered correctly less than 50 percent of the time by some adults in your Basic Literacy performance level. The least literate adult who meets your Basic Literacy standard will be able to answer the items just before your bookmark with probability just at or above .50 (50 percent). This same adult will be able to answer the items just after your bookmark with probability just below .50.

Next, to establish your Intermediate Literacy performance level:

Place your bookmark to identify those items which you believe adults with Intermediate Literacy skills should be able to answer correctly at least 50 percent of the time. Items coming before your bookmark will be answered correctly at least 50 percent of the time by adults in your Intermediate Literacy performance level. Items coming after your bookmark may be answered correctly, but they will be answered correctly less than 50 percent of the time by some adults in your Intermediate Literacy performance level. The least literate adult who meets your Intermediate Literacy standard will be able to answer the items just before your bookmark with probability just at or above .50 (50 percent). This same adult will be able to answer the items just after your bookmark with probability just below .50.


Finally, to establish your Advanced Literacy performance level:

Place your bookmark to identify those items which you believe adults with Advanced Literacy skills should be able to answer correctly at least 50 percent of the time. Items coming before your bookmark will be answered correctly at least 50 percent of the time by adults in your Advanced Literacy performance level. Items coming after your bookmark may be answered correctly, but they will be answered correctly less than 50 percent of the time by some adults in your Advanced Literacy performance level. The least literate adult who meets your Advanced Literacy standard will be able to answer the items just before your bookmark with probability just at or above .50 (50 percent). This same adult will be able to answer the items just after your bookmark with probability just below .50.

Instructions for RP67

Items in your booklet are ordered from easiest to most difficult. The easiest items can be answered correctly with a probability of .67 (i.e., 67 percent of the time) by the most people. The most difficult items can be answered correctly with a probability of .67 by the least number of people. In careful consideration of each performance-level description and each item’s difficulty, your task is to identify those skills (represented by items) that you expect persons in each literacy performance level to answer correctly with probability of at least .67.


First, to establish your Basic Literacy performance level:

Place your bookmark to identify those items which you believe adults with Basic Literacy skills should be able to answer correctly at least 67 percent of the time. Items coming before your bookmark will be answered correctly at least 67 percent of the time by adults in your Basic Literacy performance level. Items coming after your bookmark may be answered correctly, but they will be answered correctly less than 67 percent of the time by some adults in your Basic Literacy performance level. The least literate adult who meets your Basic Literacy standard will be able to answer the items just before your bookmark with probability just at or above .67 (67 percent). This same adult will be able to answer the items just after your bookmark with probability just below .67.

Next, to establish your Intermediate Literacy performance level:

Place your bookmark to identify those items which you believe adults with Intermediate Literacy skills should be able to answer correctly at least 67 percent of the time. Items coming before your bookmark will be answered correctly at least 67 percent of the time by adults in your Intermediate Literacy performance level. Items coming after your bookmark may be answered correctly, but they will be answered correctly less than 67 percent of the time by some adults in your Intermediate Literacy performance level. The least literate adult who meets your Intermediate Literacy standard will be able to answer the items just before your bookmark with probability just at or above .67 (67 percent). This same adult will be able to answer the items just after your bookmark with probability just below .67.

Finally, to establish your Advanced Literacy performance level:

Place your bookmark to identify those items which you believe adults with Advanced Literacy skills should be able to answer correctly at least 67 percent of the time. Items coming before your bookmark will be answered correctly at least 67 percent of the time by adults in your Advanced Literacy performance level. Items coming after your bookmark may be answered correctly, but they will be answered correctly less than 67 percent of the time by some adults in your Advanced Literacy performance level. The least literate adult who meets your Advanced Literacy standard will be able to answer the items just before your bookmark with probability just at or above .67 (67 percent). This same adult will be able to answer the items just after your bookmark with probability just below .67.

Instructions for RP80

Items in your booklet are ordered from easiest to most difficult. The easiest items can be answered correctly with a probability of .80 (i.e., 80 percent of the time) by the most people. The most difficult items can be answered correctly with a probability of .80 by the least number of people. In careful consideration of each performance-level description and each item’s difficulty, your task is to identify those skills (represented by items) that you expect persons in each literacy performance level to answer correctly with probability at least .80.


First, to establish your Basic Literacy performance level:

Place your bookmark to identify those items which you believe adults with Basic Literacy skills should be able to answer correctly at least 80 percent of the time. Items coming before your bookmark will be answered correctly at least 80 percent of the time by adults in your Basic Literacy performance level. Items coming after your bookmark may be answered correctly, but they will be answered correctly less than 80 percent of the time by some adults in your Basic Literacy performance level. The least literate adult who meets your Basic Literacy standard will be able to answer the items just before your bookmark with probability just at or above .80 (80 percent). This same adult will be able to answer the items just after your bookmark with probability just below .80.

Next, to establish your Intermediate Literacy performance level:

Place your bookmark to identify those items which you believe adults with Intermediate Literacy skills should be able to answer correctly at least 80 percent of the time. Items coming before your bookmark will be answered correctly at least 80 percent of the time by adults in your Intermediate Literacy performance level. Items coming after your bookmark may be answered correctly, but they will be answered correctly less than 80 percent of the time by some adults in your Intermediate Literacy performance level. The least literate adult who meets your Intermediate Literacy standard will be able to answer the items just before your bookmark with probability just at or above .80 (80 percent). This same adult will be able to answer the items just after your bookmark with probability just below .80.

Finally, to establish your Advanced Literacy performance level:

Place your bookmark to identify those items which you believe adults with Advanced Literacy skills should be able to answer correctly at least 80 percent of the time. Items coming before your bookmark will be answered correctly at least 80 percent of the time by adults in your Advanced Literacy performance level. Items coming after your bookmark may be answered correctly, but they will be answered correctly less than 80 percent of the time by some adults in your Advanced Literacy performance level. The least literate adult who meets your Advanced Literacy standard will be able to answer the items just before your bookmark with probability just at or above .80 (80 percent). This same adult will be able to answer the items just after your bookmark with probability just below .80.
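The sketch below illustrates, for readers who want to see the mechanics, how a response-probability (RP) criterion orders the items in a booklet and how a bookmark placement translates into a cut score (the RP location of the last item before the bookmark, converted to a scale score, as noted under the tables later in this appendix). It is a minimal sketch, not the committee's operational code: the two-parameter logistic response model, the linear ability-to-scale transformation (slope 50, intercept 250), and the item parameters are assumptions introduced only for illustration and do not reproduce the operational NALS scaling.

```python
# Minimal sketch of the RP67/RP80 mechanics described above; not the
# committee's operational code.  Assumptions: a two-parameter logistic (2PL)
# item response model, a hypothetical linear theta-to-scale transformation
# (slope 50, intercept 250), and fabricated item parameters.
import math

def theta_at_rp(a, b, rp):
    """Ability at which an item with discrimination a and difficulty b is
    answered correctly with probability rp under the 2PL model."""
    return b + math.log(rp / (1.0 - rp)) / a

def rp_location(a, b, rp, slope=50.0, intercept=250.0):
    """The item's RP location expressed on the reporting scale."""
    return slope * theta_at_rp(a, b, rp) + intercept

def ordered_booklet(items, rp):
    """Order items from easiest to most difficult under the chosen RP level."""
    return sorted(items, key=lambda item: rp_location(item["a"], item["b"], rp))

def cut_score(items, bookmark, rp):
    """Cut score implied by a bookmark placed on item number `bookmark`
    (1-based) in the ordered booklet: the RP location of the last item
    before the bookmark, converted to a scale score."""
    if bookmark <= 1:
        # Mirrors the note under Table C-6A: the cut point is undefined when
        # the bookmark sits on the first item of the booklet.
        raise ValueError("cut score undefined for a bookmark on the first item")
    booklet = ordered_booklet(items, rp)
    item = booklet[bookmark - 2]
    return rp_location(item["a"], item["b"], rp)

# Fabricated item parameters, for illustration only (not actual NALS items).
items = [{"a": 1.0, "b": -1.2}, {"a": 0.8, "b": -0.4},
         {"a": 1.2, "b": 0.3}, {"a": 0.9, "b": 1.1}]
print(round(cut_score(items, bookmark=3, rp=0.67), 1))  # roughly 274 on this example
```

For a given item, a higher response probability corresponds to a higher RP location, so the same bookmark placement implies a higher cut score under RP80 than under RP67 or RP50.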


Satisfaction Questionnaire

Bookmark Standard-Setting Session for the National Adult Literacy Survey (NALS)

National Research Council, Washington, DC

July 17-19, 2004

Thank you for participating in the standard-setting meeting for the National Adult Literacy Survey (NALS). In order to help improve future standard-setting meetings, please complete the following questionnaire about your experiences this weekend.


1. How satisfied were you with the advance information given to you about the standard-setting meeting (e.g., memos with information about the hotel, location of the meeting)?

Very Satisfied

Satisfied

Not Satisfied

Please explain:

2. How satisfied were you with the food provided during the meeting?

Very Satisfied

Satisfied

Not Satisfied

Please explain:

3. How satisfied were you with your hotel accommodations?

Very Satisfied

Satisfied

Not Satisfied

Please explain:

4. How satisfied were you with the training you received on Saturday morning?

Very Satisfied

Satisfied

Not Satisfied


Please explain:

5. How satisfied were you with the room assignments and table discussions?

Very Satisfied

Satisfied

Not Satisfied

Please explain:

6. How satisfied were you with the contributions and guidance of the table leaders?

Very Satisfied

Satisfied

Not Satisfied

Please explain:

7. How satisfied were you with the organization of the standard-setting meeting?

Very Satisfied

Satisfied

Not Satisfied

Please explain:

8. How satisfied were you with the cut scores decided by your table?

Very Satisfied

Satisfied

Not Satisfied

Please explain:

9. How valuable do you feel your contribution was to the outcomes of the standard-setting meeting?

Very Satisfied

Satisfied

Not Satisfied

Please explain:


10. How valuable was your participation in the standard-setting meeting to you?

Very Satisfied

Satisfied

Not Satisfied

Please explain:

Please feel free to add additional suggestions or comments about the standard-setting meeting.


Thank you!


TABLE C-1 Design of the Bookmark Standard Setting with NALS Data, July 2004

Response Probability | Table | First Literacy Area | Second Literacy Area
RP 80 | Table 1 | Prose | Doc.
RP 80 | Table 2 | Quant. | Prose
RP 80 | Table 3 | Doc. | Quant.
RP 67 | Table 4 | Prose | Doc.
RP 67 | Table 5 | Quant. | Prose
RP 67 | Table 6 | Doc. | Quant.
RP 50 | Table 7 | Prose | Doc.
RP 50 | Table 8 | Quant. | Prose
RP 50 | Table 9 | Doc. | Quant.


TABLE C-2A Participants’ Bookmark Placements and Associated Cut Scores for Basic, Prose Literacy, July 2004

Participant^a | Table | Response Probability | Occasion | Round 1 BK^b (SS^c) | Round 2 BK (SS) | Round 3 BK (SS)
1.1 | 1 | 0.80 | 1 | 11 (262) | 11 (262) | 6 (226)
1.2 | 1 | 0.80 | 1 | 14 (276) | 11 (262) | 10 (256)
1.3 | 1 | 0.80 | 1 | 6 (226) | 6 (226) | 6 (226)
1.4 | 1 | 0.80 | 1 | 8 (250) | 8 (250) | 6 (226)
2.1 | 2 | 0.80 | 2 | 6 (226) | 6 (226) | 6 (226)
2.2 | 2 | 0.80 | 2 | 11 (262) | 11 (262) | 12 (263)
2.3 | 2 | 0.80 | 2 | 8 (250) | 8 (250) | 8 (250)
2.4 | 2 | 0.80 | 2 | 6 (226) | 6 (226) | 6 (226)
2.5 | 2 | 0.80 | 2 | 3 (208) | 5 (224) | 6 (226)
4.1 | 4 | 0.67 | 1 | 5 (197) | 6 (211) | 5 (197)
4.2 | 4 | 0.67 | 1 | 5 (197) | 6 (211) | 5 (197)
4.3 | 4 | 0.67 | 1 | 7 (225) | 7 (225) | 6 (211)
4.4 | 4 | 0.67 | 1 | 10 (241) | 7 (225) | 5 (197)
4.5 | 4 | 0.67 | 1 | 11 (242) | 7 (225) | 5 (197)
5.1 | 5 | 0.67 | 2 | 5 (197) | 6 (211) | 6 (211)
5.2 | 5 | 0.67 | 2 | 7 (225) | 6 (211) | 6 (211)
5.3 | 5 | 0.67 | 2 | 5 (197) | 6 (211) | 6 (211)
5.4 | 5 | 0.67 | 2 | 7 (225) | 6 (211) | 6 (211)
5.5 | 5 | 0.67 | 2 | 7 (225) | 6 (211) | 6 (211)
7.1 | 7 | 0.50 | 1 | 5 (171) | 10 (217) | 10 (217)
7.2 | 7 | 0.50 | 1 | 10 (217) | 10 (217) | 10 (217)
7.3 | 7 | 0.50 | 1 | 10 (217) | 10 (217) | 10 (217)
7.4 | 7 | 0.50 | 1 | 12 (230) | 12 (230) | 12 (230)
8.1 | 8 | 0.50 | 2 | 6 (194) | 6 (194) | 6 (194)
8.2 | 8 | 0.50 | 2 | 6 (194) | 6 (194) | 6 (194)
8.3 | 8 | 0.50 | 2 | 7 (195) | 6 (194) | 6 (194)
8.4 | 8 | 0.50 | 2 | 5 (171) | - | -
8.5 | 8 | 0.50 | 2 | 6 (194) | 6 (194) | 6 (194)

Missing data: Participant 8.4 left after Round 1 of Occasion 2 due to a schedule conflict.
^a The first participant at each table (i.e., 1.1, 2.1, …, 9.1) is the table leader.
^b Denotes the item number in the ordered item booklet on which the bookmark was placed (see p. 112 for an explanation of bookmark placements).
^c Denotes the cut score associated with the bookmark placement. It is the RP location for the last item before the bookmark placement, converted to a scale score.


TABLE C-2B Participants’ Bookmark Placements and Associated Cut Scores for Intermediate, Prose Literacy, July 2004

Participant^a | Table | Response Probability | Occasion | Round 1 BK^b (SS^c) | Round 2 BK (SS) | Round 3 BK (SS)
1.1 | 1 | 0.80 | 1 | 20 (289) | 23 (316) | 23 (316)
1.2 | 1 | 0.80 | 1 | 23 (316) | 23 (316) | 22 (314)
1.3 | 1 | 0.80 | 1 | 23 (316) | 23 (316) | 23 (316)
1.4 | 1 | 0.80 | 1 | 18 (287) | 23 (316) | 24 (317)
2.1 | 2 | 0.80 | 2 | 20 (289) | 20 (289) | 20 (289)
2.2 | 2 | 0.80 | 2 | 20 (289) | 20 (289) | 20 (289)
2.3 | 2 | 0.80 | 2 | 22 (314) | 20 (289) | 20 (289)
2.4 | 2 | 0.80 | 2 | 20 (289) | 20 (289) | 20 (289)
2.5 | 2 | 0.80 | 2 | 15 (277) | 15 (277) | 20 (289)
4.1 | 4 | 0.67 | 1 | 20 (270) | 24 (300) | 20 (270)
4.2 | 4 | 0.67 | 1 | 24 (300) | 24 (300) | 24 (300)
4.3 | 4 | 0.67 | 1 | 20 (270) | 24 (300) | 20 (270)
4.4 | 4 | 0.67 | 1 | 24 (300) | 24 (300) | 20 (270)
4.5 | 4 | 0.67 | 1 | 23 (297) | 24 (300) | 20 (270)
5.1 | 5 | 0.67 | 2 | 15 (260) | 16 (263) | 16 (263)
5.2 | 5 | 0.67 | 2 | 29 (325) | 29 (325) | 16 (263)
5.3 | 5 | 0.67 | 2 | 7 (225) | 16 (263) | 16 (263)
5.4 | 5 | 0.67 | 2 | 20 (270) | 16 (263) | 16 (263)
5.5 | 5 | 0.67 | 2 | 24 (300) | 24 (300) | 24 (300)
7.1 | 7 | 0.50 | 1 | 20 (251) | 23 (276) | 23 (276)
7.2 | 7 | 0.50 | 1 | 24 (278) | 24 (278) | 24 (278)
7.3 | 7 | 0.50 | 1 | 23 (276) | 23 (276) | 23 (276)
7.4 | 7 | 0.50 | 1 | 29 (294) | 25 (281) | 25 (281)
8.1 | 8 | 0.50 | 2 | 20 (251) | 20 (251) | 20 (251)
8.2 | 8 | 0.50 | 2 | 15 (233) | 15 (233) | 15 (233)
8.3 | 8 | 0.50 | 2 | 31 (301) | 26 (285) | 26 (285)
8.4 | 8 | 0.50 | 2 | 23 (276) | - | -
8.5 | 8 | 0.50 | 2 | 23 (276) | 26 (285) | 26 (285)

Missing data: Participant 8.4 left after Round 1 of Occasion 2 due to a schedule conflict.
^a The first participant at each table (i.e., 1.1, 2.1, …, 9.1) is the table leader.
^b Denotes the item number in the ordered item booklet on which the bookmark was placed (see p. 112 for an explanation of bookmark placements).
^c Denotes the cut score associated with the bookmark placement. It is the RP location for the last item before the bookmark placement, converted to a scale score.


TABLE C-2C Participants’ Bookmark Placements and Associated Cut Scores for Advanced, Prose Literacy, July 2004

Participant^a | Table | Response Probability | Occasion | Round 1 BK^b (SS^c) | Round 2 BK (SS) | Round 3 BK (SS)
1.1 | 1 | 0.80 | 1 | 34 (371) | 34 (371) | 30 (349)
1.2 | 1 | 0.80 | 1 | 33 (363) | 33 (363) | 32 (362)
1.3 | 1 | 0.80 | 1 | 39 (433) | 39 (433) | 39 (433)
1.4 | 1 | 0.80 | 1 | 26 (329) | 34 (371) | 32 (362)
2.1 | 2 | 0.80 | 2 | 24 (317) | 24 (317) | 27 (333)
2.2 | 2 | 0.80 | 2 | 32 (362) | 30 (349) | 32 (362)
2.3 | 2 | 0.80 | 2 | 37 (410) | 32 (362) | 32 (362)
2.4 | 2 | 0.80 | 2 | 25 (324) | 24 (317) | 27 (333)
2.5 | 2 | 0.80 | 2 | 25 (324) | 24 (317) | 24 (317)
4.1 | 4 | 0.67 | 1 | 38 (401) | 39 (407) | 34 (343)
4.2 | 4 | 0.67 | 1 | 34 (343) | 38 (401) | 34 (343)
4.3 | 4 | 0.67 | 1 | 30 (329) | 38 (401) | 24 (300)
4.4 | 4 | 0.67 | 1 | 40 (424) | 40 (424) | 37 (391)
4.5 | 4 | 0.67 | 1 | 36 (359) | 37 (391) | 36 (359)
5.1 | 5 | 0.67 | 2 | 33 (336) | 33 (336) | 33 (336)
5.2 | 5 | 0.67 | 2 | 40 (424) | 40 (424) | 33 (336)
5.3 | 5 | 0.67 | 2 | 20 (270) | 33 (336) | 33 (336)
5.4 | 5 | 0.67 | 2 | 37 (391) | 40 (424) | 33 (336)
5.5 | 5 | 0.67 | 2 | 31 (333) | 33 (336) | 33 (336)
7.1 | 7 | 0.50 | 1 | 37 (370) | 37 (370) | 37 (370)
7.2 | 7 | 0.50 | 1 | 39 (378) | 40 (380) | 40 (380)
7.3 | 7 | 0.50 | 1 | 40 (380) | 40 (380) | 40 (380)
7.4 | 7 | 0.50 | 1 | 36 (333) | 36 (333) | 36 (333)
8.1 | 8 | 0.50 | 2 | 31 (301) | 31 (301) | 31 (301)
8.2 | 8 | 0.50 | 2 | 30 (300) | 30 (300) | 30 (300)
8.3 | 8 | 0.50 | 2 | 36 (333) | 36 (333) | 36 (333)
8.4 | 8 | 0.50 | 2 | 32 (305) | - | -
8.5 | 8 | 0.50 | 2 | 38 (378) | 37 (370) | 37 (370)

Missing data: Participant 8.4 left after Round 1 of Occasion 2 due to a schedule conflict.
^a The first participant at each table (i.e., 1.1, 2.1, …, 9.1) is the table leader.
^b Denotes the item number in the ordered item booklet on which the bookmark was placed (see p. 112 for an explanation of bookmark placements).
^c Denotes the cut score associated with the bookmark placement. It is the RP location for the last item before the bookmark placement, converted to a scale score.


TABLE C-3 Summary Statistics for the Round 3 Judgments for Prose by Response Probability (RP) Level, July 2004

RP level | Basic 0.50 | Basic 0.67 | Basic 0.80 | Intermediate 0.50 | Intermediate 0.67 | Intermediate 0.80 | Advanced 0.50 | Advanced 0.67 | Advanced 0.80
Bookmark:
  Median | 8.00 | 6.00 | 6.00 | 23.50 | 20.00 | 20.00 | 36.50 | 33.00 | 32.00
  Mean | 8.25 | 5.60 | 7.33 | 22.75 | 19.20 | 21.33 | 35.88 | 33.00 | 30.56
  Std. Dev. | 2.49 | 0.52 | 2.24 | 3.69 | 3.16 | 1.66 | 3.68 | 3.46 | 4.30
Cut Score:
  Median | 205.61 | 210.58 | 226.09 | 277.1 | 269.52 | 289.34 | 351.6 | 336.25 | 362.36
  Mean | 207.21 | 205.25 | 236.24 | 270.88 | 273.01 | 301.03 | 345.76 | 341.59 | 357.25
  Std. Dev. | 14.33 | 6.88 | 15.53 | 18.70 | 14.28 | 13.89 | 33.73 | 22.66 | 32.98
  Std. Error | 5.07 | 2.18 | 5.18 | 6.61 | 4.52 | 4.63 | 11.93 | 7.17 | 10.99
N | 8 | 10 | 9 | 8 | 10 | 9 | 8 | 10 | 9
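The statistics in Table C-3 (and in Tables C-5 and C-7 below) can be recomputed directly from the panelists' Round 3 judgments. The short sketch below does this for one column, using the rounded Round 3 cut scores for Basic prose literacy at RP80 from Table C-2A; because the published cut scores are rounded, the results only approximate the corresponding Table C-3 entries, and the standard error is assumed to be the standard deviation divided by the square root of N, which is consistent with the values shown.

```python
# Recomputing one column of Table C-3 from the Round 3 judgments in Table C-2A.
# The inputs are the rounded published cut scores, so the output approximates,
# rather than exactly reproduces, the Table C-3 entries.
import statistics

round3_cut_scores_basic_prose_rp80 = [226, 256, 226, 226, 226, 263, 250, 226, 226]

def summarize(values):
    n = len(values)
    sd = statistics.stdev(values)              # sample standard deviation (n - 1 denominator)
    return {
        "median": statistics.median(values),
        "mean": round(statistics.mean(values), 2),
        "std_dev": round(sd, 2),
        "std_error": round(sd / n ** 0.5, 2),  # assumed: SE = SD / sqrt(N)
        "n": n,
    }

print(summarize(round3_cut_scores_basic_prose_rp80))
# roughly: median 226, mean 236.11, SD 15.51, SE 5.17, n 9; compare with the
# 226.09, 236.24, 15.53, 5.18, and 9 reported for RP80 Basic prose in Table C-3.
```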


TABLE C-4A Participants’ Bookmark Placements and Associated Cut Scores for Basic, Document Literacy, July 2004

Participant^a | Table | Response Probability | Occasion | Round 1 BK^b (SS^c) | Round 2 BK (SS) | Round 3 BK (SS)
1.1 | 1 | 0.80 | 2 | 12 (211) | 12 (211) | 12 (211)
1.2 | 1 | 0.80 | 2 | 15 (217) | 14 (215) | 14 (215)
1.3 | 1 | 0.80 | 2 | 12 (211) | 12 (211) | 12 (211)
1.4 | 1 | 0.80 | 2 | 10 (202) | 10 (202) | 10 (202)
3.1 | 3 | 0.80 | 1 | 21 (233) | 18 (224) | 18 (224)
3.2 | 3 | 0.80 | 1 | 18 (224) | 18 (224) | 18 (224)
3.3 | 3 | 0.80 | 1 | 12 (211) | 12 (211) | 12 (211)
3.4 | 3 | 0.80 | 1 | 12 (211) | 18 (224) | 18 (224)
4.1 | 4 | 0.67 | 2 | 10 (185) | 10 (185) | 10 (185)
4.2 | 4 | 0.67 | 2 | 10 (185) | 10 (185) | 10 (185)
4.3 | 4 | 0.67 | 2 | 13 (190) | 10 (185) | 10 (185)
4.4 | 4 | 0.67 | 2 | 10 (185) | 10 (185) | 10 (185)
4.5 | 4 | 0.67 | 2 | 10 (185) | 10 (185) | 10 (185)
6.1 | 6 | 0.67 | 1 | 15 (193) | 17 (202) | 17 (202)
6.2 | 6 | 0.67 | 1 | 17 (202) | 17 (202) | 17 (202)
6.3 | 6 | 0.67 | 1 | 11 (187) | 15 (193) | 15 (193)
6.4 | 6 | 0.67 | 1 | 24 (215) | 19 (206) | 19 (206)
6.5 | 6 | 0.67 | 1 | 14 (191) | 15 (193) | 15 (193)
7.1 | 7 | 0.50 | 2 | 11 (159) | 22 (191) | 22 (191)
7.2 | 7 | 0.50 | 2 | 23 (184) | 22 (191) | 22 (191)
7.3 | 7 | 0.50 | 2 | 18 (176) | - | -
7.4 | 7 | 0.50 | 2 | 18 (176) | 22 (191) | 22 (191)
9.1 | 9 | 0.50 | 1 | 23 (184) | 23 (184) | -
9.2 | 9 | 0.50 | 1 | 15 (173) | 20 (182) | -
9.3 | 9 | 0.50 | 1 | 23 (184) | 23 (184) | -
9.4 | 9 | 0.50 | 1 | 8 (157) | 23 (184) | -
9.5 | 9 | 0.50 | 1 | 20 (182) | 23 (184) | -

Missing data: Participant 7.3 left after Round 1 of Occasion 2 due to a schedule conflict. Table 9 Round 3 data are missing due to a processing error.
^a The first participant of each table (i.e., 1.1, 2.1, …, 9.1) is the table leader.
^b Denotes the item number in the ordered item booklet on which the bookmark was placed (see p. 112 for an explanation of bookmark placements).
^c Denotes the cut score associated with the bookmark placement. It is the RP location for the last item before the bookmark placement, converted to a scale score.


TABLE C-4B Participants’ Bookmark Placements and Associated Cut Scores for Intermediate, Document Literacy, July 2004

Participant^a | Table | Response Probability | Occasion | Round 1 BK^b (SS^c) | Round 2 BK (SS) | Round 3 BK (SS)
1.1 | 1 | 0.80 | 2 | 40 (257) | 40 (257) | 40 (257)
1.2 | 1 | 0.80 | 2 | 27 (240) | 30 (244) | 30 (244)
1.3 | 1 | 0.80 | 2 | 40 (257) | 40 (257) | 40 (257)
1.4 | 1 | 0.80 | 2 | 42 (260) | 42 (260) | 42 (260)
3.1 | 3 | 0.80 | 1 | 48 (276) | 48 (276) | 48 (276)
3.2 | 3 | 0.80 | 1 | 43 (261) | 47 (275) | 45 (267)
3.3 | 3 | 0.80 | 1 | 48 (276) | 48 (276) | 48 (276)
3.4 | 3 | 0.80 | 1 | 48 (276) | 48 (276) | 48 (276)
4.1 | 4 | 0.67 | 2 | 47 (247) | 47 (247) | 47 (247)
4.2 | 4 | 0.67 | 2 | 56 (271) | 49 (253) | 49 (253)
4.3 | 4 | 0.67 | 2 | 34 (226) | 51 (255) | 51 (255)
4.4 | 4 | 0.67 | 2 | 34 (226) | 47 (247) | 47 (247)
4.5 | 4 | 0.67 | 2 | 38 (233) | 47 (247) | 47 (247)
6.1 | 6 | 0.67 | 1 | 48 (252) | 56 (271) | 56 (271)
6.2 | 6 | 0.67 | 1 | 56 (271) | 56 (271) | 56 (271)
6.3 | 6 | 0.67 | 1 | 38 (233) | 51 (255) | 51 (255)
6.4 | 6 | 0.67 | 1 | 58 (284) | 56 (271) | 56 (271)
6.5 | 6 | 0.67 | 1 | 44 (242) | 44 (242) | 51 (255)
7.1 | 7 | 0.50 | 2 | 46 (222) | 53 (226) | 53 (226)
7.2 | 7 | 0.50 | 2 | 44 (216) | 44 (216) | 44 (216)
7.3 | 7 | 0.50 | 2 | 53 (226) | - | -
7.4 | 7 | 0.50 | 2 | 42 (221) | 46 (222) | 46 (222)
9.1 | 9 | 0.50 | 1 | 55 (211) | 55 (211) | -
9.2 | 9 | 0.50 | 1 | 42 (221) | 53 (226) | -
9.3 | 9 | 0.50 | 1 | 46 (222) | 55 (211) | -
9.4 | 9 | 0.50 | 1 | 32 (199) | 55 (211) | -
9.5 | 9 | 0.50 | 1 | 47 (227) | 55 (211) | -

Missing data: Participant 7.3 left after Round 1 of Occasion 2 due to a schedule conflict. Table 9 Round 3 data are missing due to a processing error.
^a The first participant of each table (i.e., 1.1, 2.1, …, 9.1) is the table leader.
^b Denotes the item number in the ordered item booklet on which the bookmark was placed (see p. 112 for an explanation of bookmark placements).
^c Denotes the cut score associated with the bookmark placement. It is the RP location for the last item before the bookmark placement, converted to a scale score.


TABLE C-4C Participants’ Bookmark Placements and Associated Cut Scores for Advanced, Document Literacy, July 2004

Participant^a | Table | Response Probability | Occasion | Round 1 BK^b (SS^c) | Round 2 BK (SS) | Round 3 BK (SS)
1.1 | 1 | 0.80 | 2 | 58 (310) | 58 (310) | 58 (310)
1.2 | 1 | 0.80 | 2 | 41 (259) | 48 (276) | 48 (276)
1.3 | 1 | 0.80 | 2 | 58 (310) | 58 (310) | 58 (310)
1.4 | 1 | 0.80 | 2 | 64 (330) | 64 (330) | 64 (330)
3.1 | 3 | 0.80 | 1 | 68 (378) | 70 (386) | 70 (386)
3.2 | 3 | 0.80 | 1 | 64 (330) | 64 (330) | 64 (330)
3.3 | 3 | 0.80 | 1 | 71 (388) | 69 (380) | 69 (380)
3.4 | 3 | 0.80 | 1 | 66 (343) | 66 (343) | 66 (343)
4.1 | 4 | 0.67 | 2 | 65 (296) | 65 (296) | 69 (324)
4.2 | 4 | 0.67 | 2 | 69 (324) | 69 (324) | 69 (324)
4.3 | 4 | 0.67 | 2 | 57 (279) | 69 (324) | 69 (324)
4.4 | 4 | 0.67 | 2 | 65 (296) | 65 (296) | 65 (296)
4.5 | 4 | 0.67 | 2 | 58 (284) | 69 (324) | 69 (324)
6.1 | 6 | 0.67 | 1 | 73 (378) | 73 (378) | 73 (378)
6.2 | 6 | 0.67 | 1 | 73 (378) | 73 (378) | 73 (378)
6.3 | 6 | 0.67 | 1 | 71 (359) | 72 (363) | 72 (363)
6.4 | 6 | 0.67 | 1 | 72 (363) | 72 (363) | 72 (363)
6.5 | 6 | 0.67 | 1 | 73 (378) | 73 (378) | 73 (378)
7.1 | 7 | 0.50 | 2 | 65 (279) | 67 (286) | 67 (286)
7.2 | 7 | 0.50 | 2 | 69 (327) | 68 (305) | 68 (305)
7.3 | 7 | 0.50 | 2 | 72 (358) | - | -
7.4 | 7 | 0.50 | 2 | 67 (286) | 65 (279) | 65 (279)
9.1 | 9 | 0.50 | 1 | 70 (339) | 70 (339) | -
9.2 | 9 | 0.50 | 1 | 65 (279) | 68 (305) | -
9.3 | 9 | 0.50 | 1 | 72 (358) | 70 (339) | -
9.4 | 9 | 0.50 | 1 | 55 (211) | 68 (305) | -
9.5 | 9 | 0.50 | 1 | 65 (279) | 68 (305) | -

Missing data: Participant 7.3 left after Round 1 of Occasion 2 due to a schedule conflict. Table 9 Round 3 data are missing due to a processing error.
^a The first participant of each table (i.e., 1.1, 2.1, …, 9.1) is the table leader.
^b Denotes the item number in the ordered item booklet on which the bookmark was placed (see p. 112 for an explanation of bookmark placements).
^c Denotes the cut score associated with the bookmark placement. It is the RP location for the last item before the bookmark placement, converted to a scale score.


TABLE C-5 Summary Statistics for the Round 3 Judgments for Document Literacy by Response Probability (RP) Level, July 2004

RP Level | Basic 0.50 | Basic 0.67 | Basic 0.80 | Intermediate 0.50 | Intermediate 0.67 | Intermediate 0.80 | Advanced 0.50 | Advanced 0.67 | Advanced 0.80
Bookmark:
  Median | 8.00 | 6.00 | 6.00 | 23.50 | 20.00 | 20.00 | 36.50 | 33.00 | 32.00
  Mean | 22.25 | 13.30 | 14.25 | 52.00 | 51.10 | 42.63 | 68.00 | 70.40 | 62.13
  Std. Dev. | 1.04 | 3.65 | 3.28 | 4.44 | 3.75 | 6.16 | 1.60 | 2.63 | 7.22
Cut Score:
  Median | 190.17 | 188.96 | 213.00 | 232.80 | 254.96 | 263.50 | 304.89 | 343.38 | 330.00
  Mean | 189.49 | 192.28 | 215.25 | 230.16 | 257.30 | 264.14 | 306.68 | 345.05 | 333.13
  Std. Dev. | 2.55 | 8.54 | 8.10 | 5.10 | 10.27 | 11.68 | 21.30 | 29.70 | 36.73
  Std. Error | 0.90 | 2.70 | 2.86 | 1.80 | 3.25 | 4.13 | 7.53 | 9.39 | 12.99
N | 8 | 10 | 8 | 8 | 10 | 8 | 8 | 10 | 8


TABLE C-6A Participants’ Bookmark Placements and Associated Cut Scores for Basic, Quantitative Literacy, July 2004

Participant^a | Table | Response Probability | Occasion | Round 1 BK^b (SS^c) | Round 2 BK (SS) | Round 3 BK (SS)
2.1 | 2 | 0.80 | 1 | 14 (283) | 18 (300) | 16 (287)
2.2 | 2 | 0.80 | 1 | 18 (300) | 18 (300) | 16 (287)
2.3 | 2 | 0.80 | 1 | 13 (282) | 15 (284) | 15 (284)
2.4 | 2 | 0.80 | 1 | 16 (287) | 18 (300) | 16 (287)
2.5 | 2 | 0.80 | 1 | 17 (295) | 14 (283) | 14 (283)
3.1 | 3 | 0.80 | 2 | 8 (277) | 14 (283) | 8 (277)
3.2 | 3 | 0.80 | 2 | 14 (283) | 14 (283) | 6 (251)
3.3 | 3 | 0.80 | 2 | 6 (251) | 8 (277) | 8 (277)
3.4 | 3 | 0.80 | 2 | 8 (277) | 8 (277) | 8 (277)
5.1 | 5 | 0.67 | 1 | 5 (216) | 5 (216) | 5 (216)
5.2 | 5 | 0.67 | 1 | 6 (217) | 6 (217) | 6 (217)
5.3 | 5 | 0.67 | 1 | 5 (216) | 5 (216) | 5 (216)
5.4 | 5 | 0.67 | 1 | 1 (undefined) | 5 (216) | 5 (216)
5.5 | 5 | 0.67 | 1 | 4 (211) | 4 (211) | 4 (211)
6.1 | 6 | 0.67 | 2 | 16 (272) | 16 (272) | 16 (272)
6.2 | 6 | 0.67 | 2 | 15 (271) | 15 (271) | 15 (271)
6.3 | 6 | 0.67 | 2 | 15 (271) | 15 (271) | 15 (271)
6.4 | 6 | 0.67 | 2 | 15 (271) | 15 (271) | 15 (271)
6.5 | 6 | 0.67 | 2 | 15 (271) | 15 (271) | 15 (271)
8.1 | 8 | 0.50 | 1 | 7 (222) | 11 (235) | 11 (235)
8.2 | 8 | 0.50 | 1 | 11 (235) | 11 (235) | 8 (225)
8.3 | 8 | 0.50 | 1 | 15 (258) | 11 (235) | 11 (235)
8.4 | 8 | 0.50 | 1 | 7 (222) | 8 (225) | 8 (225)
8.5 | 8 | 0.50 | 1 | 14 (250) | 11 (235) | 11 (235)
9.1 | 9 | 0.50 | 2 | 11 (235) | 15 (258) | 15 (258)
9.2 | 9 | 0.50 | 2 | 11 (235) | - | -
9.3 | 9 | 0.50 | 2 | 15 (258) | 15 (258) | 15 (258)
9.4 | 9 | 0.50 | 2 | 6 (185) | 14 (250) | 14 (250)
9.5 | 9 | 0.50 | 2 | 14 (250) | 14 (250) | 14 (250)

Missing data: Scale score cutpoint is undefined for bookmark placed on the first item (Participant 5.4, Round 1); Participant 9.2 left after Round 1 of Occasion 2 due to a schedule conflict.
^a The first participant of each table (i.e., 1.1, 2.1, …, 9.1) is the table leader.
^b Denotes the item number in the ordered item booklet on which the bookmark was placed (see p. 112 for an explanation of bookmark placements).
^c Denotes the cut score associated with the bookmark placement. It is the RP location for the last item before the bookmark placement, converted to a scale score.


TABLE C-6B Participants’ Bookmark Placements and Associated Cut Scores for Intermediate, Quantitative Literacy, July 2004

Participant^a | Table | Response Probability | Occasion | Round 1 BK^b (SS^c) | Round 2 BK (SS) | Round 3 BK (SS)
2.1 | 2 | 0.80 | 1 | 30 (349) | 36 (369) | 30 (349)
2.2 | 2 | 0.80 | 1 | 36 (369) | 36 (369) | 32 (355)
2.3 | 2 | 0.80 | 1 | 27 (342) | 30 (349) | 30 (349)
2.4 | 2 | 0.80 | 1 | 21 (322) | 36 (369) | 30 (349)
2.5 | 2 | 0.80 | 1 | 32 (355) | 27 (342) | 27 (342)
3.1 | 3 | 0.80 | 2 | 22 (326) | 29 (346) | 22 (326)
3.2 | 3 | 0.80 | 2 | 26 (334) | 29 (346) | 20 (309)
3.3 | 3 | 0.80 | 2 | 30 (349) | 30 (349) | 30 (349)
3.4 | 3 | 0.80 | 2 | 16 (287) | 22 (326) | 16 (287)
5.1 | 5 | 0.67 | 1 | 25 (307) | 18 (276) | 18 (276)
5.2 | 5 | 0.67 | 1 | 14 (271) | 25 (307) | 25 (307)
5.3 | 5 | 0.67 | 1 | 14 (271) | 17 (272) | 17 (272)
5.4 | 5 | 0.67 | 1 | 14 (271) | 18 (276) | 18 (276)
5.5 | 5 | 0.67 | 1 | 18 (276) | 18 (276) | 18 (276)
6.1 | 6 | 0.67 | 2 | 25 (307) | 25 (307) | 25 (307)
6.2 | 6 | 0.67 | 2 | 37 (347) | 27 (311) | 27 (311)
6.3 | 6 | 0.67 | 2 | 25 (307) | 25 (307) | 25 (307)
6.4 | 6 | 0.67 | 2 | 27 (311) | 26 (309) | 27 (311)
6.5 | 6 | 0.67 | 2 | 34 (338) | 26 (309) | 26 (309)
8.1 | 8 | 0.50 | 1 | 21 (263) | 30 (299) | 30 (299)
8.2 | 8 | 0.50 | 1 | 26 (284) | 30 (299) | 26 (284)
8.3 | 8 | 0.50 | 1 | 34 (310) | 30 (299) | 30 (299)
8.4 | 8 | 0.50 | 1 | 27 (288) | 30 (299) | 29 (298)
8.5 | 8 | 0.50 | 1 | 29 (298) | 29 (298) | 29 (298)
9.1 | 9 | 0.50 | 2 | 28 (289) | 25 (282) | 25 (282)
9.2 | 9 | 0.50 | 2 | 25 (282) | - | -
9.3 | 9 | 0.50 | 2 | 34 (310) | 25 (282) | 25 (282)
9.4 | 9 | 0.50 | 2 | 25 (282) | 25 (282) | 25 (282)
9.5 | 9 | 0.50 | 2 | 25 (282) | 25 (282) | 25 (282)

Missing data: Participant 9.2 left after Round 1 of Occasion 2 due to a schedule conflict.
^a The first participant of each table (i.e., 1.1, 2.1, …, 9.1) is the table leader.
^b Denotes the item number in the ordered item booklet on which the bookmark was placed (see p. 112 for an explanation of bookmark placements).
^c Denotes the cut score associated with the bookmark placement. It is the RP location for the last item before the bookmark placement, converted to a scale score.


TABLE C-6C Participants’ Bookmark Placements and Associated Cut Scores for Advanced, Quantitative Literacy, July 2004

Participant^a | Table | Response Probability | Occasion | Round 1 BK^b (SS^c) | Round 2 BK (SS) | Round 3 BK (SS)
2.1 | 2 | 0.80 | 1 | 39 (389) | 39 (389) | 39 (389)
2.2 | 2 | 0.80 | 1 | 41 (421) | 41 (421) | 41 (421)
2.3 | 2 | 0.80 | 1 | 43 (436) | 41 (421) | 41 (421)
2.4 | 2 | 0.80 | 1 | 39 (389) | 39 (389) | 39 (389)
2.5 | 2 | 0.80 | 1 | 42 (433) | 38 (382) | 38 (382)
3.1 | 3 | 0.80 | 2 | 39 (389) | 38 (382) | 35 (366)
3.2 | 3 | 0.80 | 2 | 35 (366) | 35 (366) | 30 (349)
3.3 | 3 | 0.80 | 2 | 42 (433) | 39 (389) | 39 (389)
3.4 | 3 | 0.80 | 2 | 31 (351) | 39 (389) | 31 (351)
5.1 | 5 | 0.67 | 1 | 41 (387) | 37 (347) | 37 (347)
5.2 | 5 | 0.67 | 1 | 37 (347) | 37 (347) | 37 (347)
5.3 | 5 | 0.67 | 1 | 25 (307) | 25 (307) | 25 (307)
5.4 | 5 | 0.67 | 1 | 30 (324) | 37 (347) | 37 (347)
5.5 | 5 | 0.67 | 1 | 32 (329) | 32 (329) | 32 (329)
6.1 | 6 | 0.67 | 2 | 43 (410) | 43 (410) | 43 (410)
6.2 | 6 | 0.67 | 2 | 43 (410) | 43 (410) | 43 (410)
6.3 | 6 | 0.67 | 2 | 39 (356) | 39 (356) | 39 (356)
6.4 | 6 | 0.67 | 2 | 43 (410) | 43 (410) | 43 (410)
6.5 | 6 | 0.67 | 2 | 43 (410) | 43 (410) | 43 (410)
8.1 | 8 | 0.50 | 1 | 39 (323) | 40 (351) | 40 (351)
8.2 | 8 | 0.50 | 1 | 40 (351) | 40 (351) | 37 (321)
8.3 | 8 | 0.50 | 1 | 43 (384) | 43 (384) | 43 (384)
8.4 | 8 | 0.50 | 1 | 35 (316) | 41 (359) | 37 (321)
8.5 | 8 | 0.50 | 1 | 41 (359) | 41 (359) | 41 (359)
9.1 | 9 | 0.50 | 2 | 39 (323) | 39 (323) | 39 (323)
9.2 | 9 | 0.50 | 2 | 37 (321) | - | -
9.3 | 9 | 0.50 | 2 | 42 (360) | 40 (351) | 40 (351)
9.4 | 9 | 0.50 | 2 | 38 (322) | 39 (323) | 39 (323)
9.5 | 9 | 0.50 | 2 | 39 (323) | 39 (323) | 39 (323)

Missing data: Participant 9.2 left after Round 1 of Occasion 2 due to a schedule conflict.
^a The first participant of each table (i.e., 1.1, 2.1, …, 9.1) is the table leader.
^b Denotes the item number in the ordered item booklet on which the bookmark was placed (see p. 112 for an explanation of bookmark placements).
^c Denotes the cut score associated with the bookmark placement. It is the RP location for the last item before the bookmark placement, converted to a scale score.


TABLE C-7 Summary Statistics for the Round 3 Judgments for Quantitative Literacy by Response Probability (RP) Level, July 2004

RP level | Basic 0.50 | Basic 0.67 | Basic 0.80 | Intermediate 0.50 | Intermediate 0.67 | Intermediate 0.80 | Advanced 0.50 | Advanced 0.67 | Advanced 0.80
Bookmark:
  Median | 11.00 | 10.50 | 14.00 | 26.00 | 25.00 | 30.00 | 39.00 | 38.00 | 39.00
  Mean | 11.89 | 10.10 | 11.89 | 27.11 | 22.60 | 26.33 | 39.44 | 37.90 | 37.00
  Std. Dev. | 2.76 | 5.40 | 4.26 | 2.32 | 4.25 | 5.61 | 1.88 | 5.86 | 4.09
Cut Score:
  Median | 235.05 | 244.34 | 283.36 | 283.52 | 307.18 | 348.64 | 323.45 | 351.40 | 389.15
  Mean | 241.14 | 243.59 | 279.13 | 289.43 | 295.22 | 334.9 | 339.92 | 367.46 | 384.23
  Std. Dev. | 12.85 | 29.51 | 11.45 | 8.66 | 17.64 | 23.10 | 22.51 | 39.28 | 26.33
  Std. Error | 4.28 | 9.33 | 3.82 | 2.89 | 5.58 | 7.70 | 7.50 | 12.42 | 8.78
N | 9 | 10 | 9 | 9 | 10 | 9 | 9 | 10 | 9


TABLE C-8 Summary of Round 1 Bookmark Placements and Cut Scores for Prose Literacy by Response Probability (RP) Level and Occasion, July 2004

RP Level | Basic 0.80 | Basic 0.67 | Basic 0.50 | Intermediate 0.80 | Intermediate 0.67 | Intermediate 0.50 | Advanced 0.80 | Advanced 0.67 | Advanced 0.50
Occasion 1
  Median bookmark placement | 9.5 | 7.0 | 10.0 | 21.5 | 23.0 | 23.5 | 33.5 | 36.0 | 38.0
  Median cut score | 256.0 | 225.0 | 217.0 | 302.5 | 297.0 | 277.0 | 367.0 | 359.0 | 374.0
  N | 4 | 5 | 4 | 4 | 5 | 4 | 4 | 5 | 4
Occasions 1 and 2
  Median bookmark placement | 8.0 | 7.0 | 6.0 | 20.0 | 21.5 | 23.0 | 32.0 | 35.0 | 36.0
  Median cut score | 250.0 | 225.0 | 194.0 | 289.0 | 283.0 | 276.0 | 362.0 | 351.0 | 333.0
  N | 9 | 10 | 9 | 9 | 10 | 9 | 9 | 10 | 9


TABLE C-9 Summary of Round 1 Bookmark Placements and Cut Scores for Document Literacy by Response Probability (RP) Level and Occasion, July 2004

RP Level | Basic 0.80 | Basic 0.67 | Basic 0.50 | Intermediate 0.80 | Intermediate 0.67 | Intermediate 0.50 | Advanced 0.80 | Advanced 0.67 | Advanced 0.50
Occasion 1
  Median bookmark placement | 15.0 | 15.0 | 20.0 | 48.0 | 48.0 | 46.0 | 67.0 | 73.0 | 65.0
  Median cut score | 217.5 | 193.0 | 182.0 | 276.0 | 252.0 | 221.0 | 360.5 | 378.0 | 279.0
  N | 4 | 5 | 5 | 4 | 5 | 5 | 4 | 5 | 5
Occasions 1 and 2
  Median bookmark placement | 12.0 | 12.0 | 18.0 | 42.5 | 45.5 | 46.0 | 64.0 | 70.0 | 67.0
  Median cut score | 211.0 | 188.5 | 176.0 | 260.5 | 244.5 | 221.0 | 330.0 | 341.5 | 286.0
  N | 8 | 10 | 9 | 8 | 10 | 9 | 8 | 10 | 9


TABLE C-10 Summary of Round 1 Bookmark Placements and Cut Scores for Quantitative Literacy by Response Probability (RP) Level and Occasion, July 2004

RP Level | Basic 0.80 | Basic 0.67 | Basic 0.50 | Intermediate 0.80 | Intermediate 0.67 | Intermediate 0.50 | Advanced 0.80 | Advanced 0.67 | Advanced 0.50
Occasion 1
  Median bookmark placement | 19.0 | 5.0 | 11.0 | 30.0 | 14.0 | 27.0 | 41.0 | 32.0 | 40.0
  Median cut score | 287.0 | 216.0 | 235.0 | 349.0 | 271.0 | 288.0 | 421.0 | 329.0 | 351.0
  N | 5 | 4 | 5 | 5 | 5 | 5 | 5 | 5 | 5
Occasions 1 and 2
  Median bookmark placement | 14.0 | 10.5 | 11.0 | 27.0 | 25.0 | 26.5 | 39.0 | 40.0 | 39.0
  Median cut score | 283.0 | 271.0 | 235.0 | 342.0 | 307.0 | 286.0 | 389.0 | 371.5 | 323.0
  N | 9 | 9 | 10 | 9 | 10 | 10 | 9 | 10 | 10


TABLE C-11 Regression Results for Bookmark Placements for Prose Literacy, July 2004

Predictor | Occasion 1 (13 panelists) | Occasions 1 and 2 (28 panelists)
Basic
  RP50 | 1.65^a (2.06)^b | 0.54 (1.25)
  RP80 | 2.15 (2.06) | 1.21 (1.25)
  Constant | 7.60 (1.37) | 6.90 (0.86)
  R2 | 0.11 | 0.04
Intermediate
  RP50 | 1.80 (1.86) | 2.51 (2.18)
  RP80 | -1.20 (1.86) | -0.49 (2.18)
  Constant | 22.2 (1.24) | 20.60 (1.50)
  R2 | 0.19 | 0.07
Advanced
  RP50 | 2.40 (2.64) | 1.54 (2.41)
  RP80 | -2.6 (2.64) | -3.34 (2.41)
  Constant | 35.6 (1.76) | 33.9 (1.66)
  R2 | 0.24 | 0.14

^a Regression coefficient.
^b Standard error.
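Tables C-11 through C-16 report, for each performance level, a constant plus RP50 and RP80 coefficients. The report does not restate the model here, but the pattern of the estimates is consistent with an ordinary least squares regression of the Round 1 judgments on indicator variables for the RP50 and RP80 conditions, with RP67 as the reference category: the constant then estimates the mean RP67 judgment, and each coefficient estimates the difference of the RP50 or RP80 mean from it. The sketch below illustrates that reading using the Occasion 1 Basic prose bookmark placements from Table C-2A; the dummy coding is an assumption, not a description of the committee's actual analysis code.

```python
# One plausible reconstruction of the regressions in Tables C-11 to C-16:
# OLS of Round 1 judgments on RP50/RP80 indicators (RP67 = reference group).
# The coding is assumed; it is checked here only against the Occasion 1 Basic
# row of Table C-11 using Round 1 bookmark placements from Table C-2A.
import numpy as np

round1_bookmarks = {
    0.80: [11, 14, 6, 8],        # Table 1 panelists (Occasion 1)
    0.67: [5, 5, 7, 10, 11],     # Table 4 panelists (Occasion 1)
    0.50: [5, 10, 10, 12],       # Table 7 panelists (Occasion 1)
}

y, X = [], []
for rp, placements in round1_bookmarks.items():
    for bk in placements:
        y.append(float(bk))
        X.append([1.0, float(rp == 0.50), float(rp == 0.80)])  # constant, RP50, RP80

coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
labels = ["Constant (RP67 mean)", "RP50", "RP80"]
print({label: round(float(c), 2) for label, c in zip(labels, coef)})
# -> {'Constant (RP67 mean)': 7.6, 'RP50': 1.65, 'RP80': 2.15}, matching the
#    Occasion 1 Basic entries of Table C-11 (standard errors are not computed here).
```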


TABLE C-12 Regression Results for Bookmark Placements for Document Literacy, July 2004

Predictor | Occasion 1 (13 panelists) | Occasions 1 and 2 (27 panelists)
Basic
  RP50 | 1.60^a (3.40)^b | 4.27* (2.13)
  RP80 | -0.45 (3.61) | 0.60 (2.20)
  Constant | 16.20 (2.41) | 13.40 (1.46)
  R2 | 0.03 | 0.16
Intermediate
  RP50 | -4.40 (4.58) | -0.08 (3.59)
  RP80 | -2.05 (4.86) | -3.30 (3.70)
  Constant | 48.8 (3.23) | 45.3 (2.47)
  R2 | 0.08 | 0.04
Advanced
  RP50 | -7.0* (2.72) | -0.93 (3.20)
  RP80 | -5.15 (2.89) | -6.35* (3.31)
  Constant | 72.4 (1.92) | 67.6 (2.20)
  R2 | 0.39 | 0.15

^a Regression coefficient.
^b Standard error.
*p < .10.


TABLE C-13 Regression Results for Bookmark Placements for Quantitative Literacy, July 2004

Predictor | Occasion 1 (14 panelists) | Occasions 1 and 2 (29 panelists)
Basic
  RP50 | 6.60^a** (2.10)^b | 1.40 (1.72)
  RP80 | 11.40** (2.16) | 2.97 (1.72)
  Constant | 4.20 (1.49) | 9.70 (1.22)
  R2 | 0.79 | 0.07
Intermediate
  RP50 | 10.40* (2.87) | 4.1 (3.20)
  RP80 | 12.20** (2.95) | 3.37 (3.20)
  Constant | 17.00 (2.03) | 23.3 (2.27)
  R2 | 0.58 | 0.08
Advanced
  RP50 | 6.60* (2.05) | 1.70 (2.59)
  RP80 | 7.80* (2.11) | 1.40 (2.59)
  Constant | 33.00 (1.45) | 37.6 (1.83)
  R2 | 0.46 | 0.03

^a Regression coefficient.
^b Standard error.
*p < .10.
**p < .01.


TABLE C-14 Regression Results for Cut Scores for Prose Literacy, July 2004

Predictor | Occasion 1 (13 panelists) | Occasions 1 and 2 (28 panelists)
Basic
  RP50 | -11.65^a (16.37)^b | -18.99* (8.91)
  RP80 | 33.10* (14.62) | 25.70* (9.46)
  Constant | 220.40 (10.22) | 217.10 (5.85)
  R2 | 0.45 | 0.47
Intermediate
  RP50 | -12.65 (11.39) | -11.03 (11.54)
  RP80 | 14.60 (10.80) | 14.52 (10.25)
  Constant | 287.40 (7.27) | 281.70 (8.99)
  R2 | 0.35 | 0.18
Advanced
  RP50 | -5.95 (21.21) | -19.00 (19.39)
  RP80 | 2.80 (28.12) | -1.78 (20.62)
  Constant | 371.20 (18.24) | 361.00 (15.51)
  R2 | 0.01 | 0.04

^a Regression coefficient.
^b Standard error.
*p < .10.


TABLE C-15 Regression Results for Cut Scores for Document Literacy, July 2004

Predictor | Occasion 1 (14 panelists) | Occasions 1 and 2 (28 panelists)
Basic
  RP50 | -21.60^a* (7.25)^b | -16.80** (4.67)
  RP80 | 22.15** (7.28) | 23.20** (4.57)
  Constant | 197.60 (5.04) | 191.80 (3.10)
  R2 | 0.76 | 0.74
Intermediate
  RP50 | -40.40** (10.69) | -30.17** (7.17)
  RP80 | 15.85 (10.12) | 14.38* (7.92)
  Constant | 256.40 (9.44) | 248.50 (6.56)
  R2 | 0.76 | 0.61
Advanced
  RP50 | -78.00* (26.52) | -31.72 (20.81)
  RP80 | -11.45 (14.17) | -2.50 (19.56)
  Constant | 371.20 (4.25) | 333.50 (13.33)
  R2 | 0.52 | 0.11

^a Regression coefficient.
^b Standard error.
*p < .10.
**p < .01.


TABLE C-16 Regression Results for Cut Scores for Quantitative Literacy, July 2004

Predictor | Occasion 1 (14 panelists) | Occasions 1 and 2 (29 panelists)
Basic
  RP50 | 22.20^a** (7.37)^b | -8.20 (11.67)
  RP80 | 74.20** (3.66) | 38.47** (10.42)
  Constant | 215.20 (1.07) | 243.20 (9.36)
  R2 | 0.92 | 0.46
Intermediate
  RP50 | 9.40 (10.51) | -11.80 (9.92)
  RP80 | 68.20** (10.45) | 36.40* (11.85)
  Constant | 279.20 (7.02) | 300.60 (8.85)
  R2 | 0.80 | 0.47
Advanced
  RP50 | 7.80 (18.41) | -30.80* (14.97)
  RP80 | 74.80** (17.12) | 31.78* (16.64)
  Constant | 338.80 (13.63) | 369.00 (13.01)
  R2 | 0.65 | 0.40

^a Regression coefficient.
^b Standard error.
*p < .10.
**p < .01.


FIGURE C-1 Prose literacy cut scores by round for participants at each table, July 2004. Symbols indicate basic (Δ), intermediate (*), and advanced (∇) cut-score judgments. Round 3 medians are depicted by standalone symbols.


FIGURE C-2 Document literacy cut scores by round for participants at each table, July 2004. Symbols indicate basic (Δ), intermediate (*), and advanced (∇) cut-score judgments. Round 3 medians are depicted by standalone symbols.


FIGURE C-3 Quantitative literacy cut scores by round for participants at each table, July 2004. Symbols indicate basic (Δ), intermediate (*), and advanced (∇) cut-score judgments. Round 3 medians are depicted by standalone symbols.

Next Chapter: Appendix D September 2004 Bookmark Standard-Setting Session with the 2003 NAAL Data