To investigate the most-needed data content, the panel interviewed experts and stakeholders on the following topics:
In those interviews and panel meetings, the panel probed further into the following:
To determine which topics should be given the highest priority, the panel specified four minimum criteria, requiring that each topic meet at least one of the four. These criteria are:
Based on these criteria, the panel identified 112 topics of interest. These topics were then prioritized in two ways: first, and most critically, by the importance or value of the topic and, second, by the level of effort required (i.e., whether NCES could reasonably make progress on the topic).
The importance of each topic was evaluated based on the following dimensions:
NCES’s ability to make progress on each topic was evaluated based on the following:
Each of the 112 topics was then given a yes/no determination on every criterion. Because criteria 2 and 4 each divide into two separate dimensions, and criterion 10 divides into three, a topic could meet up to 10 criteria on importance and up to 9 on feasibility.
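To make the scoring concrete, the following minimal sketch (in Python) tallies how many criteria each topic meets. The topic names and yes/no determinations are entirely hypothetical, not the panel's actual ratings, and the optional weights parameter anticipates the unequal-weights possibility discussed below.

```python
# Minimal sketch of the criterion-counting scheme; all data hypothetical.
# Each topic gets yes/no (1/0) determinations on 10 importance dimensions
# and 9 feasibility dimensions; the count of criteria met is a rough score.

topics = {
    # topic name: (importance flags, feasibility flags) -- illustrative only
    "hypothetical topic A": ([1, 1, 0, 1, 1, 0, 1, 0, 1, 1],
                             [1, 0, 1, 1, 0, 1, 0, 1, 1]),
    "hypothetical topic B": ([1, 0, 0, 1, 0, 0, 1, 0, 0, 1],
                             [1, 1, 1, 0, 1, 1, 1, 0, 1]),
}

def score(flags, weights=None):
    """Count criteria met; pass weights to treat criteria as unequal in value."""
    weights = weights or [1] * len(flags)
    return sum(f * w for f, w in zip(flags, weights))

for name, (importance, feasibility) in topics.items():
    print(f"{name}: importance {score(importance)}/10, "
          f"feasibility {score(feasibility)}/9")
```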
These criteria and the process used are described here both to document how the panel evaluated potential topics for inclusion in NCES's data collections and as a possible model for NCES as the Center pursues its strategic planning. We do not assert that the 19 criteria are equal in value; a variety of weights might be attached to each criterion. Some criteria might be considered so important that satisfying that one criterion would be sufficient justification for including a topic, while others might carry less weight and be insufficient on their own.

We suggest the following strategies. First, if a topic fails to satisfy any of the criteria, its importance should be reevaluated. Second, the number of criteria a topic satisfies can be interpreted as a rough measure of its broad importance or feasibility. Third, the criteria can be used as tools for identifying gaps in current or planned data collections (e.g., could a data collection be modified to address NCES's research priorities more thoroughly?). There are, of course, reasons to limit any data collection (e.g., cost constraints and concerns about response rates), and we are not suggesting that every data collection become a massive effort; a perennial risk in developing a data collection is that everyone has a topic to add. Still, there may be data collections whose utility could be greatly increased with only minor changes. Fourth, as a counterbalance to the third point, some topics may be covered so thoroughly elsewhere that adding data items on them offers little advantage.

Finally, we emphasize that, in addition to data collection, NCES can provide leadership on these topics by creating standards and tools that others may use, as it has done with the Classification of Instructional Programs and the Department of Education's School Climate Surveys.