This chapter reviews the seven Marine Recreational Information Program (MRIP) data quality standards one by one. Each standard is first presented verbatim (see Boxes 3-1–3-7), followed by a discussion.
The panel accepts this data standard as an appropriate starting point for evaluating any proposed survey.
The panel notes the need for greater clarity in the first bullet, which might be read as saying that the survey's only goal is to meet key estimate and precision requirements. Though the production of reliable statistical estimates is certainly an important goal, a well-designed survey plan will also discuss what data are to be collected and how the data are intended to be used. A well-designed plan would include details on which species are the priority to monitor and what fishing seasons apply; MRIP might also consider adding more detail than appears in Standard 7 to promote greater consistency in the measures used across states (e.g., how to measure the finfish catch, perhaps with different instructions for different species).
Following the American Association for Public Opinion Research (AAPOR) Best Practices for Survey Research cited above, MRIP's standards should emphasize that survey practitioners consider the specific design of the study during the planning phase itself. In well-designed surveys, the plan would specify the mode of data collection (i.e., web, telephone, in-person, mail, or text, alone or in combination). Survey practitioners who wish to include non-probability sources similarly would define their sources, including opt-in or volunteer panels, angler mobile apps, citizen science, and volunteered information. Another key part of the survey design process is to plan for appropriate weighting and post-survey adjustments regardless of whether probability samples, non-probability samples, or a blend of both are used. These are all key elements of the survey design process; model-based approaches are discussed separately in Chapter 4.

A written survey plan should, at minimum, describe:

For an example, see Access Point Angler Intercept Survey: Survey Plan (PDF, 1 page).

All information collections administered or sponsored by a federal agency must comply with the requirements of the Paperwork Reduction Act of 1995 (PRA). Survey administrators are responsible for developing supporting materials for OMB. If the survey administrator is not a federal agency, the federal agency sponsor is responsible for submitting these materials.
Data precision also depends on the tools used for measurement. Sometimes survey personnel are present at landing sites to perform measurements themselves; at other times the surveys depend on anglers' self-reports, which is especially true for discards. Tools such as phone apps can help anglers provide more precise information.
The panel applauds the parallel consideration of survey objectives and intended users. Jointly considering the objectives of the survey and how intended users will apply its outputs will promote the development of surveys with the broadest impact and acceptance.
Conclusion 3-1: The current Marine Recreational Information Program standards are highly focused on survey research as the appropriate tool for collecting statistics, without considering alternative methodologies including, but not limited to, catch and landing records, electronic monitoring and vessel monitoring systems, and alternative modeling approaches.
Conclusion 3-2: The use of technology such as fishing measurement phone apps, which can be used to estimate biological characteristics of fish both caught and released and caught and retained, might improve both timeliness and accuracy of Marine Recreational Information Program estimates.
Recognizing that survey design is a critical and central element of data standards, the panel finds that this standard is well written in terms of what it covers but also finds its coverage to be overly narrow.
The panel concludes that the data standard is narrow in its strong, direct, and limited focus on surveys as the principal or sole method of meeting the need that justifies the sampling. The existing standard is primarily focused on probability sampling; although it does allow for non-probability sampling, that newly developing area may deserve more attention, and the panel suggests the standard be broadened to address both probability and non-probability sampling explicitly. Researchers may wish to combine or blend non-probability with probability data to increase the overall sample size or because of the need for convenience-based designs. When a non-probability sample is used and weights are calculated, an important step is to calibrate the weights to population totals from a reliable probability sample.1
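As a concrete illustration of this calibration step, the following minimal sketch rakes base weights from a hypothetical non-probability angler sample to state and fishing-mode totals assumed to come from a reliable probability survey. All variable names, categories, and control totals are invented for illustration; a production system would use dedicated survey software and richer sets of control margins.

```python
import numpy as np
import pandas as pd

# Hypothetical non-probability sample of angler trips; column names and
# values are illustrative, not drawn from any actual MRIP data set.
sample = pd.DataFrame({
    "state":  ["FL", "FL", "NC", "NC", "NC", "FL"],
    "mode":   ["shore", "boat", "shore", "boat", "shore", "shore"],
    "weight": np.ones(6),  # start from equal base weights
})

# Population totals assumed known from a reliable probability survey.
state_totals = {"FL": 5000.0, "NC": 3000.0}
mode_totals = {"shore": 4500.0, "boat": 3500.0}

# Raking (iterative proportional fitting): alternately rescale the weights
# so the weighted margins match each set of control totals.
for _ in range(50):
    for var, totals in (("state", state_totals), ("mode", mode_totals)):
        current = sample.groupby(var)["weight"].sum()
        sample["weight"] *= sample[var].map(
            {k: totals[k] / current[k] for k in totals}
        )

print(sample.groupby("state")["weight"].sum())  # matches state_totals
print(sample.groupby("mode")["weight"].sum())   # matches mode_totals
```

Raking is only one option; generalized regression (GREG) calibration or the propensity-based approach sketched below can serve the same purpose.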
One key aspect of statistical design is the conceptual development and testing of whatever data are to be collected and of how those data can be collected in a way that will provide the most accurate and useful responses. Standard 2 implicitly addresses such issues in section 2.4 (evaluation) in the discussion of non-sampling error, but the objective is focused more on how to deal with or measure non-sampling error than on how to prevent it. There should be greater emphasis in the survey design phase on how to prevent or minimize non-sampling error, using such means as pretests and cognitive interviews. Standard 1.2, which requires compliance with the Paperwork Reduction Act, will help in getting survey designers to consider non-sampling error as part of the design process, but it would be good to discuss this more explicitly.
___________________
1 Bayesian modeling is discussed in greater detail in Chapter 4. Additional sources on non-probability sampling include Liu et al. (2021, 2025), Salvatore et al. (2024), Wisniowski et al. (2020), and Yang et al. (2020).
A technical document describing the survey objective(s) and basic data collection and estimation designs must be developed, reviewed, and certified (see Standard 5.1). The document must include the components described in Standards 2.1–2.4. For an example, see Survey Design and Statistical Methods for Estimation of Recreational Fisheries Catch and Effort.
A sampling plan must describe the survey's target population, sample frame (including sample frame development and maintenance), sample unit, stratification, and methods of sample selection (e.g., simple random sampling, probability proportional to size sampling, justification for non-probability designs).
Descriptions of data collection procedures must include the anticipated frequency and timing of data collection, primary data collection mode(s), survey protocols, data elements, and survey instruments.
2.3.1: Sample Weights: A weighting plan must identify stages of weighting (e.g., base weights, nonresponse adjusted weights, calibrated/post-stratified weights) and include calculations and/or descriptions for how final weights are derived.
2.3.2: Point and Variance Estimate Calculation: Descriptions of estimation designs must include point and variance estimates. Estimates must be generated using a method that is statistically consistent with the survey's sample design (e.g., unequal probabilities of selection, stratification, clustering, and the effects of nonresponse, post-stratification, and raking).
An evaluation plan must identify potential sources of non-sampling error, including nonresponse, undercoverage, and measurement. To the extent possible, this plan should evaluate potential impacts of non-sampling error on survey estimates and describe plans/results from bias analyses that attempt to mitigate and/or measure non-sampling error (e.g., nonresponse bias adjustments, post-stratification, calibration weighting).
Researchers should also be aware of, and test for, the potential for bias, particularly if a convenience-based design systematically excludes or underrepresents some types of people. For example, bias may occur if anglers who are successful in their catches are more likely to respond than anglers who are unsuccessful. Re-weighting to national totals does not necessarily correct for such bias. Calibration is an active area of research among survey practitioners, with methods including LASSO (least absolute shrinkage and selection operator), propensity weighting, and small-area modeling described in Chen et al. (2018), Ganesh et al. (2017), Golini and Righi (2024), Valliant (2020), and Yang et al. (2020).
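To make the propensity-weighting idea concrete, the sketch below fits a logistic model for membership in a hypothetical opt-in sample relative to a reference probability sample and converts the fitted propensities into inverse-odds pseudo-weights, one common formulation in the literature cited above. All data are simulated and the covariates are illustrative only; in practice the pseudo-weights would be further calibrated to known totals.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical combined file: a reference probability sample (source=0) and
# a non-probability opt-in sample (source=1); covariates are illustrative.
rng = np.random.default_rng(7)
n_prob, n_np = 400, 600
df = pd.DataFrame({
    "avidity": np.concatenate(
        [rng.normal(2, 1, n_prob), rng.normal(3, 1, n_np)]
    ),
    "boat_owner": np.concatenate(
        [rng.binomial(1, 0.3, n_prob), rng.binomial(1, 0.5, n_np)]
    ),
    "source": np.repeat([0, 1], [n_prob, n_np]),
})

# Model the propensity of appearing in the opt-in source, given covariates
# observed in both samples.
X, y = df[["avidity", "boat_owner"]], df["source"]
model = LogisticRegression().fit(X, y)
p = model.predict_proba(df.loc[df.source == 1, X.columns])[:, 1]

# Inverse-odds pseudo-weights for the opt-in cases; these would typically be
# calibrated afterward to probability-sample totals (as in the raking sketch).
pseudo_weights = (1 - p) / p
print(pseudo_weights[:5].round(2))
```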
Conclusion 3-3: Standard 2 has a relatively narrow focus, concentrating on survey research as the methodological tool and primarily on probability sampling within survey research. Potential value may be gained by considering non-probability sampling.
The panel finds this data standard to be clearly written. The requirement in Standard 3.1 to include both edited and unedited values in survey data sets goes beyond industry-standard approaches for constructing public-use data sets that contain edited data. While documenting the methods and procedures used to edit the data is considered a best practice, it is not common practice to provide both the edited data and the original raw data when the data have been edited. Including flag variables to indicate which cases were subjected to specific types of edits, such as top-coding and imputation, is generally viewed as a good practice. However, including the unedited data may be problematic in that doing so might directly conflict with standards on confidentiality. That is, if enough data elements are available, then in combination they may make it possible to identify a respondent; data edits (particularly rounding and the use of categories) are sometimes applied as a measure to mitigate disclosure risk.
Survey documentation must describe data processing procedures, including methods to review and edit data to mitigate errors, as well as methods to compensate for item nonresponse. Survey data must identify actions taken during editing, and include both edited and unedited values.
Survey documentation must include a quality assurance plan for each phase of the survey process, to support performance monitoring and assessment. Quality assurance may include the training and supervision of data collectors, as well as the independent validation of interviews, where possible.
Conclusion 3-4: The current documentation requirements may create risks for confidentiality and may present burdens on data providers that are contrary to standard practice.
The panel encourages MRIP staff to consider a total survey error (TSE) approach. The TSE framework is a concept in the survey research field that is useful when considering errors and their sources (Biemer, 2010). Errors are defined as being due to sampling error or non-sampling error, with the latter category further broken down into four types: coverage, nonresponse, measurement, and processing error (Biemer, 2010). As an example, problems with bias in who provides data on fishing discards may be more important than the sampling error associated with these statistics.
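In compact form, the TSE framework evaluates an estimator by its mean squared error rather than by sampling variance alone. A standard decomposition, consistent with Biemer (2010), is:

\[
\operatorname{MSE}(\hat{\theta}) = \operatorname{Var}(\hat{\theta}) + \operatorname{Bias}(\hat{\theta})^{2},
\]

where the variance term is driven largely by sampling error and the squared-bias term accumulates systematic contributions from the non-sampling sources (coverage, nonresponse, measurement, and processing). Under this criterion, a discard estimate with a small PSE can still have a large total error if, for example, respondents systematically underreport released fish.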
For research practitioners seeking to follow Standards 2.4 and 3.1, it may be helpful to describe sampling and non-sampling errors and their sources following a TSE framework. Specifically, item 2.4 on "Evaluation" within Standard 2 addresses non-sampling error at the design and evaluation stage, and item 3.1 on "Processing, Editing, and Quality Control" within Standard 3 concerns non-sampling error arising during data processing. Recent literature on this topic that would aid in the adoption of a TSE framework includes Biemer et al. (2017) and Amaya et al. (2020). In addition, Biemer and Lyberg (2003) provides more detail on quality control and evaluation across the survey lifecycle, exploring questionnaire design, interviewer effects, mode effects, and data processing while considering cost and timeliness.
Conclusion 3-5: The standards do not give sufficient consideration to reporting error and other non-sampling error, particularly with regard to discards.
MRIP was designed to provide the nation with estimates of the magnitude of harvests by recreational anglers based on the technologies available at the time. Newer technologies have the potential to affect both the collection and the analysis of data. Quality assurance and quality control constitute one area where artificial intelligence (AI) is likely to help reduce the burden on researchers going forward. For example, machine learning, natural language processing, and deep learning have been applied to such tasks as data cleaning, de-duplication, data transformation, and data processing. Such approaches are expected, ultimately, to improve the accuracy and precision of data sets generated as part of MRIP. AI is expected to provide faster and more accurate methods of quality assurance in the coming years, potentially reducing costs while improving data quality; it is discussed in greater detail in Chapter 4.
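As a simple indication of what automated quality assurance might look like, the sketch below applies rule-based duplicate and range checks to hypothetical catch records. The field names and plausibility limits are invented for illustration; machine-learning approaches would extend, rather than replace, deterministic checks of this kind.

```python
import pandas as pd

# Hypothetical catch records; field names and values are illustrative only.
records = pd.DataFrame({
    "trip_id":   [101, 101, 102, 103],
    "species":   ["red drum", "red drum", "flounder", "flounder"],
    "length_cm": [45.0, 45.0, 310.0, 38.5],  # 310 cm is implausible
})

# Flag exact duplicates and out-of-range lengths for analyst review.
dupes = records[records.duplicated(["trip_id", "species", "length_cm"])]
out_of_range = records[(records.length_cm <= 0) | (records.length_cm > 200)]

print(f"{len(dupes)} duplicate record(s); "
      f"{len(out_of_range)} out-of-range length(s)")
```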
Conclusion 3-6: Requiring that data submitted to Marine Recreational Information Program include the original raw data as well as edited values may present risks to protecting confidentiality.
Conclusion 3-7: Artificial intelligence may be useful in supporting data quality checks and in helping such checks to be performed in a timely manner.
Recommendation 3-1: Marine Recreational Information Program should drive innovation by evaluating and encouraging the adoption of new technologies that enhance data precision and accelerate the release of information.
The panel supports the intent of this data standard. There has been a history of concern on the part of managers when MRIP has changed survey elements. Catch estimates changed substantially when MRIP replaced the Marine Recreational Fisheries Statistics Survey, a result of the adoption of probability-based site allocation in the Access Point Angler Intercept Survey (APAIS). For the Fishing Effort Survey, similar concerns were expressed within MRIP when the current mail survey replaced a telephone survey, while more recent concerns relate to the issue of question sequence.
Per NOAA Fisheries Policy Directive 04-114 (PDF, 4 pages), a transition plan must be prepared before the implementation of new or improved sampling or estimation designs that are likely to result in large deviations from historical estimates. Guidance is provided in Procedural Directive 04-114-01 (PDF, 5 pages). For an example, see MRIP Transition Plan for the Fishing Effort Survey.
The National Marine Fisheries Service has conducted calibration studies for each of these types of transitions to measure how changes to the questionnaires or procedures have resulted in changes to the data, although some of these studies have been limited by funding restrictions.
Multiple years of calibration may be beneficial if changes are substantive. Another major transition has been from MRIP’s oversight of APAIS to state involvement in the collection of survey data. It is important to know whether apparent changes in data reflect real changes in fishing practices or fishing stocks or are a result of changes in methodology. Similarly, it is possible that changes in methodology could hide or minimize actual changes in practices or stocks.
Given these concerns, Standard 4 is well justified. With any transition, consideration should be given to the potential for the change to affect estimates, influencing both current measures and fishery management procedures as well as measures of change over time. If the change has the potential to affect estimates, transition planning should include detailed descriptions of efforts to provide a data series to calibrate the impact of moving from the previous methodology to the new one. It can be difficult to anticipate what the impacts of changes might be; as with any change in survey design, it is wise to engage in at least some testing of the planned changes.
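The simplest form of such a calibration uses a period of side-by-side data collection to estimate an adjustment linking the old and new series. The sketch below illustrates a bare ratio adjustment on invented numbers; MRIP's actual transition calibrations (e.g., for the Fishing Effort Survey) rely on considerably richer statistical models.

```python
import numpy as np

# Hypothetical side-by-side benchmarking: both the legacy and the new design
# run for several overlapping waves; all values are illustrative.
legacy = np.array([110.0, 95.0, 130.0, 120.0])   # legacy-design estimates
new = np.array([150.0, 128.0, 171.0, 166.0])     # new-design estimates

# Simple ratio calibration: scale the historical legacy series so it is
# comparable with the new series going forward.
ratio = new.sum() / legacy.sum()
historical_legacy = np.array([100.0, 105.0, 98.0])
calibrated_history = historical_legacy * ratio
print(round(ratio, 3), calibrated_history.round(1))
```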
Conclusion 3-8: Anticipating and measuring the impact of changes to survey design, data collection, and data processing is critical to being able to accurately measure change over time.
The above standard is comprehensive and consistent with those of other organizations, including the AAPOR Transparency Initiative. That initiative goes further by having an official process whereby organizations are certified and undergo re-certification every two years.2
The panel recommends that MRIP survey practitioners follow the official AAPOR standard and calculations to estimate response rates, to avoid ambiguity.3 The official AAPOR Standard Definitions (AAPOR, 2023) detail how case dispositions are assigned and how the various rates listed as part of Standard 5 are calculated.4 AAPOR has also published web tools to assist in the calculation of official response rates,5 which may be of interest to newer users.
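For illustration, the sketch below computes two of the AAPOR response rates from final case dispositions, using the standard disposition groups (I = complete interviews, P = partial interviews, R = refusals and break-offs, NC = non-contacts, O = other non-respondents, UH/UO = cases of unknown eligibility). The case counts are invented; rates RR3 and RR4, which additionally require an estimated eligibility rate for the unknown cases, are omitted from this sketch.

```python
def aapor_rr1(I, P, R, NC, O, UH, UO):
    """AAPOR Response Rate 1: complete interviews over all cases,
    treating every unknown-eligibility case as eligible."""
    return I / ((I + P) + (R + NC + O) + (UH + UO))

def aapor_rr2(I, P, R, NC, O, UH, UO):
    """AAPOR Response Rate 2: completes plus partials over the same base."""
    return (I + P) / ((I + P) + (R + NC + O) + (UH + UO))

# Illustrative final disposition counts (invented for this sketch):
counts = dict(I=620, P=45, R=210, NC=300, O=25, UH=150, UO=50)
print(f"RR1 = {aapor_rr1(**counts):.3f}")  # 620/1400 = 0.443
print(f"RR2 = {aapor_rr2(**counts):.3f}")  # 665/1400 = 0.475
```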
___________________
2 https://aapor.org/standards-and-ethics/transparency-initiative/#where-can-i-get-more-info
3 https://aapor.org/standards-and-ethics/standard-definitions/
4 https://aapor.org/wp-content/uploads/2024/03/Standards-Definitions-10th-edition.pdf
All survey designs are subject to the certification requirements set forth in NOAA Fisheries Policy Directive 04-114 (PDF, 4 pages). Certification requires a peer review of survey methods, as well as review and approval by the MRIP Executive Steering Committee. The required documentation is described in Standard 2: Survey Design and Standard 3: Data Quality.
An annual report for each survey must be submitted at the conclusion of each survey year. Annual reports must provide an overview of data collection procedures, including questionnaires and data collection schedules; the components described in Standards 5.2.1–5.2.7; and key survey estimates for each survey administration (e.g., reference wave) within the survey year. For an example, see the Annual Report for the Fishing Effort Survey.
5.2.1: Sample sizes: Overall, and by state/stratum.
5.2.2: Completed surveys: Overall, by state/stratum, and final disposition.
5.2.3: Response or compliance rates. In addition to the rates themselves, annual reports must describe the method used to calculate response rates (e.g., from a probability sampling design) or to estimate compliance rates (e.g., from a capture-recapture design).
5.2.4: Summary of editing and corrective actions.
5.2.5: Planned or unplanned modifications. Annual reports must describe changes in methodology (e.g., design or implementation changes; changes to survey administration or estimate production timing; changes to address an unmet need; changes to accommodate testing and/or benchmarking; and/or changes in survey coverage). If these modifications are unplanned, annual reports must include information described in Standard 6.2.
5.2.6: Quality assurance measures to minimize sampling and non-sampling error.
5.2.7: Process improvement. Annual reports must describe efforts to identify and evaluate alternative or modified designs that could improve the accuracy, efficiency, and timeliness of survey results as described in Standard 6.1.
Recommendation 3-2: Marine Recreational Information Program should adopt American Association for Public Opinion Research standards for computing survey response rates.
Ideally, each survey program would meet each of these standards. However, in recognition of the challenges faced by some programs, including resource limitations, a tiered certification system could be considered. Under such a structure, programs meeting some but not all standards could still be certified, with different levels indicating their degree of adherence to the standards. This approach is used in some international educational assessment programs (see, for example, the Organisation for Economic Co-operation and Development's Programme for International Student Assessment [PISA], which uses annotations in its reporting to indicate countries/economies that did not meet particular technical standards.6)
To implement a tiered system, consideration must be given to the minimum criteria for each level (tier). For example, a program might be assigned one tier of certification if most (but not all) standards are met (or perhaps if key standards are met), a higher tier if all standards are met except full comparability of the data definitions with the federal survey, and a still higher tier if the definitions are designed to provide data comparable to the federal MRIP surveys. The exact specifications required for each tier would depend on MRIP's priorities, along with consideration of which requirements present the greatest difficulties for those seeking certification. Such a tiered system can help states participate in MRIP, allow MRIP data to be more complete, and provide information to users about the quality of the data.
Conclusion 3-9: Providing multiple tiers of certification or participation is one way that organizations have expanded access to participants who meet the quality standards to varying degrees.
Recommendation 3-3: Marine Recreational Information Program should consider adopting tiered levels of certification reflecting different levels of data quality.
The panel supports this data standard. Over time, process improvement may occur for a variety of reasons. For example, some data needs might not currently be met satisfactorily, developments in technology or statistical processes may allow new opportunities for improvement in data collection and reporting, and changes in the environment may create the need for changes in process (as in MRIP's earlier change from phone-based to address-based sampling). As Standard 6.2 notes, sometimes there may be a need to make changes simply to minimize or resolve a problem that has appeared. In a time of limited resources, how process improvements are targeted and prioritized should be driven by consideration of the survey objectives (Standard 1). All changes should be documented, and the implications for data quality and comparability should be considered.
The ongoing evaluation of survey designs ensures methods address emerging needs and incorporate current best practices. Annual reports, as described in Standard 5.2, must discuss these efforts. Regional Implementation Plans should identify unmet needs. The MRIP Program Management Team will evaluate recommendations to implement or further explore revisions on a case-by-case basis, and may identify opportunities for additional research upon review of annual reports and Regional Implementation Plans.
In some instances, survey results (e.g., low response rates) or administrative challenges (e.g., budget shortfalls) may create or identify deficiencies that require unanticipated design modifications. The need for unplanned changes must be communicated to the MRIP Program Management Team as soon as possible. A record of these changes must be included in the annual report described in Standard 5.2 (see Standard 5.2.5).
Conclusion 3-10: While it is desirable to pursue improvements in process, it is important to determine the impact of those improvements on survey estimates and prioritize development and implementation of improvements accordingly.
This is the most detailed and specific of the seven data standards. It has also been the most contentious of the draft data standards. As discussed in Chapter 1, the MRIP standard that drew the greatest number of complaints from data users is the requirement to suppress estimates for which the percent standard error (PSE) exceeds 50 percent. The complaints had three main components: (a) even if unreliable, the estimates are often the only ones available and are needed for fishery management; (b) some users had access to the suppressed statistics, leading to inequities in data access; and (c) transparency is needed in order to promote trust in the data system.
Information products must include the data described in Standard 7.1 and the statistical elements described in Standard 7.2; these data and statistics must be associated with the attributes described in Standard 7.4. Files must be available in the formats described in Standard 7.3. Information products associated with data collections funded by NOAA Fisheries must meet the federal information management requirements described in Standard 7.5.
Survey data (i.e., observation data collected for each statistical unit).
7.1.1: Processed microdata (post-quality control and other edits) are published online with associated weights each survey year.
7.2.1: Key Statistics: At minimum, key statistics must include total (estimated or censused) finfish catch (landed and released) by year (calendar and/or fishing year), state, fishing mode, area fished, and species, and total (estimated or censused) finfish trips by year, state, mode, and area. All published estimates must include a point estimate and a measure of precision.
7.2.1.1: Publication schedule: Recreational fisheries data programs must establish a standard schedule for the publication of key statistics (e.g., monthly, every two months).
7.2.1.2: Aggregation: MRIP strongly recommends the production and use of aggregated estimates (e.g., cumulative within year, annual) for enhanced precision and reliability. MRIP does not support the use of disaggregated estimates that do not conform to standard 7.2.2.
7.2.2: Measures of Precision for Estimates Posted Publicly: OMB has established Standards and Guidelines for Statistical Surveys that require agencies to identify criteria for determining when errors are too large for a survey estimate to be publicly released. The U.S. Census Bureau, also within the Department of Commerce, does not publicly release an estimate when its coefficient of variation exceeds 30 percent. Given the pulse nature and high variability of many recreational fisheries and unique stock assessment needs, NOAA Fisheries has adopted a more liberal precision standard that provides data use guidance while maintaining access to all available estimates.
7.2.2.1. MRIP presents a warning when the percent standard error (PSE) for an estimate exceeds 30 percent with the recommendation that it should be used with caution.
7.2.2.2. MRIP does not support the use of estimates when the PSE exceeds 50 percent and, in those instances, recommends considering higher levels of aggregation (e.g., across states, geographic regions, fishing modes, and/or years).
7.3.1: Required: CSV.
7.3.2: Recommended: SAS.
7.4.1: Geographic.
7.4.2: Temporal.
7.4.3: Fishing Modes.
7.4.4. Fishing Area.
7.4.5: Public Use Datasets.
7.4.5.1: Survey Data (.xls).
7.4.5.2: Estimate Data (.xls).
7.5.1: General
7.5.1.1: NOAA Plan for Increasing Public Access to Research Results (PARR).
7.5.1.2: National Standard 2 (Best Scientific Information Available).
7.5.2: Access
7.5.2.1: NAO 205-17A: NOAA Information Access and Dissemination.
7.5.2.2: NAO 216-100: Protection of Confidential Fisheries Statistics.
7.5.2.3: NOAA Data Sharing Policy for Grants and Cooperative Agreements.
7.5.3: Archive
7.5.3.1: NOAA Procedure for Scientific Records Appraisal and Archive Approval.
7.5.4: Data Management
7.5.4.1: NAO 212-15: Management of Environmental Data and Information.
7.5.4.2: NOAA Data Management Planning Procedural Directive.
7.5.4.3: NOAA Fisheries Data and Information Management Policy Directive.
7.5.5: Documentation/Metadata
7.5.5.1: NOAA Data Documentation Procedural Directive.
7.5.5.2: NOAA Fisheries Data Documentation Procedural Directive.
7.5.6: Citation
7.5.6.1: NOAA Data Citation Procedural Directive.
7.5.7: Disclosure Avoidance: Prevent the unauthorized release of personally or business identifiable information (PII/BII).
The panel reviewed the issue from several perspectives:
___________________
7 Richard Cody, presentation to the panel on April 17, 2025.
8 https://bjs.ojp.gov/bjs-data-quality-guidelines/overview#21-0
9 https://www.eia.gov/consumption/residential/data/2020/pdf/microdata-guide.pdf
Weighing these various factors, the panel felt that priority should be given to transparency: providing the statistics along with sufficient information for data users to evaluate their quality. Suppressing the estimates would deprive some users of needed information, while publishing them with caveats about reliability allows users to decide how much weight to give them.
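As a minimal sketch of how such caveats can be operationalized, the function below flags estimates using the PSE thresholds in Standard 7.2.2; the color labels echo MRIP's color-coded presentation of less reliable estimates, though the exact wording and presentation here are illustrative.

```python
def pse_flag(estimate: float, std_error: float) -> str:
    """Return a reliability flag from the percent standard error (PSE)."""
    pse = 100.0 * std_error / estimate
    if pse > 50.0:
        return "red: use not supported; consider aggregating"
    if pse > 30.0:
        return "yellow: use with caution"
    return "green: meets precision guidance"

# Illustrative values: a precise estimate and an imprecise one.
print(pse_flag(12000.0, 2400.0))  # PSE = 20.0 -> green
print(pse_flag(800.0, 500.0))     # PSE = 62.5 -> red
```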
___________________
10 https://www.nps.gov/subjects/socialscience/visitor-use-statistics-dashboard.htm
11 Robert Sivinski, presentation to the panel on June 26, 2025.
12 John Frazer, presentation to the panel on June 26, 2025; John Walter, presentation to the panel on May 23, 2025.
13 David Marker, presentation to the panel on June 26, 2025.
Conclusion 3-11: Transparency is important when data quality is at issue, allowing users to judge data fitness.
Conclusion 3-12: Having some data, even of poor quality, is sometimes better than having no data.
Conclusion 3-13: Marine Recreational Information Program’s current approach of using color coding to identify less reliable estimates is an effective means of communicating the need for user care.
Recommendation 3-4: In the interests of transparency and supporting fishery management programs, Marine Recreational Information Program should continue its current practice of publishing estimates with high percent standard errors while clearly identifying such estimates as having low reliability.
Recommendation 3-5: Because participating state agencies respond to their constituents, we suggest that Marine Recreational Information Program staff provide a forum to discuss the changes to percent standard error reporting recommended in this report and to obtain feedback, so that implementation is congruent with the needs of the agencies and the advice in this report.
Since Standard 7 describes what data should be delivered to MRIP, its implementation also depends on other issues discussed elsewhere in this report. The encouragement to seek greater consistency in definitions and in what data are collected (Conclusion 2-6 and Recommendation 2-1) would affect the data deliveries described in Standard 7. Similarly, the panel’s concerns about protecting confidentiality (Conclusions 3-4 and 3-6) and for reporting on error (Conclusion 3-5) would affect data deliveries. The use of model building (Recommendation 4-2) and AI (Recommendation 4-3) could enhance data quality and consistency. Also, while Standard 7 concerns how data should be submitted to MRIP, one might also consider how data should be provided by MRIP; besides offering tables of information, MRIP might also consider offering visual or interactive presentations of data, making tables readily downloadable, archiving the data, and providing metadata.