THE MOTIVATING QUESTION for most of this report is simply stated as, “What is ‘crime’?” This chapter is certainly driven by that question, too, though it is also motivated by a dual question—“What are ‘crime statistics’?”—that shares with the first question the vexing property that it seems simple but is very complex to answer. The pat answer is that the United States has two primary sources for nationally compiled statistics on the incidence of crime: the data gathered by the Uniform Crime Reporting (UCR) Program and the results of the ongoing National Crime Victimization Survey (NCVS). The former data are premised on the voluntary contribution of information from local law enforcement agencies (primarily through state coordinators) to the Federal Bureau of Investigation (FBI), and the latter are derived from a major sample survey sponsored by the Bureau of Justice Statistics (BJS) that directly interviews people and households on their experiences with crime and violence.
These two sources span two major concepts or philosophies of data collection—compilation of administrative records versus direct survey measurement. Ultimately, both data systems produce estimates of the incidence of crime: the UCR emphasizes counts of incidents of various types that come to the attention of police (and serves as an estimate of crime in part because it is subject to nonreporting or misreporting by local agencies), while the NCVS emphasizes rates of victimization within the broader population (and is overtly an estimate, based on inference from a carefully chosen sample of households).
The UCR and the NCVS are two principal sources of U.S. crime statistics, but they are certainly not the only data systems that are or might be sources of crime-related data. As our panel’s parent Committee on National Statistics describes in its regular Principles and Practices for a Federal Statistical Agency publication, the United States has an extremely decentralized statistical system—the natural product of a historical tendency for government, when a new issue or topic gains importance in the public eye, to craft new agencies and new data collection programs to address it, instead of vesting broader authority in a central statistical office. The same holds true in the specific field of crime and justice statistics. As we describe below in this chapter, coverage of different crime types has been added to the larger UCR and NCVS data collection schemes, but it has also been common for parallel data collection systems with sometimes strong substantive overlap to be established in other bureaus and departments. Indeed, it is often the case that multiple, “competing” data collections using different methodologies are established to examine the same type of criminal (or socially unacceptable) behavior. Hence, there are examples of the same crime types being covered by different data collections in an administrative-data-compilation arrangement akin to the UCR Program, in a sample survey akin to the NCVS, or through other means. To be clear, the treatment of these parallel data sources in this chapter is meant to be suggestive, not exhaustive of the full range of crime-related data resources. It is simply meant to illustrate the complexity in identifying any single uniquely correct or comprehensive source of “crime statistics” in the United States.
Given their sweeping nature, it is natural to start with a description of the two extant major resources for crime statistics, the UCR and the NCVS, before delving into parallel sources for some specific crime types. In all cases, given this report’s role of suggesting a classification of crime to guide identification of an eventual set of crime indicators, we limit ourselves to descriptions of the coverage (topic/crime type) and basic nature of the sources of crime statistics. Fuller examination of underlying methodology awaits our final report, as appropriate.
The origin of the Uniform Crime Reporting Program is recounted in Chapter 1; to recap in brief, today’s UCR Program compiles the voluntary data contributions from law enforcement agencies (in most cases, monthly reports) into a national-level resource. Data collection under the UCR Program began in 1929, initially under the direct auspices of the International Association of Chiefs of Police (IACP) but transferred within one year to what would become the Federal Bureau of Investigation (FBI). Today, the program is administered by the Criminal Justice Information Services (CJIS) Division of the FBI, headquartered in Clarksburg, WV. In nearly all cases, police-report data are channeled to the FBI by way of a state-level coordinating agency, and these agencies have a coordinating arm in the Association of State Uniform Crime Reporting Programs (ASUCRP). Contribution of data to the national UCR Program remains voluntary, as it has been since the outset, although statutes in some states require law enforcement agencies to report data to the state level.1
The basic legal authority for UCR data collection stems from a single line in the legal authorization text for the U.S. Department of Justice, in which the Attorney General is directed to “acquire, collect, classify, and preserve identification, criminal identification, crime, and other records” (28 USC § 534(a)). Only in 1963 was the UCR Program formally described as an FBI function in federal regulation; rules organizing the Justice Department vest the director of the FBI with authority to “operate a central clearinghouse for police statistics under the Uniform Crime Reporting program” (28 CFR 0.85)—making specific reference to “police statistics” rather than crime statistics.2 Moreover, it was only in 1988 that a definition (short of explicit authorization) of the UCR Program was etched into law: In extending UCR’s scope to include crimes known to federal law enforcement agencies, Congress noted that “the term ‘Uniform Crime Reports’ means the reports” authorized under the Attorney General’s record collection powers “and administered by the Federal Bureau of Investigation which compiles nationwide criminal statistics for use in law enforcement administration, operation, and management and to assess the nature and type of crime in the United States” (P.L. 100-690 § 7332; 102 Stat. 4468).3

__________________

1We will return to this point in our final report, but it is worth noting here for clarity: Several states have laws mandating that local law enforcement agencies report UCR-type data to a state coordinating agency (e.g., the state police or highway patrol), but very few of these coordinators are required to submit the data to the FBI—hence the descriptor of the UCR as a voluntary collection. Virginia is among the exceptions; there, law holds that the Virginia State Police “shall correlate reports submitted to it and shall compile and submit reports to the Federal Bureau of Investigation on behalf of all agencies of the Commonwealth, as may be required by the federal standards for the uniform crime reporting program” (Va. Code Ann. § 52-30 (2015)). Oregon law implies, but does not explicitly mandate, submission of data to the FBI, noting that local agencies are required to submit crime data to the state police “for purposes of the Uniform Crime Reporting System of the Federal Bureau of Investigation” (ORS § 181.550 (2015)). But many other states are less committal: Rhode Island requires only that state-compiled crime data be made available to the FBI “upon request” (R.I. Gen. Laws § 12-24-3 (2015)), phrasing that is also used in Michigan law; Louisiana directs the state’s law enforcement commission “to supply data to federal bureaus or departments engaged in collecting national criminal statistics,” omitting specific reference to the UCR (La. R.S. § 15:578 (2015)); Arizona and other states oblige the state repository to “cooperate with” the FBI and UCR Program (A.R.S. § 41-1750 (2015)).

2In 1990, a rule added “carry out the Department’s responsibilities under the Hate Crime Statistics Act” to the FBI director’s formal responsibilities (55 FR 28610), as a new item and without changing the UCR authorization.
In common usage over several decades (and continuing still), generic references to “UCR” information typically denote only one part of the fuller suite of data collections that have evolved over time under the UCR aegis. Such general references are typically to the Summary Reporting System (SRS) of the UCR—the lineal successor of the original 1929 work—which collects summary counts of offenses known or reported to the police. The SRS is sometimes referenced as “Return A” data, after the name of the form on which local agencies are supposed to supply monthly returns. In terms of content, it is important to note that the SRS is intended to cover only the small set of offenses dubbed “Part I” crimes (and not those designated “Part II” crimes; the distinction is shown in Box 2.1 and discussed further below). We describe the National Incident-Based Reporting System (NIBRS) more completely below; together, the SRS and NIBRS may be thought of as the central components of incidence-of-crime statistics in the UCR program. Although NIBRS was originally envisioned, when it was crafted in the 1980s, as the next-generation core UCR collection—that is, as a replacement for the SRS—the practice over time has been to treat SRS and NIBRS as distinct, parallel entities, largely due to relatively slow adoption of NIBRS standards for local data submissions. NIBRS is designed to span a wider array of offenses than the SRS, though the NIBRS component of the broader UCR program eschews the “traditional” Part I and II terminology. Detailed incident-level data and arrest information are collected in NIBRS for roughly 22 Group A offense categories, while only arrest information is collected for an additional 11 Group B categories.
Given their centrality, references to “the UCR” in this report focus nearly exclusively on SRS or NIBRS. In describing the content and crime coverage of the UCR program as a whole, though, it is important to clarify that the UCR has evolved into a family of related data collections, largely defined by the type or nature of underlying offenses. Other key components of the fuller UCR program include the following:
__________________
3The 1988 legal provisions also ratified the designation of the FBI “as the lead agencies for performing [such functions]” and authorized the “appoint[ment] or establish[ment of] such advisory and oversight boards as may be necessary to assist the Bureau in ensuring uniformity, quality, and maximum use of the data collected” (P.L. 100-690 § 7332; 102 Stat. 4468).
One such component, the Supplementary Homicide Report (SHR), collects detailed incident-level information on homicides; the information available in the compiled SHR data exceeds that expected of all crime types in the NIBRS incident-level data, and it has fueled extensive research on the nature of the singularly important crime type of homicide. SHR data were first collected and published in 1962 (Federal Bureau of Investigation, 2004:2).
__________________
4As per the 2004 version of the UCR Handbook, “clearance” is essentially an offense-level attribute rather than a person-level event or count; “several crimes may be cleared by the arrest of one person, or the arrest of many persons may clear only one crime” (Federal Bureau of Investigation, 2004:79). Clearance occurs either through arrest (which includes the filing of charges or the transfer of a suspect to the judicial system for prosecution) or through so-called “exceptional means,” which apply when an investigation leaves no doubt about the identity of the offender and establishes evidence and grounds to support prosecution, but—for “some reason outside law enforcement control”—the suspect cannot actually be arrested, charged, or prosecuted (Federal Bureau of Investigation, 2004:81). Such “exceptional means” clearances include instances where the offender is deceased.
Another component, the Law Enforcement Officers Killed and Assaulted (LEOKA) collection, addresses a subset of incidents in which harm is done to the police. The first UCR data on law enforcement officers killed on duty were gathered in 1960 (Federal Bureau of Investigation, 2004:2).
A final component of the broader UCR program collects no offense or incident information at all; rather, it functions as a rolling census of law enforcement personnel. On an annual basis, UCR data providers are asked to submit the Number of Full-Time Law Enforcement Employees as of October 31, providing rough information on the size of law enforcement staffs (total and sworn officers) and the resources available to some specific units within individual agencies. Though this particular subcollection does not gather actual crime data, it does have some bearing on the final estimates of crime generated by the UCR: the size of a law enforcement agency, whether in number of personnel or in population of the communities within the department’s jurisdiction, can play a role in imputation routines for handling missing data through reference to “similar” agencies.
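Purely to illustrate the general idea of similar-agency imputation, the sketch below estimates a nonreporting agency’s offense count from the mean offense rate of reporting agencies of comparable population. The grouping rule, function names, and figures are invented for illustration; they do not reproduce the FBI’s actual imputation procedures.

```python
# Illustrative sketch only: impute a nonreporting agency's offense count
# from "similar" reporting agencies, where similarity is keyed (crudely)
# on population size. All rules and numbers here are hypothetical.

def impute_count(target_population, reporters):
    """Apply the mean offense rate of similarly sized reporters to the target."""
    # Treat agencies within a factor of two of the target's population as similar.
    similar = [a for a in reporters
               if 0.5 * target_population <= a["population"] <= 2.0 * target_population]
    if not similar:          # fall back to all reporters if no close match
        similar = reporters
    mean_rate = sum(a["offenses"] / a["population"] for a in similar) / len(similar)
    return round(mean_rate * target_population)

reporters = [
    {"population": 48_000, "offenses": 110},
    {"population": 52_000, "offenses": 130},
    {"population": 510_000, "offenses": 2_400},
]
print(impute_count(50_000, reporters))  # -> 120, the imputed annual count
```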
Box 2.1 depicts the basic classification of crimes/offenses covered by the UCR Summary Reporting System as of 2014. Contrasting it with the original Part I and Part II crimes outlined in 1929 (Box 1.2)—and looking past the cosmetic expansion of the 2014 Part I list to include some subcategories (the reason for that expansion is described below)—it is clear that change has occurred, but at a vastly slower pace than might reasonably be expected over many decades. Moreover, the changes that have been made have largely taken the form of expanding crime types or making relatively modest additions, rather than revising definitions.
When discussing the crime-type coverage of the UCR’s Summary Reporting System, one must inevitably describe what is probably the system’s single most distinctive feature, as it is the one that most starkly illustrates the “Summary” nature of the data. This distinctive feature is what is known as the Hierarchy Rule, which is invoked to determine the one—and only one—offense type that is recorded for any particular incident. The order in which offenses are listed in the UCR Part I classification is not accidental, and derives directly from the order in which they were originally presented in 1929; the offense types are listed in a rough descending order of severity while also differentiating between crimes against a person and crimes against property. Box 2.2 presents the Part I listing again, with some expansion, in formally laying out the Hierarchy Rule. As it was stated as a “General Provision” in 1929 (International Association of Chiefs of Police, 1929:34–35):
When several offenses are committed by one person at the same time, list as the crime committed the one which comes first in the classification. For example, one offense of robbery would be listed if both assault and robbery had been committed, because Robbery appears before Aggravated Assault in the classification.

In this manner, single incidents occurring at the same time but involving multiple individual offense types are generally collapsed in the SRS to count as only one offense. Box 2.2 describes some exceptions to this general rule that have developed over the years.

[Box 2.1, listing the Part I and Part II classes of offenses, appears here. SOURCE: Adapted from Federal Bureau of Investigation (2013b).]

Box 2.2 Hierarchy Rule for Part I Offenses, Uniform Crime Reporting Program

The order in which the Part I offenses and their subcategories are listed in Box 2.1 is not accidental; rather, it defines a preference hierarchy used in the UCR Summary Reporting System to associate incidents (which may involve the commission of multiple crime offenses) with a single crime type for reporting purposes. Lower numbers outrank higher numbers, so that a home invasion/burglary gone awry that ends in serious injury to a homeowner would be counted only as the aggravated assault; a robbery in which the offender also sexually assaults the victim would be counted only as the rape or attempted rape; and so forth.

The 2013 Summary Reporting System User Manual (Federal Bureau of Investigation, 2013b)—the successor to the Uniform Crime Reporting Program Handbook that spelled out UCR policy in various revisions over the decades (Federal Bureau of Investigation, 2004)—retains four prominent “exceptions” to the Hierarchy Rule.
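To make the rule concrete, here is a minimal sketch of the selection logic (ours, purely for illustration; the exceptions noted in Box 2.2 are omitted):

```python
# Minimal sketch of the SRS Hierarchy Rule: of all offenses in an incident,
# record only the one that comes first in the Part I ordering. The ranks
# follow the Part I listing; handling of the rule's exceptions is omitted.

PART_I_RANK = {
    "criminal homicide": 1,
    "rape": 2,
    "robbery": 3,
    "aggravated assault": 4,
    "burglary": 5,
    "larceny-theft": 6,
    "motor vehicle theft": 7,
    "arson": 8,  # in current practice arson is treated as an exception
}

def classify_incident(offenses):
    """Return the single offense recorded for an incident under the Hierarchy Rule."""
    return min(offenses, key=lambda offense: PART_I_RANK[offense])

print(classify_incident(["burglary", "aggravated assault"]))  # aggravated assault
print(classify_incident(["robbery", "rape"]))                 # rape
```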
A second distinctive rule, known as the Separation of Time and Place Rule, also governs how—and how many—offenses are tallied in the SRS. It, too, derives directly from a “General Provision” promulgated in the original 1929 UCR manual (International Association of Chiefs of Police, 1929:35):
Offenses which follow in a more or less natural sequence but after an appreciable length of time, such as a robbery following auto theft, should be listed as separate offenses in their respective classes.
As currently operationalized (Federal Bureau of Investigation, 2013b:26), the statement of the rule actually addresses the inverse of separation of time and place. That is, it does not argue for any minimum interval in time or space that would constitute a separation but rather defines “same time and place” as occurrences in which “the time interval between the offenses and the distance between locations where they occurred is insignificant.” Generally, the rule defers to investigative findings by law enforcement: If “investigation deems the activity to constitute a single criminal transaction,” then even incidents at different times and locations are to be treated as single occurrences in the SRS.
Problems with the relative inflexibility of UCR structures were already apparent by the early 1980s. After several calls for the creation of a new UCR program, the FBI and BJS formed a joint task force in 1982 to oversee a study by Abt Associates Inc., which led to a major planning conference in 1984 and ultimately to a final report, the Blueprint for the Future of the Uniform Crime Reporting Program (Poggio et al., 1985). The Blueprint called for implementation of “unit-record” data collection within a tiered structure: All agencies would be asked to submit incident and arrest information in incident-level detail, but the vast majority of agencies (dubbed “Level I participants”) would be tasked only to provide information on a rough analogue of the list of Part I offenses. A much smaller set of “Level II participants”—albeit composed of the nation’s largest law enforcement agencies (augmented by a sample of other agencies), and so covering the bulk of committed crime—would provide the incident-level offense and arrest information for a full, broad range of offense categories.
What evolved directly from the Blueprint recommendations is what is now known as NIBRS. The focus of this report is the content and coverage of “crime” by NIBRS, not the detail of its design and operations; that operational story will be a major part of our second and final report. For this report, it suffices to summarize that NIBRS diverged from the Blueprint’s tiers-of-agencies approach and instead adopted something more akin to the tiers-of-offense-types in the UCR Summary Reporting System, as we describe below. For a variety of reasons—certainly among them the switch in approach, to a system premised on agencies all-but-completely converting to detailed incident-based reporting of all crime types—adoption of the NIBRS standard has been much slower than hoped. For several years, the kinds of technical system improvement grants administered by BJS—not the FBI—were the best, if not the only, opportunity to help non-NIBRS-compliant agencies begin to adopt the more detailed reporting, and NIBRS take-up remains relatively low. More recently, the work-in-progress National Crime Statistics Exchange (NCS-X; a partnership between BJS and the FBI) is poised to provide assistance to a carefully crafted sample of law enforcement agencies. Combined with data from existing NIBRS contributors, the data resulting from NCS-X investments are intended to form a statistically representative sample of agencies and thus lend themselves to reliable estimation and inference on the incidence and characteristics of crime. Still more recently, the highly publicized incidents of apparently excessive use of force resulting in civilian deaths in 2014 and 2015 have led leadership of the Justice Department and the FBI to talk openly of a phaseout of SRS reporting in favor of NIBRS as the sole crime reporting standard. But, again, the focus of this report is more on crime coverage and content than on implementation issues; we return to these topics in the final report.
Today, as when it began data collection in the late 1980s, NIBRS covers a substantially broader array of crime/offense types than the traditional SRS, as depicted in Table 2.1. Like the traditional SRS, in which contributing agencies are expected to file both “offenses known to police” and arrest counts for Part I crimes but only arrest data for Part II crimes, NIBRS recognizes a distinction between “Group A” and “Group B” offenses. As in the SRS, only arrests are to be reported for Group B crimes, while highly detailed incident-level data are supposed to be filed for Group A crimes. A critical difference is that the list of Group A offenses (subject to the most detailed reporting) is vastly longer than both the list of Group B offenses and the list of Part I offenses focused on by the SRS.
A major reason for the relative lack of change in SRS crime-type coverage over time is basic: Little change has occurred because the process to enact change is long and difficult. Changes in UCR and SRS content—particularly in the roster and delineation of Part I offenses—have typically required enacted legislation to achieve. Major changes in UCR coverage achieved by law are described in more detail in Box 2.3. The other mechanism by which change in UCR and SRS procedures can arise is an elaborate Advisory Policy Board (APB) process. The APB is composed mainly of officials from state and local law enforcement agencies, and includes tiers of discussion at the levels of regional and national working groups. At the national level, the APB has a subcommittee dedicated exclusively to advising the full APB on possible changes to UCR policy.
Table 2.1 Offense Codes in NIBRS and the UCR Summary Reporting System (SRS)

| NIBRS Code | SRS Code | Description |
| --- | --- | --- |
| NIBRS Group A Offenses | | |
| 09 | 1 | Homicide offenses |
| 09A | 1a | Murder and nonnegligent manslaughter |
| 09B | 1b | Negligent manslaughter |
| 100 | — | Kidnapping/abduction |
| 11 | 2,17 | Sex offenses |
| 11A | 2a | Rape (except statutory rape) |
| — | 2b | Attempts to commit rape |
| — | 2c | Historical rape^a |
| 11B | 17 | Sodomy |
| 11C | 17 | Sexual assault with an object |
| 11D | 17 | Fondling |
| — | 17 | Sex offenses (except rape and prostitution and commercialized vice) |
| 36A | — | Incest^a |
| 36B | — | Statutory rape^a |
| 120 | 3a–d | Robbery |
| 13 | 4 | Assault offenses |
| 13A | 4a–d | Aggravated assault |
| 13B | 4e/9 | Simple assault |
| 13C | — | Intimidation |
| 200 | 8a–j | Arson |
| 210 | — | Extortion/blackmail |
| 220 | 5a–c | Burglary/breaking and entering |
| 23 | 6 | Larceny/theft offenses |
| 23A | 6Xa | Pocket-picking |
| 23B | 6Xb | Purse-snatching |
| 23C | 6Xc | Shoplifting |
| 23D | 6Xg | Theft from building |
| 23E | 6Xh | Theft from coin-operated device or machine |
| 23F | 6Xd | Theft from motor vehicles |
| 23G | 6Xe | Theft of motor vehicle parts and accessories |
| 23H | 6Xf | Theft of bicycles |
| 23H | 6Xi | All other larceny |
| 240 | 7a–c | Motor vehicle theft |
| 250 | 10 | Counterfeiting and forgery |
| 26 | 11 | Fraud offenses |
| 26A | 11 | False pretenses/swindle/confidence game |
| 26B | 11 | Credit card/automated teller machine fraud |
| 26C | 11 | Impersonation |
| 26D | 11 | Welfare fraud |
| 26E | 11 | Wire fraud |
| 26F | — | Identity theft^b |
| 26G | — | Hacking/computer invasion^b |
| 270 | 12 | Embezzlement |
| 280 | 13 | Stolen property offenses |
| 290 | 14 | Destruction/damage/vandalism of property (except arson) |
| 35 | 18 | Drug offenses |
| 35A | 18 | Drug/narcotic violations |
| 35B | — | Drug equipment violations |
| 370 | — | Pornography/obscene material |
| 39 | 19 | Gambling offenses |
| 39A | 19 | Betting/wagering |
| 39B | 19 | Operating/promoting/assisting gambling |
| 39C | 19 | Gambling equipment violations |
| 39D | 19 | Sports tampering |
| 40 | 16 | Prostitution offenses |
| 40A | 16a | Prostitution |
| 40B | 16b | Assisting or promoting prostitution |
| 40C | 16c | Purchasing prostitution |
| 510 | — | Bribery |
| 520 | 15 | Weapon law violations |
| 64 | A–B | Human trafficking offenses |
| 64A | A | Human trafficking, commercial sex acts |
| 64B | B | Human trafficking, involuntary servitude |
| 720 | — | Animal cruelty^c |
| NIBRS Group B Offenses | | |
| 90A | — | Bad checks (except counterfeit or forged checks) |
| 90B | 25 | Vagrancy |
| 90B | 28 | Curfew and loitering laws (persons under 18) |
| 90C | 24 | Disorderly conduct |
| 90D | 21 | Driving under the influence |
| 90E | 23 | Drunkenness (except driving under the influence) |
| 90F | 20 | Family offenses, nonviolent |
| 90G | 22 | Liquor law violations (except driving under the influence) |
| 90H | — | Peeping Tom |
| 90J | — | Trespass of real property |
| 90Z | 26 | All other offenses |
| Reportable Offenses, But Deemed “Not a Crime” | | |
| 09C | — | Justifiable homicide |
| 90I | 29 | Runaways (persons under 18) |
| — | 27 | Suspicion |
NOTE: NIBRS offense codes take the form NNX: a blank X denotes a top-level grouping category; a zero (0) for X denotes a specific offense without further subcategories; and an alphabetic character for X denotes a specific offense subcategory.
^a Federal Bureau of Investigation (2013a) continues to list incest and statutory rape under a parent category 36, “Sex offenses, nonforcible,” despite the 2011 change in definitions to eliminate “forcible” as a descriptor of rape. Other sources, such as the request to the U.S. Office of Management and Budget (OMB) for clearance for NIBRS collection, attach these 36-stub categories under the broader heading of 11, “Sex offenses”; we follow the latter approach. “Historical rape” refers to data compiled under the pre-2011 definition.

^b Data collection on the two new fraud offenses is to begin in calendar year 2016.

^c Data collection on 720, “Animal cruelty,” is to begin in 2015, with tabulation effective in calendar year 2016, pending OMB approval.
SOURCE: Adapted from Federal Bureau of Investigation (2013a), with reference to Federal Bureau of Investigation (2013b), Criminal Justice Information Services Division (2015a:9), and Criminal Justice Information Services Division (2015b:6). For the NIBRS Information Collection Review package submitted to OMB, search www.reginfo.gov for OMB control number 1110-0058.
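The code-format convention in the table note lends itself to a compact sketch; the function below (ours, purely for illustration) classifies a NIBRS offense code by the form of its final character.

```python
# Sketch of the NIBRS code structure described in the table note: a code is
# a two-character stem plus an optional third character X, where a missing X
# marks a grouping category, X = "0" marks a specific offense without
# subcategories, and an alphabetic X marks a specific offense subcategory.

def nibrs_code_kind(code):
    if len(code) == 2:
        return "top-level grouping category"         # e.g., 09 Homicide offenses
    suffix = code[2]
    if suffix == "0":
        return "specific offense, no subcategories"  # e.g., 100 Kidnapping/abduction
    if suffix.isalpha():
        return "specific offense subcategory"        # e.g., 09A Murder
    raise ValueError(f"unrecognized NIBRS code: {code}")

for code in ["09", "09A", "100", "23H"]:
    print(code, "->", nibrs_code_kind(code))
```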
Recommended changes may in turn be passed on to the director of the FBI and the Attorney General. Most significantly in recent years, the APB process achieved major change in December 2011, when years of negotiation and discussion resulted in the definition of rape across the UCR component programs being broadened to be gender neutral and to omit the term “forcible” as a descriptor (Federal Bureau of Investigation, 2013a:138). Three years earlier, the APB process had resulted in the category for “Runaways” being removed from the Part II list and corresponding arrest statistics (Federal Bureau of Investigation, 2013a:137). The NIBRS categories for animal cruelty offenses and for two variants of cybercrime (“hacking” and identity theft) were both added in 2015 (Criminal Justice Information Services Division, 2015a,b) after discussion through the APB process, though the announcements in both cases suggest that the impetus came primarily from FBI management. But, again, such additions and revisions have been relatively rare.
Of the initiatives mentioned in Box 2.3, the one that arguably promised the most significant single instance of change was the Uniform Federal Crime Reporting Act of 1988; however, in final form, the 1988 revisions proved to be simultaneously very expansive and practically narrow. The portended change was expansive in that the UCR Program’s scope was expanded to include “national data on Federal criminal offenses” and that “all departments and agencies within the Federal government (including the Department of Defense) which routinely investigate complaints of criminal activity” were now legally obligated to report “details about crime within their respective jurisdiction” to
the UCR Program. Yet it was sharply narrow in that this new mandatory reporting “shall be limited to the reporting of those crimes comprising the Uniform Crime Reports”—that is, to the long-standing designation of UCR crimes (particularly Part I offenses) already in place (P.L. 100-690 § 7332; 102 Stat. 4468). For a variety of reasons, including definitional ones, few if any federal law enforcement agencies actually started reporting or enhanced their reporting as a result of the legal change. Indeed, the special “Message from the Director” accompanying the release of 2014 UCR data (Federal Bureau of Investigation, 2015) notes—more than a quarter-century after the enactment of a law requiring federal law enforcement reporting—that “UCR program staff are [now] working with other federal agencies to encourage them to submit their own crime data like the U.S. Department of [the] Interior has for several years,” ideally in NIBRS format. Moreover, the note confirmed what was already well known to analysts if not to the broader public: Chief among the federal nonreporters to the FBI’s UCR Program was the FBI itself. “To get our own house in order,” the director’s message observes, the 2014 UCR data release contains a first-of-its-kind compilation of “crime data from our field offices, including the number of arrests for human trafficking, hate crimes, and cyber intrusions. . . . We are working toward collecting data for all applicable UCR offenses so we can report those as well.”
Detailed examination and consideration of the implementation and operation of the UCR Program’s data collections, including NIBRS, awaits our final report. From the particular lens of the program’s coverage of crime types, we already characterized in Chapter 1 what is simultaneously the UCR Program’s most significant strength and weakness. The problem with the list of crimes developed by the assembled police chiefs in the late 1920s is not that it is uninformative—the original Part I crimes were chosen in large part for their salience to the general public, and they remain serious events of interest today. Rather, the issues are that the list of Part I crimes has so successfully “defined”—and limited—what is commonly meant by “crime in the United States” and that the lists of both Part I and Part II crimes have remained so relatively invariant over the years. In the Crime in the United States report for 1960, the FBI began the process of simply totaling the number of Part I offenses (by jurisdiction) to obtain a single-number “Crime Index”;5 later, a “Modified Crime Index” was constructed by adding arson (not yet a Part I crime) to the
__________________
5The derivation of a simple-summary index based on UCR police-report data was essentially consistent with the notion expressed by Sellin (1931:346) that “the value of a crime for index purposes decreases as the distance from the crime itself in terms of procedure increases” (emphasis original)—an argument that crime being reported or becoming known to the police was about as close to the “source” as possible.
[Box 2.3, Changes to Uniform Crime Reporting Program Coverage, Required or Suggested by Enacted Law, appears here.]
Part I offenses. Not until 2004, through action in the APB process at that time, was this practice of collapsing “criminality” into a single-number summary discontinued in favor of emphasizing separate violent crime and property crime totals (“About the Uniform Crime Reporting (UCR) Program” document in Federal Bureau of Investigation, 2015). However, the decades had taken their toll on what is meant by “criminality” in the United States: researchers and law enforcement practitioners alike understood that the indices were necessarily skewed toward high-volume offenses (like larceny-theft) without regard to crime seriousness, yet the obsession for decades was trying to impute meaning to upticks and downticks in the crude index measures.
More generally, the fundamental challenge of crime coverage in the UCR Program’s data collections is major uncertainty as to what information is
really at hand. In the case of the SRS, the problem returns to the language used at the beginning of this chapter—the SRS really and necessarily produces estimates of crime totals and rates. The historical branding of UCR tabulations as Crime in the United States contributes to a somewhat exaggerated sense of comprehensiveness and absolute accuracy—for several reasons, not least of which is that the UCR logically cannot encompass total crime because not all crime is reported to the police. In addition, the myriad tables of the annual Crime in the United States report each come with considerable fine print in companion “data declaration” and “methodology” documents. So, the UCR data tables are characterized in the report text and overview summaries as having impressive overall participation rates (“About the Uniform Crime Reporting (UCR) Program” document in Federal Bureau of Investigation, 2015):
In 2014, law enforcement agencies active in the UCR Program represented more than 311 million United States inhabitants (97.7 percent of the total population). The coverage amounted to 98.6 percent of the population in Metropolitan Statistical Areas, 91.6 percent of the population in cities outside metropolitan areas, and 92.6 percent of the population in nonmetropolitan counties.
But the tables typically avoid mention of the extent to which individual law enforcement agencies actually submitted a full 12 months’ worth of data (or whether and how many months of missing data had to be imputed), nor do they indicate whether all departments provided data on all the types of crime in the UCR framework. In essence, the SRS tabulations create the impression of being a complete census of crime activity, yet do nothing to suggest that individual entries in the tables may have considerable variation due to nonresponse. This level of uncertainty is undoubtedly elevated for the newer crimes—for example, arson, human trafficking, and so forth. Likewise, in the case of NIBRS, the problem is even more acute because adoption of the new reporting standards has been much slower than hoped. At least NIBRS coverage is such that it does not suffer from the false impression of being fully comprehensive and authoritative; NIBRS take-up, varying by state, is such that the accumulation of NIBRS data cannot be said to be representative of the nation as a whole. While NIBRS adds a substantial number of new crime types to the mix, the relatively low take-up rate (again, the reasons for and nature of which will be a major focus of our final report) is such that NIBRS’s strong potential for understanding crime in context remains largely unexplored by researchers and unknown to the general public. (At the end of this chapter, we will revisit this point in describing one current avenue of improvement.)
One of the nation’s two principal sources of information on crime and violence, the National Crime Victimization Survey (NCVS) has a storied history of innovation and redesign that belies its relative youth as a data collection. Confronted with the task of assessing a seemingly growing but ill-understood crime problem in the mid-1960s, the President’s Commission on Law Enforcement and Administration of Justice (1967) sponsored a set of prototype studies,6 the centerpiece of which was a first-of-its-scale survey of 10,000 households that asked household members about the incidence of—and incident-specific details on—experiences with crime and violence. This prototype survey was administered by the National Opinion Research Center (now NORC at the University of Chicago) with the express intent of shedding light on what had been contemporaneously and evocatively dubbed by Biderman and Reiss (1967) the “dark figure of crime,” the portion of crime that goes unreported to the police. The prototype survey’s results were seismic—revolutionary, even—in their starkness. The survey documented that “for the Nation as a whole there is far more crime than ever is reported” to the police (and so counted in the existing UCR data), and the misses were hardly small: For some crime types, UCR/police-report totals were one-half or one-third the levels suggested by the survey, suggesting that in some cities “only one-tenth of the total number of certain kinds of crimes are reported to the police” (President’s Commission on Law Enforcement and Administration of Justice, 1967:v).
The commission’s report led directly to the creation of what is now the Office of Justice Programs (then known as the Law Enforcement Assistance Administration [LEAA]), and fully realized versions of the commission’s prototype studies—the nationally representative survey, along with a survey of businesses and a few city-specific surveys—quickly became part of the new unit’s remit. Formally, the full-fledged national survey (first fielded in 1972, sponsored by what developed into BJS, with data collection by the U.S. Census Bureau) was but one part of the broader National Crime Surveys (plural) program, though it rapidly came to be known by the NCS abbreviation. However, an early National Research Council (1976) review of the program advised channeling resources into the national survey and scrapping the business- and city-specific components; upon implementation of this advice, the survey continued under the National Crime Survey (singular) banner. Several years later, the first wave of improvement and refinement took hold:
__________________
6The other prototype surveys included more detailed surveys of particular precincts in three cities (Boston, Chicago, and Washington), conducted by the Bureau of Social Science Research and the University of Michigan’s Survey Research Center (President’s Commission on Law Enforcement and Administration of Justice, 1967:21).
A broad redesign consortium worked through a comprehensive overhaul of the survey (in particular, improvements in its routine for a “screening” interview, as described below [Biderman et al., 1986]). Following that redesign, it was also decided to rename the survey as the NCVS to denote its new approach.
Data collection under the redesigned protocols began in 1992 and continued for over a decade, when the time came for another reappraisal—this time inspired at least as much by fiscal realities as by the desire for measurement improvement. While undeniably important to BJS’s mission, the NCVS had become a special burden, with half or more of BJS’s annual funding being routed exclusively to administering the survey. Even with that commitment, the survey was made to endure cuts in sample size, jeopardizing the ability to distinguish anything but the largest of year-to-year shifts in crime or victimization at the national level. BJS commissioned another National Research Council (2008, 2009a) review to evaluate its entire data collection portfolio, but with specific emphasis (and one report exclusively) on options for conducting the NCVS. Based on that review and its own research, BJS remains in the midst of another NCVS redesign—one that promises to yield, among other things, some subnational estimates of victimization from the main survey.
Throughout its various distinct “lives” as a data collection program, the NCVS has maintained a fundamental structure—consisting (stated in simplified form) of personal interviews between a Census Bureau field representative and all individual members of a household age 12 and above, each beginning with a “screener” section meant to trigger recall (and count) of individual incidents of crime and violence and followed by completion of a detailed “incident report” interview for each incident enumerated in the screener section. An important feature of this structure is that the use of crime-type labels and legalistic language is avoided to the greatest extent possible in the interview: neither the survey respondent nor the field representative is called upon to label a particular offense or incident as a robbery, an aggravated assault, etc. Instead, the survey’s intent is to collect descriptive information on and basic attributes of the incident, in order to permit crime type(s) to be derived in post hoc data preparation. Invoking the language that we will use later in this report, it may be said that the base NCVS uses a rough attribute-based classification, wherein crime types are derived algorithmically from the presence/absence or levels of a set of variables (e.g., whether the incident included an element of taking property from a victim or whether entry to a site was achieved by force) rather than by matching the letter of a legal definition.
In combination with the reasons for the survey’s creation, the NCVS’s fundamental structure has major consequences for the types of crimes covered by the survey:
Both the NCVS and the UCR have roots in questions of the effectiveness of policing and law enforcement, which affected their construction and prompted a similarity in content. The full-fledged NCVS began under the aegis of the LEAA, an entity that (as its name suggests) was to provide assistance to local law enforcement agencies; the LEAA’s original statistical mandate (under which the survey was developed) was to “collect, evaluate, publish, and disseminate statistics and other information on the condition and progress of law enforcement in the several States” (in the Omnibus Crime Control and Safe Streets Act of 1968; 82 Stat. 207; emphasis added)—not unlike the reference to “police statistics” in the first formal mention of the UCR Program in federal regulation. Not surprisingly, then, developers chose to focus the NCVS principally on the same crime types measured under the UCR summary system, with definitions and concepts carrying over to the survey. Only later—in 1979, when the LEAA was legally dismantled (to be replaced by the Office of Justice Programs)—was BJS created and chartered more broadly, with legislative language directly envisioning an ongoing (if not expanded) NCVS. The new agency was directed “to collect and analyze information concerning criminal victimization, including crimes against the elderly, and civil disputes,” and moreover to construct “data that will serve as a continuous and comparable national social indication of the prevalence, incidence, rates, extent, distribution, and attributes of crime” and related factors (93 Stat. 1176).
The upshot of these lines of argument is that the general list of crimes covered by the base NCVS—summarized in Box 2.4—looks remarkably similar to, and roughly follows, the Hierarchy Rule listing of the UCR Summary Reporting System. The NCVS is an interesting hybrid in that it both employs and eschews a rigid hierarchical rule. On a quarterly basis, a crime type is allocated to each Incident Report in the incoming NCVS data (which would have previously undergone basic editing and coding performed on a monthly cycle). “Incidents that cannot be classified according to the crime classification algorithm (e.g., arson, confidence games, and kidnapping) are deleted from the file,” and the level-of-seriousness algorithm—embodied in the final list in Box 2.4—is used to identify the single most serious offense associated with an Incident Report (Bureau of Justice Statistics, 2014b:47). It is that single, most serious offense that is used for basic tabulation and presentation of the survey’s results. However, the public-use NCVS data files contain at least a secondary offense code—as well as the attribute and variable data used to derive the type-of-crime codes—so that researchers may examine and classify incidents in a very flexible manner.
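A stylized sketch may help fix ideas about this two-step derivation: candidate crime types are first derived from incident attributes, and each incident is then collapsed to its single most serious type. The attribute names, decision rules, and abbreviated seriousness ordering below are invented for illustration; the production NCVS classification algorithm is far more detailed.

```python
# Stylized sketch of attribute-based classification: crime types are derived
# from recorded incident attributes, never named by respondents; the incident
# is then assigned its single most serious derived type. All rules here are
# hypothetical simplifications of the NCVS approach described in the text.

SERIOUSNESS = ["rape", "robbery", "aggravated assault", "simple assault",
               "burglary", "theft"]  # most to least serious (abbreviated)

def derive_types(incident):
    """Derive candidate crime types from an incident's recorded attributes."""
    types = []
    if incident.get("sexual_attack"):
        types.append("rape")
    if incident.get("attack_or_threat"):
        if incident.get("property_taken"):
            types.append("robbery")        # taking property by force or threat
        elif incident.get("weapon_or_injury"):
            types.append("aggravated assault")
        else:
            types.append("simple assault")
    if incident.get("forced_entry"):
        types.append("burglary")
    if incident.get("property_taken") and not incident.get("attack_or_threat"):
        types.append("theft")
    return types

def most_serious(types):
    """Collapse to the single most serious type; unclassifiable incidents
    (an empty list) would be dropped from the file, as described above."""
    ranked = [t for t in SERIOUSNESS if t in types]
    return ranked[0] if ranked else None

incident = {"attack_or_threat": True, "weapon_or_injury": False,
            "property_taken": True}
print(derive_types(incident))                # ['robbery']
print(most_serious(derive_types(incident)))  # robbery
```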
As a survey, the level of detail that can be gathered by the base NCVS is immense, bounded only by constraints in comprehension in posing questions
Box 2.4 Basic Crime Types/Victimization Rates Estimated by National Crime Victimization Survey
The earliest form of the NCVS—the prototype survey fielded by the National Opinion Research Center (NORC), in support of the work of the President’s Commission on Law Enforcement and Administration of Justice—was particularly ambitious in its crime coverage. It aimed to cover all of the “major offenses as defined by the Part I crimes” of the UCR, “suitably translated into everyday language yet retaining the vital elements.” In addition, “a substantial number of Part II offenses were also included,” as were “several crimes at the boundary of the criminal law, such as consumer fraud, landlord-tenant problems, and family problems” (Ennis, 1967:7). This broad sweep was necessary, given the primary interest in comparison with extant UCR data, and enabled in part by deemphasizing some specific incident-level detail and conducting the interview in the classical single-respondent household survey manner (asking a single respondent whether they or anyone else in the household had experienced certain things). The specific crimes estimated in the pilot survey were: homicide, forcible rape, robbery, assaults (aggravated and simple), burglary, larceny (over $50 or under $50), vehicle theft, other automobile offenses (e.g., hit-and-run, reckless or drunk driving), malicious mischief or arson, forgery/counterfeiting, fraud, consumer fraud, other sex crimes, family problems (e.g., desertion, failure to provide child support), soliciting a bribe, building violations, and kidnapping.
Once started in “permanent” form as the National Crime Survey (NCS), the survey also developed stricter adherence to interviewing about personal victimization episodes (rather than asking about “anyone in the household”). Necessarily, this involved some revision of the list of covered crimes—most notably (and logically), the omission of homicide. Attention was focused on a short list (roughly 6–7) of fundamental crime types, the key categories identified in an early National Research Council (1976:App.D) review of the NCS.
Again, the categories were chosen to enable comparison (if not achieve lock-step conformity in label and definition) with UCR figures. The NCS designers stepped back a bit from the NORC prototype in their handling of rape (and sexual assault generally), which they considered a form of assaultive violence but declined to single out as a top-level category; then, as now, rape remains a sensitive topic, but the norms of the early 1970s (when the NCS took shape) treated it as a particularly taboo (and interview-disruptive) topic.
Over time, the importance of data collection on rape (and sexual assault) became clearer, and some concepts shifted to better match UCR practice (e.g., equating “robbery” with theft including an element of assault). Accordingly, by the time of the NCS’s extensive late-1980s redesign (and rebranding as the NCVS in 1992), the high-level short list of NCVS crimes had shifted to “rape, personal robbery, assault, personal and household larceny, burglary, and motor vehicle theft.” In line with that redesign, vandalism was briefly added to the list of crimes formally covered and estimated by the NCVS, but it was removed several years later.
One slight liability of the NCVS’s great flexibility is that there exist multiple (and slightly differing) lists of the current crime classification used in analyzing the survey. The codebook for the 2014 public NCVS data file details the level-of-seriousness hierarchy used in processing NCVS returns, as follows (Bureau of Justice Statistics, 2014a):
Personal Crime (Violent)
Personal Crime (Nonviolent)
Property Crime
This listing of covered crimes is generally consistent with the “crime classification taxonomy in the NCVS” articulated in the survey’s recent technical documentation (Bureau of Justice Statistics, 2014b:4). The technical documentation listing combines or collapses some specific offenses (e.g., elements 3, 4, and 15 above are combined into one single sexual assault measure, and the theft of items valued at less than $10 and at $10–49 are combined); notably, the technical documentation does not include element 16 (unwanted sexual contact without force) or the verbal threat elements 18–20. Still a third list exists in the U.S. Census Bureau (2012:C2-3) manual for NCVS interviewers—which simplifies but is likewise generally consistent with the codebook list. It, too, omits unwanted sexual contact—but adds verbal threat of personal robbery—as a violent crime.
to respondents and restrictions against making the interviews unduly burdensome. Yet, at the same time, the survey fundamentally queries respondents about events that may be enormously consequential in people’s lives but that are—in the statistical sense, and fortunately in the societal sense—relatively rare. For any given individual respondent, asked to report incidences of crime and violence in the past 6 months, the chances that the interview will yield zero “incident reports” are considerable, simply because there is no such activity for the respondent to report. Estimation based on the survey requires finding occurrences of incidents of a particular type and making inference from that sample—and so, of necessity, two competing dynamics operate at once. The flexibility of the survey’s content makes it possible to articulate very fine categories of crime, with different attributes such as weapon use or the value of property involved in an incident—but at the expense of precision, since estimates for fine categories are more volatile. Simultaneously, NCVS publications focus on coarser constructs such as all “violent crime,” all “property crime,” or all acts of serious violence between family members, because those broader categories (and changes over time within them) can be estimated more precisely.
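A stylized calculation illustrates the precision trade-off. Assuming, purely for illustration, simple random sampling (the actual NCVS design is clustered and weighted, which widens intervals further) and hypothetical round-number rates and sample size:

```python
# Stylized illustration: the 95% margin of error for an estimated
# victimization rate, under a simplifying assumption of simple random
# sampling. Rare, fine-grained crime categories carry much larger
# *relative* error than broad aggregates estimated from the same sample.

import math

def margin_of_error(rate, n_persons):
    """Approximate 95% margin of error for a proportion from n interviews."""
    se = math.sqrt(rate * (1 - rate) / n_persons)
    return 1.96 * se

n = 150_000  # hypothetical number of interviewed persons
for label, rate in [("all violent crime", 0.0200),
                    ("one fine-grained subtype", 0.0005)]:
    moe = margin_of_error(rate, n)
    print(f"{label}: {rate:.4%} +/- {moe:.4%} (relative error {moe / rate:.0%})")
```

Under these assumed numbers, the broad aggregate is estimated to within a few percent of its own value, while the rare subtype’s margin of error approaches a quarter of the estimate itself.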
Particularly during the late-1980s redesign effort, when early efforts to build NIBRS were occurring in parallel, the notion of formally increasing the base NCVS’s coverage of crime types was considered. The redesign consortium’s summary document acknowledged that designers briefly contemplated a thorough reimagining of the survey’s crime classification—and heard staff proposals on the same. In particular, the objectives of some of these proposed reclassifications were less about “matching” the UCR in generating comparative incidence statistics than about “facilitat[ing] study of households touched by crime (and alternatively households free from crime), home intrusion, crimes in which motor vehicles either were objects of crime or were used in the commission of crime, domestic violence, and crimes committed by strangers and nonstrangers.” Ultimately, the redesign group retained the existing crime classification structure and focused the major changes on the sample design and the structuring and content of the initial screener interview, to boost recall and reporting of difficult-to-measure victimizations. However, the consortium did note that the reimagining work did “illustrat[e] the ways in which our understanding of the dynamics of crime can be expanded beyond the information available from legally based classifications and demonstrate the utility of developing additional attribute-based classifications for criminal events” (Bureau of Justice Statistics, 1989:15–16). Short of overhauling the underlying crime classification, the redesign consortium gave strong consideration to simply augmenting the list of crime types derived from and estimated by the survey; the summary document (Bureau of Justice Statistics, 1989:15) mentions highest consideration being given to the addition of “bombings, parental kidnapping [sic], arson, fraud, and vandalism.” However, most new suggestions “did not appear to be promising for measurement, using victim survey methods, because of the rarity of the crime or concerns about the potential unreliability of victim reports.”
Over the years, BJS has acquired several direct mandates through Congressional action to collect certain information on criminal victimization in the NCVS. For instance, the Crime Victims with Disabilities Awareness Act of 1998 (P.L. 105-301) directed that the NCVS produce measures of “the nature of crimes against individuals with developmental disabilities” and “the specific characteristics of the victims of those crimes,” which led to the eventual addition of several questions to the survey (including one asking the respondent to judge whether any physical or mental impairment provided an opportunity for their victimization). Two years later, the Protecting Seniors from Fraud Act of 2000 (P.L. 106-534) explicitly mandated that BJS, “as part of each National Crime Victimization Survey,” collect information on “crimes targeting or disproportionately affecting seniors,” including “crime risk factors for seniors” such as the “time and locations at which crimes victimizing seniors are most likely to occur.” This mandate, in part, led to the eventual fielding of an Identity Theft Supplement to the NCVS for the first time in 2008. Most recently, U.S. House appropriators crafting the spending bill for Justice Department agencies for fiscal year 2015 added—as a condition for BJS funding as a whole—a provision “that beginning not later than 2 years after the date of enactment of this Act, as part of each National Crime Victimization Survey, the Attorney General shall include statistics relating to honor violence”—without specifying explicitly what is meant by “honor violence.” One common interpretation of honor violence is punishment for disobeying or disrespecting family dignity, particularly acts against women or girls in families. But the range of interpretations could also extend to “stand your ground”/self-defense laws. Even under a more generic definition of “honor violence” as violence committed to avenge a perceived slight to personal or family dignity, the explicit designation of the NCVS as the vehicle is surprising, both because construction of such a measure requires strong speculation by victims about the motives of their attackers and because the most extreme variant of honor violence (honor killing) would be out-of-scope for the NCVS (like all homicide). The honor violence provision survived in the final omnibus spending act for fiscal 2015 and became part of P.L. 113-235.
The phrasing above and in Box 2.4, speaking of the coverage of crimes in the “base” NCVS, is deliberate, because a great strength of the NCVS is its capacity to accommodate supplemental modules of questions—focused on different possible crime types or on the incidence of crime within unique populations—that can broaden the survey’s content. Typically conducted with sponsorship from some other federal agency, some of these topic supplements
have been purely one-shot efforts while others have been conducted on a somewhat more regular schedule, and the supplements have also provided a forum for survey questions and content to make their way into the base NCVS interviews. The history of NCVS supplements is described more fully in National Research Council (2008) and Bureau of Justice Statistics (2014b).
In terms of the types of crime for which the NCVS can generate measures, and as a data collection platform in general, the principal strength and weakness of the NCVS can be stated simply and directly. Its principal strength is its flexibility, both analytically and in terms of content. It is unique in its capacity to generate estimates using multiple units of analysis, including the incident, person, and household levels of analysis.
To “emulate” and facilitate comparison with the UCR, NCVS estimates can be analyzed at the incident-level, assessing levels and rates of change in incidence of crimes of particular types (not to mention that it can be used to generate different metrics of “harm” induced by such crimes other than the raw count). One of the survey’s original hallmarks was that it shed light on the commonly overlooked perspective of the individual-person victim, and can be used to study individual reactions to and losses due to crime. But the nature of its collection also enables the use of the household as the unit of analysis, and so can start to generate insights into household and family effects of crime and violence. Finally, the NCVS can provide a unique perspective on criminal offending. In its incident reports, the NCVS asks victims of crime about the number and character of criminal incidents they experience, gathering information about what victims know about the offenders involved in incidents. Certainly, there are limits to which victims know or can know with precision the motives or characteristics of offenders, but some useful information is possible, particularly for offenses involving face-to-face contact between victim and offender. Accordingly, though it is best known for its victimization measures, the NCVS (and its precursor, the NCS) has been used
to construct crime incidence rates (by different characteristics of offender) independent of those gathered in police-report data. Such data have been used to study the similarities and differences in criminal offending as estimated by police-report data and by victim survey data that include crimes not reported to the police (see, e.g., Biderman and Lynch, 1991; Lynch and Addington, 2007b; McDowall and Loftin, 1992). The NCVS also has been used to produce rates of violent criminal offending over time, from 1973 to the present, for males and females (e.g., Lauritsen et al., 2009) and for persons of specific race and ethnic groups (e.g., Steffensmeier et al., 2011), and for some age groups such as juveniles (e.g., Lynch, 2002). In addition, trends in these survey data have been compared to trends in police estimates of crime for some types of offenses across a limited number of areas, such as metropolitan places (e.g., Lauritsen and Schaum, 2005) and urban, suburban, and rural places (Berg and Lauritsen, 2015).
However, the principal weakness of the NCVS is that its flexibility can only be pushed so far: It is designed to be a nationally representative survey, and so is best suited to produce national-level estimates. It is, moreover, a survey that began in 1972 interviewing persons in 72,000 households but—principally for budgetary reasons—experienced cuts in sample size over the years. The smaller sample sizes, combined with the underlying premise of querying for details of statistically rare events, meant that, by the mid- and late-2000s, the NCVS was falling short of its basic goal of estimating the level and annual rate of change in criminal victimization. To reliably estimate changes in victimization levels, comparisons had to be made between two-year “windows” of collected survey data (National Research Council, 2009a:28). It should be noted clearly that these weaknesses are not yet completely remedied, but BJS is currently engaged in efforts to further address them: refining the analysis and the sample in order to derive some subnational estimates from the NCVS data and, within tight budgetary parameters, making a substantial effort to restore some part of the sample size cuts. In short, then, it remains true that the NCVS’s principal weakness is that it is sharply limited in its capacity for highly detailed annual geographic, demographic, or crime-type disaggregation, simply because a large number of events must occur in the data in order to yield reliable estimates. Individual states, and perhaps some large law enforcement departments, have fielded their own victimization surveys, but the NCVS sample is not designed to produce estimates of crime at the local-jurisdiction level that would be most useful to a variety of users. NCVS estimates certainly cannot be used for making comparisons to police-report-based estimates for a particular (arbitrarily small) city or police department precinct.
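To make the small-sample arithmetic concrete, the sketch below computes a victimization rate per 1,000 persons and an approximate 95 percent confidence interval. All counts are illustrative rather than actual NCVS figures, and the single design-effect constant is a stand-in for the survey's far more elaborate design-based variance estimation.

```python
# A minimal sketch (illustrative numbers, not actual NCVS data) of why
# statistically rare events demand large samples for reliable estimates.
import math

def rate_per_1000(victims: int, respondents: int) -> float:
    """Estimated victimization rate per 1,000 persons."""
    return 1000 * victims / respondents

def approx_se_per_1000(victims: int, respondents: int, deff: float = 2.0) -> float:
    """Rough standard error of the rate, inflated by an assumed design effect
    (the real NCVS uses complex design-based variance estimation)."""
    p = victims / respondents
    return 1000 * math.sqrt(deff * p * (1 - p) / respondents)

# Hypothetical: 300 victims of a rare crime among 150,000 interviewed persons.
v, n = 300, 150_000
rate, se = rate_per_1000(v, n), approx_se_per_1000(v, n)
print(f"{rate:.2f} per 1,000, 95% CI roughly +/- {1.96 * se:.2f}")
# The interval is wide relative to the rate itself, and halving the sample
# widens it further, which is why year-to-year change is hard to detect.
```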
The NCVS and the UCR Program’s data collections are omnibus in terms of their coverage of crime and related topics. Neither is fully comprehensive over the full extent of “crime,” yet each covers considerable terrain, with the intent of collecting information in a standard way. Yet crime, and related behavior, is of sufficient public importance that numerous other data collections have emerged over the years, to cover some very specific offense types in a more detailed manner or to focus attention on a specific victim (or offender) population group in more detail than is possible in the more omnibus, nationally compiled crime datasets. These data systems are not routinely thought of as being part of the nation’s crime statistics system but—nonetheless—might serve as sources of indicators of some types of crime. The data collections that touch on some aspect of “crime” comprise a very rough patchwork—the inevitable result of different data resources being developed for different purposes, to cover different constituencies or populations, as has been the developmental path for national statistics in the United States generally.
As we noted in previewing this chapter, the description of data resources in this section is not intended to be construed as comprehensive or exhaustive, and mention of a data collection here (to the exclusion of others) does not constitute any special “endorsement” of the data. Nor are these capsule summaries meant to be thorough reviews or assessments. As with the UCR and the NCVS, our primary emphasis is the coverage of crime-related information the data collections may contain but, given their relative unfamiliarity, we also try to go a step further in describing the ways in which the data are currently being used.
In this section, then, we describe an illustrative set of possible data resources—potential sources for crime indicators or critical contextual information that may inform gaps or weaknesses in extant BJS and FBI crime data series, or that may be uniquely suited to measure crime-related phenomena among special subpopulations. We begin by reviewing some examples of data systems that are analogous to the UCR in that they are compiled from law enforcement or public safety sources, but also focus on some particular population or set of offenses. We then turn to some measures from self-report surveys, of victimization like the NCVS, of offending (in some cases), or of perceptions of specific crimes or offenses. Finally, we turn to some resources that do not align neatly with either of these data collection models but that are, in some sense, either administrative surveys (queries made of facilities or institutions) or compilations of administrative records data outside the law enforcement/public safety sphere.
Clery Act Collections on Crime on College and University Campuses
Postsecondary education institutions began compiling and regularly disclosing statistics on crime and security on campuses as a result of the 1990 Crime Awareness and Campus Security Act (P.L. 101-542; 104 Stat. 2384). The reporting is effectively mandatory on most institutions because it was made a condition for institutions’ eligibility for federal student financial aid funds. In addition to required statements on campus security procedures, the 1990 law mandated that occurrences of six types of crime—the UCR Part I offenses of murder, rape, robbery, aggravated assault, burglary, and motor vehicle theft, albeit not explicitly labeled as such—be tallied for the current and the two preceding school years, to include “offenses reported to [either] campus security authorities or local police agencies.”7 In addition, the law directed that arrest statistics be collected regarding on-campus liquor law, drug abuse, and weapon possession violations. Though written to include offenses handled by law enforcement in the communities surrounding college campuses—and so overlapping in content with reports to the FBI under the UCR program—the campus crime reporting law vested collection authority directly in the U.S. Department of Education, where it continues to be operated by the Office of Postsecondary Education (OPE).
Eight years later, the crime reporting provisions were revised and expanded, and renamed in memory of Lehigh University freshman Jeanne Clery, who was murdered in her campus residence hall room in 1986 (P.L. 105-244, 112 Stat. 1742). In terms of crime covered, the new Clery Act8 expanded the list of reportable offenses to include manslaughter (distinct from murder) and replaced “rape” with “sex offenses, forcible or nonforcible.” The act also paralleled the structure of the Hate Crime Statistics Act and directed that the offense counts be disaggregated to include crimes “in which the person is intentionally selected because of the actual or perceived race, gender, religion, sexual orientation, ethnicity, or disability of the victim.” (Simultaneously, arson was added to the list of reportable offenses and the arrest counts on liquor, drug, or weapon possession charges were made subject to past-two-year reporting, but none of these were made subject to the hate crime categorization.)
__________________
7The legal text indicated that the crime statistics should be collected in deference to standards in the existing UCR Program: “The statistics . . . shall be compiled in accordance with the definitions used in uniform crime reporting system” of the FBI, as modified pursuant to the Hate Crime Statistics Act (P.L. 101-542; 104 Stat. 2387).
8Formally, the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act; codified at 20 USC § 1092(f) with companion U.S. Department of Education rules for compliance at 34 CFR § 668.46.
In terms of crime coverage, then, the campus crime statistics collected under the Clery Act are closely patterned after the UCR Summary Reporting System, with some additions directed by the enabling law. That said, OPE’s website for dissemination of the data (http://ope.ed.gov/security/) takes care to caution users against directly comparing UCR figures with the OPE-compiled data, because the latter includes a mixture of data from local law enforcement agencies (which should report data to UCR) and campus security forces (which may not be so obligated). The Clery Act data also differ from the UCR and other traditional crime statistics programs in that their primary means of dissemination is dictated by law: The same law that requires the data to be collected mandates that an annual security report be published and disclosed/disseminated by all the individual schools to not just current students and employees but to “any applicant for enrollment or employment upon request” (20 U.S.C. § 1092(f)(1)). There is not, however, a standalone document akin to Crime in the United States that draws inference from the nationally compiled data. In addition to the “data analysis cutting tool” on the OPE’s website, the Clery Act data are accessible through the National Center for Education Statistics’ College Navigator interface (https://nces.ed.gov/collegenavigator/).
Crime Reporting Under the Uniform Code of Military Justice

Members of the U.S. armed forces, personnel at U.S. military installations, and enemy combatants and prisoners in military custody9 are subject to the adjudication processes outlined in the Uniform Code of Military Justice (UCMJ), comprising Title 10, Chapter 47 of the U.S. Code. Subchapter X of the UCMJ lists a battery of “punitive articles”—in essence, a set of sentencing guidelines dictating which offenses are governed by a court-martial and which incur other penalties; in so doing, the UCMJ lays out an array of crime types unique to the military context, as described in Box 2.5.
As mentioned above in Section 2.1.4, the enactment of the Uniform Federal Crime Reporting Act of 1988 did not result in much increased reporting to the UCR Program—but it did partially spur the development of what would become the Defense Incident-Based Reporting System (DIBRS). DIBRS was principally developed within the U.S. Department of Defense (DoD) to coordinate and bring order to the inputs from the numerous law enforcement agencies that serve within and support the functions of the nation’s armed services. But a central data repository system also became essential to meet a number of legal mandates—not just reporting to the FBI under the Uniform Federal Crime Reporting Act, but also to satisfy recordkeeping
__________________
9This is a highly simplified version of the description of all persons governed by the Uniform Code of Military Justice, including detailed discussion of what exactly it means to be a “member” of the armed forces; the fuller description is at 10 U.S.C. § 802.
Box 2.5 Crime Types Uniquely Defined by the Uniform Code of Military Justice
The following are among the crime types (or “punitive articles”) defined by the Uniform Code of Military Justice (UCMJ) that have designated codes in the Defense Incident-Based Reporting System (DIBRS) but that would “convert” to category 90Z (“all other offenses”) in the National Incident-Based Reporting System (NIBRS; as per Table 2.1):
Other crimes defined in the UCMJ either directly match NIBRS categories (e.g., murder and robbery) or map reasonably closely to them (e.g., the UCMJ offense of “drunk on duty” [10 U.S.C. § 912] as applied to persons “other than a sentinel or look-out,” which maps to NIBRS’ “drunk and disorderly” code). In addition to personal and property crimes, the UCMJ defines what are generally termed inchoate offenses, as they apply to other UCMJ-specific offenses. For example, the UCMJ defines the inchoate offense of (criminal) solicitation (10 U.S.C. § 882), covering the solicitation or advising of other persons to desert, mutiny, misbehave before the enemy, or commit acts of sedition. Similar wording holds for (criminal) conspiracy or functioning as an accessory.
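The offense “conversion” described in Box 2.5 amounts to a crosswalk with a default category. The following minimal sketch illustrates the idea; the DIBRS-side labels are hypothetical stand-ins, and only the 90Z fallback for military-specific offenses is taken from the text and Table 2.1 (the other NIBRS target codes shown are the standard published NIBRS codes for those offenses).

```python
# A minimal sketch of a DIBRS-to-NIBRS offense crosswalk, in the spirit of
# Table 2.1; the DIBRS-side labels here are hypothetical stand-ins.
DIBRS_TO_NIBRS = {
    "murder": "09A",    # murder/nonnegligent manslaughter maps directly
    "robbery": "120",   # robbery maps directly
}

def to_nibrs(offense: str) -> str:
    """Military-specific offenses (desertion, mutiny, etc.) have no NIBRS
    counterpart and fall through to 90Z, 'all other offenses'."""
    return DIBRS_TO_NIBRS.get(offense, "90Z")

print(to_nibrs("murder"))     # -> 09A
print(to_nibrs("desertion"))  # -> 90Z
```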
requirements imposed by the Victims’ Rights and Restitution Act of 1990 and the Brady Handgun Violence Prevention Act. On October 15, 1996, DoD published Directive 7730.47, “Defense Incident-Based Reporting System (DIBRS),” to introduce the system and implement legal requirements, and to enable responsiveness to anticipated congressional and DoD information needs.
Per a technical document regarding the system (U.S. Department of Defense, 2010), DoD areas with responsibility for populating and reporting to DIBRS run the gamut of the internal military justice system:
DIBRS also is meant to enable the Department of Defense to track a criminal incident from initial allegation through final disposition. It includes data segments on the law enforcement, criminal investigation, judicial, and corrections phases. These segments from the later phases of the military justice process have substantially more missing data than those segments required for NIBRS.10 Contributions to DIBRS from within DoD are mandatory, in contrast to the voluntary participation of states and localities in NIBRS, suggesting coverage issues for the core data elements may be less severe.
A DoD Inspector General report in late 2014 noted that “10 years of DoD criminal incident data have not been provided to the FBI for inclusion in the annual uniform crime reports” (U.S. Department of Defense, Inspector General, 2014). As of August 2015, DoD remains in the process of obtaining FBI certification for DIBRS to clear the way for transmittal of its criminal incident data for inclusion in NIBRS as required by the Uniform Federal Crime Reporting Act of 1988 and DoD Instruction 7730.47. The remaining hurdle to certification is resolution of geographic tags to avoid inadvertent attribution
__________________
10This inference on higher levels of missingness was made by Defense Human Resources Activity (DHRA) staff, describing DIBRS for the panel at its meeting on August 4, 2015.
of incidents to the city or state in which a military installation is located, as opposed to the installation itself or the military service.
DoD produces no regular reports using DIBRS data that track trends in crime in the U.S. military. There are no public access files for DIBRS, whereas NIBRS data have been released through the Inter-university Consortium for Political and Social Research. We consequently have found no secondary analyses of the data outside of government that speak to the system’s strengths and weaknesses.
National Fire Incident Reporting System

Responding to the recommendations of a National Commission on Fire Prevention and Control, Congress determined—in the Federal Fire Prevention and Control Act of 1974 (P.L. 93-498; 88 Stat. 1535)—that “a national system for the collection, analysis, and dissemination of fire data is needed to help local fire services establish research and action priorities.” The 1974 act established a National Fire Prevention and Control Administration within the Department of Commerce, and directed that this agency establish a National Fire Data Center to “gather and analyze” data on the “frequency, causes, spread, and extinguishment of fires,” as well as deaths, injuries, and property losses incurred by fires (among other firefighting-specific information). In response, the first-generation National Fire Incident Reporting System (NFIRS) was created in 1976, compiling voluntary data submissions from local fire departments in the same manner as the UCR Program collects voluntary submissions from law enforcement agencies. As described in Box 2.3, the agency was renamed the U.S. Fire Administration (USFA) by law in 1978 and simultaneously was given a strong mandate to collect information on the specific crime of arson in NFIRS, prior to the enactment of separate legislation that decreed arson be designated a Part I offense in the UCR Program.
Today, NFIRS continues to be coordinated by the USFA, though the USFA’s administrative placement has shifted over the years. It is now housed within the Federal Emergency Management Agency (FEMA), in turn overseen by the U.S. Department of Homeland Security. The National Fire Information Council (NFIC)—originated in 1979, and composed of a group of (volunteer) representative NFIRS users—serves as a liaison between USFA and the broader community of NFIRS participants, though with less formal standing in policy decisions than the UCR’s Advisory Policy Board.
Generally, NFIRS parallels the UCR Program in construction: It relies on the voluntary contribution of data from local fire departments. Over 20,000 fire departments from all 50 states submit data to NFIRS. Most states relay information to NFIRS through the state fire marshal’s office. Due to budgetary constraints, however, a few states have discontinued NFIRS support at the state level but encourage local departments to continue participation via
direct submission to FEMA’s Data Entry Browser Interface (DEBI). Arizona, Nevada, and Washington are a few of the states that no longer provide state-level support. To supplement the state coverage, NFIRS also strives to record data for 35 major metropolitan areas with populations of 500,000 or greater.
It is estimated that 44 percent of total fire incidents are reported in NFIRS—a total that exceeds 900,000 incidents but that, like NIBRS, represents a minority share of the total incidents in the country and a set of incidents that is difficult to characterize as representative of any broader population.11 In part, low NFIRS participation may be attributable to an extensive reporting burden; the NFIRS instruments ask for substantially more information than NIBRS for a particular incident, and cover a much broader scope of events. Though originally motivated by the desire for better quantification of fire and arson incidents, NFIRS has developed into a record system of all functions and activities performed by local fire departments, from emergency medical services (EMS) runs to hazardous material responses to “first responder” calls not actually involving a fire. NFIRS has a modular structure, with fire department personnel intended to fill out a core/Basic module for every response incident, followed by detailed question modules for applicable circumstances.
In the current “NFIRS 5.0” system, a core/Basic module (dubbed NFIRS-1) is completed by fire department personnel for each incident to which they have responded. NFIRS-1 prompts for basic identifier information (e.g., an identifier code for the reporting department, the geographic location, and a rough categorization of the incident). It also asks for information about the aid given or received and the actions taken by fire department personnel; whether dollar/property losses were incurred or whether fatalities resulted; and whether any hazardous materials were released. The basic module could also include “incidents” not actually involving a fire (e.g., first responder calls) or very minor incidents (e.g., “contained no-loss fires,” such as food-on-stove extinguished when fire department arrives). In addition to the Basic Module, NFIRS contains nearly a dozen specific additional “modules” that may apply to particular incidents. The second, “Fire” Module (NFIRS-2), starts the process of documenting actual fire incidents, including details about the property and what is known about human factors involved in the ignition of the fire. Depending on the type of land/property involved, a Structure Fire or a Woodland Fire Module would be completed. If the fire resulted in a casualty, then either the Civilian Fire Casualty Module or the Fire Service Casualty Module would be completed; both of those involve the fire department rendering an opinion on the causes of the injury leading to death, including human and contributing factors. Depending on the situation and
__________________
11This level of coverage was described by USFA staff in describing NFIRS at the panel’s meeting on August 4–5, 2015.
the specific equipment and staff put into play, the HazMat, Apparatus, or Personnel Modules would be completed.
In addition to some of the information collected on NFIRS-1 and NFIRS-2 (and the associated Property Type module), interest in NFIRS as a companion measure of arson (or, generally, malicious burning or other property-damage crimes involving the use of fire) centers on two other modules:
Retrieval of such NFIRS data and subsequent comparison with/attribution to incidents collected through other reporting sources is difficult because of NFIRS’ unique structure.
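The modular logic just described reduces to a simple dispatch rule: a Basic module for every response, plus whichever detail modules the incident's circumstances trigger. Below is a minimal sketch under that reading; the attribute names are hypothetical, and the module list is simplified to those mentioned above.

```python
# A minimal sketch of NFIRS 5.0 module selection as described in the text;
# incident attribute names are hypothetical, and the logic is simplified.
def applicable_modules(incident: dict) -> list[str]:
    modules = ["Basic (NFIRS-1)"]  # completed for every response incident
    if incident.get("is_fire"):
        modules.append("Fire (NFIRS-2)")
        if incident.get("property_type") == "structure":
            modules.append("Structure Fire")
        elif incident.get("property_type") == "woodland":
            modules.append("Woodland Fire")
        if incident.get("civilian_casualty"):
            modules.append("Civilian Fire Casualty")
        if incident.get("fire_service_casualty"):
            modules.append("Fire Service Casualty")
    if incident.get("hazmat_release"):
        modules.append("HazMat")
    return modules

print(applicable_modules(
    {"is_fire": True, "property_type": "structure", "civilian_casualty": True}
))  # -> ['Basic (NFIRS-1)', 'Fire (NFIRS-2)', 'Structure Fire', 'Civilian Fire Casualty']
```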
National Child Abuse and Neglect Data System

The original Child Abuse Prevention and Treatment Act (CAPTA), enacted in 1974, required a new center within the U.S. Department of Health, Education, and Welfare to “make a complete and full study and investigation of the national incidence of child abuse and neglect, including a determination of the extent to which incidence of child abuse and neglect are increasing in number or severity” (P.L. 93-247; 88 Stat. 5). A 1988 revision of CAPTA (formally the Child Abuse Prevention, Adoption, and Family Services Act; P.L. 100-294, codified at 42 USC § 5104 et seq.) required the Secretary of Health and Human Services (HHS) to establish and appoint a director for the National Center on Child Abuse and Neglect, as well as establish a national clearinghouse for information relating to child abuse. The general task of coordinating information (from state and local resources) on national-level incidence of child abuse and neglect swelled in magnitude and specificity as CAPTA was periodically revised over the years. The current specifications of data required to be collected by the U.S. Department of Health and Human Services on “the national incidence of child abuse and neglect” include 11 specific dimensions, ranging from “the incidence of substantiated and unsubstantiated reported child abuse and neglect cases” to “the extent to which reports of suspected or known instances of child abuse . . . are being screened out solely on the
basis of the cross-jurisdictional complications” of multiple agencies (42 USC § 5105(a)(1)(O)). Though the legislation beginning in 1974 laid the groundwork for a data collection system, it would take until enactment of P.L. 111-320, the CAPTA Reauthorization Act of 2010, for amendment text to formally define “child abuse and neglect” for these purposes: “any recent act or failure to act on the part of a parent or caretaker, which results in death, serious physical or emotional harm, sexual abuse or exploitation, or an act or failure to act which presents an imminent risk of serious harm.”
The specific data system established to meet these legislatively mandated requests is the National Child Abuse and Neglect Data System (NCANDS), under which HHS’s Administration for Children and Families (ACF) coordinates data inputs from state child welfare agencies. In its basic structure, NCANDS uncannily parallels both the core mission of BJS and the methodology of the UCR Program. Under its information clearinghouse authority under law, HHS (through ACF) is required to “annually compile and analyze research on child abuse and neglect and publish a summary of such research,” to promulgate “materials and information to assist State programs for investigating and prosecuting child abuse cases,” and to “establish model information collection systems.” That mission is akin to BJS’s authorizing legislation, emphasizing the function of providing technical assistance to individual communities. Meanwhile, the broad-brush structure of NCANDS participation—local child welfare agencies and authorities reporting to state agencies, which in turn submit data to ACF and NCANDS—is similar to the report-through-state model of the UCR. Operationally, NCANDS does differ markedly from the UCR model in frequency; its contributors are asked to provide counts and information for an entire federal fiscal year (October 1–September 30) at a time rather than on a monthly basis.
Both NCANDS and the UCR Program have a strong state-level coordination role, though that role is arguably stronger for the former than the latter. To wit, there is no provision for local agencies to supply NCANDS data without going through the state. Moreover, state statutes are commonly such that local reporting to the state-level child welfare agency is not optional. The level of state compliance with NCANDS reporting has been and remains impressive—starting with 46 states submitting in 1990, and including all 50 states, the District of Columbia, and Puerto Rico today. That said, it is important to note that the final step in the data relay, from the states to NCANDS, is strictly voluntary.
Established to provide insight on the highly specific crimes of child abuse and neglect, NCANDS features two functional dynamics that merit brief mention. First, HHS and ACF arranged for the development and availability of “local”-agency information systems in order to facilitate uniform data submission. Specifically, HHS developed Statewide and Tribal Automated Child Welfare Information Systems (SACWIS) software, and a provision in
1993’s Omnibus Budget Reconciliation Act (P.L. 103-66) offered states financial incentives to implement a SACWIS, leaving open the option of some system customizability in order to incorporate state-specific data collection items. As of April 2015, 35 states and the District of Columbia have fully operational SACWIS, and three states’ SACWIS are still in development; the remaining 12 states and Puerto Rico use non-SACWIS models to submit their data to NCANDS and other child welfare systems. Second, NCANDS is an interesting case study in moving from a mixed detail-and-summary-type reporting system to full incident-based reporting. For several years, NCANDS operated two major file types in parallel—a detailed Case Level Data File (also referred to as the Child File) and a Summary Data Component, the latter explicitly intended to collect summary incidence information from states lacking the capacity to submit the detailed case-level files. However, the NCANDS program was able to discontinue the Summary Data Component in 2012, every state having successfully acquired the capability for detailed reporting.
Child File entries consist of cases that received a disposition from a Child Protective Services (CPS) entity within any particular state. This file collects information on the reporting source, type of abuse or neglect, type of allegation (indicated, substantiated, unsubstantiated, victim, nonvictim, etc.), victim demographics, family court history and select family risk factors, child fatalities, and perpetrator information. The Child File has seven primary abuse and neglect categories: physical abuse, neglect or deprivation, medical neglect, sexual abuse, psychological or emotional abuse, other, and unknown. These seven categories constitute the full available detail on the means of abuse; NCANDS does not collect more specific data on the method of abuse within these headings. Just as the UCR Program includes a rolling census of sorts of law enforcement personnel, so too does NCANDS include a component that gathers some contextual information about the Child Protective Services entities at work in a state. This Agency File gathers aggregated state-level data on preventative services, screening, and other topics, including information from agencies operating outside of the state government’s CPS structure.
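To fix ideas, a Child File record might be modeled as below; the field names and types are assumptions for illustration, not the actual NCANDS file layout, and only the seven maltreatment categories are taken from the text.

```python
# A minimal, hypothetical model of a Child File record; field names and
# types are assumptions, not the actual NCANDS layout.
from dataclasses import dataclass

MALTREATMENT_CATEGORIES = {
    "physical abuse", "neglect or deprivation", "medical neglect",
    "sexual abuse", "psychological or emotional abuse", "other", "unknown",
}

@dataclass
class ChildFileRecord:
    report_source: str             # who reported the case
    maltreatment_type: str         # one of the seven categories above
    allegation_disposition: str    # e.g., substantiated, unsubstantiated
    victim_age: int
    child_fatality: bool
    perpetrator_relationship: str

    def __post_init__(self):
        if self.maltreatment_type not in MALTREATMENT_CATEGORIES:
            raise ValueError(f"unknown category: {self.maltreatment_type}")
```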
NCANDS data are stored, aggregated, and processed at the National Data Archive on Child Abuse and Neglect (NDACAN) at Cornell University. NDACAN is funded by a grant from the Children’s Bureau. The data center conducts secondary analysis of NCANDS data and provides datasets and technical support, free of charge, to researchers and data users. Other units in HHS are, arguably, the principal consumers of NCANDS data, making extensive use of them for reports required by law (e.g., the Child Welfare Outcomes report annually submitted to Congress as required by the Adoption and Safe Families Act of 1997, compiling data from NCANDS and other sources), regular “omnibus” publications (e.g., the annual Child Maltreatment report published by ACF’s Children’s Bureau, which typically presents analyses
of data lagged by two calendar years), and a variety of special topic reports. Other users of NCANDS data include the American Humane Association, which frequently updates “fact sheets” on child abuse and neglect using the compiled data.
Self-Report Surveys of Criminal Offending

Independent of the NCVS, which focuses exclusively on obtaining self-reports of recent victimization experiences, there are very few national-level self-report estimates of criminal offending. Five notable exceptions are:
__________________
12For additional information, see http://www.colorado.edu/ibg/human-research-studies/national-youth-survey-family-study.
13See http://www.bls.gov/nls/ for general information on the surveys and https://www.nlsinfo.org/content/cohorts/nlsy97/topical-guide/crime/crime-delinquency-arrest for discussion specific to crime and delinquency.
14Additional information on the studies can be found at http://www.monitoringthefuture.org/.
self-reported delinquency, most of the antisocial behavior information contained in the MTF is focused on drug and alcohol use.
In addition, the annual National Survey on Drug Use and Health (NSDUH), administered by RTI International with sponsorship from the U.S. Substance Abuse and Mental Health Services Administration (SAMHSA), generates information on the use (and abuse) of “legal” drugs (alcohol and tobacco) as well as controlled substances. The survey targets the population aged 12 and older, and makes use of computer-assisted self-interviewing to try to actively promote the privacy of respondent answers.
Although each of these data sources has served as an important resource for understanding the correlates of delinquent and criminal activity, each is limited in some way for purposes of estimating levels of crime. Some of
__________________
15Additional information on Add Health may be found at http://www.cpc.unc.edu/projects/addhealth.
16Additional information on the YRBSS is available at http://www.cdc.gov/healthyyouth/data/yrbs/index.htm.
these limitations are associated with methodological problems common to self-report surveys, such as sample biases and errors associated with respondent under- and over-reporting (see Thornberry and Krohn, 2000); other limitations are due to study-specific differences. For example, longitudinal surveys such as the NYS and Add Health suffer from sample attrition over time, and low levels of self-reported involvement in violence suggest that survey participation may not be fully representative of the population. The MTF self-report information estimates only certain delinquent and antisocial behaviors and is limited to younger persons in schools. The NLSY does not contain sufficient information on a large array of delinquent or criminal acts, and annual assessments are not routinely conducted. Therefore, although there have been efforts to obtain self-report information directly from persons about their involvement in criminal offending, these data collections are not capable of providing ongoing, reliable national-level estimates of crime.
As the NCVS sheds some light on the characteristics of offenders, other national surveys provide specialized glimpses at crimes and offenders, particularly in the area of family and intimate partner violence. Like the NCVS, the focus of these studies is on measuring victimization incidents that are often classifiable as “crime,” along with some important information about the offenders in such incidents (such as victim-offender relationship). In the area of child victimization, the National Incidence Studies of Missing, Abducted, Runaway, and Thrownaway Children (NISMART) have twice measured abductions of children by strangers and nonstrangers (see, e.g., Hammer et al., 2004), once in 1988 and a second time in 1999. The Developmental Victimization Survey, conducted once in early 2003, used a combination of self-reports and proxy reports to measure the extent to which children younger than age 12 have experienced various forms of victimization (Finkelhor et al., 2005). Like other victim surveys, these data include incidents that are captured neither in official records by the police or child welfare agencies nor in the NCVS, which excludes respondents under the age of 12.
Violence against women and intimate partner violence have been captured in various national surveys, the largest including the National Violence Against Women Survey (Tjaden and Thoennes, 2000) and the National Intimate Partner and Sexual Violence Survey (NISVS, http://www.cdc.gov/violenceprevention/NISVS/index.html). Several other violence-against-women surveys, including one measuring the sexual victimization of college students (Cullen et al., 2001), are summarized by National Research Council (2004a).
It is important to note, in listing these various surveys, that they vary greatly in terms of frequency of administration and sample size. Some, like NSDUH, are ongoing surveys that are meant to produce ongoing data series, but others—either by design or as a result of cost of administration—have been strictly one-shot affairs. Hence, the surveys can produce radically different estimates of what is purportedly the same phenomenon and, with a one-shot
survey, it can be nearly impossible to conclude that one source is inherently better or more accurate than another. That said, the time-limited, one-shot surveys should not necessarily be denigrated; indeed, a well-designed one-shot survey with a solid research base can be highly valuable in pointing out deficiencies in the other, ongoing surveys and studies.
Federal Trade Commission Consumer Fraud Surveys

The Federal Trade Commission (FTC) has two data collections that may serve as partial indicators of the occurrence of fraud in the United States. We discuss the Consumer Sentinel Network database in the next section, but focus here on the series of survey-based measures that the FTC has sponsored over recent years.
The FTC in 2003 commissioned the first of three surveys of consumer fraud in the United States to understand the extent to which complaints in the Consumer Sentinel database are representative of consumers’ experiences with fraud in the marketplace, to assess the extent to which these experiences vary across demographics, and to identify the determinants of victims filing a complaint with authorities (Anderson, 2004, 2007, 2013). The surveys’ samples were large enough to enable some comparison of victimization by race and ethnicity, but not to make subnational estimates by geography.
The first survey explicitly asked respondents about 10 types of fraud that covered those that appeared most frequently in the FTC’s complaint database and had led to FTC enforcement actions. These included:
The survey also asked about “slamming,” where a consumer’s long-distance telephone service was switched from one provider to another without permission, and two situations that often suggest a fraud may have occurred: paying for a product or service that a consumer does not receive or being billed for a product, other than the specific products identified above, that a consumer had not agreed to purchase.
The survey, conducted on FTC’s behalf by Public Opinion Strategies, had 2,500 respondents obtained via random digit-dialing sampling. The response rate is not included in the documentation available on the FTC’s website. No further information is available on the sampling frame.
The FTC’s second consumer fraud survey was conducted in late 2005 by Synovate with 3,888 respondents (Anderson, 2007). The final report indicates that 52,986 phone numbers were called, an individual number was called up to seven times to make contact, and the response rate was 23 percent, using the American Association for Public Opinion Research’s Response Rate 3. The 2005 and 2003 surveys are not directly comparable. The second survey reformulated questions related to fraudulent advance fee loans, fraudulent credit card insurance, fraudulent credit repair, unauthorized billing for Internet services, fraudulent prize promotions, and fraudulent business opportunities and pyramid schemes. The questionnaire also addressed four additional frauds unexamined in the initial survey: fraudulent weight-loss products, fraudulent work-at-home programs, fraudulent foreign lotteries, and fraudulent debt consolidation.
The FTC’s third consumer fraud survey was conducted in late 2011 and early 2012 by Synovate with 3,638 respondents (Anderson, 2013). The final report indicates 51,192 working telephone numbers were called, with at least seven attempts if no one answered the number, producing a response rate of 14 percent, using the American Association for Public Opinion Research’s Response Rate 3. This survey retained the frauds covered in the 2005 iteration with the addition of questions on mortgage relief fraud and grant fraud.
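Both reports cite the American Association for Public Opinion Research's Response Rate 3 (RR3), which counts only completed interviews in the numerator and discounts cases of unknown eligibility by e, the estimated share of them that are actually eligible. A minimal sketch follows, with purely illustrative disposition counts rather than the FTC's tallies.

```python
# A minimal sketch of AAPOR Response Rate 3; all counts are illustrative.
def aapor_rr3(completes: int, partials: int, refusals: int,
              noncontacts: int, other: int, unknown: int, e: float) -> float:
    """RR3 = I / ((I + P) + (R + NC + O) + e * (UH + UO)): only completes in
    the numerator; unknown-eligibility cases are discounted by e."""
    return completes / (completes + partials + refusals + noncontacts
                        + other + e * unknown)

# Illustrative dispositions for a telephone survey of this rough size.
print(f"{aapor_rr3(3638, 200, 9000, 12000, 400, 18000, e=0.5):.1%}")  # ~10.6%
```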
FTC released a single report on each survey (Anderson, 2004, 2007, 2013). One article by FTC staff using data from the initial survey, examining the demographics of identity theft, appeared in the Journal of Public Policy and Marketing (Anderson, 2006). FTC staff published another article using data from the second survey in the Journal of Economic Perspectives (Anderson et al., 2008). The survey data are not publicly available and do not appear to have been analyzed elsewhere.
Several statutes authorize the FTC to collect and maintain consumer complaints. Section 6(a) of the act that established the FTC (codified at 15 U.S.C. § 46(a)) authorizes the Commission to compile information concerning, and to investigate, business practices in or affecting commerce, with certain exceptions. Information relating to unsolicited commercial email is collected pursuant to the FTC’s law enforcement and investigatory authority under the CAN-SPAM Act of 2003, 15 U.S.C. § 7704. In addition, the Identity Theft and Assumption Deterrence Act of 1998, 18 U.S.C. § 1028 note, mandates the Commission’s collection of identity theft complaints, and the Fair and Accurate Credit Transactions Act of 2003, P.L. 108-159, 117 Stat. 1952, requires the sharing of information with consumer reporting agencies. Amendments to the Telemarketing Sales Rule (TSR), 16 C.F.R. Part 310, required the implementation of the National Do Not Call (DNC) Registry and collection of consumer telephone numbers and DNC-related complaints. The TSR also requires telemarketers to access the National Do Not Call Registry. Telemarketer SSN/EIN collection is mandatory under 31 U.S.C. § 7701. User names, passwords, and other system user data collected from CSN users accessing the secure system are collected pursuant to the Federal Information Security Management Act (FISMA), 44 U.S.C. § 3541.
The Consumer Sentinel Network

The FTC’s Bureau of Consumer Protection in 1997 deployed the Consumer Sentinel database to help both it and law enforcement agencies identify and address the most pressing and newly emerging consumer issues (Muris, 2006). By the end of 1999, Consumer Sentinel contained more than 200,000 reports (Federal Trade Commission, 1999). By the end of 2014 the collection consisted of more than 10 million complaints—complaints older than five years are purged biannually—with inflows of more than 2.5 million complaints in calendar year 2014 (Federal Trade Commission, 2015). The collection includes contributions from Canada, but geographic tags allow analysis by location of the victim and of the alleged perpetrator.
The data consist of unverified complaints filed by consumers directly to the FTC, along with those filed with numerous state law enforcement agencies, federal agencies and departments (such as the Consumer Financial Protection Bureau, the FBI’s Internet Crime Complaint Center, and the Departments of Defense, Education, and Veterans Affairs), and nongovernmental organizations (such as Better Business Bureaus, Green Dot, MoneyGram International, and Western Union; Federal Trade Commission, 2015).17 Roughly one-third of the
__________________
17The full list of designated Consumer Sentinel Network data contributors includes: 18 state attorney general or public safety offices; 5 other state or local regulatory agencies (e.g., Los Angeles County Department of Consumer and Business Affairs and Tennessee Consumer Affairs Division);
complaints were filed directly with the FTC. One-fifth of the complaints come from PrivacyStar, a company with a smartphone application that enables users to identify and block unwanted calls. The next largest complaint contributors are Better Business Bureaus (15 percent), the Internet Crime Complaint Center (9 percent), and the Consumer Financial Protection Bureau (8 percent).18 There appears to be some effort to eliminate duplicate reports on the front end of direct reporting to the FTC,19 but it is not clear to what extent reporting from other data streams contributes to duplicate complaints.
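The deduplication rules are not publicly specified, so the sketch below shows only one naive approach: fingerprinting each complaint on a few normalized fields. The field names are hypothetical, and nothing here reflects the FTC's or its contractor's actual procedures.

```python
# A naive, hypothetical duplicate-complaint filter; the key fields are
# assumptions and do not reflect the FTC's actual rules.
import hashlib

def complaint_key(c: dict) -> str:
    """Fingerprint a complaint on consumer contact, company, and date."""
    basis = "|".join([
        c.get("consumer_email", "").strip().lower(),
        c.get("company_name", "").strip().lower(),
        c.get("complaint_date", ""),
    ])
    return hashlib.sha256(basis.encode("utf-8")).hexdigest()

def deduplicate(complaints: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for c in complaints:
        key = complaint_key(c)
        if key not in seen:      # keep the first occurrence only
            seen.add(key)
            unique.append(c)
    return unique
```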
There do not appear to be any codebooks or technical specifications for the Consumer Sentinel database that reveal the complete structure and content. The FTC does have on its website a file that includes the product service codes applied to complaints—a total of 113 as of July 2014, including categories for complaints about credit cards; food; funeral services; banks, savings and loans, and credit unions; and video games.20 A review of sample complaint forms suggests that the data files contain copious amounts of personally identifying information on the victims and alleged perpetrators (Federal Trade Commission, 2004). Directions on the FTC site about filing a complaint ask the filer to be prepared to provide (https://www.ftc.gov/faq/consumerprotection/submit-consumer-complaint-ftc):
Consumer Sentinel data access is available to any federal, state, or local law enforcement agency and select international law enforcement authorities. As
__________________
at least 5 federal agencies or affiliated bureaus (Consumer Financial Protection Bureau, Internet Crime Complaint Center, U.S. Department of Defense, U.S. Department of Veterans Affairs, U.S. Postal Inspection Service); 2 Canadian agencies (the Anti-Fraud Centre and the Competition Bureau); and a mix of private-sector firms and interest groups (e.g., Council for Better Business Bureaus, MoneyGram International, Privacy Rights Clearinghouse, Xerox Corporation) (see https://www.ftc.gov/enforcement/consumer-sentinel-network/data-contributors [9/2/2015]). In addition, several entities refer complaints to the FTC, though they are not designated as data contributors; these range from Catholic Charities USA and the Financial Fraud Enforcement Task Force to the Internal Revenue Service and the U.S. Senate Special Committee on Aging.
18Share of complaints contributed are based on calendar year 2014, as documented in Appendixes A2 and A3 of Federal Trade Commission (2015).
19The FTC First Amended CRC SOW for its Consumer Response Center notes requirements for the contractor to eliminate some forms of duplication of complaints, such as those arriving from a single IP address. See https://www.ftc.gov/sites/default/files/attachments/hot-topics/dnc_crc_sow_first_amend_0.pdf.
20Federal Trade Commission. Consumer Sentinel Product Code Descriptions. July 2014. See https://www.ftc.gov/system/files/attachments/data-sets/csn-psc-full-descript.pdf.
noted before, the collection mechanism was not designed to support traditional crime analysis, but rather to support investigations and decisionmaking about where to focus resources to combat fraud against consumers. The FTC publishes an annual data book in PDF and makes the aggregated data available in Excel format. The data books are often cited by the media in stories about fraud, but there are no public data files available for further analysis.
Financial Crimes Enforcement Network Reports

The Financial Crimes Enforcement Network (FinCEN), established in 1990, is tasked with safeguarding the financial system from illicit use, combating money laundering, and promoting national security through the collection, analysis, and dissemination of financial intelligence and the strategic use of financial authorities. The Bank Secrecy Act (BSA), composed of the Currency and Financial Transactions Reporting Act of 1970, as amended by Title III of the USA PATRIOT Act of 2001 and other legislation, requires banks and other financial institutions to file reports with FinCEN. These reports, in turn, have been found useful by the Treasury Department in its criminal, tax, and regulatory investigations and proceedings, as well as in certain intelligence and counterterrorism matters. Of the data series produced under the BSA, Suspicious Activity Reports (SARs) generate the data most likely to reflect a range of criminal activities and, as such, to prove useful in the creation of crime indicators.
FinCEN is responsible for the central collection, analysis, and dissemination of data reported under the Bank Secrecy Act. Despite its nomenclature, FinCEN’s core task is not the determination, prosecution, or measurement of crime per se, but rather—through analysis of a series of reports—to serve as a bellwether of activities that may subsequently be determined to be criminal. The types of reports FinCEN collects include:
FinCEN since April 1996 has collected SARs filed by banks and other financial institutions that identify possible criminal activity affecting or conducted through the reporting institution (Financial Crimes Enforcement Network, 1998). SAR filings replaced two prior reporting systems, one which required depository institutions to file criminal referral forms with their primary federal financial regulator and federal law enforcement agencies and the other which required checking a box on Currency Transaction Reports to note transactions the bank thought were suspicious.
FinCEN introduced industry-specific SAR forms as anti-money laundering and reporting requirements were levied on sectors beyond depository institutions, moving to a single form in March 2012. In conjunction with the move to the uniform SAR form, FinCEN began accepting SARs through its BSA E-Filing System (Financial Crimes Enforcement Network, 2012a) and, as of April 1, 2013, only accepts SAR submissions through that system. Comparisons of data collected beginning April 2013 with prior years’ data are complicated by these changes and by the one-year transition period through which some institutions continued to submit SARs on legacy forms.
The range of institutions filing SARs under the BSA has expanded since the 1992 Annunzio-Wylie Anti-Money Laundering Act that initially required financial institutions to report suspicious activity. SAR filers now encompass (Financial Crimes Enforcement Network, 2015):
FinCEN’s SARs—which accounted for nearly 2 million of the roughly 19 million BSA reports FinCEN received in FY2014—are not a traditional source of crime measures (which are largely crimes recorded by the police or reported by victims), but nonetheless could support the creation of data series on indicators of criminal activity that otherwise go unmeasured in national crime statistics.21 A SAR is filed when a filer—a depository institution, non-bank financial institution, money services business, or casino—suspects that a transaction: involves funds derived from illegal activity, or is intended to hide or disguise the proceeds of illegal activity; is designed to evade BSA reporting requirements; has no business or lawful purpose; or is not an expected transaction for that particular customer.
The SAR has five parts: Part I—Subject Information; Part II—Suspicious Activity Information; Part III—Information about the Financial Institution Where Activity Occurred; Part IV—Filing Institution Contact Information; and Part V—Narrative. Detailed descriptions of each item on the SAR form are included in official guidance available on FinCEN’s website (Financial Crimes Enforcement Network, 2015). Filers are asked to record the type of suspicious activity by selecting from 10 categories, each of which has multiple subcategories:
Some SARs address multiple financial transactions; some assign more than one suspicious activity to a single transaction. These variations would require investment in data management to generate series with consistent units of analysis. FinCEN typically aggregates the number of instances of each type of suspicious activity reported, such that a SAR citing solely check fraud would be tabulated as one instance of check fraud whereas a SAR citing check fraud and identity theft would be tabulated as one instance of each suspicious activity.
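That tabulation rule is straightforward to express in code: every suspicious activity cited on a SAR contributes one instance to that activity's count, so a single filing can add to several categories at once. The records below are illustrative, not actual SAR data.

```python
# A minimal sketch of the SAR tabulation rule described above, with
# illustrative records rather than actual SAR data.
from collections import Counter

sars = [
    {"sar_id": 1, "activities": ["check fraud"]},
    {"sar_id": 2, "activities": ["check fraud", "identity theft"]},
]

activity_counts = Counter(
    activity for sar in sars for activity in sar["activities"]
)
print(activity_counts)  # Counter({'check fraud': 2, 'identity theft': 1})
```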
SARs are viewed primarily as sources of potential lead information for regulators and law enforcement that, when further investigated, may produce or supplement evidence of criminal activity. FinCEN publishes regular updates
__________________
21Numbers from presentation by FinCEN staff to the Panel on Modernizing the Nation’s Crime Statistics, June 2015.
highlighting trends and emerging issues in suspicious activity reporting both within and across industries. FinCEN has also published more focused examinations of industry-specific trends or particular suspicious activities. Several of these, published between 2006 and 2013, focused on reports of mortgage loan fraud, foreclosure rescue scams, and loan modification fraud; another examined several years of SAR filings by casinos and card clubs (Financial Crimes Enforcement Network, 2013, 2012c).22 Limited access to these data outside of law enforcement and other government entities means there has been little independent exploration of the data, but FinCEN in 2015 introduced an interactive SAR Statistics tool on its website that permits the generation of data extracts and may increase use of the data going forward.
FinCEN analysts have identified emerging crime types—beyond those explicitly listed on the SAR form—by monitoring the narratives and other free-text fields. Insights derived from analysis of the SARs can inform advisory notices to filing institutions that sensitize reporters to recognize suspicious activities, possibly prompting shifts in the level of reporting independent of changes in the underlying level of the suspected criminal activity. For example, FinCEN in 2012 issued guidance on suspicious activity related to mortgage loan fraud (Financial Crimes Enforcement Network, 2012b) and in 2014 issued guidance on recognizing financial activity that may be associated with human smuggling and human trafficking (Financial Crimes Enforcement Network, 2014).
Based on conversations with FinCEN officers, there does not appear to be any system in place that would enable statistical linkage of SAR data to monitor flows through the justice system. Some fraction of completed prosecutions and administrative remedies are highlighted in media releases citing the role of SARs and other BSA reports in addressing the violations.
Recordkeeping Reports of Theft or Loss of Regulated Property

Another glimpse at possible criminal activity is afforded by federal recordkeeping regulations that require the prompt reporting of the suspected theft (or general loss) of specific, sensitive “property.” The amount of detail about the nature of the possible theft and the affected property—and whether the offense is also required to be reported to local law enforcement—varies by collection. Among these recordkeeping-type collections are:
__________________
22See http://www.fincen.gov/news_room/rp/mortgagefraud_suspectedMortageFraud.html for additional reports and news releases issued by FinCEN that explore mortgage loan and other real estate fraud activity appearing in FinCEN SARs.
is discovered”; said report is required to be made to both “the Attorney General and to the appropriate local authorities” (18 USC § 923(g)). The vehicle for reporting to the Attorney General—the Bureau of Alcohol, Tobacco, Firearms, and Explosives’ (ATF’s) Form 3310.11—obligates the licensee to categorize the incident as burglary, larceny, robbery, or “missing inventory” and to provide the date and time of notification of local law enforcement; a brief (free-text) description of the incident; and specifications (manufacturer, model, caliber/gauge, and serial number) of the lost or missing firearms.23
__________________
23Additional information on the form and the justification for the data collection is available by searching www.reginfo.gov for U.S. Office of Management and Budget (OMB) control number 1140-0039.
24Additional information on the form and the justification for the data collection is available by searching www.reginfo.gov for OMB control number 1140-0026. Quotes here are from the version cleared by OMB for use on September 12, 2014, and valid through September 30, 2016.
25Additional information on the form and the justification for the data collection is available by searching www.reginfo.gov for OMB control number 1117-0001. Quotes here are from the version cleared by OMB for use on September 19, 2014, and valid through September 30, 2017.
Mortality Data and the National Violent Death Reporting System

Around the world, death certificates are completed using codes drawn from the International Statistical Classification of Diseases and Related Health Problems maintained by the World Health Organization (WHO). Currently in its 10th major revision, the master version of the classification owned and maintained by the WHO is commonly known as ICD-10. In the United States, the National Center for Health Statistics (NCHS) of the U.S. Centers for Disease Control and Prevention (CDC) is responsible (under WHO’s authorization) for development of U.S.-specific code lists26 (including the Clinical Modification version of the ICD used to code morbidity [distinct from mortality] and health problem information on inpatient and outpatient medical records). All the states27 participate in the Vital Statistics Cooperative Program (VSCP) administered by NCHS, to which birth and death certificate information is routed for compilation.
Akin to the UCR Program, a primary (“underlying”) cause of death is identified on the death certificate and is commonly used for summary tabulation purposes. However, the VSCP also produces what are commonly known as the Mortality Multiple Cause-of-Death files (as public use data files) that permit coding of an additional 20 contributing causes of death. Of course, what is salient to discussion of crime statistics is that not all the causes of death described by the ICD-10 are internal (to the body) or natural causes. Previous ICD revisions maintained a separate, supplementary listing of “external” cause-of-death codes covering homicide, suicide, accidental deaths, and the like; ICD-10 was the first to fold these external causes directly into the main classification and numbering scheme.
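For crime-related analysis, the practical task is then to scan the underlying and the contributing cause codes on each record for the external-cause assault (homicide) block, conventionally X85 through Y09 in ICD-10. A minimal sketch follows; the record layout and field names are hypothetical, not the actual NCHS public-use file format.

```python
# A minimal sketch of flagging homicide-related deaths in multiple-cause
# mortality records; the record layout here is hypothetical.
def is_assault_code(icd10: str) -> bool:
    """True if an ICD-10 code falls in the assault block X85-Y09."""
    letter, number = icd10[0], int(icd10[1:3])
    return (letter == "X" and 85 <= number <= 99) or \
           (letter == "Y" and 0 <= number <= 9)

def homicide_related(record: dict) -> bool:
    """Flag a record whose underlying or any contributing cause is assault."""
    codes = [record["underlying_cause"]] + record.get("contributing_causes", [])
    return any(is_assault_code(code) for code in codes)

print(homicide_related({
    "underlying_cause": "X95",       # assault by firearm discharge
    "contributing_causes": ["T14"],  # injury of unspecified body region
}))  # -> True
```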
For purposes of factoring into possible measures of crime, mortality data have both major strengths and liabilities. The strength is that the time afforded to medical examiners to do their work arguably provides the best (and perhaps only) source of some contextual information on the detailed circumstances of a death, such as the presence of specific drugs in the decedent’s system at the time of death or the exact nature of the weapon that inflicted a lethal injury. One major weakness is obvious and inherent: mortality data pertinent to crime are necessarily limited to homicide, manslaughter, and other criminal events leading to death. But others are more subtle. The mortality data represent the determination by one source—typically, the medical examiner or coroner—as to whether death was due to deliberate measures or to accidental or
__________________
26It is worth noting that NCHS has added one entirely new “chapter” of codes to its coding lists: the “∗U” codes for causes of death or injury resulting from terrorist activities. The ∗ prefix attached to the codes denotes that they are not reflected in the current ICD. Additional information is available at http://www.cdc.gov/nchs/icd/terrorism_code.htm.
27Technically, 57 “vital event registration areas” participate in the cooperative program: the 50 states, the District of Columbia, New York City (separate from New York state), and five commonwealths and territories (National Research Council, 2009c).
other means. However, the coroner’s determination may or may not square with determinations made at any level of the criminal justice system. More subtly, mortality data have historically suffered from timeliness concerns—not just the lag from the time of death to the publication of data but also the time needed simply to edit and compile all of the deaths in a given year from every participant area (recalling that the “external cause” deaths are but a subset of the much broader set of all deaths and corresponding certificates).
Over the course of the 1990s, increased attention to injury-related morbidity and mortality led to the creation of a new National Center for Injury Prevention and Control in the CDC. The new center, together with the Harvard School of Public Health, secured funding in 1999 to pilot a National Violent Injury Statistics System in 13 sites throughout the nation, focusing on violent deaths (homicide, suicide, and accidental deaths involving external means or weaponry). Following initial success, a full-fledged National Violent Death Reporting System (NVDRS) was established in 2002. The NVDRS (and its pilot predecessor) resembles the standard mortality data in that it is a surveillance data system, amalgamating the records from lower-level contributors. It also resembles the standard mortality data in that data submissions are coordinated through the states, which are the primary participants in the system. Where NVDRS differs from the conventional mortality data—beyond the restriction of scope to violent deaths—is in the exact mechanism of coordinating data input and in the range of original-source providers. On the first point, states agree to provide data to NVDRS via cooperative agreement with the CDC—which is to say that state participants receive funding to compile and submit their data, rather than relying on purely voluntary data submissions. On the second point, medical examiner offices (and death certificate data) constitute a major part of NVDRS coverage, but they are not the only source; NVDRS is also meant to involve input (or additional data items on specific deaths) from law enforcement agencies and their crime laboratories.
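The multi-source structure just described can be made concrete with a small sketch: conceptually, each incident record amalgamates fields contributed by vital records, the medical examiner or coroner, and law enforcement. The data structure and field names below are hypothetical illustrations, not the actual NVDRS schema.

```python
# Hypothetical sketch of an NVDRS-style amalgamation of per-incident data
# from multiple contributing sources; the actual NVDRS schema differs.
from dataclasses import dataclass, field

@dataclass
class ViolentDeathRecord:
    incident_id: str
    death_certificate: dict = field(default_factory=dict)  # vital records
    coroner_me: dict = field(default_factory=dict)         # autopsy, toxicology
    law_enforcement: dict = field(default_factory=dict)    # circumstances

def amalgamate(incident_id, *feeds):
    """Merge (source_name, payload) pairs into one incident record."""
    record = ViolentDeathRecord(incident_id)
    for source_name, payload in feeds:
        getattr(record, source_name).update(payload)
    return record

# Example: three sources contribute details on one (hypothetical) incident.
record = amalgamate(
    "2015-000001",
    ("death_certificate", {"underlying_cause": "X95"}),
    ("coroner_me", {"toxicology": ["ethanol"]}),
    ("law_enforcement", {"weapon_type": "handgun"}),
)
```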
Again, in terms of its possible role in measures of crime, the NVDRS offers the possibility of detailed contextual and situational information about homicides and accidental killings that exceeds the detail in the current Supplementary Homicide Reports (SHR) of the UCR Program. It could, for instance, provide insight into the mental health history of perpetrators and victims, in addition to the kind of precise information on drug involvement or weapon use that may only be available through postmortem examination. Because it depends on continued funding and the maintenance of cooperative agreements, NVDRS confronts the same major problem as the National Incident-Based Reporting System (NIBRS): Not all the states participate (or can participate, contingent on additional funding), meaning that the “national” data compiled by the system are not representative of the nation as a whole. Some 32 states currently participate in NVDRS—and the absentees include California, Texas, and Florida. Hence, like NIBRS, the data may be used to generate very informative analyses of violent deaths within participant states, but
the system’s full analytic power to describe national-level trends is currently limited. Conceptually, NVDRS’s restriction to violent deaths comes closer (than mortality data writ large) to concentrating on the set of events relevant to crime analysis, but “violent death” is not strictly synonymous with “death by criminal means.” Hence, as described earlier, the NVDRS coding from a medical examiner or crime laboratory may not necessarily agree with determinations made in the criminal justice system (e.g., negligent homicide).
Having described a patchwork of crime-related data resources, we turn in Chapters 3 and 4 to broader context—to the demands on crime statistics made by their users and to alternative strategies for mapping out the full range of “crime” through classification schemes. Recent years have seen some attempts—including two predecessor study panels—to look at crime statistics as a more cohesive whole and to suggest major omissions and possible improvements.
As we noted in Section 1.3, an earlier National Research Council (2008, 2009a) panel was asked by BJS to review its full data collection portfolio. The review was designed to put particular emphasis on the NCVS, given its dominance of the agency’s resources, but it also covered BJS’s extensive data collection work in corrections, law enforcement management, and judicial processing. Though the BJS review panel completed its work six years ago, it can still fairly be described as a recent, and indeed ongoing, effort at improving crime-related statistics because implementation of several of its recommendations is still under way. In particular, the BJS review panel’s evaluation of options for the NCVS spurred a new round of redesign of the NCVS, including the conduct of a suite of methodological studies on ways to decrease survey costs while increasing the survey’s relevance. The panel noted the survey’s key current liability—its inability to generate reliable subnational estimates of victimization—and encouraged both selective “boosting” of sample in geographic areas and derivation of model-based estimates; a sketch of the latter idea appears below. It is expected that this work will result in the production of new subnational estimates from the NCVS in 2016. Looking at the BJS portfolio as a whole, the BJS review panel argued that the portfolio lacked underlying conceptual frameworks—making it unclear how individual data series fit within, or contribute to understanding of, the criminal justice system as a whole; the suggestion of developing a modern crime classification is certainly consistent with that guidance. Finally—while observing that responsibility for their production could not fall solely to BJS, within its historical resource constraints—the BJS review panel noted a dearth of data coverage in four critical areas.
We will return to all four of these, in different ways, in suggesting our crime classification in Chapter 5.
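As an aside on the subnational-estimation point above, the logic of blending a direct survey estimate with a model-based prediction can be sketched with the generic precision-weighted composite form common in small-area estimation. The sketch below illustrates the general idea only; it is not BJS’s actual NCVS methodology, and the numbers are invented.

```python
# Generic small-area "composite" sketch: blend a noisy direct survey
# estimate with a model-based (synthetic) prediction, weighting each by
# its precision. Illustrative only; not BJS's actual NCVS methodology.

def composite_estimate(direct, var_direct, synthetic, var_model):
    """Precision-weighted blend of direct and model-based estimates."""
    w = var_model / (var_model + var_direct)  # weight on the direct estimate
    return w * direct + (1.0 - w) * synthetic

# Example (made-up numbers): a sparse area's direct victimization rate
# (per 1,000) is noisy, so the blend leans toward the model prediction.
print(composite_estimate(direct=24.0, var_direct=36.0,
                         synthetic=18.0, var_model=9.0))  # -> 19.2
```

The weight shifts toward the direct estimate as its sampling variance shrinks (i.e., in areas with ample sample) and toward the model prediction where the direct estimate is unreliable, which is precisely why such methods appeal for sparse subnational NCVS domains.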
More recently, a National Research Council (2014) panel was convened at BJS’s request to focus on the measurement of sexual violence—rape and sexual assault—via household surveys such as the NCVS as well as other independent (and typically one-shot or more sporadic) surveys intended to yield national-level statistics. Rape and sexual assault have always been difficult to measure, not least because many victims are reluctant to report these offenses to anybody, law enforcement officer and survey administrator alike. BJS had already initiated pilot work on a new survey specific to the measurement of these crimes, which is consistent with one of that panel’s central recommendations: establishment of an independent survey specifically designed to measure the incidence of rape and sexual assault, with more visible and effective means of ensuring respondent privacy while completing the survey and with extreme care and precision in the definition and wording of questions. Work on a full-fledged version of this new survey is continuing. That panel’s work is useful to recall not necessarily for the exact nature of its recommendations but because it illustrates the complexity of deriving useful measures amidst myriad (and often conflicting) definitions in state and federal criminal codes and across data collection instruments.
Finally, beginning in late 2012, BJS has periodically convened an informal Crime Indicators Working Group, composed primarily of police chiefs, sheriffs, other law enforcement personnel, and representatives of their major service organizations. The group has not issued a formal report, but its sessions have provided a very useful sounding board on the ideal shape of a set of national crime indicators—and how that shape corresponds with both the internal data resources offered by departments’ own records management systems and the information that such practitioners are expected by the public to know and grasp at a moment’s notice. Members and staff of our panel have participated in several of the working group’s discussions, and what may be most telling about them is the fervency with which participants raise the need to study crime data in neighborhood and broader context. Indeed, many of the discussions of the Crime Indicators Working Group center on the type of non-crime statistics—demographic and socioeconomic indicators for small geographic areas—that are highly desired as part of an overall, new crime statistics system.