The Mismeasure of Crime

Mosher, Clayton; Miethe, Terance D.; Hart, Timothy C.



Homicide victims are notoriously poor respondents to Census Bureau interviewers.

—Benjamin Renshaw (1990, p. 226)*

Another method of studying crime that arose in response to concerns about the limitations of official data was the victim survey. Instead of asking criminal justice system officials or offenders about criminal behavior, this approach asked people about their experiences as victims of crime. The first large-scale victimization surveys appeared in the late 1960s. Since that time, they have been widely used to measure the frequency and characteristics of particular types of crime and the demographic profiles of victims, both in the United States and in other countries. By eliciting information about both crimes that citizens report to the police and those they do not, victimization surveys provide us with further information regarding the dark figure of crime. These surveys have also had a profound effect on theories of crime causation. Routine activity, opportunity, and even rational choice theory have flourished in the discipline of criminology in recent years in part because of the availability of victim survey data (Cantor & Lynch, 2000). However, as we will see in this chapter, victimization surveys, like other measures of crime, have their own strengths and limitations.

Victimization surveys differ from other methods of measuring crime in their nature, their scope, and the type of information collected. As implied by the basic definition, these surveys involve self-reports of victimization experiences by victims themselves, and as such, they are subject to many of the same problems associated with other forms of survey research. Victimization reports are usually elicited from random samples of the general public, and a variety of screening questions are utilized to identify different types of victims. A number of crimes addressed in other data sources—for example, prostitution and drug and alcohol offenses—are not covered in these surveys because they are considered to be victimless. In addition, in some crime situations, it is not possible to interview the victim; one obviously cannot interview the victim of a homicide. Finally, some crimes, such as vandalism, are viewed as trivial and are not covered by these surveys; others, such as white-collar and corporate crime victimizations, are seen as difficult to accurately measure. This lack of coverage of certain types of crimes renders direct comparisons between victimization and official data problematic, an issue that will be addressed in more detail later in the chapter.

This chapter examines how victimization surveys have been used to measure crime. We begin with a review of the major victimization surveys used in the United States in the last five decades and proceed to describe the distribution of crime that emerges from a consideration of these surveys. We conclude with a discussion of the various problems associated with current efforts to accurately measure victimization experiences. While focusing on the findings from victimization surveys in the United States, we will include research from international studies where relevant, including the British Crime Survey and the International Crime Victimization Surveys, which shed light on the methodological weaknesses of this method of counting crime.


As discussed in Chapter 2, surveys of crime victims in the United States developed during the mid- to late-1960s out of a concern with the weaknesses of official data in measuring the extent and characteristics of crime.

Given the initial success in using survey methods to provide information on victimization experiences, a number of additional pilot studies were conducted to address several important methodological questions: What is the ideal time frame for asking respondents about their experiences with victimization; how reliable is victimization recall; what is a suitable lower limit for the age of eligible respondents; and what are the advantages and disadvantages associated with mail or telephone interviewing methods versus in-person methods (Dodge & Turner, 1981)? These test studies aimed at validating and improving techniques used in victimization surveys were designed by the Law Enforcement Assistance Administration, in cooperation with the Census Bureau.

With insights derived from the earlier surveys, the National Crime Surveys (NCS) were initiated in 1972. The original NCS involved a national panel study of the victimization experiences of both households and individuals, as well as a number of surveys of particular cities. The initial NCS included samples of approximately 60,000 households (containing approximately 136,000 individuals) and about 15,000 businesses. The central-city surveys had samples of approximately 12,000 households in each of 26 cities; in addition, a probability sample of between 1,000 and 5,000 businesses was selected for each city. Both the national surveys of businesses and the city surveys of individuals were terminated in the mid-1970s, in part because of their cost and on the basis of findings from external reviews that the samples were undersized, that the surveys were of limited utility as fielded, and that they failed to collect information beyond that already gathered by the police (Rennison & Rand, 2007). However, the national victimization survey of households, although it has undergone several modifications over time, continues as an annual series.

Compared to official reports of crime, national victimization surveys have several advantages. These surveys, for example, have been perceived to provide more accurate measures of the absolute rates of some serious crimes and are believed to be more reliable than official statistics in analyzing crime trends in the United States (O’Brien, 1985). It is also believed that victim surveys provide more detailed information about the situational factors surrounding criminal acts—for example, the physical location of crime events; the day and time of events; the type of weapon used, if any; the number of victims and offenders; and the relationship between the victim and offender. General characteristics of offenders, such as their race, gender, and age in direct-contact predatory crimes such as assaults and robberies, can also be identified in victim surveys.

Procedures in the National Crime Victimization Survey

The National Crime Victimization Survey (NCVS) is the most comprehensive and systematic survey of victims in the United States.1 This survey has been designed and modified by the leading researchers and institutions in the country. The sampling procedure is supervised by the Census Bureau, the survey is conducted by well-trained staff and interviewers, and changes in the sampling design and format of questions are rigorously evaluated in terms of their effects on estimates of victimization experiences.

The basic procedures for selecting households to participate in victimization surveys have been essentially unchanged since the inception of the national survey. Recall that the goal is to obtain a nationally representative sample. The NCVS uses a complex, stratified, multistage cluster sample in which approximately 673 primary sampling units, defined as standard metropolitan statistical areas (SMSAs), individual counties, or small groups of contiguous counties, are initially identified. These clusters are then stratified with respect to important demographic characteristics, and sample elements (in this case, households) are selected from each stratum in a manner that is proportionate to their representation in the larger population.
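The proportionate-allocation step can be illustrated with a toy sketch. The strata, household counts, and sample size below are invented for illustration; they are not actual NCVS figures.

```python
# Toy illustration of proportionate stratified sampling. The strata,
# household counts, and sample size are invented for illustration;
# they are not actual NCVS figures.
import random

random.seed(42)  # reproducible draws

# Hypothetical strata of household IDs, keyed by a demographic label.
strata = {
    "urban": list(range(0, 6000)),        # 6,000 households
    "suburban": list(range(6000, 9000)),  # 3,000 households
    "rural": list(range(9000, 10000)),    # 1,000 households
}

total_households = sum(len(h) for h in strata.values())
sample_size = 500

sample = []
for label, households in strata.items():
    # Draws are allocated in proportion to the stratum's population share.
    n_draws = round(sample_size * len(households) / total_households)
    sample.extend(random.sample(households, n_draws))

print(len(sample))  # -> 500 (300 urban + 150 suburban + 50 rural)
```

Because each stratum contributes in proportion to its population share, subgroup representation in the sample mirrors the population by construction rather than by chance.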

The NCVS uses a rotating panel design. This means that sampled households are organized into panels. Each panel is divided into six groups so that interviews are ongoing throughout the year, thereby reducing seasonality effects. Residents of sampled households are interviewed seven times—once every six months for three years. After seven waves of interviews are complete, the panel is rotated out of the sample and replaced by a new panel of sampled households. It is important to note that although individuals are interviewed for the NCVS, it is a panel survey of housing units. This means that a housing unit remains in the sample even if the original residents of a household move during the seven interview waves. Although all this certainly sounds complex, the rotation pattern is designed with the goal of ensuring the representativeness of the sample.
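A sketch of that schedule for a single sampled housing unit may help; the start date below is illustrative only.

```python
# Sketch of the NCVS interview schedule for one sampled housing unit:
# seven waves at six-month intervals, spanning three years, after which
# the unit rotates out of the sample. The start date is illustrative.

def interview_waves(start_year, start_month, waves=7, interval_months=6):
    """Yield (year, month) for each scheduled interview wave."""
    base = start_year * 12 + (start_month - 1)
    for wave in range(waves):
        year, month0 = divmod(base + wave * interval_months, 12)
        yield year, month0 + 1

schedule = list(interview_waves(2008, 1))
print(schedule[0], schedule[-1])  # -> (2008, 1) (2011, 1)
# Seven interviews; the first and last are exactly three years apart.
```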

In the earliest surveys, approximately 72,000 households were sampled. Over time, that number has decreased considerably. During 2008, 42,000 households and 77,850 people age 12 or older were each interviewed twice. The response rate for the survey was 90% of eligible households and 86% of eligible individuals. NCVS response rates for both household and individual surveys conducted between 1996 and 2008 are presented in Exhibit 5.1.

NCVS data collection instruments consist of three core items: (1) the Control Card (NCVS-500), (2) the Basic Screen Questionnaire (NCVS-1), and (3) the Crime Incident Report (NCVS-2). The Control Card provides the basic administrative record for each sampling unit, including information identifying the address of each sample unit and basic household data such as family income, whether the household unit is owned or rented, and the name, ages, race, sex, marital status, and education level of each individual living there. The Control Card also serves as a record of visits, telephone calls, interviews, and information about noninterviews (Biderman & Lynch, 1991).

An adult 18 years of age or older serves as the household respondent, providing answers to the basic questions on the Control Card, the Basic Screen Questionnaire, and, if necessary, the Crime Incident Reports for each victimization against the household (e.g., burglaries, motor vehicle thefts, and household larcenies). This individual also serves as a proxy for household members who are 12 or 13 years of age and with whom parents do not allow the interviewer to speak directly, for those mentally or physically unable to complete an interview, and for those who are unavailable for interview during the entire interview period.

The NCVS screening questions are designed to elicit information about whether particular incidents can be classified as a victimization, either for the household or the individual respondent. Screening questions are followed by more detailed questions about each incident identified as a victimization. The NCVS uses these screen questions to elicit maximum recall of victimization experiences, primarily by reducing respondent fatigue that can result from answering a large number of questions. As illustrated in Exhibit 5.2 and discussed in more detail later in the chapter, changes in the wording of these screen questions can affect respondent recall and alter subsequent estimates of victimization derived from the survey.

Incident reports in the NCVS involve a series of questions about the particular crime event, the offending parties, and the consequences of the crime. For each separate incident identified in the Basic Screen Questionnaire, respondents are asked, for example, whether the crime was reported to the police; whether the offense was completed or merely attempted; whether the offender was identified or known to the victim; the demographic (race, gender, age) characteristics of the offender, if known; whether there was a weapon used in the crime; whether the offender was a member of a gang or under the influence of drugs or alcohol at the time of the incident; whether the victim resisted; and the amount of monetary loss or physical injury, or both, that resulted from the victimization.

The NCVS is rigorous in terms of data collection procedures and processes. NCVS interviewers receive extensive training prior to conducting interviews, with explicit and detailed instructions about how the questionnaire is to be administered, adherence to question wording, and the use of probes to elicit answers from respondents. Quality control is further enhanced by periodic monitoring of the interviewers by supervisors, office edits of completed work, and verification of the data through re-interviews of some individuals. The use and refinement of these procedures have served to enhance the reliability of the NCVS data collection activities.

The 1992 Redesign of the NCVS

Recall that one of the major reasons for the development of the NCVS was to provide an alternative measure to official data of the extent and nature of crime in the United States, a measure that would also allow for comparisons over time. Researchers and staff involved with the NCVS have been reluctant to implement major changes in the design of the survey, fearing that this would compromise the over-time comparisons. However, as victimization surveys were subject to methodological critiques and as advances occurred in areas related to survey methodology more generally, it became increasingly clear that some changes were necessary in the procedures and practices underlying the NCVS. In the early 1980s, a consortium of experts in the fields of criminology, survey design, and statistics was organized to reexamine all aspects of the survey, including questionnaire design, sampling strategies, administration, errors, dissemination, and utilization of the NCVS data (Biderman & Lynch, 1991; Lehnen & Skogan, 1984; Taylor, 1989).

Three separate phases were identified in the possible redesign of the NCVS. The first phase was directed at immediate improvements that could be made in the survey. The second phase emphasized the development of so-called near-term changes (e.g., alterations in the use of proxy interviews, incident form changes, the use of computer-assisted telephone interviewing, and cost-saving changes) that could improve the NCVS without incurring significant financial costs or disrupting the time series. The third phase involved more fundamental, long-term changes that could dramatically increase the quality of the data, reduce the costs of data collection, or both.

As described by Biderman and Lynch (1991), the Crime Survey Redesign Consortium proposed the following set of recommendations for implementation in the near term:

Screening and Scope Changes

Include vandalism in the NCVS, and interview 12- and 13-year-olds directly instead of by proxy.

Expanding Incident Descriptions

(a) Revise place of occurrence codes so that there are consistent distinctions regarding the “publicness” of places or their exposure, (b) add codes to specifically identify crimes occurring in the respondent’s town, (c) obtain information on victim-offender interactions, and (d) expand information on the outcomes of victimization incidents, such as the response of the criminal justice system and other agencies.

Expanding Explanatory Variables

(a) Place supplements in the survey that can be used to distinguish victims from non-victims, and (b) collect more information on the perceived motivation of offenders, including the role of substance use.

Changing Crime Classification and Reporting

(a) Use the current collection period for preliminary estimates that can be disseminated in a more timely fashion, (b) adjust annual estimate rates for the major sources of measurement error, and (c) increase the power of statistical tests used in the NCVS. (p. 19)

After various types of design work and field-testing by the Census Bureau, most of these recommendations were accepted by the Bureau of Justice Statistics and were subsequently introduced into the NCVS design in 1986. However, they were limited to those changes that “would not significantly affect the amount or type of crime measured by the survey” (Rennison & Rand, 2007, p. 34); in addition, the scope of crimes covered in the NCVS was not expanded to include vandalism, primarily because it was believed that such a change would disrupt the series and cause difficulties in comparing victimization data over time.

The Redesign Consortium also recommended various changes in the NCVS to be implemented over the long term. These recommendations focused on the design of the survey and carried substantial implications for survey costs and data quality.

Quality Enhancements

(a) Make the NCVS a longitudinal survey of individuals rather than housing units, (b) use a four-month instead of a six-month reference period to reduce the underreporting of victimization events, (c) use interview-to-interview recounting to simplify the recall task rather than recounting to the beginning of the month in which the interview is conducted, (d) employ more productive short-cue screen questions to encourage more complete reporting of victimizations, and (e) use centralized telephone interviews to enhance control over interviewers.

Cost Reducing Changes

(a) Maximize the use of telephone interviewing, which is less expensive than in-person interviews, and (b) use data from bounding interviews for estimation purposes. (Bounding interviews are the first interviews conducted with a household; questions about victimization experiences are asked, but there is no previous interview to bound the reference period.)

Some of these long-term recommendations have been phased into the NCVS. For example, in 1988, centralized telephone interviewing was initiated at the same time as the use of the new survey instrument, and computer-assisted telephone interviewing (CATI) technology is now used in five of the seven waves of the NCVS interviews. However, due to cost considerations, the six-month reference period rather than the recommended four-month period is still used. Given the importance of maintaining continuity in the NCVS series, the 1992 redesign was structured to assess the impact of the various changes in the survey instrument and procedures on inflating or deflating national estimates of victimization. That is, for 1992 through the first six months of 1993, data from half the sample were collected using the NCS methodology, and data from the other half were collected using the redesigned NCVS methodology (Rennison & Rand, 2007). Changes in the NCVS procedures that have had minimal effects on estimated victimization rates include modifications in the wording of several existing questions and the addition of questions on perceived drug and alcohol use by offenders, self-protective measures taken by victims, police actions, victim contact with the justice system, the location of the crime, and the victim’s activity at the time of the incident (Bachman & Taylor, 1994).

Research comparing pre-redesign and post-redesign data indicates that the recall of victimizations was substantially altered by changes in the nature and coverage of screening questions (one of the most important changes to the NCVS), changes in the definition of series crime, the increased use of telephone interviewing, and changes in the classification of crimes. In fact, when changes in the NCVS were implemented in 1992, the number of crimes reported by survey respondents increased by 50% to 200%, depending on the type of crime (Cantor & Lynch, 2000). Most of these changes have been viewed as positive improvements in that the enhanced screening questions are thought to better stimulate respondents’ recall of victimizations. These questions serve to clarify crime victimization incidents and diminish the effects of respondents’ subjective interpretation of survey items. In addition, the enhanced questions and inquiries about experiences of domestic violence, rape, and sexual attacks are believed to provide better estimates of these victimizations, which are often difficult to measure. The new screen questions also expand cues that assist respondents in recalling an incident, such as items that ask about being a victim of a violent crime committed by someone the victim knows (such as co-workers, neighbors, and family members) and questions for burglary regarding how the offender entered the structure. The following consequences have been observed in studies on the impact of these procedural changes (Bureau of Justice Statistics, 1994; Kindermann, Lynch, & Cantor, 1997).

CATI and Use of a Centralized Phone Facility

These procedures are believed to help standardize interviewer-respondent interactions, leading to greater reporting of victimizations and more realistic crime rates. The use of CATI has increased the reporting of crimes of violence, crimes of theft, and household larceny by approximately 15% to 20% and burglary by about 20%. CATI’s effect on the reporting of motor vehicle theft has been negligible.

Changing Definitions of Series Crimes

Series crimes are similar but separate crimes that the victim is unable to recall individually or describe in detail to an interviewer. Older versions of the NCVS used three crimes as the minimum limit for a series, but the redesign changed the number to six similar offenses. Under this change, if a respondent reports three to five similar incidents to an interviewer, data on each incident are collected. For most types of crime, it is estimated that this change in the definition of series crimes increases the rate of crime by only 1% to 5%. However, for assaults, especially situations of domestic abuse, and some types of theft, the increase in crime rates may be in the 10% to 15% range.
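The redesigned counting rule can be sketched as follows; the function and constant names are our own illustration, not Bureau of Justice Statistics processing code.

```python
# Sketch of the redesigned series-crime counting rule: six or more
# similar incidents that the victim cannot describe individually are
# recorded as a single series victimization; fewer than six are
# recorded as separate incident reports. Names are illustrative.

SERIES_MINIMUM = 6  # the redesign raised this threshold from 3

def classify_incidents(n_similar: int) -> str:
    """Describe how n_similar similar incidents are recorded."""
    if n_similar >= SERIES_MINIMUM:
        return "one series victimization"
    return f"{n_similar} separate incident reports"

print(classify_incidents(4))  # -> 4 separate incident reports
print(classify_incidents(8))  # -> one series victimization
```

Raising the threshold from three to six means that clusters of three to five similar incidents, which the old rule collapsed into one series, now each generate a full incident report, which is one reason estimated rates rise for repeat victimizations such as domestic abuse.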

Reporting Crimes to the Police

A lower percentage of crimes identified in the redesigned NCVS are now being reported to the police than in previous versions. This change is attributed to expanded cuing of less serious crimes (which are less likely to be reported to the police) in the redesigned survey.

Changes in Crime Classification of Personal and Household Larceny

Under the older versions of the NCVS, larceny was defined according to the location in which it occurred, with household thefts involving stolen items on the grounds of the home and personal thefts involving items stolen someplace away from the home. Under the redesign, all thefts are classified as household thefts unless there was contact between the victim and offender. Accordingly, the number of household thefts increased and the number of personal thefts decreased as a consequence of the redesigned coding procedures.

Overall Effects on Victimization Estimates

Kindermann et al. (1997) indicated that the impact of the redesign varies by the type of crime. In particular, the redesigned NCVS yielded higher estimates of crime rates for the following offense types: personal crimes (increase of 44%), crimes of violence (increase of 49%), rapes (increase of 157%), assaults (increase of 57%), property crimes (increase of 23%), burglaries (increase of 20%), and thefts (increase of 27%). No substantial differences were observed for rates of robbery, personal theft, and motor vehicle theft.
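One way to see what these percentages imply for series continuity is to treat them as adjustment ratios. The sketch below applies that idea; the percentage increases come from Kindermann et al. (1997), but the pre-redesign base rate is invented for illustration.

```python
# Expressing the redesign's reported percentage increases as adjustment
# ratios. Percentage increases are from Kindermann et al. (1997); the
# pre-redesign base rate below is invented for illustration.

redesign_increase_pct = {
    "personal crimes": 44,
    "crimes of violence": 49,
    "rape": 157,
    "assault": 57,
    "property crimes": 23,
    "burglary": 20,
    "theft": 27,
}

def to_redesign_scale(pre_redesign_rate, pct_increase):
    """Scale an old-methodology rate up to the redesigned-survey scale."""
    return pre_redesign_rate * (1 + pct_increase / 100)

# Hypothetical pre-redesign violent crime rate per 1,000 persons age 12+.
old_rate = 31.0
new_scale = to_redesign_scale(old_rate, redesign_increase_pct["crimes of violence"])
print(round(new_scale, 1))  # -> 46.2
```

This single-ratio scaling is only a first approximation: as the next section notes, the redesign's effects vary across subgroups and offense types, so a uniform multiplier cannot capture the interaction effects involved.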

Redesign Effects on Select Population Subgroups

The redesign procedures had different effects on the victimization rates for particular subgroups. For crimes of violence, the redesigned NCVS elicited more recounting of victimizations for whites than for blacks, for 33- to 44-year-olds than for other age groups, for persons with household incomes of $15,000 or more than for lower-income persons, and for suburban residents more than urban residents. Rates of household crimes were recounted more for suburban than rural residents through the use of the new procedures, and higher rates of burglary were elicited from black than white respondents in the redesign (Kindermann et al., 1997).

These research findings indicate that the continuity of the NCVS series was compromised by changes in the screening questions and classification procedures, which leads to difficulties in comparing victimization rates over time. As is done in many published reports using NCVS data, it is possible to make adjustments to the pre-redesign and post-redesign series to increase their comparability. However, it is also likely that a number of these changes have complex interaction effects that vary across particular combinations of offense, victim, and method attributes. If these interaction effects are not fully incorporated in the estimation procedures, current adjustments in the NCVS data may not necessarily enhance the comparability of the two data panels.

National Academy of Sciences

Assessing the utility and methodology of the NCVS did not stop after the 1992 redesign. Recently, the Bureau of Justice Statistics commissioned the National Academies’ Committee on National Statistics (in cooperation with the Committee on Law and Justice) to consider alternative options for conducting the NCVS (Groves & Cork, 2008). In response, several preliminary recommendations were made, including the following:

Changing from a six-month reference period to a 12-month reference period

Streamlining the incident form (either by eliminating items or by changing their periodicity)

Using advanced statistical methods to construct and disseminate subnational estimates of major crime and victimization rates

Developing, promoting, and coordinating subnational victimization surveys through formula grants funded from state or local assistance resources

Investigating the introduction of mixed-mode data collection designs (including self-administered modes) into the NCVS


Since the Academies’ report, due to budget constraints, three major changes to the NCVS have occurred: (1) data from the first interview, previously withheld as a bounding case, began being used in annual estimates; (2) the Bureau of Justice Statistics implemented a 14% sample cut to balance the inclusion of the bounding first interviews; and (3) the Bureau of Justice Statistics suspended all CATI from Census Bureau call centers. (However, field interviewers may still use the telephone to conduct their scheduled interviews.) Phase-in of these and other minor changes resulted in a break in series between 2006 and previous years that prevents annual comparisons of national crime victimization rates (Rand & Catalano, 2007).


One basic indicator of crime prevalence used in the first 20 years of summary reports of NCVS trends was the proportion of households touched by crime. From 1975 to 1992, the estimated proportion of U.S. households that experienced any type of victimization in the previous year decreased steadily from about 32% to approximately 23% (Zawitz et al., 1993). In 2005, the estimated proportion of households experiencing any type of victimization was only 14%. Only about 1 in 36 households in the United States experienced one or more violent crimes in 2005 (Klaus, 2007).

Examining the NCVS data from 1973 to 2006,2 it is clear that rates of criminal victimization in the United States have exhibited patterns of stability and change over time. Victimization rates for crimes of violence (e.g., assaults, robberies, and rapes) hovered around 50 per 1,000 persons age 12 or older in the late 1970s to the early 1980s, decreased somewhat during the mid- to late-1980s, increased until the mid-1990s, and steadily declined from 1994 to 2005 (see Exhibit 5.3). As noted above, the Bureau of Justice Statistics recommends that 2006 NCVS data not be used when making yearly trend comparisons.

NOTE: 1973–1991 data adjusted to make data comparable to data after the redesign. Estimates for 1993 and beyond are based on collection year, while earlier estimates are based on data year. Due to changes in methodology, the 2006 National Crime Victimization Survey rates are not comparable to previous years and cannot be used for yearly trend comparisons. However, the overall patterns of victimization at the national level can be examined.

Property crime rates based on the NCVS data have exhibited a steady decrease over the last 30 years (see Exhibit 5.4). The most common property crimes experienced in the United States are household thefts, followed by residential burglary and motor vehicle thefts. Property crime rates decreased from 520 per 1,000 households in 1973 to 160 per 1,000 households in 2006. Among specific property crimes, thefts from the household fell to less than a third of their 1973 level, from 391 to 122 per 1,000 households. Burglary rates in 2006 were about a third of their 1973 rate, and motor vehicle theft rates fell to less than half over this period.
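The fold-declines cited above follow directly from the published rates; a minimal check using the figures in the text:

```python
# Quick check of the property-crime declines cited in the text
# (NCVS rates per 1,000 households, 1973 versus 2006).

rates = {
    "all property crime": (520, 160),
    "household theft": (391, 122),
}

for crime, (rate_1973, rate_2006) in rates.items():
    fold = rate_1973 / rate_2006
    print(f"{crime}: {fold:.1f}-fold decline")
# all property crime: 3.2-fold decline
# household theft: 3.2-fold decline
```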

NOTE: 1973–1991 data adjusted to make data comparable to data after the redesign. Estimates for 1993 and beyond are based on collection year, while earlier estimates are based on data year. Due to changes in methodology, the 2006 National Crime Victimization rates are not comparable to previous years and cannot be used for yearly trend comparisons. However, the overall patterns of victimization at the national level can be examined. For additional information about the methods used, see Criminal Victimization 2006.

Findings from the NCVS indicate that the risks of victimization are not uniform across different demographic subgroups (see Exhibit 5.5). With the exception of rape or sexual assault, men are victims of violence at significantly higher rates than women. On the basis of comparisons with NCVS data in the 1970s and 1980s, the magnitude of gender differences in the risk of violent victimization has exhibited little change over the last 30 years.

*Includes American Indians, Alaska Natives, Asians, Native Hawaiians, and other Pacific Islanders.

Blacks experience violent victimization at higher rates than any other single racial group for every violent crime measured by the NCVS. While only 1% of the U.S. population identified itself as being of more than one race in 2008, these individuals were victims of violence at rates 2 to 3 times those of any other race. Persons of Hispanic origin experience violence at slightly lower rates than do non-Hispanics (Rand, 2009).

In general, there is an inverse relationship between age and violence. That is, as age increases, rates of violence decrease. The highest rates of simple assault, for example, involve juveniles between the ages of 12 and 15 and are significantly higher than among persons age 25 or older (Rand, 2009). Over the last 10 years, NCVS data indicate that rates of violent victimization for younger teenagers (aged 12–15) have increased more rapidly than for any other age group. As a group, teenage black males are especially vulnerable to violent victimization (Zawitz et al., 1993).

Major demographic differences also are apparent when property victimization is considered (see Exhibit 5.6). For example, households headed by blacks experienced property crimes at rates significantly higher than households headed by any other single racial group. Households headed by Hispanics had higher rates of property victimization than non-Hispanic households. Rates of property victimization also decreased as household income increased. The higher risks of property victimization for each of these demographic groups are also found across specific types of property crimes.

*Includes American Indians, Alaska Natives, Asians, Native Hawaiians, and other Pacific Islanders.

NCVS data provide the additional opportunity to examine particular characteristics of criminal offenses and those who commit the crimes. As shown in Exhibit 5.7, these offense and situational characteristics for violent crimes include whether the act was completed or merely attempted, the number of offenders involved during the incident, the victim-offender relationship, the time of day, whether a weapon was used, whether the incident was reported to the police, and the victim’s perception of their attacker’s sex, race, age, and drug or alcohol use (Bureau of Justice Statistics, 2010).

*Includes attempted rape and verbal threats of rape only.

**Based on incidents where only one offender was present during the victimization.

***Simple assault, by definition, does not involve the use of a weapon.

Detail may not sum to 100% due to rounding.


The vast majority of violent offenses derived from the national victimization data involve attempted or threatened rather than completed offenses. (Note: Assaults are not classified as attempted or completed but as incidents with or without injury.) Most robberies and sexual assaults, in contrast, involve completed rather than attempted offenses. Violence involving multiple offenders is the exception in violent crimes reported in the NCVS. In 2007, about 8 out of every 10 violent victimizations involved a single offender.

The most common interpersonal relationship between the victim and offender in violent crimes depends on the particular type of offense. For all violent offenses combined in the NCVS data, the majority of victims (51%) report that the offenders are strangers. Offenses committed by strangers are most common in robberies (80%) and aggravated assaults (58%). The proportion of violent victimizations that involve nonstrangers is the highest among rapes or sexual assaults (58%), followed by simple assaults (55%).

When an incident occurred, whether a weapon was involved, and how the victim responded are also important components of violent situations that can be better understood by examining data produced by the NCVS. For example, while most simple assaults occur during the daytime (i.e., 6 a.m.–6 p.m.), most rapes or sexual assaults, robberies, and aggravated assaults occur in the evening hours (i.e., 6 p.m.–6 a.m.). In addition, about two thirds of all violent victimizations in 2007 did not involve the use of a weapon, which was especially true of rapes or sexual assaults. More victims of violence take some sort of self-protective measure (e.g., fighting back, shouting for help) during the incident than do not. This is true for violence overall and for each particular type of violent crime.

Victims of direct-contact crimes are sometimes able to provide demographic information about their attackers. Victims of property crime, in contrast, are typically unable to provide this type of information unless the offender is thwarted in the attempt or subsequently apprehended. According to 2007 NCVS data on single-offender violent incidents, the vast majority of offenders (76%) are identified as males by their victims; this proportion was highest for rape or sexual assault (95%) and lowest for simple assault (71%). Offenders are more likely to be identified by their victims as white than as any other race. And persons under 30 years old are more commonly identified as the offender than older persons for all types of violent incidents.

Many victims cannot be certain, but when victims are able to ascertain the offender’s condition, they perceive the offender to be under the influence of either drugs or alcohol in approximately half of all violent incidents. Rates of perceived substance use are highest in rape or sexual assaults (37%) and lowest during robbery incidents (18%).

As a measure of the dark figures underlying official data, NCVS data from 2007 reveal that about half of all violent crimes are not reported to the police. Most rapes or sexual assaults (58%) as well as most simple assaults (58%) were not reported to police in 2007. Despite this level of unreported crime, the trend in crime reporting for both overall violent and property crime appears to be fairly stable but increasing somewhat over the last two decades (see Exhibit 5.8). For example, on average, between 1992 and 1993, 43% of all violent crimes and 33% of all property crimes were reported to police, according to the NCVS. In contrast, between 2006 and 2007, crime victims indicated that 48% of all violent incidents and 38% of all property crimes were reported.

Different factors are associated with the relative likelihood of crimes being reported to police. In particular, the likelihood of reporting is higher for (a) completed acts than attempted acts, (b) crimes involving injury than those without injury, (c) crimes committed by strangers than by nonstrangers, and (d) crimes in which a weapon is present than those without one. Particular groups of people also have higher reporting rates than others. For example, for both violent and property crimes, victimizations of females are more likely than victimizations of males to be reported to the police. Crimes against black victims are slightly more likely to be reported than crimes against white victims for both types of crime (Hart & Rennison, 2003).

Reasons for not reporting crimes to the police vary according to the type of offense. The most commonly given reasons for not reporting violent offenses are that “the crime was a personal or private matter” and that “the offender was not successful.” For property offenses, the most common reasons for not reporting were that “the object was recovered,” “the offender was unsuccessful,” “the police would not want to be bothered,” and “lack of proof” (Baumer & Lauritsen, 2010; Hart & Rennison, 2003; Lauritsen, 2005; Rennison & Rand, 2007).


In addition to national studies, research in the last four decades has included a diverse array of smaller-scale victimization studies, usually focused on particular types of crime within specific jurisdictions. Surveys of victims of domestic violence and sexual assault are widely recognized as an important measurement strategy due to the significant underreporting of these crimes in official data sources. College campus surveys, statewide surveys of quality of life, and local surveys of neighborhood revitalization and development often also include measures of victimization experiences. In addition, Gallup polls and national surveys such as the General Social Survey, conducted by the National Opinion Research Center, also frequently include items on victimization; the National White-Collar Crime Center has conducted multiple surveys of individuals over the last several years about their experiences as victims of white-collar crime (Kane & Wall, 2006); and recent supplements to the NCVS have captured information related to cybercrime experienced by businesses (Rantala, 2008).

Although useful for their particular purposes, small-scale victimization surveys are often less comprehensive than national surveys, and they are usually based on smaller samples. Low response rates and selective sampling frames also limit the generalizability of sample estimates from such studies.


In addition to national and subnational victimization surveys, there is a growing interest in victimization surveys in other countries. These surveys vary in both size and specific design features (see Exhibit 5.11). For example, unlike the NCVS, victimization surveys conducted in Australia, Canada, England and Wales, Sweden, the Netherlands, Scotland, Switzerland, and Ireland use a 12-month reference period and do not use bounding techniques to reduce telescoping. In addition, among these countries, only England and Wales, Sweden, Scotland, and Ireland interview respondents in person. In contrast, Australia and the Netherlands rely on self-administered questionnaires to collect their victimization data.

Variation in international victimization survey design, crime definitions, and legal codes makes cross-national comparisons of victimization estimates produced from these surveys difficult. Nevertheless, experts have attempted to produce compatible information from these surveys for certain types of crimes (Farrington, Langan, & Tonry, 2004). For example, results from international victimization surveys from the countries listed in Exhibit 5.9 show that between 1980 and 2000, Australia consistently had the highest burglary rate, while Switzerland and Sweden were the countries with the lowest (see Exhibit 5.10). Canada and the Netherlands were the countries whose surveys produced the highest estimates for robbery, compared to Scotland, which, until the mid-1990s, had the lowest rate of robbery victims (see Exhibit 5.11).


Large-scale public surveys of victims are widely regarded as an alternative measure of the true extent of crime in a jurisdiction because they provide estimates of both reported and unreported offenses in a particular time period. Similar to official data and self-reports of criminal behavior, however, victimization surveys are limited by basic restrictions on their scope and are susceptible to major conceptual and methodological problems that contribute to their mismeasurement of crime. Several of these issues are addressed in the sections that follow.

Limitations on the Scope of Crimes Covered

One immediate problem with victimization surveys as a measure of the distribution and nature of crime is that they can only capture criminal offenses involving victims. By definition, victimless crimes—such as drug and alcohol violations, prostitution, and gambling—are excluded from victimization surveys.3 Other criminal offenses—such as illegal weapon possession, tax evasion, murder, and crimes in which a business or commercial establishment is the victim (e.g., nonresidential burglary, bank robbery, employee theft, corporate collusion, and industrial theft), consumer fraud, possession of stolen property, and a host of public order offenses (e.g., trespassing, disorderly conduct, breach of peace, curfew violations)—are also excluded from these surveys.

The restricted scope of crimes covered in victimization surveys becomes problematic because the included crimes represent only a small minority of all criminal offenses that may be of interest to criminologists and policy makers. For example, based on official data on crime in the United States, the most common arrests involve drug- and alcohol-related crimes (e.g., possession of a controlled substance, public drunkenness, liquor law violations, driving under the influence of alcohol), and a sizable proportion of robberies (27%), burglaries (37%), and larcenies (12%) are crimes committed against the property of businesses rather than individuals (Biderman & Lynch, 1991). By excluding these major and frequently occurring forms of crime, existing public surveys of victims are limited to only a relatively small subset of crimes.4

Conceptual and Definitional Problems

Even among the subset of personal and property crimes included in victimization surveys, this measure of crime suffers from conceptual ambiguity regarding how crimes are defined by the researcher and the respondent. Differential perceptions of crime across individuals that derive from competing conceptualizations of criminal acts contribute to measurement error in the coding and counting of victimizations.

One serious problem in victim surveys that limits their comparability with official counts of crime involves the basic definition of crimes used in each data source. Specifically, victimization surveys use a potentially more inclusive definition of some types of crime than police reports because victim surveys include incidents that may be legally justified (e.g., self-defense assaults) and incidents lacking the basic necessary elements for legal culpability (e.g., criminal intent, particular injuries, or monetary loss). By simply counting as violent crime “any attack or threat or use of force by anyone at all” without an examination of the context of the event, victimization surveys may provide us with a distorted image of the prevalence of particular types of crimes. The tendency for victimization surveys to include many noncrimes and trivial offenses is well documented (Biderman & Lynch, 1991; O’Brien, 1985; Skogan, 1981).

Different definitions of crime across cultures and social groups are another fundamental problem with victimization surveys. Although the magnitude of bias in these surveys from differential interpretations of questions has not been empirically assessed, there is undoubtedly much variation in the meaning of particular words and phrases for members of different demographic groups. For example, the new screening questions in the NCVS use words such as attacked or threatened to cue memories about victimization experiences, but the subjective meaning attached to these terms varies widely. Is the act of brandishing a firearm or knife a threat of violence? Is someone attacked when it is a situation of mutual combat? How do respondents in victimization surveys interpret situations of grabbing, punching, or choking done in the context of either male or sibling roughhousing or of physical banter among peers or spirited athletic contests (like football, hockey, and basketball)? Is it reasonable to assume that no major gender, age, race, social class, or cultural differences exist in the interpretation of what constitutes a threat or attack? Similarly, the wording of the property offense questions for burglary (e.g., Has anyone broken in or attempted to break in your home?) and motor vehicle theft (e.g., Has anyone stolen or used without permission your vehicle?) is also subject to differential interpretations. Under these conditions, estimates of the prevalence of violent or property crimes are likely to be distorted.5

By failing to include reference to the particular context in which threats or attacks take place, most victimization surveys further compound measurement error stemming from differential interpretation of questions on the part of respondents. The NCVS redesign has attempted to deal with this problem by including screening cues about the offender or place of the crime (e.g., attacked by someone at work or school, a neighbor or friend, a relative or family member, while riding in a car, on the street or in a parking lot). Unfortunately, these contextual cues are still not necessarily standardized across different groups because the meaning that should be attached to words and phrases such as attack, threaten, grab, punch, have something stolen from you, break in, and use without permission that underlie questions about an individual’s victimization experiences is not addressed. Regardless of how refined the objective meaning attached to these terms is by researchers, they remain prone to varied subjective interpretations by respondents.

Methodological Problems

In addition to conceptual and definitional limitations, current victimization surveys suffer from numerous methodological problems that further call into question the accuracy of estimates of victimization rates, subgroup variation in these rates, and the measurement of the characteristics of offenders, victims, and crime incidents. These methodological problems involve both simple and complex issues of sampling (e.g., sampling error, sampling bias, characteristics of nonrespondents), survey research (e.g., interviewer effects, telephone vs. personal interviews, social desirability, reference period), and technical procedures used in the calculation of rates (e.g., appropriate numerators and denominators, series incidents).

Sampling Issues

National victimization surveys use the responses of a sample of residents to estimate rates of victimization for the entire population. Unfortunately, whenever samples are used to represent populations, there is always the possibility of a discrepancy between the sample estimates and the true population parameters. When this discrepancy is due entirely to the properties of random sampling, it is referred to as sampling error. The discrepancy is referred to as sampling bias when it derives from sources other than random sampling. Both sampling error and sampling bias are characteristic of victim surveys.

Sampling error results in fluctuations in estimates of national victimization rates. For example, the reported 2000 NCVS rate of violent victimization of 28 per 1,000 persons 12 years of age and older is our best single guess of the true rate of violent victimization in the United States. However, we cannot be certain of the absolute accuracy of this estimate because it is based on a sample, rather than the entire population. It is quite possible that another random sample of U.S. households for the same period would yield a markedly different estimate of violent victimization.

As discussed in Chapter 4, statistical theory about probability sampling tells us that, in the long run, sample estimates will converge on the true value in the population, which provides a strategy for quantifying the effect of sampling error. Information from the sample and estimates of sampling error can then be used to develop a range of values in which the true population parameter is likely to fall. Unfortunately, even with the construction of such confidence intervals, there is no guarantee that the estimates derived from a particular sample reflect the true population values. Of course, these issues are associated with survey research in general and are not specific to victimization surveys.

All other things being equal, large samples are preferred over small samples because sampling error decreases as sample size increases. Given the relatively large size of NCVS samples (approximately 60,000 households), we have far greater confidence in their estimates than in those from other victimization surveys.
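The relationship between sample size and sampling error can be sketched numerically. The snippet below computes an approximate 95% confidence interval around a rate of 28 per 1,000, treating respondents as a simple random sample. This is a deliberate simplification for illustration: the NCVS actually uses a complex multistage design, so its published standard errors come from design-based variance estimators rather than this textbook formula, and the sample sizes shown are invented, not actual NCVS person counts.

```python
import math

def rate_ci(rate_per_1000, n, z=1.96):
    """Approximate 95% confidence interval for a victimization rate,
    treating the sample as a simple random sample of n persons.
    (A simplification: the NCVS uses a complex multistage design.)"""
    p = rate_per_1000 / 1000.0
    se = math.sqrt(p * (1.0 - p) / n)   # standard error of a proportion
    half = z * se * 1000.0              # half-width on the per-1,000 scale
    return rate_per_1000 - half, rate_per_1000 + half

# 2000 NCVS estimate: 28 violent victimizations per 1,000 persons age 12+.
# Sample sizes below are illustrative only.
for n in (1_000, 10_000, 100_000):
    lo, hi = rate_ci(28, n)
    print(f"n={n:>7,}: 95% CI roughly ({lo:.1f}, {hi:.1f}) per 1,000")
```

Because the interval narrows with the square root of the sample size, a survey roughly the scale of the NCVS produces far tighter estimates than small local victimization surveys can.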

Several sources of sampling bias have been identified in victimization surveys. For example, particular groups of people are less likely to participate in victimization surveys than others, and the excluded groups tend to be more prone to victimization. In the NCVS, homeless persons, young males, and members of minority groups are less likely to be included, and each group has higher risks of victimization than its older, female, and nonminority counterparts (Skogan, 1978). At the other extreme, the very wealthy are probably underrepresented in victimization surveys because of their ability to isolate themselves from interviewers (Garafalo, 1990). Nonrandom differences in response rates across social groups (e.g., lower response rates among minority and inner-city residents compared to other groups) are another source of sampling bias in victimization research. For the NCVS, nonresponse rates are highest among young nonwhite respondents, the population subgroup with the highest victimization rate. This compounds the problem of producing unbiased estimates for this and other subpopulations with high nonresponse rates (see Exhibit 5.12). The exclusion of victimization experienced by businesses and crimes against the government in household victimization surveys can also be interpreted as a source of sampling bias that dramatically lowers estimates of national victimization rates. Although adjustments for sampling bias are sometimes made in national estimates, there is no universally accepted method of adjusting for this source of error, and many of the correction factors that are used rest on rather dubious assumptions about the nature of the excluded cases.

Survey Research Issues

Survey research is an ideal data collection strategy for victimization studies because surveys are well suited to describing the characteristics of a population. Unfortunately, survey responses are affected by a wide variety of factors that alter the accuracy of estimates of victimization rates. These problems include differences across the mode of administration of surveys, question wording and reference periods, and the basic limitations of human judgment.

One basic issue in victimization surveys involves whether to collect data through a telephone or a face-to-face interview. Telephone surveys have the advantage of being cheaper and quicker to implement. When interviews are conducted by reading the questionnaire from a computer-assisted telephone interviewing (CATI) system and are monitored in a central facility, telephone surveys provide greater assurances of uniformity and standardization. Telephone interviewing also provides greater anonymity for respondents, which may generate more truthful answers to sensitive questions. In contrast, face-to-face interviews are believed to provide higher response quality because trained interviewers can maximize the use of various visual and nonverbal cues to ask more complicated questions and to determine whether the respondent understands them. Both telephone and face-to-face interviews have been used in national and international victimization surveys. For example, only the first interview of the head-of-household respondent surveyed for the NCVS requires a face-to-face interview. (Although not required, other household members also complete a face-to-face interview if they are available.) The remaining interviews with the head-of-household respondent, and interviews with all others residing in the sampled household, are conducted via telephone if the respondents are agreeable and have a telephone.

In terms of the accuracy of information and eliciting victimization incidents, several general statements can be supported from previous research comparing telephone and face-to-face interviews. First, there is no convincing evidence that telephone surveys provide less accurate information about crime victimization than personal interviews in the NCVS (Biderman & Lynch, 1991). However, differences across methods in the NCVS projects are probably smaller than in other surveys because of the extensive training and monitoring that is done in the NCVS for both telephone and face-to-face interviewers. Second, the use of CATI from a centralized telephone facility has been found to increase the number of reported crimes for at least some offenses. As mentioned earlier, the use of CATI increases estimates of the rates of crimes of violence, crimes of theft, and household larceny by approximately 15% to 20% and burglary by about 10%, but it has only a marginal impact on reports of motor vehicle theft (see Bureau of Justice Statistics, 1994). CATI is presumed to yield higher and more realistic estimates of crime rates in victimization surveys by enhancing administrative control over the interview process. Under these conditions, differences across studies in the type of survey method utilized, together with the increased use of telephone interviewing over time, make many comparisons of estimated victimization rates over time rather dubious.

Another major issue in survey research involves question wording and response formats. This issue has been raised most pointedly in victimization surveys within the context of the type and nature of screen questions as well as the reference period for the reporting of victimization experiences.

Both incident rates and subgroup variation in victimization risks are affected by the particular screen questions used to elicit reports of victimization experiences. Short screen questions may cue a respondent’s recall of only a small subset of incidents that involve the most serious or frequent violations, whereas longer screens encourage the recounting of a fuller range of experiences across various contexts.

When compared to results using the pre-redesign NCS instrument, the redesigned survey (which includes new screens and enhanced questions) yields substantially higher estimates of victimization rates for particular crimes. Specifically, changes in the survey wording resulted in a dramatic increase in the estimated rates of personal crimes (44% increase), crimes of violence (49% increase), assaults (57% increase), and rapes (157% increase). However, the redesigned survey did not substantially affect estimated rates for robbery, personal theft, or motor vehicle theft (Kindermann et al., 1997). The new method also had a significant impact on estimates of crime committed by nonstrangers, attempted acts, and those offenses that were not reported to the police. As mentioned previously, the recalling of violent crimes in the redesigned survey was higher for the following groups: whites, middle-aged residents (35–44 years old), persons with higher incomes, and suburban residents.

Estimates of victimization rates are also affected by the length of the reference period used in the survey. Intuitively, longer time periods (e.g., asking about victimization experiences in the previous five years) will elicit more incident reports than a shorter time period because of the greater time at risk for victimization. However, survey respondents who are asked retrospective questions may also misplace an incident in time, recalling it as occurring at a different point than it actually did. Although this telescoping of the reference period may be due to faulty memory or an unconscious effort to please interviewers, it is a serious problem in any survey that attempts to elicit information about past events.

Surveys that use a reference period of one year or more are susceptible to forward telescoping (i.e., remembering events as occurring more recently than they actually occurred). The six-month reference period used in the NCVS makes this survey prone to backward telescoping (i.e., remembering incidents as having occurred in a more distant past). However, historically, both types of telescoping were minimized in the NCVS data through the process of the bounding interview, in which the first interview of a household serves as a baseline for anchoring the recall period. When re-interviewed after six months, NCVS respondents were asked about incidents since the last interview, and repeated incidents could be filtered out. Incident reports from the first interview of a household in the NCVS panel were not used in the estimation of national rates because they are unbounded and susceptible to telescoping. Biderman and Cantor (1984) suggested that the failure to bound incidents in the NCVS would increase the number of estimated victimizations by almost 50%. Recently, however, the initial, unbounded interviews have been included in the NCVS annual victimization estimates, and advanced statistical procedures suggest that the impact of including unbounded interviews is less severe (Addington, 2005; Rand & Catalano, 2007).
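The logic of a bounding interview can be illustrated with a toy filter: incidents recalled as occurring before the previous (baseline) interview are dropped from the estimates, since they either were already counted or have been telescoped into the reference period. The data, dates, and field names below are hypothetical, not actual NCVS variables.

```python
from datetime import date

# Hypothetical incident reports from one respondent's second (bounded)
# interview; field names are illustrative only.
reports = [
    {"crime": "simple assault", "recalled_date": date(2007, 3, 14)},
    {"crime": "theft",          "recalled_date": date(2006, 11, 2)},  # telescoped
    {"crime": "robbery",        "recalled_date": date(2007, 5, 30)},
]

# The previous interview anchors (bounds) the recall period.
previous_interview = date(2007, 1, 10)

# Bounding: only incidents recalled as occurring since the last interview
# enter the estimates; the earlier theft report is filtered out.
bounded = [r for r in reports if r["recalled_date"] >= previous_interview]

print([r["crime"] for r in bounded])  # → ['simple assault', 'robbery']
```

The sketch also shows why the first interview of a household cannot be bounded: there is no earlier interview date to anchor the filter, which is exactly the problem with unbounded data described above.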

This bounding procedure may be a partial solution to the problem of telescoping, but it does not correct several other potential response effects often found in panel studies. For example, unlike subjects in cross-sectional surveys, subjects in panel surveys are interviewed repeatedly, and their responses to survey items may be at least partially dependent on the previous interview experience (Lehnen & Reiss, 1978). However, recent studies suggest that repeated exposure to multiple waves of NCVS interviews does not lead to more people refusing to participate in the survey (Hart, Rennison, & Gibson, 2005).

Decisions regarding which reference period to use in victimization surveys are often based on balancing the issues of telescoping, sample size, and financial costs. Forward telescoping is minimized by a shorter recall period, but this choice also requires the use of larger and ultimately more expensive samples to uncover a sufficient number of individuals who have recently experienced a victimization. Unfortunately, the use of different recall periods and bounding procedures limits the ability to make over-time comparisons of large-scale victimization surveys such as the NCVS.

In addition, because victimization surveys rely only on the report of the victim, the data may be distorted by variations in how respondents define crime. In the 1976 survey, for example, persons with college degrees recalled three times as many assaults as those with only an elementary education (Gove, Hughes, & Geerken, 1985). It is possible that persons with lower levels of education may see a certain act as a normal aspect of daily life, whereas individuals who have had very little experience with physically assaultive behavior may view the same act as one of criminal violence. Alternatively, these differences in reporting of victimization across educational levels could be due to differential respondent productivity; that is, people with higher levels of education may be better able to recall incidents of victimization.

Another issue is related to interviewer and interviewer-respondent interaction effects: Different interviewers may elicit different accounts from the same individual because, for example, they prompt respondents more or less or appear more or less open to certain responses. Clarren and Schwarz (as cited in Gove et al., 1985) concluded that “the upper bound for the number of crimes that could be elicited is limited only by the persistence of the interviewer and the patience of the respondent” (p. 461). In the context of the British Crime Surveys, Coleman and Moynihan (1996) noted that respondents, not wanting to disappoint the often persistent interviewers, may recall incidents experienced by friends or neighbors rather than by themselves. It is also possible, they suggested, that with the crime problem so high on the media’s agenda and thus ingrained in people’s minds, respondents may fabricate incidents in the hope that this will somehow lead to policy changes.

Victimization surveys are also an imperfect measure of crime because of the inherent fallibility of human information processing and judgment. People experience lapses in memory, selectively perceive and misperceive particular actions, and interpret actions and events from their own perspectives. In the case of reports of victimization, people may overestimate or underestimate their experiences through outright deception, exaggeration, embarrassment, or misinterpretation. The not uncommon perceptions that lost items were stolen, that open doors and windows are evidence of attempted break-ins, as well as the misunderstanding of particular words and phrases such as threats and fighting words are simple examples of how victimization surveys may provide seriously distorted estimates of the true amount of crime.

The reliability of victimization data can be ascertained through comparisons of survey data with official records, but the results from the limited number of studies that have made such comparisons are not overly encouraging. For example, Turner (1972) found that only 63% of the cases of robbery, assault, and rape from police records were reported on victimization surveys, and there were important differences according to the relationship of the victim to the offender. When the offender was a stranger, 76% of the incidents were reported to the interviewer on the victimization survey; when the offender was known to the victim, 57% of the incidents were reported; and when the offender was a relative, only 22% of the incidents were reported. In a similar record-check study in Baltimore, Murphy and Dodge (1981) found that only 37% of the assaults—compared to 75% of the larcenies, 76% of the robberies, and 86% of the burglaries—uncovered in police records were reported by respondents in victimization surveys.

A second type of record check is the forward record check (O’Brien, 1985), which involves examining crimes that respondents in victimization surveys claim to have reported to the police. Apparently, the only study of this type was conducted by Schneider (1977, as cited in O’Brien, 1985) in Portland, Oregon; it found that only 45% of the crimes that respondents claimed to have reported to the police were listed in police records.

Technical and Procedural Issues

A number of technical issues associated with victimization surveys also place limits on their utility as measures of crime. These issues focus on the numerator (i.e., the number and type of crimes) and the denominator (i.e., the relevant population base) used in the calculation of crime rates and trends. Changes in technical aspects of national surveys have further eroded the comparability of victimization estimates across jurisdictions and over time.

Changes in the definition of series victimizations and how they are treated is a major technical problem with current victimization surveys. A series involves multiple incidents that are very similar in detail but for which the respondent is unable to recall specific dates and details well enough to report the incidents separately. They are ongoing, with no clear starting or stopping point. For example, many cases of spouse abuse or bullying involve repeated attacks or threats of attack on a number of occasions over the reference period, but a victim cannot recall the particular dates or details of each incident. These details are important in that they are what are used to ascertain if a crime occurred, and if so, what type of crime occurred. Without these details, this simple counting task cannot be accomplished.

National surveys vary widely in their definition of series victimizations and how these are handled in estimation procedures. For example, the NCVS defined a series as involving “three or more criminal acts” prior to the redesign in 1992; since that time, the threshold has been six incidents.

Although some reports using NCVS data count series crimes as one victimization, series victimizations are excluded in rates presented in the annual crime bulletin. It is estimated that this change in the definition of series crimes will result in only a small (1% to 5%) increase in the rates for most crimes, but the increase may be as large as 10% to 15% for rates of assault and some types of theft (Bureau of Justice Statistics, 1994). Aside from decreasing the overall estimated rates of victimization, the exclusion of series incidents also artificially deflates the victimization risks for women and other subgroups that are more susceptible to these crimes.

Another technical issue that affects the counting of incidents for rate calculation involves the incomplete bounding of interviews in the NCVS data. Specifically, NCVS procedures dictate that only the first rotation of a housing unit in the survey be treated as a bounding interview, excluding victimization data from that housing unit gathered in the first contact period. However, unbounded data occur when (1) a new person or persons move into an eligible housing unit, (2) an eligible respondent was not successfully contacted in the previous interview, (3) a respondent ages into eligibility (i.e., they were 9–11 years old at the start of the time-in-sample), or (4) a respondent provides a personal interview that is then followed by proxy interviews. Nevertheless, these data are still used for calculating victimization rates. Thus, by bounding the housing unit but not the individuals within it, NCVS estimates remain susceptible to telescoping and overestimation of victimization risks. The seriousness of this problem is illustrated in a study by Biderman and Cantor (1984), who found that approximately 18% of the total interviews used to generate NCVS estimates in the 1970s were unbounded first interviews. The inclusion of these data increased the number of victimizations used for published estimates of national trends by almost 50%. The replacement of households that leave the panel because they move may also lead to lower estimates of victimization; individuals in such households generally have higher rates of victimization than people who remain at the same address.

As sample data used to estimate population values, national counts of victimizations are derived from various types of weighting procedures and imputations for missing data. Although based on sound statistical theory, these adjustments in practice involve making assumptions about the behavior of nonrespondents and the homogeneity of classes or subgroups. For example, the weighting of the British Crime Survey data to adjust for oversampling of inner-city and minority respondents may be of limited value simply because it does not take into account the differential response rates across these subgroups and their differences in victimization risks. By making unrealistic assumptions that errors in measurement and sampling are random (rather than correlated with other factors), the complex weighting and adjustments used in the NCVS are also subject to debate.
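The mechanics of the design weighting described above can be sketched briefly. The strata, weights, and counts below are fabricated purely for illustration; they are not drawn from the NCVS or the British Crime Survey:

```python
# Minimal sketch (fabricated data): a design-weighted victimization estimate.
# Respondents from a deliberately oversampled stratum carry smaller weights
# so that they are not overrepresented in the overall estimate.
respondents = [
    # (victimized this period?, design weight)
    (True,  0.5),   # oversampled stratum, down-weighted
    (False, 0.5),
    (True,  0.5),
    (False, 2.0),   # undersampled stratum, up-weighted
    (False, 2.0),
    (True,  2.0),
]

weighted_victims = sum(w for hit, w in respondents if hit)
total_weight = sum(w for _, w in respondents)

# Unweighted, 3 of 6 respondents (0.50) were victimized; the weighted
# estimate differs because the two strata have different victimization
# rates -- which is exactly why the adjustment matters, and why it can
# mislead if response rates also differ across strata.
print(weighted_victims / total_weight)
```

Note that this sketch weights only for sampling design; as the text points out, it does nothing to correct for differential *response* rates across the same subgroups, which is the limitation raised above.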

When moving from a consideration of victimization incidents to victimization rates, other technical issues arise that may lead to a distortion in the interpretation of results. For example, Bureau of Justice Statistics’ publications of NCVS data trends compute rates of victimizations per 1,000 persons or households by taking the total number of incidents and dividing by the respective number of persons or households. The resulting rate, however, does not translate into a proportion of persons or households because multiple victimizations are included in the calculations. For instance, a burglary rate of 100 per 1,000 households does not mean that 10% of households experience a burglary, because it is possible that one household may report an enormous number of victimizations. What it means is that there are 100 burglaries per 1,000 households. Unfortunately, by spreading these multiple victimizations of a particular household across all households, these calculated rates may give consumers a somewhat misleading image of risk. Under these conditions, a measure such as proportion of households touched by crime may be a better barometer of victimization risks (Klaus, 2007).
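The distinction between an incident rate and the proportion of households touched can be made concrete with a small worked example. The counts here are fabricated for illustration and are deliberately extreme:

```python
# Illustrative sketch (fabricated counts, not NCVS data): why a victimization
# rate per 1,000 households is not the proportion of households victimized.
# Ten hypothetical households; household 0 alone reports 10 burglaries.
burglaries_per_household = [10, 0, 0, 0, 0, 0, 0, 0, 0, 0]

n_households = len(burglaries_per_household)
total_incidents = sum(burglaries_per_household)

# Incident rate: total incidents spread across all households.
rate_per_1000 = total_incidents / n_households * 1000

# "Households touched by crime": share with at least one incident.
touched = sum(1 for b in burglaries_per_household if b > 0)
proportion_touched = touched / n_households

print(rate_per_1000)       # 1000.0 -- a rate of 1,000 per 1,000 households
print(proportion_touched)  # 0.1 -- yet only 10% of households were burglarized
```

Here a naive reading of the rate suggests that every household was burglarized, when in fact nine of the ten were untouched; this is the repeat-victimization distortion the paragraph above describes.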

An assortment of other technical and procedural issues affects the number of victimizations estimated from these surveys. The following additional factors have been found to influence the counting of victimization incidents:

• Using a proxy to report victimizations of other household members results in lower numbers of incidents being reported than when household members report their own experiences. The NCVS used a proxy for all 12- or 13-year-old household members (until 1992), non-English or non-Spanish speakers, and those temporarily absent or unable to be interviewed.

• Response rates vary across national surveys, and those who refuse to participate or are undercounted in household enumerations (e.g., younger persons, the poor and homeless, ethnic minorities, and frequent movers) generally have higher victimization risks than survey respondents. Although the underrepresentation of these high-risk groups will obviously decrease estimates, the impact of differential response rates across subgroups may either inflate or deflate the number of recorded incidents, depending on whether groups with the highest response rates have high or low risks of victimization.

• Various types of violent behaviors and thefts are seriously undercounted in victimization surveys. These include crimes committed by family members and intimates, homicides, robberies and thefts from commercial establishments, rapes and sexual assaults, and all crimes committed against tourists and other nonresidents.

• Changes in procedures utilized in the NCVS studies over time (e.g., more extensive screens, greater use of telephone interviews, changes in the definition of series incidents, decreased sample sizes) also influence the number of incidents recorded in yearly samples. All else being equal, reductions in the size of NCVS samples over time increase sampling error and thereby decrease the accuracy of the estimates of the numbers and rates of victimization (Lauritsen, 2005).


• The number of reported victimizations decreases through successive interviews in the NCVS rotation panels (Garofalo, 1990). In other words, the number of recalled incidents decreases consistently between the first and seventh interview. Given that the proportion of persons in the NCVS who are interviewed for all seven rotations has decreased over time (due to increased population mobility), there has been (a) an increase in the number of persons who receive a smaller number of interviews (e.g., one to three interviews) and (b) a subsequent increase in the apparent number of victimizations due to this time-in-sample problem. Changes in the average time-in-sample affect estimates of the number of incidents and the comparability of NCVS victimization rates over time.

• Many of the incidents reported in victimization surveys are trivial and may not even qualify as a crime from a legal perspective. Most violent crimes in the NCVS involve simple assaults without injury to the victim. The redesigned NCVS yields more incidents of violent crime, but the new method has a greater impact on estimates for violent offenses by nonstrangers, attempted crimes, and violent crimes not reported to the police.

The level of victimization risks in national surveys depends in large part on the denominator used in the calculation of estimated rates. Property victimization rates for each national survey are calculated as an incident rate (or victimization rate—both are possible in the data) per 1,000 households. Rates of violent victimization (or violent incidents—both are possible in the data) are often expressed as a victimization rate per 1,000 population 12 years of age and older. The measurement of victimization risks, however, is in many cases better served by a different base for rate calculation, using for the denominator the entity most at risk for that particular victimization. For example, the calculation of motor vehicle theft per household is probably a less accurate measure of one’s vulnerability to this crime than a motor vehicle theft rate expressed per 1,000 households with motor vehicles or thefts per 1,000 motor vehicles. Similarly, the calculation of rape rates per 1,000 persons ignores the fact that women are the victims of this crime in more than 90% of the cases. Under these conditions, computing rape rates per 1,000 females is a more meaningful barometer of victimization risks, and this is often what is produced in many reports published by the Bureau of Justice Statistics.
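The effect of the denominator choice on apparent risk can be shown with a brief sketch. All of the counts below are fabricated for illustration only:

```python
# Illustrative sketch (fabricated numbers): the same count of motor vehicle
# thefts yields different "risk" figures depending on the denominator used.
thefts = 50
households = 10_000
households_with_vehicle = 8_000   # only these can experience the crime
vehicles = 15_000                 # some households own more than one

# Conventional base: all households, whether or not they own a vehicle.
rate_per_household = thefts / households * 1000

# Risk-adjusted bases: only the entities actually exposed to the crime.
rate_per_owning_household = thefts / households_with_vehicle * 1000
rate_per_vehicle = thefts / vehicles * 1000

print(round(rate_per_household, 2),
      round(rate_per_owning_household, 2),
      round(rate_per_vehicle, 2))
```

The per-household figure understates the risk faced by vehicle-owning households and overstates the per-vehicle risk; choosing the denominator closest to the exposed population, as the text recommends, avoids both distortions.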

It is possible to compute victimization rates on a wider variety of population bases that correspond to risky groups and settings. These include, for example, (a) the rate of stranger assaults per 1,000 contacts with strangers, (b) mugging rates per 1,000 hours spent in public places, (c) home burglary rates per 1,000 households with burglary alarms, and (d) violent crime rates per 1,000 college students between 18 and 24 years of age. Although numerical data for each of these particular base comparisons may not be readily available in all cases, the use of rate calculations that directly incorporate risk factors may be a better reflection of one’s chances of particular types of victimization than measures of victimization rates that are not adjusted for differential exposure and vulnerability.


National victimization surveys have been widely used as an alternative measure of the prevalence and distribution of crime in the United States. The major advantage of these surveys is that they provide a profile of criminal incidents that are both reported and not reported to the police. Who is better able to enumerate the nature and distribution of crime incidents and the consequences of crime than those who experience it?

Unfortunately, all victimization surveys have four inherent problems that limit their utility as accurate measures of criminal activity. First, victim surveys cover only a small range of criminal acts—excluding victimless and public order violations, homicides, commercial and business victimizations, and many white-collar crimes against consumers—and seriously undercount incidents of domestic violence and other crimes among known parties. Second, victimization surveys are based on sample data and not population counts, making them subject to serious distortion because of sampling error and sampling bias. Third, these surveys are based entirely on victims’ perceptions without independent confirmation that the offenses they claim to have experienced actually occurred or would qualify as a crime from a legal perspective. Victims may also either under- or overreport their experiences because of factors such as forgetfulness, misinterpretations of events, embarrassment, fear of getting in trouble, trying to please interviewers by giving socially desirable answers, and deliberate distortion or manipulation. Fourth, the number of victimizations uncovered in surveys depends on how the questions are worded and numerous technical elements associated with the survey itself. The use of different procedures over time renders problematic any comparisons of estimated victimization rates from these surveys.

The problems with victimization surveys, however, are neither more nor less serious than the problems with official data and self-report measures of crime. In fact, problems of definitional ambiguity, limited coverage, reporting biases, and various sources of measurement error plague each method of counting crime. Nonetheless, a comparison of the results across these three primary methods of counting crime reveals several common themes about the prevalence of crime, its spatial distribution, and the correlates of crime. The common themes across these methods and some concluding thoughts about crime measurement are addressed in the final chapter.
