
Using National Survey Data to Analyze Children’s Health Insurance Coverage: An Assessment of Issues

by John L. Czajka and Kimball Lewis

Mathematica Policy Research, Inc.
600 Maryland Ave., S.W. Suite 550
Washington, DC 20024

May 21, 1999

EXECUTIVE SUMMARY

Survey data will play an important role in the evaluations of the Children’s Health Insurance Program (CHIP) because program administrative data cannot tell us what is happening to the number of uninsured children. This report discusses key analytic issues in the use of national survey data to estimate and analyze children’s health insurance coverage. One goal of this report is to provide staff in the Office of the Assistant Secretary for Planning and Evaluation (ASPE) with information that will be helpful in reconciling or at least understanding the reasons for the diverse findings reported in the literature on uninsured children. The second major objective is to outline for the broader research community the factors that need to be considered in designing or using surveys to evaluate the number and characteristics of uninsured children. We examine four areas:

• Identifying uninsured children in surveys

• Using survey data to simulate Medicaid eligibility

• Medicaid underreporting in surveys

• Analysis of longitudinal data

We focus on national surveys, but many of our observations will apply equally to the design of surveys at the state level.

IDENTIFYING UNINSURED CHILDREN IN SURVEYS

Most of what is known about the health insurance coverage of children in the United States has been derived from sample surveys of households. Three ongoing federal surveys--the annual March supplement to the Current Population Survey (CPS), the National Health Interview Survey (NHIS), and the Survey of Income and Program Participation (SIPP)--provide a steady source of information on trends in coverage and support in-depth analyses of issues in health care coverage. Periodically the federal government and private foundations sponsor additional, specialized surveys to gather more detailed information on particular topics. Three such surveys are the Medical Expenditure Panel Survey (MEPS), the Community Tracking Study (CTS), and the National Survey of America’s Families (NSAF). Table 1 presents recent estimates of uninsured children from all six surveys. It is easy to see from this table why policymakers are frustrated in their attempts to understand the level and trends over time in the proportion of children who are uninsured.

Estimates of the incidence or frequency of uninsurance are typically reported in one of three ways: (1) the number who were uninsured at a specific point in time, (2) the number who were ever uninsured during a year, or (3) the number who were uninsured for the entire year. Point-in-time estimates are the most commonly cited. With the exception of the MEPS estimate, all of the estimates reported in Table 1 represent estimates of children uninsured at a point in time, or they are widely interpreted that way. Of the six surveys, only the SIPP and MEPS are able to provide all three types of estimates. With the 1992 SIPP panel we estimated that 13.1 percent of children under 19 were uninsured in September 1993, 21.7 percent were ever uninsured during the year, and 6.3 percent were uninsured for the entire year. Clearly, the choice of time period makes a big difference in the estimated proportion of children who were uninsured.

                                   TABLE 1
ESTIMATES OF THE PERCENTAGE OF CHILDREN WITHOUT HEALTH INSURANCE, 1993-1997
                                                                                          
Source of     1993        1994        1995        1996       1997
Estimate
                                                                                          
CPS           14.1        14.4        14.0        15.1       15.2
NHIS          14.1        15.3        13.6        13.4        --
SIPP          13.9        13.3         --          --         --
MEPS           --           --          --        15.4        --
CTS            --           --          --        11.7        --
NSAF           --           --          --         --         11.9
                                                                                          
Notes:  Estimates from the CPS and SIPP are based on tabulations of public use
files by Mathematica Policy Research, Inc., and refer to children under 19 years
of age.  Estimates from the other surveys apply to children under 18.  The NHIS 
estimates were reported in NCHS (1998).  The estimate from MEPS refers to children
who were "uninsured throughout the first half of 1996," meaning three to six 
months depending on the interview date; the estimate was reported in 
Weigers et al. (1998).  The CTS estimate, reported in Rosenbach and Lewis (1998), 
is based on interviews conducted between July 1996 and July 1997.  The NSAF estimate, 
reported in Brennan et al. (1999), is based on interviews conducted between 
February and November, 1997.
 

The estimate of uninsured children provided annually by the March CPS has become the most widely accepted and frequently cited estimate of the uninsured. At this point, only the CPS provides annual estimates with relatively little lag, and only the CPS is able to provide state-level estimates, albeit with considerable imprecision. But what exactly does the CPS measure? CPS respondents are supposed to report any insurance coverage that they had over the past year. There is little reason to doubt that the CPS respondents are answering the health insurance questions in the manner that was intended--that is, they are reporting coverage that they ever had in the year. For example, CPS estimates of Medicaid enrollment closely match the SIPP estimates of children ever covered by Medicaid in a year, whereas the CPS estimates exceed the SIPP estimates of children covered by Medicaid at a point in time by about 27 percent. How, then, can the CPS estimates of what should be children uninsured for the entire year match other survey estimates of children uninsured at a point in time? The answer, we suggest, lies in the extent to which insurance coverage for the year is underreported by the CPS. Is it simply by chance that the CPS numbers approximate estimates of the uninsured at a point in time, or is there something more systematic? The more the phenomenon is due to chance, the less confident we can be that the CPS will correctly track changes in the number of uninsured children over time or correctly represent the characteristics of the uninsured.

Multiple sources of error may affect all of the major surveys, including the CPS, and make it difficult to compare their estimates of the uninsured. These include the sensitivity of responses to question design; the impact of basic survey design features; the possibility that respondents may not be aware of the source of their coverage or even its very existence; and the bias introduced by respondents’ imperfect recall.

Typically, surveys identify the uninsured as a “residual.” They ask respondents if they are covered by health insurance of various kinds and then identify the uninsured as those who report no health insurance of any kind. Both the CTS and the NSAF have employed a variant on this approach. First, they collect information on insurance coverage, and then they ask whether people who appear to be uninsured really were without coverage or had some coverage that was not reported. In both surveys this latter “verification question” reduced the estimated proportion of children who were without health insurance. These findings make a strong case for incorporating a verification question into the measurement of health insurance coverage. The NHIS introduced such a question in 1997, and the SIPP is testing this approach.

The sensitivity of responses to question design is further illustrated by the Census Bureau’s experience in testing a series of questions intended to identify people uninsured at a point in time. These questions yielded much higher estimates of the uninsured than other, comparable surveys. The Bureau’s experience sends a powerful message that questions about health insurance coverage can yield unanticipated results. Researchers fielding surveys that attempt to measure health insurance coverage would be well-advised to be wary of constructing new questions unless they can also conduct very extensive pretesting.

Other survey design decisions can also have a major impact on the estimates of the uninsured, including the choice of the survey universe and the proportion of the target population that is actually represented, the response rate among eligible households, the use of proxy respondents, the choice of interview mode, the use of editing to correct improbable responses, and the use of imputation to fill in missing responses. Both the CTS and NSAF were conducted as samples of telephone numbers, with complementary samples of households without telephones. This difference in methodology between these surveys and the CPS, NHIS, and SIPP has drawn less attention than the use of a verification question, but it may be as important in accounting for the lower estimates of the proportion of children who are uninsured.

Which estimate reported in Table 1 is the most correct? There is no agreement in the research community. Clearly, the CPS estimate has been the most widely cited, but its timeliness and consistency probably account for this more than any presumption that it is the most accurate. When the estimate from the CTS was first announced, it was greeted with skepticism. Now that the NSAF, using similar survey methods, has produced a nearly identical estimate, the CTS’s credibility has been enhanced, and the CTS number, in turn, has paved the way for broader acceptance of the NSAF estimate. Yet neither survey has addressed what was felt to be the biggest source of overestimation of the uninsured in the federal surveys: namely, the apparent, substantial underreporting of Medicaid enrollment, discussed below. Much attention has focused on the impact of the verification questions in the CTS and NSAF, but the effect was much greater in the NSAF than in the CTS even though the end results were the same. The NHIS will soon be able to show the effects of introducing a verification question into that survey, but we suspect that significant differences in the estimates will remain. We conclude that a more detailed evaluation of the potential impact of sample design on the differences between the CTS and NSAF, on the one hand, and the federal surveys, on the other, may be necessary if we are to understand the differences that we see in Table 1.

USING SURVEY DATA TO SIMULATE MEDICAID ELIGIBILITY

There are two principal reasons for simulating Medicaid eligibility in the context of studying children’s health insurance coverage. The first is to obtain denominators for the calculation of Medicaid participation rates--for all eligible children and for subgroups of this population. The second is to estimate how many uninsured children--and what percentage of the total--may be eligible for Medicaid but not participating. The regulations governing eligibility for the Medicaid program are exceedingly complex, however. There are numerous routes by which a child may qualify for enrollment, and many of the eligibility provisions and parameters vary by state. Even the most sophisticated simulations of Medicaid eligibility employ many simplifications. More typically, simulations are highly simplified and exclude many eligible children. A full simulation requires data on many types of characteristics, but even the most comprehensive surveys lack key sets of variables.

A Medicaid participation rate is formed by dividing the number of participants (people enrolled) by the number of people estimated to be eligible. Because surveys underreport participation in means-tested entitlement programs, it has become a common practice to substitute administrative counts for survey estimates of participants when calculating participation rates. This strategy merits consideration in calculating Medicaid participation rates as well, but the limitations of Medicaid eligibility simulations imply that this must be done carefully. In addition, there are issues of comparability between survey and administrative data on Medicaid enrollment that affect the substitution of the latter for the former in the calculation of participation rates and even the use of administrative data to evaluate the survey data. Problems with using administrative data include:

  • The limited age detail that is available from published statistics
  • The duplicate counting of children who may have been enrolled in different states
  • The fact that the administrative data provide counts of children ever enrolled in a year while eligibility is estimated at a point in time
  • The difficulty of removing institutionalized children--who are not in the survey data-- from the administrative numbers
  • Inconsistencies in the quality of the administrative data across states and over time

Attempts to combine administrative data with survey data in calculating participation rates must also address problems of comparability created by undercoverage of the population in sample surveys and the implications of survey estimates of persons who report participation in Medicaid but are simulated to be ineligible.

A further issue affecting participation rates is how to treat children who report other insurance. With SIPP data we found that 18 percent of the children we simulated to be eligible for Medicaid reported having some form of insurance coverage other than Medicaid. Excluding them from the calculation raised the Medicaid participation rate from 65 percent to 79 percent.
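
The arithmetic behind these two rates is worth making explicit. The following minimal sketch reproduces it in Python; the absolute counts are invented and scaled only to match the 18, 65, and 79 percent figures cited above.

    # Illustrative participation-rate arithmetic. The counts are hypothetical,
    # chosen to reproduce the SIPP-based rates reported in the text.
    eligible = 10_000_000       # children simulated to be eligible for Medicaid
    enrolled = 6_500_000        # eligible children reported as enrolled
    other_insured = 1_800_000   # eligible children reporting only non-Medicaid coverage (18 percent)

    rate_all = enrolled / eligible                          # 0.65
    rate_excluding = enrolled / (eligible - other_insured)  # 0.79

    print(f"Participation rate, all eligible children:  {rate_all:.0%}")
    print(f"Participation rate, other-insured excluded: {rate_excluding:.0%}")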

MEDICAID UNDERREPORTING IN SURVEYS

Comparisons with administrative data suggest that the CPS and the SIPP may underestimate Medicaid enrollment by 13 to 25 percent. The underreporting of Medicaid enrollment may lead to an overstatement of the number and proportion of children who are without insurance. But the impact of Medicaid underreporting on survey estimates of the uninsured is far from clear. Indeed, even assuming that these estimates of Medicaid underreporting are accurate, the potential impact of a Medicaid undercount on estimates of the uninsured depends on how the underreporting occurs. First, some Medicaid enrollees may report to survey takers, incorrectly, that they are covered by a private insurance plan or a public plan other than Medicaid. Such people will not be counted as Medicaid participants, but neither will they be counted among the uninsured. Second, some children in families that report Medicaid coverage may be inadvertently excluded from the list of persons covered. In the SIPP we found that 7 percent of uninsured children appeared to have a parent covered by Medicaid. Any such children actually covered by Medicaid will be counted instead as uninsured. Third, some children covered by Medicaid may fail to report any coverage at all and be in families with no reported Medicaid coverage either; these children, too, will be counted incorrectly as uninsured. Fourth, some of the undercount of Medicaid enrollees may be due to underrepresentation of parts of the population in surveys, although survey undercoverage may have a greater impact on understating the number of uninsured children. This problem has not been addressed at all in the literature, and we are not aware of any estimates of how many uninsured children may be simply missing from the survey estimates. In sum, the potential impact of the underreporting of Medicaid enrollment on estimates of the uninsured is difficult to assess without information on how the undercount is distributed among different causes.

In using administrative estimates of Medicaid enrollment, it is important that the reference period of the data match the reference period of the survey estimates. HCFA reports Medicaid enrollment in terms of the number of people who were ever enrolled in a fiscal year. This number is considerably higher than the number who are enrolled at any one time. Therefore, the HCFA estimates of people ever enrolled in a year should not be used to correct survey estimates of Medicaid coverage at a point in time because this results in a substantial over-correction.

The CPS presents a special problem. We have demonstrated that while the CPS estimate of uninsured children is commonly interpreted as a point in time estimate, the reported Medicaid coverage that this estimate reflects is clearly annual-ever enrollment. Adjusting the CPS estimate of the uninsured to compensate for the underreporting of annual-ever Medicaid enrollment produces a large reduction. What this adjustment accomplishes, however, is to move the CPS estimate of the uninsured closer to what it purports to be--namely, an estimate of the number of people who were uninsured for the entire year. Applying an adjustment based on annual-ever enrollment but continuing to interpret the CPS estimate of the uninsured as a point-in-time estimate is clearly inappropriate. Adjusting the Medicaid enrollment reported in the CPS to an average monthly estimate of Medicaid enrollment yields a much smaller adjustment and a correspondingly smaller impact on the uninsured, but it involves reinterpreting the reported enrollment figure as a point-in-time estimate--which it is clearly not. Invariably, efforts to “fix” the CPS estimates run into problems such as these because the CPS estimate of the uninsured is ultimately not what people interpret it to be but, instead, an estimate--with very large measurement error--of something else. We would do better to focus our attention on true point-in-time estimates, such as those provided by SIPP, NHIS, the CTS, and NSAF. But until the turnaround in the release of SIPP and NHIS estimates can be improved substantially, policy analysts will continue to gravitate toward the CPS as their best source of information on what is happening to the population of uninsured children.

ANALYSIS OF LONGITUDINAL DATA

Given the difficulties that respondents experience in providing accurate reports of their insurance coverage more than a few months in the past, panel surveys with more than one interview per year seem essential to obtaining good estimates of the duration of uninsurance and the frequency with which children experience spells of uninsurance over a period of time. Longitudinal data are even more essential if we are to understand children’s patterns of movement into and out of uninsurance and into and out of Medicaid enrollment. At the same time, however, longitudinal data present many challenges for analysts. These include the complexity of measuring the characteristics of a population over time, the effects of sample loss and population dynamics on the representativeness of panel samples, and issues that must be addressed in measuring spell duration.

CONCLUSION

Perhaps the single most important lesson to draw from this review is how much our estimates of the number and characteristics of uninsured children are affected by measurement error. Some of this error is widely acknowledged--such as the underreporting of Medicaid enrollment in surveys--but much of it is not. Even when the presence of error is recognized, analysts and policymakers may not know how to take it into account. We may know, for example, that Medicaid enrollment is underreported by 24 percent in a particular survey, but how does that affect the estimate of the uninsured? And how much does the apparent, substantial underreporting of Medicaid contribute to the perception that Medicaid is failing to reach millions of uninsured children? Until we can make progress in separating the measurement error from the reality of uninsurance, our policy solutions will continue to be inefficient, and our ability to measure our successes will continue to be limited.

As federal and state policy analysts ponder how to evaluate the impact of the Children’s Health Insurance Program (CHIP) initiatives authorized by Congress, attention is turning to ways to utilize ongoing surveys as well as to the possibility of states funding their own surveys. Survey data certainly will play an important role in the CHIP evaluations. While administrative data can and will be used to document the enrollment of children in these new programs as well as the expanded Medicaid program, administrative data cannot tell us what is happening to the number of uninsured children. In this context it is important to consider what we know about the use of surveys to measure the incidence of uninsurance among children.

The purpose of this report is to discuss key analytic issues in the use of national survey data to estimate and analyze children’s health insurance coverage. The issues include many that emerged in the course of preparing a literature review on uninsured children (Lewis, Ellwood, and Czajka 1997, 1998) and in conducting analyses of children’s health insurance coverage with the Survey of Income and Program Participation (SIPP) (Czajka 1999). One goal of this report is to provide staff in the Office of the Assistant Secretary for Planning and Evaluation (ASPE) with information that will be helpful in reconciling or at least understanding the reasons for the diverse findings reported in the literature on uninsured children. The second major objective is to outline for the broader research community the factors that need to be considered in designing or using surveys to evaluate the number and characteristics of uninsured children. While we focus on national surveys, many of our observations will apply equally well to the design of surveys at the state level.

Section A discusses how uninsured children have been identified in the major national surveys. It compares alternative approaches, discusses a number of measurement problems that have emerged as important, and concludes with comments on the interpretation of uninsurance as measured in the Current Population Survey (CPS)--the national survey most widely cited with respect to the number of uninsured children. Section B looks at the problem of simulating eligibility for the Medicaid program. Estimates developed with different underlying assumptions suggest that anywhere from 1.5 million to 4 million uninsured children at various points in the 1990s may have been eligible for but not participating in Medicaid. In part because the estimates vary so widely, and also because even the lowest estimate of this population is sizable, the problem of simulating Medicaid eligibility merits extended discussion. Building on this discussion, Section C then examines strategies for calculating participation rates for the Medicaid program. We review issues relating to estimating the number of participants with administrative versus survey data and making legitimate comparisons with estimates of the number of people who were actually eligible to participate in Medicaid. We include a discussion of the problem presented by people who report participation but appear to be ineligible. Section D examines how the underreporting of Medicaid participation in surveys may affect survey estimates of the uninsured, and Section E discusses issues related to the use of longitudinal data to investigate health insurance coverage in general and uninsurance in particular. Finally, Section F reviews our major conclusions.

A. IDENTIFYING UNINSURED CHILDREN IN SURVEYS

Most of what is known about the health insurance coverage of children in the United States has been derived from sample surveys of households. Three ongoing federal surveys collect data on insurance coverage from nationally representative samples, thereby providing a steady source of information on trends in coverage as well as supporting in-depth analyses of issues in health care coverage. Periodically the federal government and private foundations sponsor additional, specialized surveys to gather more detailed information on particular topics. After a brief review of the major federal surveys and three recent specialized surveys, we outline the alternative approaches that are being used to identify uninsured children and consider some of the measurement problems that confront these efforts. We close this section with a discussion of the interpretation of estimates of the uninsured from the most widely cited of these surveys.

1. The Major Surveys

The CPS is a monthly survey whose chief purpose is to provide official estimates of unemployment and other labor force data. In an annual supplement administered each March, the CPS captures information on health insurance coverage. In large part because of the timely release of these data and their consistent measurement over time, the CPS has become the most widely cited source of information on the uninsured. The March supplement is also the source of the official estimates of poverty in the United States. The availability of the poverty measures along with the data on health insurance coverage and a large sample size--50,000 households--that can support state-level estimates have contributed to making the CPS an important resource for research on the uninsured.

The National Health Interview Survey (NHIS) collects data each week on the health status and related characteristics of the population. The principal purpose of the NHIS is to provide estimates of the incidence and prevalence of both acute and chronic morbidity. To achieve this objective, the entire year must be covered. To limit the impact of recall error and reduce respondent burden, the annual interviews (with more than 40,000 households) are distributed over 52 weeks, and respondents are asked to report on their current health status as well as recent utilization of health care services. The interviews include a battery of questions on health insurance coverage. These data can be aggregated over the year to produce an average weekly measure of insurance coverage. Despite some clear advantages of the NHIS measure over the CPS measure of the uninsured, however, the NHIS measure has been much less widely accepted and cited. Even its limitations are much less well known than those of the CPS measure. The long lag with which data from the NHIS are released, relative to the March CPS, is undoubtedly a major factor limiting use of these data on uninsurance.

The last of the three ongoing surveys, the SIPP, is a longitudinal survey that follows a sample of households--a “panel”--for two-and-a-half to four years. Sample households are interviewed every four months and asked to provide detailed monthly data on household composition, employment and income of household members, and other characteristics. Each interview includes a battery of questions on health insurance coverage. Until a major redesign, initiated in 1996, new panels were started every year. When combined, the overlapping panels yielded national samples that were about three-quarters the size of the CPS and NHIS samples. The 1996 panel, which is twice the size of its predecessors, will run for four years; the next panel is not scheduled to begin until 2000. While the enhanced sample size was intended to eliminate the need for overlapping panels, starting a new panel every year also provided a way to maintain the representativeness of SIPP data over time. The loss of overlapping panels, however, weakens the SIPP as a source of reliable data on national trends. Finally, while the redesign has also slowed the release of data from the 1996 panel, SIPP data have never been released in as timely a manner as March CPS data, and, as with the NHIS, this has limited their value as a source of current data on trends.(1)

All three of these surveys are conducted by the U.S. Bureau of the Census. The CPS is a collaborative effort with the Bureau of Labor Statistics (BLS), which bears ultimate responsibility for the labor force statistics. The March supplement and the SIPP, however, are entirely Census Bureau efforts. The NHIS is conducted for the National Center for Health Statistics (NCHS), with the Census Bureau serving, essentially, as the survey contractor.

Periodically, the Agency for Health Care Policy and Research (AHCPR) conducts a panel survey of households to collect detailed longitudinal data on the population’s utilization of the health care system, expenditures on medical care, and health status. The most recent of these efforts, the Medical Expenditure Panel Survey (MEPS), was drawn from households that responded to the NHIS during the middle quarters of 1995. The initial MEPS interviews were conducted by Westat. Like the SIPP, MEPS will collect data at subannual intervals, and new panels will overlap earlier panels, allowing data to be pooled to enhance sample size and improve representativeness (see Section E).

The federal government is not alone in sponsoring large-scale national surveys to measure health insurance coverage and aspects of health care utilization. Private foundations have sponsored a number of surveys as well. While none of these foundation-sponsored efforts has been repeated with sufficient regularity to provide a long-term source of data on trends, the two most prominent of the recent undertakings will collect data from at least two points in time. The household component of the Community Tracking Study (CTS) was conducted by Mathematica Policy Research for the Center for Studying Health System Change, with funding from the Robert Wood Johnson Foundation.(2) The survey was fielded between July 1996 and July 1997 and collected data on current health insurance coverage (that is, at the time of the interview). Interviews were completed with about 32,000 families representing the civilian noninstitutionalized population of the 48 contiguous states and the District of Columbia. More than a third of the sample was concentrated in 12 urban sites that will be the subject of intensive study. The second round survey, which includes both a longitudinal component and a new, independent sample of households, started in 1998 and will be completed in 1999.

In 1997 the Urban Institute, with sponsorship from a group of foundations, fielded the first wave of the National Survey of America’s Families (NSAF).(3) The total sample size of 44,000 households is comparable to that of the NHIS, although the nationally representative sample (except for Alaska and Hawaii) features large samples for 13 states. These 13 states, which account for one-half of the U.S. population, will be the subject of intensive study. The survey was conducted by Westat from February through November of 1997. A second interview with the same sample is currently in the field, and a third interview may be fielded as well. Both the CTS and the NSAF include extensive batteries of questions on health insurance coverage, and both incorporate significant methodological innovations in these measures, which we will describe shortly.

Table 1 presents estimates from each of these surveys of the proportion of children who were uninsured at different times between 1993 and 1997. With the exception of the MEPS estimate, discussed below, all of these estimates represent or are widely interpreted to represent children who were uninsured at a point in time. Estimates refer to children under 19 (CPS and SIPP) or children under 18.(4) We will refer back to this table as we discuss alternative approaches to measuring uninsurance and the sources of error in estimates of the uninsured. Briefly, however, the estimates from the CPS, which we have reported for all five years, show little movement over the first three years but then a 1.1 percentage point rise between 1995 and 1996, with essentially no change between 1996 and 1997. The NHIS estimate in 1993 equals the CPS estimate, but the NHIS series shows a 1.2 percentage point rise between 1993 and 1994, followed by a 1.7 percentage point drop between 1994 and 1995 and then essentially no change between 1995 and 1996, at which point the NHIS estimate is 1.7 percentage points below the CPS estimate. We should caution, however, that the 1996 NHIS estimate is a preliminary figure based on just the first 5/8 of the sample. For this reason it may not reflect the impact of the implementation of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA)--the welfare reform law that went into effect in the late summer of 1996. Some observers have attributed the rise in the CPS estimate of uninsured children between 1995 and 1996 to a reduction in the Medicaid caseload that accompanied the implementation of welfare reform (Fronstin 1997). The SIPP estimate for September 1993, at 13.9 percent, lies within sampling error of the CPS and NHIS estimates for 1993, but the SIPP estimate drops between 1993 and 1994 while both the other series rise. Like the CPS estimate, the MEPS estimate of 15.4 percent purports to represent children who were continuously uninsured over a period of time (three to six months in this case), but its value, which nearly equals the CPS estimate, is more consistent with point-in-time estimates. Finally, both the CTS and the NSAF yield estimates below 12 percent for the proportion of children who were uninsured. These estimates for the privately funded surveys lie substantially below the estimates from the federal surveys. In later sections we will explore possible reasons for this difference.

2. Alternative Approaches to Measuring Uninsurance

The surveys discussed in the preceding section have employed somewhat different approaches to measuring uninsurance among children, and other approaches are possible. Here we discuss two dimensions of the measurement of uninsurance: (1) whether uninsurance is measured directly or as a residual and (2) the choice of reference period.

a. Measuring Uninsurance Directly or as a Residual

There is a direct approach and a more commonly used indirect approach to identifying uninsured children in household surveys. The direct approach is to ask respondents if they and their children are currently without health insurance or have been uninsured in the recent past. The alternative, indirect approach is to ask respondents if they are covered by health insurance and then identify the uninsured as those who report no health insurance of any kind. Because interest in measuring the frequency of uninsurance is coupled, ordinarily, with interest in measuring the frequency with which children (or adults) are covered by particular types of health insurance, the more common approach is the indirect one--that is, identifying the uninsured as a “residual,” or those who are left when all children who are reported to be insured are removed. This is the approach used in the CPS, the SIPP, the NHIS, and, for some of its measures, MEPS.

We are not aware of any survey that has attempted to measure uninsurance by first asking if a child is or has been without health insurance.(5) However, both the CTS and the NSAF have employed a variant on the traditional approach that involves first collecting information on insurance coverage and then asking whether those people who appear to be uninsured really were without coverage or had some insurance that was not reported. For example, in the CTS, the sequence on insurance coverage ends with, “(Are you/any of you/either of you) covered by a health insurance plan that I have not mentioned?” Respondents who indicated “no” to every type of coverage were then asked:

According to the information we have, (NAME) does not have health care coverage of any kind. Does (he/she) have health insurance coverage through a plan I might have missed?

If necessary, the interviewer reviewed the eight general types of plans. The respondent could indicate coverage under any of these types of plans or could reaffirm that he or she was not covered by any plan. In the NSAF, each respondent under 65 who reported no coverage was asked,

According to the information you have provided, (NAME OF UNCOVERED FAMILY MEMBER UNDER 65) currently does not have health care coverage. Is that correct?

If the answer was yes, the question was repeated for the next uninsured person. If the answer was no, the respondent was then asked:

At this time, under which of the following plans or programs is (NAME) covered?

The sources of coverage were repeated, and the respondent was allowed to identify coverage that had been missed or to verify that there was indeed no coverage under any type of plan.

In both of these surveys, the “verification” question converted nontrivial percentages of children from initially uninsured to insured. In the CTS, the responses to this question reduced the fraction of children (under 18) who were reported as uninsured from 12.7 percent to 11.7 percent (Rosenbach and Lewis 1998). In the NSAF, the verification question lowered the estimated share of children who were uninsured from about 15 percent to 11.9 percent.(6) While the uninsured are still identified as a residual, the findings from these two surveys suggest that giving respondents the opportunity to verify their status makes a difference in the proportion of children who are estimated to be without health insurance. Curiously, both the CTS and the NSAF end up with about the same proportion of children reported as uninsured. Without the verification question, however, the CTS would have estimated 2 percentage points fewer uninsured children than the NSAF. Is a verification question an equalizer across surveys, helping to compensate for differentially complete reporting of insurance coverage in the questions that precede it? Certainly that is a plausible interpretation of these findings from a survey methodological standpoint. In any event, the results from these two surveys make a strong case for including a verification question as a standard part of a battery of health insurance questions. The NHIS added such a question in 1997, although no results have been reported as yet. The Census Bureau is testing such a question in the SIPP setting. We would hope that these efforts to test the impact of a verification question would be accompanied by cognitive research that can help to explain why respondents change their responses. It would be preferable to improve the earlier questions than to rely on a verification question to change large numbers of responses.
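
As we read the questionnaires quoted above, the common logic is a residual classification followed by a verification pass. The sketch below restates that logic in Python; the record layout and plan-type labels are our own simplification, not the surveys’ actual instruments.

    # Residual-with-verification classification, as sketched from the CTS and
    # NSAF question sequences. Data layout and plan labels are hypothetical.
    PLAN_TYPES = ["employer", "private_nongroup", "medicaid", "medicare",
                  "champus_va", "other_public", "other_private", "ihs"]

    def classify(person):
        """Return 'insured' or 'uninsured' after a verification pass."""
        # First pass: the residual approach. Anyone reporting at least one
        # type of coverage is insured.
        if any(person.get(plan, False) for plan in PLAN_TYPES):
            return "insured"
        # Verification pass: apparent uninsureds are asked to confirm, and
        # the interview may surface coverage missed in the first pass,
        # recorded here as 'verified_plan'.
        if person.get("verified_plan") in PLAN_TYPES:
            return "insured"
        return "uninsured"

    # A child with no coverage reported initially, for whom the verification
    # question elicited a Medicaid report:
    print(classify({"verified_plan": "medicaid"}))   # insured
    print(classify({}))                              # uninsured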

b. Reference Periods

Estimates of the incidence or frequency of uninsurance are typically reported in one of three ways: (1) the number who were uninsured at a specific point in time, (2) the number who were ever uninsured during a year, or (3) the number who were uninsured for the entire year. Point-in-time estimates are sometimes reported not for a specific point in time, such as January 1, 1999, but for any time during a year. When described in this way, estimates should be interpreted as the average number uninsured at a point in time and not the number who were ever uninsured during the year.

Estimates of the number or percentage of children who were uninsured over different periods of time are useful for different purposes. Estimates of the number of children who were ever uninsured over a year indicate how prevalent uninsurance is. Estimates of children uninsured for an entire year demonstrate the magnitude of chronic uninsurance. Estimates of children uninsured at a point in time reflect a combination of prevalence and duration in that the more time children spend in the state of uninsurance, the more closely the number uninsured at a point in time will approach the number who were ever uninsured.

Table 2 presents estimates for all three types of reference periods, based on data from the 1992 SIPP panel. While 13.1 percent of children under 19 were uninsured in September 1993, 21.7 percent were ever uninsured during the year, and 6.3 percent were uninsured for the entire year.

Measuring uninsurance as a residual has implications for the length of time over which children are identified as uninsured. When a survey identifies the uninsured as a residual, the duration of uninsurance that is measured is generally synonymous with the reference period. That is, children for whom no insurance coverage is reported during the reference period are, by definition, uninsured for the entire period. To identify periods of uninsurance occurring within a reference period in which there were also periods of insurance coverage, it is necessary to do one of the following: (1) ask about such periods of uninsurance directly, (2) ask whether the insurance coverage extended to the entire period, or (3) break the total reference period into multiple, shorter periods, such as months, and establish whether a person was insured or uninsured in each month.(7)

In the March CPS, respondents are asked if they were ever covered by any of several types of insurance during the previous calendar year. Respondents can indicate that they had multiple types of coverage during the year. But because the survey instrument does not ask if respondents were ever uninsured, or how long they were covered, respondents cannot report that they were covered for part of the year and uninsured for the rest.

TABLE 2. ESTIMATES OF THE PROPORTION OF CHILDREN UNDER 19 WHO WERE UNINSURED FOR DIFFERENT PERIODS OF TIME

Period                                             Estimate

Uninsured at a Point in Time (September 1993)       13.1%
Ever Uninsured in Year                              21.7%
Uninsured Continuously throughout the Year           6.3%

In the SIPP, respondents are asked to report whether they had any of several types of insurance coverage during each of the four preceding months. The month is the reference period. To be identified as uninsured during a given month, a child must be reported as having had no coverage during the month. Thus, a child is classified as uninsured during a month only if the child was uninsured for the entire month.(8) With the SIPP data, however, we can aggregate individual months into years or even longer periods, and we can identify children who were ever uninsured during the year, where being ever uninsured means being uninsured for at least one full calendar month.
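
A brief sketch may make concrete how monthly coverage indicators of this kind aggregate into the three measures shown in Table 2. The three records below are invented for illustration; in the SIPP itself, each monthly value reflects coverage for the full calendar month.

    # Aggregating monthly insured/uninsured indicators into the three
    # reference-period measures. Records are invented for illustration.
    children = {
        "A": [True] * 12,                  # insured all year
        "B": [True] * 6 + [False] * 6,     # insured Jan-Jun, uninsured Jul-Dec
        "C": [False] * 12,                 # uninsured all year
    }
    SEPTEMBER = 8   # zero-based month index

    n = len(children)
    point_in_time = sum(not m[SEPTEMBER] for m in children.values()) / n
    ever_uninsured = sum(any(not x for x in m) for m in children.values()) / n
    entire_year = sum(all(not x for x in m) for m in children.values()) / n

    print(f"Uninsured in September:     {point_in_time:.0%}")   # 67%
    print(f"Ever uninsured in year:     {ever_uninsured:.0%}")  # 67%
    print(f"Uninsured the entire year:  {entire_year:.0%}")     # 33%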

The redesigned NHIS, the CTS, and the NSAF all capture insurance status at the time of the interview--that is, literally at a point in time. Other things being equal, this approach would appear likely to yield the most error-free reports and, in addition, the least biased estimates of coverage. It also has the advantage of requiring no recall. Respondents are not asked to remember when coverage began or ended, only to indicate whether they currently have it or not.

The value of estimates for different types of reference periods depends, in part, on the accuracy with which they can be measured. If the number of children uninsured at a point in time can be measured more accurately than the number ever uninsured during a year or the number uninsured for the entire year, then there is a sense in which the point-in-time estimates are more valuable. In the next section we discuss measurement problems that affect estimates of the uninsured.

3. Sources of Error in Estimates of the Uninsured

There are a number of sources of error encountered in attempting to measure uninsurance, and these affect the comparability of estimates from different surveys. These include certain limitations inherent in measuring uninsurance as a residual, as it is usually done; the possibility that respondents may not be aware of existing coverage; the bias introduced by respondents’ imperfect recall; the sensitivity of responses to question design; and the impact of basic survey design choices.

a. Limitations Inherent in Measuring Uninsurance as a Residual

Perhaps the most significant problem with measuring uninsurance as a residual is that a small error rate in the reporting of insurance becomes a large error in the estimate of the uninsured. With the number of children insured at a point in time being eight to nine times the number without insurance, and the number ever insured during a year being 18 to 19 times the number never insured, errors in the reporting of insurance coverage are multiplied many times in their impact on estimates of the uninsured. Based on the SIPP estimates reported in Table 2, a 6 to 7 percent error in the reporting of children who ever had health insurance would double the estimated number who had no insurance. In Section 4, below, we argue that this is what accounts for the fact that the CPS estimate of the uninsured resembles an estimate of children uninsured at a point in time rather than children uninsured for the entire year, which is what the questions are designed to yield.(9)
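
The multiplication effect is easy to reproduce. The sketch below uses the Table 2 figures together with an assumed 6.5 percent rate of erroneous reporting among children who in fact had insurance at some point in the year.

    # Error multiplication under residual measurement, using Table 2 figures.
    # The 6.5 percent reporting-error rate is an assumption for illustration.
    never_insured = 0.063             # truly uninsured all year (Table 2)
    ever_insured = 1 - never_insured  # 0.937

    reporting_error = 0.065           # share of insured children missed by the survey
    misclassified = ever_insured * reporting_error   # about 6.1 percentage points

    estimated = never_insured + misclassified
    print(f"True full-year uninsured:      {never_insured:.1%}")  # 6.3%
    print(f"Estimated full-year uninsured: {estimated:.1%}")      # 12.4%, roughly double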

Another implication of measuring uninsurance as a residual can be seen in the CPS estimates of the frequency of uninsurance among infants. The health insurance questions in the March CPS refer to coverage in the preceding calendar year--that is, the year ending December 31. If parents answer the CPS questions as intended, a child born after the end of the year cannot be identified as having had coverage during the previous year. With no reported coverage, such a child would be classified as uninsured. If all children born after the end of the year were classified as uninsured, this would add about one-sixth of all infants to the estimated number uninsured (children born between January 1 and the March interview account for roughly two of the twelve monthly birth cohorts counted as infants). Because the March CPS public use files lack a field indicating the month of birth, data users cannot identify infants born after the end of the year and cannot exclude them from their analyses. Is there any evidence that uninsurance is overstated among infants in the CPS? Table 3 addresses this question by comparing estimates of the rate of uninsurance for infants and older children, based on the March CPS and the SIPP. The CPS estimates of the proportion of infants who are uninsured are markedly higher than the SIPP estimates in both the 1993 and 1994 reference years: 11.5 versus 7.7 percent in 1993 and 17.3 versus 9.3 percent in 1994.

b. Awareness of Coverage

People may have insurance coverage without being aware that they have it. While this lack of awareness may seem improbable, both the CPS and SIPP provide direct evidence with respect to Medicaid coverage. Prior to welfare reform, families that received Aid to Families with Dependent Children (AFDC) were covered by Medicaid as well. Nevertheless, surveys that asked respondents about AFDC as well as Medicaid found that nontrivial numbers reported receiving AFDC but not being covered by Medicaid. Were such people unaware that they were covered by Medicaid, or did they know Medicaid by another name and not recognize the name(s) used in the surveys?(10)

We do not know the answer. To correct for such instances, the Census Bureau employs in both the CPS and SIPP a number of “logical imputations” or edits to reported health insurance coverage. All adult AFDC recipients and their children are assigned Medicaid coverage, for example. Of the 28.2 million people estimated to have had Medicaid coverage in 1996, based on the March 1997 CPS, 4.6 million or 16 percent had their Medicaid coverage logically imputed in this manner (Rosenbach and Lewis 1998). Most if not all of these 4.6 million would have been counted as uninsured if not for the Census Bureau’s edits. With AFDC, which accounted for half of Medicaid enrollment, being replaced by the smaller Temporary Assistance to Needy Families (TANF) program, the number of logical imputations will be reduced significantly, which could increase the number of children who in fact have Medicaid coverage but are counted in the CPS and SIPP as uninsured.(11)
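
A logical imputation of this kind is a deterministic edit rule rather than a statistical model. The sketch below illustrates the idea with a hypothetical record layout; it is not the Census Bureau’s actual edit code.

    # A logical-imputation edit: AFDC receipt implies Medicaid coverage for
    # the recipient and his or her children. Record layout is hypothetical.
    def edit_medicaid(household):
        afdc_adults = {p["id"] for p in household if p["is_adult"] and p["afdc"]}
        for person in household:
            if person["id"] in afdc_adults or person["parent_id"] in afdc_adults:
                person["medicaid"] = True   # coverage assigned by the edit
        return household

    household = [
        {"id": 1, "parent_id": None, "is_adult": True,  "afdc": True,  "medicaid": False},
        {"id": 2, "parent_id": 1,    "is_adult": False, "afdc": False, "medicaid": False},
    ]
    for person in edit_medicaid(household):
        print(person["id"], person["medicaid"])   # both True after the edit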

Table 3. ESTIMATES OF THE PROPORTION OF CHILDREN UNINSURED BY AGE: COMPARISON OF MARCH CPS AND SIPP, SELECTED YEARS

Survey and Date         Less than 1   1 to 5   6 to 14   15 to 18   Total

CPS, March 1994             11.5       11.6      13.7      19.4      14.1
CPS, March 1995             17.3       13.2      14.0      16.5      14.4
CPS, March 1996             16.7       12.7      13.7      16.1      14.0
SIPP, September 1993         7.7       10.9      13.7      16.7      13.1
SIPP, September 1994         9.3       10.5      13.1      16.3      12.7

SOURCE: Tabulations of public use files, CPS and SIPP.

c. Recall Bias

It is well known among experienced survey researchers that respondent recall of events in the past is imperfect and that recall error grows with the length of time between the event and the present. Error also increases with the amount of change in people’s lives. Respondents with steady employment have less difficulty recalling details of their employment than do respondents with intermittent jobs and uneven hours of work. Similarly, respondents who have had continuous health insurance coverage can more easily recall their coverage history than respondents with intermittent coverage. Obtaining accurate reports from respondents with complex histories places demands upon the designers of surveys and those who conduct the interviews. Panel surveys that ascertain health insurance coverage (and other information) with repeated interviews covering short reference periods are much more likely to obtain reliable estimates of coverage over time than one-time surveys that ask respondents to recall the details of the past year or more.

d. Sensitivity to Question Design

Even when recall is not an issue, when insurance coverage is measured “at the present time,” survey questions that appear to request more or less the same information can generate markedly different responses. This point was demonstrated in dramatic fashion when the Census Bureau introduced some experimental questions into the CPS to measure current health insurance coverage. At the end of the sequence of questions used to measure insurance coverage during the preceding year, respondents were asked:

These next questions are about your CURRENT health insurance coverage, that is, health coverage last week. (Were you/Was anyone in this household) covered by ANY type of health insurance plan last week?

Those who answered in the affirmative were asked to identify who in the household was covered and then, for each such person, by what types of plans he or she was covered. This sequence of questions, which first appeared in the March 1994 survey, yielded an uninsured rate that was about double the rate measured by the NHIS and the SIPP, and the experimental questions were discontinued with the March 1998 supplement.

Even if these questions had not followed a lengthy sequence of items asking about several sources of coverage in the preceding year, it would have been difficult to imagine that they could have generated such low estimates of coverage. That they did so despite the questions that preceded them is hard to fathom, and it underscores the point that researchers cannot simply write out a set of health insurance coverage questions and expect to obtain the true measure of uninsurance--or even a good measure of uninsurance, necessarily. It is not at all clear why this should be so. Health insurance coverage appears to be straightforward enough. Generally, people either have it or they don’t. Yet the Census Bureau’s experience sends a powerful message that questions about health insurance coverage can yield rather unanticipated results. Researchers who are fielding surveys that attempt to measure health insurance coverage would be well-advised to be wary of constructing new questions unless they can also conduct very extensive pretesting. In the absence of thorough testing, it is better to borrow from existing and thoroughly tested question sets rather than construct new questions from scratch.

e. Impact of Survey Design and Implementation

While perhaps not as important as question wording, differences in the design and implementation of surveys can have a major impact on estimates of the uninsured. These differences include the choice of universe and the level of coverage achieved, the response rate among eligible households, the use of proxy respondents, the choice of interview mode, and the use of imputation.

Universe and Coverage. Surveys may differ in the universes that they sample and in how fully they cover these universes. Typically, surveys of the U.S. resident population exclude the homeless, the institutionalized population--that is, residents of nursing homes, mental hospitals, and correctional institutions, primarily--and members of the Armed Forces living in barracks. There may be other exclusions as well. For example, household surveys do not always include Alaska and Hawaii in their sampling frames.

All surveys--even the decennial census--suffer from undercoverage; that is, parts of the universe are unintentionally excluded from representation in the sample. In a household-based or “area frame” sample, undercoverage can be attributed to three principal causes: (1) failure to identify all street addresses in the sample area, (2) failure to identify all housing units within the listed addresses, and (3) failure to identify all household members within the sampled housing units. Nonresponse, discussed below, is not undercoverage, although the absence of household listings for nonresponding households can contribute to coverage errors (in either direction). The 1990 census undercounted U.S. residents by about 1.6 percent.(12) Sample surveys have much greater undercoverage. The Census Bureau has estimated the undercoverage of the civilian noninstitutionalized population in the monthly CPS to be about 8 percent in recent years. Undercoverage varies by demographic group. For children under 15, undercoverage is closer to 7 percent than to 8 percent. But among older teens it approaches 13 percent, and for black males within this group the rate of undercoverage reaches 25 to 30 percent.

To provide at least a nominal correction for undercoverage, the Census Bureau and other agencies or organizations adjust the sample weights so that they reproduce selected population totals. These population totals or “controls” may even incorporate adjustments for the census undercount.(13) This “post-stratification,” a statistical operation that serves other purposes as well, is based on a limited set of demographic characteristics--age, sex, race and Hispanic origin, typically, and sometimes state.(14) Other characteristics measured in the surveys are affected by this post-stratification to the extent that they covary with demographic characteristics. We know, for example, that Medicaid enrollment and uninsurance vary quite substantially by age, race, and Hispanic origin, so a coverage adjustment based on these demographic characteristics will improve the estimates of Medicaid enrollment and uninsurance. To the extent that people who are missing from the sampling frame differ from the covered population even within these demographic groups, however, the coverage adjustment will compensate only partially for the effects of undercoverage on the final estimates. It is quite plausible, for example, that the Hispanic children who are missed by the CPS have an even higher rate of uninsurance than those who are interviewed. We would suggest, therefore, that survey undercoverage, even with a demographic adjustment to population totals corrected for census undercount, contributes to underestimation of uninsured children.
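
The ratio adjustment involved in post-stratification can be illustrated briefly. In the sketch below, the cells, control totals, and base weight are invented; real applications use much finer age-sex-race-ethnicity cells.

    # Post-stratification: scale weights within demographic cells so weighted
    # totals match independent population controls. Numbers are invented.
    sample_totals = {"under 15": 54_000_000, "15 to 19": 17_000_000}
    controls      = {"under 15": 58_000_000, "15 to 19": 19_500_000}

    factors = {cell: controls[cell] / sample_totals[cell] for cell in controls}

    def adjusted_weight(base_weight, cell):
        """Ratio-adjust a person's base weight by the cell's coverage factor."""
        return base_weight * factors[cell]

    # An older teen's weight is inflated by about 15 percent, reflecting the
    # heavier undercoverage of that group noted in the text.
    print(round(factors["15 to 19"], 3))               # 1.147
    print(round(adjusted_weight(2500.0, "15 to 19")))  # 2868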

Response Rate. Surveys differ in the fraction of their samples that they succeed in interviewing. Federal government survey agencies appear to enjoy a premium in this regard. The Census Bureau, which conducts both the CPS and the SIPP and carries out the field operations for the NHIS, reports the highest response rates among the surveys that provide our principal measures of health insurance coverage. For the 1997 March supplement to the CPS, the Census Bureau reported a response rate of 84 percent.(15) For the first interview of the 1992 SIPP panel the Bureau achieved a response rate of 91 percent, with the cumulative response rate falling to 74 percent by the ninth interview. The 1995 NHIS response rate for households that were eligible for selection into the MEPS was 94 percent (Cohen 1997). In contrast to these figures, MPR obtained a 65 percent response rate for the CTS, and Westat achieved a comparable percentage for the NSAF, which includes a substantial oversampling of lower-income households. For the first round of the MEPS, Westat secured an 83 percent response rate among the 94 percent of eligible households that responded to the NHIS in the second and third quarters of 1995, yielding a joint response rate of 78 percent (Cohen 1997). These response rates are based on people with whom interviews were completed, but there may have been additional nonresponse to individual items in the health insurance sequence. However, unlike more sensitive items, such as those pertaining to income, health insurance questions do not appear to generate much item nonresponse.
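For a survey fielded from the respondents to another survey, the joint response rate cited above is simply the product of the component rates:

$$
R_{\text{joint}} = R_{\text{NHIS}} \times R_{\text{MEPS}\mid\text{NHIS}} = 0.94 \times 0.83 \approx 0.78
$$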

The reported response rates also do not include undercoverage, which varies somewhat from survey to survey. Arguably, people who were omitted from the sampling frame never had an opportunity to respond and, therefore, may have less in common with those who refused to be interviewed than they do with respondents. Nevertheless, their absence from the collected data represents a potential source of bias and one for which some adjustment is desirable. Generally speaking, however, less is known about the characteristics of people omitted from the sampling frame than about those who were included in the sampling frame but could not be interviewed. Hence the adjustments for undercoverage, when they are carried out, tend to be based on more limited characteristics than the adjustments for nonresponse among sampled households.

How important is nonresponse as a source of bias in estimates of health insurance coverage? We are not aware of any information with which it is possible to address that question. Certainly the nearly 30 percentage point difference in response rates between the NHIS and the CTS or NSAF could have a marked impact on the estimated frequency of a characteristic (uninsurance) that occurs among less than 15 percent of all children, but we have no direct evidence that it does.
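One way to see why the response-rate gap could matter is the standard decomposition of nonresponse bias for a proportion: the bias in the respondent-based estimate equals the nonrespondent share times the respondent-nonrespondent difference,

$$
\hat{p}_r - p = (1 - R)\,(p_r - p_{nr}).
$$

The numbers that follow are purely illustrative, not estimates from any of these surveys: with a 65 percent response rate, if 13 percent of responding children were uninsured but 20 percent of nonresponding children were, the survey estimate would understate true uninsurance by 0.35 × 7 ≈ 2.5 percentage points.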

Proxy Respondents. Some members of a household may not be present when the household is interviewed. Surveys differ in whether and how readily they allow other household members to serve as “proxy” respondents. From the standpoint of data quality, the drawback of a proxy respondent is the increased likelihood that information will be misreported or that some information will not be reported at all. This is particularly true when the respondent and proxy are not members of the same family. For this reason some surveys restrict proxy respondents to family members. Ultimately, however, some responses are generally better than none, so it is rare that a survey will rule out particular types of proxy responses entirely. Rather, proxy responses may be limited to “last resort” situations--that is, as alternatives to closing out cases as unit nonrespondents. Accordingly, it is important to compare not only surveys' stated policies on proxy respondents but also the actual frequency with which proxy respondents are used and the frequency with which household members are reported as missing.

Children represent a special case. While all the surveys we have discussed collect data on children, the surveys differ with respect to whether these children are treated as respondents per se or merely as other members of the family or household, about whom information is collected only or largely indirectly. For example, both the CPS and SIPP define respondents as all household members 15 and older. Some information, such as income, is not collected for younger children at all, while health insurance coverage is collected through questions that ask respondents who else in the household is included under specific plans. With this indirect approach, children are more susceptible to being missed.

Mode: Telephone Versus In-person. Surveys may be conducted largely or entirely by telephone or largely or entirely in-person.(16) There are two aspects of the survey mode that are important to recognize. The first bears on population coverage while the second pertains to how the data are collected.

Pure telephone surveys, which are limited to households with telephones, cover a biased subset of the universe that is covered by in-person surveys. Methodologies have been developed to adjust such surveys for their noncoverage of households that were without telephone service during the survey period. These methodologies use the responses from households that report having had their telephone service interrupted during some previous number of months to compensate for the exclusion of households that had no opportunity to appear in the sample. How effectively such adjustments substitute for actually including households without telephones is likely to vary across the characteristics being measured, and for this reason some telephone surveys include a complementary in-person sample to obtain responses from households without telephones.(17)
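A minimal sketch of the interruption-based weighting adjustment described above follows. The function name and data layout are hypothetical; the sketch assumes an external estimate of the share of households without telephones and transfers the weight those households would carry onto respondents who reported an interruption in service.

```python
# A sketch of the interruption-based noncoverage adjustment; the data
# layout and the external non-telephone share are assumptions.
def adjust_for_nontelephone(records, nontel_share):
    """Transfer the weight that non-telephone households would carry in
    the full population onto respondents who reported an interruption
    in telephone service during the reference period.

    records: list of dicts with 'weight' and 'service_interrupted' keys.
    nontel_share: external estimate (0-1) of the population fraction of
    households without telephones, e.g., from an in-person survey.
    """
    total = sum(r["weight"] for r in records)  # telephone households only
    interrupted_total = sum(
        r["weight"] for r in records if r["service_interrupted"]
    )
    # Weight the excluded non-telephone households would have carried
    target = total * nontel_share / (1.0 - nontel_share)
    factor = 1.0 + target / interrupted_total
    for r in records:
        r["adj_weight"] = r["weight"] * (factor if r["service_interrupted"] else 1.0)
    return records

# Hypothetical usage: assume 5 percent of households lack telephones
households = [
    {"weight": 100.0, "service_interrupted": False},
    {"weight": 100.0, "service_interrupted": True},
    {"weight": 100.0, "service_interrupted": False},
]
adjust_for_nontelephone(households, nontel_share=0.05)
```

The adjustment helps only to the extent that interrupted-service households resemble households with no telephone at all, which is why some surveys add an in-person sample instead.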

In addition to the coverage issue, distinguishing telephone from in-person interviews is important because the use of one mode versus the other can affect the way in which information is collected and the reliability with which responses are reported. Telephone surveys preclude showing a respondent any printed material during the interview (such as lists of health insurance providers), and they limit the rapport that can develop between an interviewer and a respondent. Furthermore, the longer the interview, the more difficult it is to maintain the respondent’s attention on the telephone, so data quality in long interviews may suffer. On the other hand, conducting interviews by telephone may limit interviewer bias and make respondents more comfortable reporting personal information. Moreover, until recently only telephone interviewing allowed the use of computer-based survey instruments that could minimize the risk of interviewer error in administering instruments with complex branching and skip patterns. For all of these reasons, survey researchers recognize that there can be “mode effects” on responses. The different modes may elicit different mean responses to the same questions, with neither mode being consistently more reliable than the other. To minimize differential mode effects when part of a telephone survey is conducted in person, survey organizations sometimes conduct the in-person interviews by cellular telephone, which field representatives loan to the respondents.

Panel surveys allow for another possibility: using a household-based sample design and conducting at least the initial interview in-person but using the telephone for subsequent interviews. Both the CPS and the SIPP have utilized this approach. In the CPS, the first and last of the eight interviews are conducted in person while the middle six are generally conducted by telephone. For any given month, then, about one-quarter of the interviews are conducted in person.(18)

The recent introduction of computer-assisted personal interviewing (CAPI) has created an important variation on the in-person mode and one with its own mode effects. In some respects, CAPI may be more like computer-assisted telephone interviewing than in-person interviewing with a paper and pencil instrument. The methodology is too new to have generated much information on its mode effects yet.

Imputation Methodology. Surveys differ in the extent to which they impute values for questions with missing responses and in the rigor of their imputation methodologies. For example, both the CPS and SIPP impute all missing responses, using methodologies that have been developed to do this very efficiently. Over time, the Census Bureau's SIPP imputation algorithms have made increasing use of the responses reported in adjacent waves of the survey. Generally, questions about health insurance coverage elicit very little nonresponse, so imputation strategies are less important than they are for more sensitive items, such as income. Nevertheless, in the March 1997 CPS, the Census Bureau imputed 10 percent of the “reported” Medicaid participants (Rosenbach and Lewis 1999).(19) In the NHIS, responses of “don’t know” are not replaced by imputed values, and in published tabulations the insurance coverage of people whose coverage cannot be determined is treated as unknown. While this may not have a large impact on the estimated rates of uninsurance among children or adults, this strategy does make it more difficult for data users to replicate published results.
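For illustration, the sketch below implements a stripped-down sequential hot deck, one common family of imputation methods for categorical items like Medicaid coverage; it is not the Census Bureau's production algorithm. Each missing response is filled with the most recently reported value from the same demographic cell.

```python
import random

# A stripped-down sequential hot deck for a yes/no item; the cell
# definitions and cold-deck start are illustrative assumptions.
def sequential_hot_deck(records, cells, seed=0):
    """Fill missing 'medicaid' responses with the most recently reported
    value from a record in the same demographic cell.

    records: list of dicts with 'cell' and 'medicaid' (None if missing),
    processed in file order; cells: the set of valid cell labels.
    """
    rng = random.Random(seed)
    # Cold-deck start: seed each cell with a random donor value
    last_donor = {c: rng.choice([True, False]) for c in cells}
    for r in records:
        if r["medicaid"] is None:
            r["medicaid"] = last_donor[r["cell"]]  # borrow the stored donor value
            r["imputed"] = True
        else:
            last_donor[r["cell"]] = r["medicaid"]  # this record becomes the new donor
            r["imputed"] = False
    return records

# Hypothetical usage with two cells
data = [
    {"cell": "child_0_5", "medicaid": True},
    {"cell": "child_0_5", "medicaid": None},   # imputed from the record above
    {"cell": "child_6_18", "medicaid": None},  # imputed from the cold-deck start
]
sequential_hot_deck(data, cells={"child_0_5", "child_6_18"})
```

Flagging imputed values, as the 10 percent CPS figure above suggests, lets analysts gauge how much of reported Medicaid enrollment rests on the imputation model rather than on respondents' answers.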

4. Interpreting Uninsurance as Measured in the CPS

The estimate of uninsured children provided annually by the March supplement to the CPS has become the most widely accepted and frequently cited estimate of the uninsured. At this point, only the CPS provides annual estimates with relatively little lag, and only the CPS is able to provide state-level estimates, albeit with considerable imprecision.(20) But what, exactly, does the CPS measure? The renewed interest in the CPS as a source of state-level estimates for CHIP makes it important that we answer this question.(21) While the CPS health insurance questions ask about coverage over the course of the previous calendar year, implying that the estimate of uninsurance identifies people who had no insurance at all during that year, the magnitude of the estimate has moved researchers and policymakers to reinterpret the CPS measure of the uninsured as an indicator of uninsurance at a point in time.(22) How can this interpretation be reconciled with the wording of the questions themselves, and how far can we carry it in examining the time trend and other covariates of uninsurance? We consider these questions below.

a. In What Sense Does the CPS Measure Uninsurance at a Point in Time?

There is little reason to doubt that CPS respondents are answering the health insurance questions in the manner that was intended. That is, they are attempting to report whether they ever had each type of coverage in the preceding year. We can say this, in part, because the health insurance questions appear near the end of the survey, after respondents have reported their employment status and their sources and amounts of income.