How common is disability among adults in the US? Understanding survey differences
February 06, 2025
How common is disability among adults in the United States? This question is easy to ask, but hard to answer. For example, this infographic from the CDC says one in four adults (28.7%) in the United States have a disability, but the US Census Bureau’s American Community Survey says 13.6% of people in the United States have one. Why the difference? If you use disability-related statistics for tasks like identifying problems, creating policies, planning programs, asking for funding, or measuring progress, it’s important to understand why the results aren’t the same.
To get a clear picture of the reasons for these differences, I interviewed Bill Erickson. Bill is the primary researcher behind the Northeast ADA Center’s DisabilityStatistics.org website and a senior research specialist at the Yang-Tan Institute on Employment and Disability.
The Short Answer
First, these two figures come from two different surveys. Each survey has different purposes and processes. Second, the figures are based on different demographic characteristics (such as age groups). These factors can significantly affect estimates.
For example, Bill pointed out that “the estimate in the CDC infographic is for adults age 18 and older in households, while the American Community Survey estimate includes children under age 18 and the non-institutionalized group quarters population.” By limiting the ACS sample to better match the population covered by the CDC infographic, Bill arrived at an estimate of 16% for the prevalence of disability among adults in the United States.
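To make the idea of limiting the sample concrete, here is a minimal sketch of how such a re-estimate might be computed from an ACS PUMS person file using Python and pandas. This is not Bill’s actual code: the variable names (AGEP for age, DIS for the disability recode, TYPEHUGQ for housing type, PWGTP for the person weight) follow recent ACS PUMS data dictionaries, and the file name is a placeholder.

```python
import pandas as pd

# Load a hypothetical ACS PUMS person file (the file name is a placeholder).
pums = pd.read_csv("psam_pus.csv", usecols=["AGEP", "DIS", "TYPEHUGQ", "PWGTP"])

# Limit the sample to adults (18 and older) living in housing units,
# roughly matching the household population in the CDC infographic.
# TYPEHUGQ: 1 = housing unit; 2 and 3 = group quarters.
adults = pums[(pums["AGEP"] >= 18) & (pums["TYPEHUGQ"] == 1)]

# Weighted prevalence: sum the person weights (PWGTP) of people with a
# disability (DIS == 1) and divide by the summed weights of everyone in scope.
weights = adults["PWGTP"]
with_disability = weights[adults["DIS"] == 1].sum()
print(f"Estimated disability prevalence: {with_disability / weights.sum():.1%}")
```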
But what explains the rest of the difference? According to Bill, it is due to “a variety of possible reasons and likely a complicated combination of them all.” These include different sample sizes and response rates, different questions, different data collection methods, and even priming differences. Priming occurs when a question in a survey changes a person’s response to a later question by subtly shifting their thoughts.
CDC’s Behavioral Risk Factor Surveillance System
The CDC (the US Centers for Disease Control and Prevention) has a mission of “protecting the United States from health, safety, and security threats.” The CDC’s “one in four adults” estimate is based on the Behavioral Risk Factor Surveillance System (BRFSS), which has been in place since 1984.
The CDC can’t interview everyone, so it uses statistical techniques to develop population estimates. Using random-digit dialing, it calls and interviews over 400,000 adults (ages 18 and older) annually on landlines and cellphones. The interview questions focus on health: they ask about disability, as well as health care, risky behavior, prevention, and more. Once the CDC has the data, it develops estimates of the population with disabilities in the country and in each state.
Census Bureau’s American Community Survey
The US Census began in 1790 and is conducted once every 10 years. By the 1980s, it was felt that a more frequent survey would provide more timely information, so the annual American Community Survey, or ACS, was developed. (In Puerto Rico, the ACS is called the Puerto Rico Community Survey, or PRCS.) The ACS has been in place in its current form for about two decades. It asks a wide range of questions previously asked on the long form of the 2000 Decennial Census, covering over 40 topics, including disability.
The US Census Bureau mails an ACS questionnaire to almost 300,000 addresses each month. Respondents can fill out the questionnaire and return it by mail, or they can answer the same questions on the web. Census Bureau employees follow up with those who do not respond, conducting interviews by phone or, for a limited sample, in person. Employees also visit and interview a sample of people living in “group quarters,” such as college dorms, nursing homes, and prisons, and they call some people who have returned incomplete questionnaires.
Different Approaches Lead to Different Results
With the CDC and the Census Bureau taking different approaches, it’s not surprising that their numbers don’t match. The table below summarizes a few of the basic differences.
| Survey | Content | Data Collection Method | Location | Annual Participation | Response Rate |
| --- | --- | --- | --- | --- | --- |
| Behavioral Risk Factor Surveillance System | Health-related, in English and Spanish | Phone | 50 states, District of Columbia, Guam, Puerto Rico, US Virgin Islands | 400,000+ people | 44.7% (2023) |
| American Community Survey | Over 40 topics, in English and Spanish with help in 30 other languages | Paper or online, with phone or in-person follow-up for a subsample of non-respondents | 50 states, District of Columbia, Puerto Rico; includes a sample of “group quarters” | 2 million households, 4.7 million people (2023) | 84.7% (2023) |
Why so different?
Bill explained that the different results are due to the many differences between these surveys, and it’s not just the big differences—little details also matter!
Different sample sizes and response rates
Bill explained that larger surveys and higher response rates generally lead to more reliable results. He pointed out that “The ACS gets around 2 million household responses—that’s 4.7 million people—and a very high response rate—nearly 85% in 2023—likely due to the extensive follow-up with people who don’t respond to the survey and the multiple modes people can use to respond. The phone-based BRFSS collects data from about 400,000 respondents and has a 45% response rate.”
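To see why sampling error alone can’t explain the gap, consider a rough back-of-the-envelope calculation (my own illustration, not either agency’s published methodology) of the 95% margin of error for a simple proportion at each survey’s approximate sample size. Real survey estimates use design weights and more sophisticated variance methods, so these numbers are only indicative.

```python
import math

def margin_of_error(p: float, n: int) -> float:
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Approximate sample sizes from the table above, with an illustrative
# disability prevalence of 20%.
for name, n in [("BRFSS (~400,000 people)", 400_000),
                ("ACS (~4.7 million people)", 4_700_000)]:
    print(f"{name}: +/- {margin_of_error(0.20, n):.3%}")
```

Both margins come out to a small fraction of a percentage point, far too small to explain a double-digit gap between the surveys’ figures. That is why the rest of this discussion focuses on systematic differences: who responds, what is asked, and how.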
But the numbers may not tell the whole story. “The characteristics of participants and their responses may be systematically different from those who did not respond, potentially introducing bias in the estimates for the population,” said Bill.
Different questions
The two surveys word their questions differently in many small ways, and the questions about hearing differ more substantially.
According to Bill, “Even minor alterations to survey-question wording or shifting of where questions are within a survey can have serious, unintended, and unexpected impacts on responses.” For example, on the ACS for 2000–2002 and on the Census 2000 long form, questions were grouped under a lead-in header that said, “because of a physical, mental, or emotional condition does this person have difficulty in doing any of the following activities.” However, for some of the questions asked in in-person interviews, the interviewer did not repeat the lead-in, and some respondents apparently forgot it and answered the opposite of what they intended. This may also have happened on the paper form when too much space separated the lead-in from a specific question. (A more detailed explanation of the Census 2000 disability measurement issue is available on the DisabilityStatistics.org website.)
Different data collection methods
The table above shows that the surveys use different data collection methods. The BRFSS is strictly a telephone survey, while the ACS uses mail, online, and phone, plus in-person interviews in some cases. People may react differently to the same question in an online survey than they would in a telephone interview, and this could contribute to the different survey responses.
Possible priming
Another possible reason why the surveys yield different results is that the BRFSS focuses primarily on health, while the ACS covers many topics. Bill noted that “it may be that the BRFSS ‘primes’ people to think more about their health and that could predispose them to be aware of limitations related to disability that their health may cause. Given the variety of topics covered by the ACS, it would be far less likely to have this effect.”
Bill pointed out that a Canadian study from 2004 found a similar issue with disability-related questions asked by Statistics Canada. These questions were used on several surveys that covered different topics, and the researchers found that disability prevalence was two to three times higher in health-focused surveys than in an economics-focused survey. (The paper about the study also discusses differences in how people define disability.)
Thinking about the Differences
Although the two surveys give different results, they both help us understand how many people have a disability, where they live, and the types of disabilities they have. Both also gather information about topics like employment, education, race, and ethnicity, and both can help with tracking change over time.
The Northeast ADA Center Publishes Custom ACS Statistics
The Northeast ADA Center’s DisabilityStatistics.org website provides easy access to disability prevalence data, as well as statistics on employment rates, education, poverty, and more. Bill calculates the statistics using the ACS Public Use Microdata Sample, or PUMS, focusing on the non-institutionalized population. You can explore this information on the website, or you can access a special dashboard of the data at the county/municipio level for the Northeast ADA Center’s region of New Jersey, New York, Puerto Rico, and the US Virgin Islands.
DisabilityStatistics.org will relaunch in late February with updated 2023 ACS data, including information at the county and congressional-district levels, plus new mapping and charting features. You can learn about it in a webinar on February 26, Launching the new DisabilityStatistics.org: Get the latest ACS disability estimates with new tools.
Tonya Engst is an editor at the Yang-Tan Institute on Employment and Disability.