
How to Calculate Sample Size for Surveys

CalculatorGlobe Team · February 23, 2026 · 12 min read · Math

How many people do you need to survey to get reliable results? This is one of the most important questions in research design, and the answer depends on your desired precision, confidence level, and the variability in your population. Survey too few people and your results have wide margins of error. Survey too many and you waste time, money, and resources for negligible gains in precision.

This guide explains the sample size formula, walks you through calculations for both proportions and means, shows you how to adjust for finite populations, and provides practical guidelines for designing surveys that deliver meaningful, trustworthy results.

Why Sample Size Matters

Sample size directly determines the precision and reliability of your survey results. A survey with too few respondents produces wide confidence intervals that make it impossible to draw actionable conclusions. Consider two hypothetical survey results:

Survey A: n = 50

Result: 60% approval

Margin of error: ±13.6%

Confidence interval: 46.4% to 73.6%

Conclusion: Approval is somewhere between less than half and nearly three-quarters. Not very useful.

Survey B: n = 1,000

Result: 60% approval

Margin of error: ±3.0%

Confidence interval: 57.0% to 63.0%

Conclusion: Clear majority approval between 57% and 63%. Actionable information.

Both surveys found 60% approval, but only Survey B provides a precise enough estimate to support confident decision-making. The difference is entirely due to sample size.

Beyond precision, sample size affects statistical power — the ability to detect real effects. An underpowered study might fail to detect a genuine difference between two groups, leading to a false conclusion that no difference exists. This is why calculating sample size before collecting data is essential, not an afterthought.

The Sample Size Formula

For Estimating Proportions

When your survey asks yes/no or categorical questions and you want to estimate the percentage of the population with a certain characteristic:

n = (z² × p × (1 - p)) / E²

n = required sample size

z = z-score for desired confidence level (1.96 for 95%, 2.576 for 99%)

p = estimated population proportion (use 0.5 if unknown)

E = desired margin of error (e.g., 0.05 for ±5%)
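The formula translates directly into a short helper function. A minimal sketch in Python (the function name and structure are illustrative, not from any particular library):

```python
import math

def sample_size_proportion(z, p, margin_of_error):
    """Required sample size for estimating a population proportion.

    z: z-score for the confidence level (1.96 for 95%, 2.576 for 99%)
    p: estimated population proportion (use 0.5 if unknown)
    margin_of_error: desired half-width, e.g. 0.05 for plus-or-minus 5%
    """
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)  # always round up to the next whole respondent

# 95% confidence, unknown proportion, +/-5% margin:
print(sample_size_proportion(1.96, 0.5, 0.05))  # 385
```

Rounding up rather than to the nearest integer matters: rounding down would leave the margin of error slightly wider than specified.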

For Estimating Means

When your survey collects numerical data and you want to estimate the population average with a specific margin of error:

n = (z × σ / E)²

n = required sample size

z = z-score for desired confidence level

σ = estimated population standard deviation

E = desired margin of error (in the same units as σ)
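The mean version is equally short. A sketch in the same style (again, the function name is mine):

```python
import math

def sample_size_mean(z, sigma, margin_of_error):
    """Required sample size for estimating a population mean.

    sigma and margin_of_error must be in the same units.
    """
    return math.ceil((z * sigma / margin_of_error) ** 2)

# 95% confidence, sigma = 4.5 days, +/-1 day margin:
print(sample_size_mean(1.96, 4.5, 1.0))  # 78
```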

Step-by-Step Calculation Examples

Example 1: Megan Designs a Customer Satisfaction Survey

Megan wants to estimate the percentage of customers satisfied with her company's service. She wants 95% confidence with a ±4% margin of error. She has no prior estimate of the satisfaction rate.

Step 1: Identify values

z = 1.96, p = 0.5 (unknown, use conservative), E = 0.04

Step 2: Apply the formula

n = (1.96² × 0.5 × 0.5) / 0.04²

n = (3.8416 × 0.25) / 0.0016

n = 0.9604 / 0.0016 = 600.25

Step 3: Round up

n = 601 respondents

Result: Megan needs at least 601 completed surveys for her desired precision. If she expects a 70% response rate, she should send surveys to at least 859 people (601 / 0.70).
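Megan's arithmetic, including the response-rate adjustment, can be verified in a few lines (a quick sketch; the variable names are mine):

```python
import math

z, p, e = 1.96, 0.5, 0.04               # 95% confidence, conservative p, +/-4%
n = math.ceil(z**2 * p * (1 - p) / e**2)
contacts = math.ceil(n / 0.70)          # budget for a 70% response rate
print(n, contacts)  # 601 859
```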

Example 2: Carlos Estimates Average Processing Time

Carlos is an operations manager who wants to estimate the average time to process insurance claims. A pilot study of 20 claims found a standard deviation of 4.5 days. He wants 95% confidence with a margin of error of ±1 day.

Step 1: Identify values

z = 1.96, σ = 4.5 days, E = 1 day

Step 2: Apply the formula

n = (1.96 × 4.5 / 1)²

n = (8.82)²

n = 77.79

Step 3: Round up

n = 78 claims

Result: Carlos needs to sample at least 78 claims. If he wanted ±0.5 day precision instead, the required sample would be 312 (halving the margin of error quadruples the required sample size).
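Carlos's numbers check out the same way. A sketch that also shows how tightening the margin inflates the requirement:

```python
import math

def claims_to_sample(z, sigma, margin):
    """Sample size for estimating a mean: n = (z * sigma / E)^2, rounded up."""
    return math.ceil((z * sigma / margin) ** 2)

print(claims_to_sample(1.96, 4.5, 1.0))  # 78
print(claims_to_sample(1.96, 4.5, 0.5))  # 312, roughly four times as many
```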


Example 3: Aisha Plans a School District Survey

Aisha works for a school district of 2,500 parents. She wants to estimate the proportion who support a new after-school program with 95% confidence and ±5% margin of error. She uses an estimated proportion of 0.6 from a smaller pilot study.

Step 1: Calculate initial sample size (infinite population)

n₀ = (1.96² × 0.6 × 0.4) / 0.05²

n₀ = (3.8416 × 0.24) / 0.0025

n₀ = 0.922 / 0.0025 = 368.8 → 369

Step 2: Apply finite population correction (see next section)

n = n₀ / (1 + (n₀ - 1) / N)

n = 369 / (1 + 368/2500)

n = 369 / 1.1472 = 321.7 → 322

Result: With the finite population correction, Aisha needs 322 responses instead of 369 — a reduction of 47 surveys, saving significant effort in a smaller community.

Adjusting for Finite Populations

The standard sample size formula assumes an infinitely large population. When your sample represents more than about 5% of the total population, you should apply the finite population correction (FPC) to reduce the required sample size:

n_adjusted = n₀ / (1 + (n₀ - 1) / N)

n₀ = sample size calculated from the standard formula

N = total population size

n_adjusted = corrected sample size

The correction becomes more significant as the sampling fraction (n/N) increases. For a population of 500 and an initial calculation requiring 385 respondents, the corrected sample size drops to about 218 — a substantial reduction. For a population of 100,000, the correction barely changes the result because 385 is only 0.39% of the population.
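The correction is essentially a one-liner. A sketch reproducing the numbers above (the function name is mine):

```python
import math

def finite_population_correction(n0, population):
    """Adjust an initial sample size n0 for a finite population of size N."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(finite_population_correction(385, 500))      # 218
print(finite_population_correction(385, 100_000))  # 384, barely changed
```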

Factors That Influence Sample Size

Four main factors determine how large your sample needs to be, and understanding each helps you make informed trade-offs during study design:

  • Confidence level — Higher confidence requires larger samples. Moving from 95% to 99% confidence increases the required sample size by about 73% (because the z-score increases from 1.96 to 2.576, and sample size scales with z²).
  • Margin of error — Tighter margins require dramatically larger samples. Halving the margin of error quadruples the sample size because the relationship is inverse-square. Going from ±5% to ±2.5% multiplies the requirement by 4.
  • Population variability — More variable populations require larger samples. For proportions, maximum variability occurs at p = 0.5. If you know your population is more homogeneous (p closer to 0 or 1), you can use a smaller sample. For means, higher standard deviation means larger samples.
  • Population size — Only matters for smaller populations (under 20,000). For large populations, the required sample size plateaus because the standard error is dominated by the sample size, not the sampling fraction.

Sample Size Reference Table

This table shows required sample sizes for estimating proportions at different confidence levels and margins of error, assuming unknown population proportion (p = 0.5) and an infinitely large population.

Margin of Error 90% Confidence 95% Confidence 99% Confidence
±1% 6,766 9,604 16,587
±2% 1,692 2,401 4,147
±3% 752 1,068 1,844
±5% 271 385 664
±10% 68 97 166

Notice how dramatically the required sample size increases as the margin of error decreases. Going from ±5% to ±1% multiplies the sample size by about 25. This is why national polls that report ±2% margins typically survey over 2,000 people.
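The entire table can be regenerated in a few lines; entries may differ by a respondent or two from published tables depending on how precisely the z-scores are rounded (a sketch):

```python
import math

def n_required(z, margin, p=0.5):
    """Sample size for a proportion: n = z^2 * p * (1 - p) / E^2, rounded up."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

z_scores = {"90%": 1.645, "95%": 1.96, "99%": 2.576}
for margin in (0.01, 0.02, 0.03, 0.05, 0.10):
    row = "  ".join(f"{level}: {n_required(z, margin):>6,}"
                    for level, z in z_scores.items())
    print(f"±{margin:.0%}  {row}")
```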


Practical Guidelines for Research Design

Calculating the mathematically required sample size is just one step in designing a successful survey. These practical guidelines help you bridge the gap between theoretical requirements and real-world data collection.

  • Always calculate sample size before collecting data. Determining sample size after data collection (post-hoc power analysis) is widely considered poor practice because it does not change the precision you actually achieved. Plan your sample size during the design phase.
  • Budget for non-response. Response rates for email surveys typically range from 10% to 30%, phone surveys from 5% to 15%, and in-person interviews from 50% to 80%. Divide your target completed sample by the expected response rate to determine how many people to contact initially.
  • Consider subgroup analysis. If you plan to analyze results by subgroups (age, region, gender), each subgroup needs its own adequate sample size. A survey of 400 might be sufficient overall but only gives you 50 respondents per subgroup if you have 8 categories — often too few for meaningful analysis.
  • Use pilot studies to estimate variability. If you do not know the population standard deviation or proportion, run a small pilot study of 20 to 50 respondents first. This gives you a reasonable estimate to plug into the sample size formula and helps identify problems with your survey design.
  • Document your assumptions. Report the confidence level, margin of error, estimated proportion or standard deviation, and any adjustments you made for non-response or finite population. This transparency allows others to evaluate the credibility of your methodology.
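The non-response budgeting rule from the list above is simple enough to encode (a sketch; the example figures match the calculations elsewhere in this guide):

```python
import math

def contacts_needed(target_completed, expected_response_rate):
    """How many people to contact to expect target_completed responses."""
    return math.ceil(target_completed / expected_response_rate)

print(contacts_needed(400, 0.60))  # 667
print(contacts_needed(601, 0.70))  # 859
```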

Common Mistakes to Avoid

  • Using population size as the primary driver. Many people assume that surveying a fixed percentage of the population (like 10%) is sufficient. In reality, the margin of error is primarily determined by the absolute sample size, not the sampling fraction. A sample of 385 provides ±5% precision whether the population is 5,000 or 5 million.
  • Ignoring the effect of clustering. If your survey uses cluster sampling (sampling entire classrooms, hospitals, or neighborhoods), the effective sample size is smaller than the number of individual respondents because people within clusters tend to be more similar. Apply a design effect multiplier, typically between 1.5 and 3.0.
  • Forgetting to round up. Always round up to the next whole number when the formula gives a non-integer. Rounding down gives you a slightly larger margin of error than intended. Rounding 384.16 down to 384 instead of up to 385 is a small error, but it violates your precision specification.
  • Neglecting non-response bias. Increasing sample size to compensate for non-response addresses the precision problem but not the bias problem. If non-respondents systematically differ from respondents, even a very large sample will produce biased estimates. Invest in follow-up strategies and response rate improvement.
  • Using p = 0.5 when you have better information. While p = 0.5 is appropriately conservative when you have no prior knowledge, using it when you know the true proportion is around 0.1 or 0.9 wastes resources, because p × (1 - p) shrinks as p moves away from 0.5. A more accurate estimate of p reduces the required sample size substantially.
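The design-effect adjustment for cluster sampling mentioned above is a straight multiplication (a sketch; the design effect of 2.0 is an illustrative assumption):

```python
import math

def required_with_design_effect(n_srs, design_effect):
    """Inflate a simple-random-sample requirement for cluster sampling.

    design_effect is typically between 1.5 and 3.0 for cluster designs.
    """
    return math.ceil(n_srs * design_effect)

# A survey needing 385 respondents under simple random sampling
# needs roughly twice that under clustering with a design effect of 2.0:
print(required_with_design_effect(385, 2.0))  # 770
```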


Frequently Asked Questions

What is the minimum sample size for a survey?

There is no single universal minimum, but statistical guidelines provide useful benchmarks. For estimating proportions with a 95% confidence level and a 5% margin of error, you need at least 385 respondents regardless of population size. For comparing two groups, each group typically needs at least 30 observations for the central limit theorem to apply. For regression analysis, a common rule of thumb is at least 10 to 20 observations per predictor variable. The right sample size always depends on your specific precision requirements and the variability in your data.

Does population size affect the required sample size?

For large populations (above about 20,000), the population size has virtually no effect on the required sample size. Whether your population is 50,000 or 5 million, you need approximately the same number of samples to achieve a given margin of error. Population size only matters for small, finite populations where the sample represents a substantial fraction (more than 5%) of the total. In those cases, the finite population correction factor reduces the required sample size.

What value of p should I use if the population proportion is unknown?

When the population proportion is unknown, use 0.5 (50%) as the estimate. This is the most conservative assumption because the product p × (1 - p) reaches its maximum at 0.5, producing the largest possible sample size requirement. This guarantees your sample will be large enough regardless of the actual proportion. If you have prior research or a pilot study suggesting the true proportion is far from 50%, you can use that estimate to reduce the required sample size.

What is statistical power, and how does it relate to sample size?

Statistical power is the probability that a study will detect an effect when one truly exists. Standard practice is to design studies with at least 80% power, meaning a 20% chance of a false negative. Higher power requires larger samples. Power depends on three factors: the significance level (alpha), the effect size you want to detect, and the sample size. Power analysis before data collection helps you determine the sample size needed to reliably detect effects of practical importance.

Can I use a smaller sample than the formula requires?

Yes, but you must accept trade-offs. With a smaller sample, you can either accept a larger margin of error, lower your confidence level from 95% to 90%, or narrow your research question to a more homogeneous subgroup with less variability. Document these decisions transparently in your methodology. A well-designed study with 200 carefully selected respondents can be more informative than a poorly designed study with 2,000 respondents from a biased sampling frame.

How does stratified sampling affect required sample size?

Stratified sampling divides the population into homogeneous subgroups (strata) and samples from each. If the strata have less variability internally than the population overall, stratified sampling can achieve the same precision with a smaller total sample size compared to simple random sampling. The sample size for each stratum can be allocated proportionally to the stratum size or optimally based on each stratum's variability. Proper stratification can reduce required sample sizes by 10% to 30% depending on how effectively the strata capture population heterogeneity.

Should I plan for non-response?

Absolutely. Non-response is a reality in almost all survey research. If you expect a 60% response rate and need 400 completed surveys, you should initially contact at least 667 people (400 divided by 0.60). However, non-response introduces bias beyond just reducing your effective sample size, because non-respondents may differ systematically from respondents. Plan follow-up strategies to boost response rates, and consider using non-response weights in your analysis.


CalculatorGlobe Team

Content & Research Team

The CalculatorGlobe team creates in-depth guides backed by authoritative sources to help you understand the math behind everyday decisions.


Disclaimer: This article is for informational and educational purposes only. Results are estimates and may not reflect exact values.

Last updated: February 23, 2026