Split Testing Common Questions
Don't see the answer you're looking for? Get in touch.
Use this A/B Split Test Sample Size Calculator!
This easy-to-use tool helps you find out how many visitors you need to test two different versions of your webpage or app.
Here's what you'll need to enter:
Baseline Conversion Rate: Your current conversion rate (e.g., the percentage of visitors who make a purchase).
Minimum Detectable Effect: The smallest change in the conversion rate you want to detect.
Statistical Significance: The confidence level you want for your test results (usually 95%, which corresponds to a significance level, α, of 0.05).
Statistical Power: The probability of detecting a true effect (typically set at 80%).
By entering these values, you'll get the recommended sample size for each version of your test, so the test is large enough to detect the effect you care about at your chosen confidence level.
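To see how these inputs combine, here's a minimal sketch of the calculation in Python, assuming the statsmodels library is available; the input values below (10% baseline, a 2-point lift, 95% significance, 80% power) are illustrative examples, not outputs of the calculator itself.

```python
# A minimal sketch of the sample-size calculation (assumes statsmodels).
import math

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10   # Baseline Conversion Rate: 10%
mde = 0.02        # Minimum Detectable Effect: 2 percentage points (10% -> 12%)
alpha = 0.05      # Statistical Significance: 95% confidence level
power = 0.80      # Statistical Power: 80%

# Convert the two conversion rates into a standardized effect size (Cohen's h).
effect_size = proportion_effectsize(baseline + mde, baseline)

# Solve for the required number of visitors per variation (two-sided z-test).
n = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, ratio=1.0,
    alternative="two-sided",
)
print(f"Visitors needed per variation: {math.ceil(n)}")  # ~3,835 with these inputs
```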
When conducting statistical tests, such as A/B tests, it’s crucial to understand the concepts of Type I and Type II errors. These errors are related to the conclusions we draw from our data.
Type I Error (False Positive)
Definition: A Type I error occurs when you reject the null hypothesis when it is actually true. In other words, you detect an effect that is not actually there.
Example: Imagine you run an A/B test to see if a new webpage design (Version B) increases conversions compared to the current design (Version A). A Type I error would mean concluding that Version B is better when, in reality, it isn’t.
Statistical Significance: This error is controlled by the significance level (α), often set at 0.05 (5%). If your significance level is 5%, there is a 5% chance of making a Type I error whenever the null hypothesis is actually true.
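To make this concrete, here is a small simulation sketch, assuming numpy and scipy as illustrative tools: it runs many A/A tests in which both versions share the same true conversion rate, so the null hypothesis is true and every significant result is a false positive. The false positive rate should come out near α.

```python
# Simulation sketch of Type I errors (assumes numpy and scipy).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, p_true, n, trials = 0.05, 0.10, 5_000, 2_000
false_positives = 0

for _ in range(trials):
    conv_a = rng.binomial(n, p_true)  # conversions in Version A
    conv_b = rng.binomial(n, p_true)  # conversions in Version B (same true rate)
    table = [[conv_a, n - conv_a], [conv_b, n - conv_b]]
    # Two-proportion test via a 2x2 chi-square, without continuity correction.
    _, p_value, _, _ = stats.chi2_contingency(table, correction=False)
    if p_value < alpha:
        false_positives += 1  # "detected" a difference that does not exist

print(f"False positive rate: {false_positives / trials:.3f}")  # close to 0.05
```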
Type II Error (False Negative)
Definition: A Type II error occurs when you fail to reject the null hypothesis when it is actually false. This means you miss detecting an effect that is actually present.
Example: Continuing with the A/B test example, a Type II error would mean concluding that there is no difference between Version A and Version B when, in fact, Version B does increase conversions.
Statistical Power: This error is controlled by the power of the test (1 - β), often set at 0.80 (80%). If your test has 80% power, there is a 20% chance of making a Type II error.
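A companion sketch illustrates the Type II side, again assuming numpy and scipy: here Version B really does convert better, so each non-significant result is a miss, and the miss rate estimates β. The per-variation sample size of 3,835 is carried over from the calculator example above (10% vs. 12%, α = 0.05, power = 0.80), so β should land near 0.20.

```python
# Simulation sketch of Type II errors (assumes numpy and scipy).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha, p_a, p_b, n, trials = 0.05, 0.10, 0.12, 3_835, 2_000
misses = 0

for _ in range(trials):
    conv_a = rng.binomial(n, p_a)  # conversions in Version A (true rate 10%)
    conv_b = rng.binomial(n, p_b)  # conversions in Version B (true rate 12%)
    table = [[conv_a, n - conv_a], [conv_b, n - conv_b]]
    _, p_value, _, _ = stats.chi2_contingency(table, correction=False)
    if p_value >= alpha:
        misses += 1  # a real difference went undetected

print(f"Type II error rate (beta): {misses / trials:.3f}")  # close to 0.20
```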
Key Points to Remember
Type I Error: False positive; detecting a difference when none exists. Controlled by the significance level (α).
Type II Error: False negative; failing to detect a difference when one exists. Controlled by the power of the test (1 - β).
Baseline Conversion Rate: The expected conversion rate of your control group without any changes.
Minimum Detectable Effect: The smallest change in conversion rate that you want to detect, expressed as a percentage increase or decrease from the baseline.
Statistical Significance: The likelihood that your results are due to the changes made rather than to random chance.
Statistical Power: The probability that your test will detect a true effect when there is one.