Quick Start
Humblytics Split Testing: Seamless A/B Testing Built for Performance
A/B Split Testing
When you run a split test with Humblytics, the logic is built into our standard analytics script—no extra setup required. If a visitor qualifies for an experiment, they'll be automatically redirected to the correct version of the page based on your test configuration.
We've designed our split testing engine to prioritize performance, SEO, and user experience:
Lightning-fast and lightweight: The testing code adds minimal overhead—just a few KB—and loads asynchronously, so it never blocks your page from rendering.
Smart redirect handling: If a visitor is part of a test group and needs to be redirected, we prevent any visual flicker or partial content from appearing during the switch.
Analytics-aware: Our tracking script is intelligent enough to suppress events for the initial page if a redirect occurs, ensuring your analytics stay clean and accurate—something many other tools miss.
Bot-safe: The testing logic is skipped for bots and crawlers, preventing unintended indexing and skewed results.
SEO-friendly: Variant pages can use canonical tags pointing to your original page, helping avoid duplicate content issues. Many customers choose to index only the control page for clarity.
Cookie-free testing: We use a short-lived query parameter to assign users to test groups without relying on cookies, preserving privacy and compatibility (see the sketch after this list).
Flexible assignment options: You can run tests at the session level (new assignment on each visit) or user level (same experience every time), depending on your needs.
SPA-compatible: Our script supports both traditional websites and single-page apps (like Framer), automatically handling page loads and in-app navigation changes.
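To make the assignment mechanics concrete, here is a minimal TypeScript sketch of how query-parameter-based, cookie-free assignment could work for both session-level and user-level tests. The parameter name, storage key, and function names below are illustrative assumptions for this example, not Humblytics' actual implementation.

```typescript
// Illustrative sketch only; parameter and key names are hypothetical.
const ASSIGNMENT_PARAM = "hly_variant"; // hypothetical short-lived query parameter
const STORAGE_KEY = "hly_assignment";   // hypothetical storage key

type AssignmentScope = "session" | "user";

function getAssignedVariant(scope: AssignmentScope, variants: string[]): string {
  // User-level tests persist the assignment across visits (localStorage);
  // session-level tests reassign on each new visit (sessionStorage).
  const storage = scope === "user" ? window.localStorage : window.sessionStorage;

  // 1. Honor an assignment carried over by the redirect's query parameter.
  const url = new URL(window.location.href);
  const fromUrl = url.searchParams.get(ASSIGNMENT_PARAM);
  if (fromUrl && variants.includes(fromUrl)) {
    storage.setItem(STORAGE_KEY, fromUrl);
    // Strip the short-lived parameter so it doesn't linger in the address bar.
    url.searchParams.delete(ASSIGNMENT_PARAM);
    window.history.replaceState(null, "", url.toString());
    return fromUrl;
  }

  // 2. Reuse a previously stored assignment, if any.
  const stored = storage.getItem(STORAGE_KEY);
  if (stored && variants.includes(stored)) return stored;

  // 3. Otherwise assign a variant uniformly at random.
  const chosen = variants[Math.floor(Math.random() * variants.length)];
  storage.setItem(STORAGE_KEY, chosen);
  return chosen;
}

// In an SPA, a script like this would also re-evaluate experiments when the
// route changes without a full page load, e.g. on history navigation:
window.addEventListener("popstate", () => {
  /* re-check active experiments for the new route */
});
```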
We've built Humblytics Split Testing to give you powerful optimization tools with zero performance trade-offs. Whether you're testing headlines, layouts, or full-page experiences, you can accomplish it all without slowing down your site or compromising your SEO.
Measuring Split Test Performance
Once your test is live, Humblytics automatically tracks how each variant performs against your selected goal—whether that's a button click, form submission, or page visit. We handle all the heavy lifting in the background, so you can focus on results, not statistics.
Goal Tracking
Each split test has a primary goal—the action you're trying to optimize. For example:
Clicking a "Sign Up" button
Reaching a confirmation page
Submitting a contact form
You'll define this goal when setting up your test, and we'll track how often it occurs for each variant.
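As a rough illustration, a goal definition could be modeled like the hypothetical TypeScript shape below; the type names and fields are assumptions made for this example, not Humblytics' actual configuration format.

```typescript
// Hypothetical goal configuration; names and fields are illustrative only.
type Goal =
  | { type: "click"; selector: string }       // e.g., clicking a "Sign Up" button
  | { type: "pageVisit"; path: string }       // e.g., reaching a confirmation page
  | { type: "formSubmit"; selector: string }; // e.g., submitting a contact form

const signUpGoal: Goal = { type: "click", selector: "#signup-button" };
const confirmationGoal: Goal = { type: "pageVisit", path: "/thanks" };
```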
Conversion Rate & Lift
We calculate the conversion rate for each variant by dividing the number of goal completions by the number of views. From there, we show you:
Absolute performance (e.g., 12% vs. 10%)
Relative lift (e.g., Variant B is performing 20% better than the control; see the example below)
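As a concrete illustration of that arithmetic (with made-up numbers), relative lift is the difference between the two conversion rates divided by the control's rate:

```typescript
// Made-up numbers; mirrors the 12% vs. 10% example above.
const control = { visitors: 1000, conversions: 100 };  // 10% conversion rate
const variantB = { visitors: 1000, conversions: 120 }; // 12% conversion rate

const rate = (g: { visitors: number; conversions: number }) =>
  g.conversions / g.visitors;

const absoluteDiff = rate(variantB) - rate(control); // 0.02 (2 percentage points)
const relativeLift = absoluteDiff / rate(control);   // 0.20 → "20% better"
```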
Confidence & Declaring a Winner
To ensure differences aren't just due to random chance, we apply standard statistical techniques to estimate confidence. When one variant performs significantly better than the others with enough data behind it, we flag it as the likely winner.
While we aim to give you actionable results quickly, we're also cautious about jumping to conclusions too early. In general:
The more traffic you have, the faster we can detect a winner
If results are close, we'll wait for more data to improve accuracy
We visually show when a result is trending better—but not yet statistically significant
You'll always see a clear summary of which variant is winning, by how much, and how confident we are in the result.
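As a sketch of how such a summary could be produced, consider the function below; the thresholds are invented for illustration and are not Humblytics' actual values.

```typescript
// Hypothetical decision thresholds; Humblytics' real values may differ.
function summarize(confidence: number, visitorsPerVariant: number): string {
  if (confidence >= 0.95 && visitorsPerVariant >= 500) return "likely winner";
  if (confidence >= 0.8) return "trending better, not yet significant";
  return "needs more data";
}
```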
Continuous Monitoring
You don't need to calculate anything manually; our dashboard keeps everything up to date in real time. You can check in at any time to see how your test is performing and decide whether to:
Let it run longer
Manually pick a winner
End the test and apply the changes
How Confidence Is Calculated (Under the Hood)
For each variant, we track:
Number of views (visitors)
Number of goal completions (conversions)
Conversion rate = conversions ÷ visitors
To compare performance between two variants (e.g., Control vs. Variant B), we calculate the confidence level using a two-proportion Z-test, which estimates how likely the observed difference in conversion rates is to have arisen by chance.
The Steps:
Define conversion rates for both groups:
p₁ = conversions_A ÷ visitors_A
p₂ = conversions_B ÷ visitors_B
Calculate pooled probability (the average conversion rate across both groups):
p = (conversions_A + conversions_B) ÷ (visitors_A + visitors_B)
Compute standard error (SE):
SE = √[p × (1 - p) × (1/visitors_A + 1/visitors_B)]
Calculate Z-score:
Z = (p₁ - p₂) ÷ SE
Convert the Z-score to a confidence level using the cumulative distribution function (CDF) of the standard normal distribution.
The resulting confidence level reflects how unlikely the observed difference would be if both variants truly performed the same; in other words, how confident we can be that the difference isn't just random noise. For example, a Z-score of ±1.96 corresponds to approximately 95% confidence in a two-sided test.
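Putting the steps together, here is a minimal TypeScript sketch of the calculation. It follows the formulas above, but the function names and the erf approximation are our own choices for the example, not Humblytics' production code.

```typescript
interface VariantStats {
  visitors: number;
  conversions: number;
}

// Error function via the Abramowitz & Stegun 7.1.26 approximation
// (absolute error below ~1.5e-7).
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
      0.284496736) * t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-ax * ax));
}

// CDF of the standard normal distribution.
function normalCdf(z: number): number {
  return 0.5 * (1 + erf(z / Math.SQRT2));
}

// Two-proportion Z-test, following the steps above; returns two-sided confidence.
function confidence(a: VariantStats, b: VariantStats): number {
  const p1 = a.conversions / a.visitors;
  const p2 = b.conversions / b.visitors;

  // Pooled conversion rate across both groups.
  const p = (a.conversions + b.conversions) / (a.visitors + b.visitors);

  // Standard error of the difference under the pooled rate.
  const se = Math.sqrt(p * (1 - p) * (1 / a.visitors + 1 / b.visitors));

  const z = (p1 - p2) / se;

  // |Z| = 1.96 → ~0.95 (95% confidence, two-sided).
  return 2 * normalCdf(Math.abs(z)) - 1;
}

// Example with made-up numbers: 1,000 visitors per variant,
// control converts 100, Variant B converts 130 → ≈ 0.96 (96% confidence).
console.log(confidence(
  { visitors: 1000, conversions: 100 },
  { visitors: 1000, conversions: 130 },
));
```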
In the UI, we surface this confidence level with visual indicators (e.g., "95% confidence this variant performs better") and show trending results when the confidence threshold hasn't been reached yet.