The Variant comparison block enables you to test multiple versions of your study with different participants, helping you determine which design, content, or approach performs best. By splitting participants between variants, you can make data-driven decisions about which direction to take your product or research.
Setting up your variant comparison
Add a variant comparison block
Click "Add block" in your study builder and select "Variant comparison" from the available block types. This creates a container that will hold your different variants.
Name your variant comparison
Give your variant comparison a clear title that describes what you're testing, such as "Checkout flow comparison" or "Homepage design test". This helps you identify the comparison in your results. It will not be visible to participants.
Create your variants
Each variant comparison starts with two empty variants. From there, you can:
Rename variants to something meaningful
Add additional variants using the "Add variant" button
Duplicate variants to quickly create similar versions
Duplicating a variant also copies any blocks it already contains
Add blocks to each variant
Click "Add block" within each variant to build what participants in that variant will experience. To avoid manually adding and setting up every block in every variant, we recommend building one complete variant first, then duplicating it and editing the blocks in the copy.
Each variant can contain different blocks or the same block types with different content. For meaningful comparison in your results, keep the overall structure similar while varying the specific element you want to test.
Variant distribution types
Exclusive comparison
Each participant sees only one of your variants. This is the most common approach for A/B testing and for gathering unbiased feedback on individual designs.
When to use exclusive comparison:
Testing different designs or approaches
When seeing multiple variants could influence participant responses
Standard A/B testing scenarios
When you need independent participant groups for each variant
Participants are randomly assigned to a variant when they begin the study, and they experience only that variant throughout their session. We always aim to distribute participants as evenly as possible across all variants of the study.
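To illustrate the idea of even random assignment, here is a minimal conceptual sketch in Python. This is not the product's actual implementation; the function and variant names are purely illustrative. It assigns each new participant to whichever variant currently has the fewest participants, breaking ties randomly:

```python
import random
from collections import Counter

def assign_variant(variants, assignments):
    """Assign a new participant to the least-filled variant,
    breaking ties randomly, so counts stay as even as possible."""
    counts = Counter({v: 0 for v in variants})
    counts.update(assignments)
    fewest = min(counts[v] for v in variants)
    candidates = [v for v in variants if counts[v] == fewest]
    return random.choice(candidates)

# Simulate 90 participants joining a study with three variants
assignments = []
for _ in range(90):
    assignments.append(assign_variant(["A", "B", "C"], assignments))

print(Counter(assignments))  # each variant ends up with exactly 30 participants
```

Because each participant always joins the least-filled variant, the split stays within one participant of perfectly even at all times, even if the study stops early.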
Alternating comparison
Each participant sees all variants in a randomised order. This approach is useful when you want each participant's perspective on every option, without the order of the tasks influencing their responses.
When to use alternating comparison:
Collecting preference data between options
When you want direct comparisons from the same participants
Testing subtle variations where exposure to multiple versions won't bias results
Maximising feedback from a smaller participant pool
The order in which variants appear is randomised for each participant to prevent order bias from affecting your results.
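Conceptually, per-participant randomisation amounts to shuffling the list of variants independently for each session. The sketch below is illustrative only (not the product's implementation) and uses made-up variant names:

```python
import random

def variant_order(variants):
    """Return an independent, randomly shuffled presentation
    order for one participant's session."""
    order = list(variants)  # copy so the original list is untouched
    random.shuffle(order)
    return order

variants = ["Checkout A", "Checkout B", "Checkout C"]
order = variant_order(variants)
print(order)  # e.g. ['Checkout B', 'Checkout C', 'Checkout A']
```

Across many participants, every ordering appears roughly equally often, so any learning or fatigue effect is spread evenly over the variants rather than always favouring the last one shown.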
Why use variant comparison
Test design alternatives: Compare two different prototype designs to see which performs better
Evaluate different approaches: Test whether different task instructions or question wording influences results
Optimise user flows: Determine which navigation structure or information architecture works best
A/B test content: Compare different messaging, layouts, or feature presentations
Reduce order bias: Using alternating comparison with a randomised order prevents participants from performing better on later variants simply because they've learned the structure. Randomisation distributes any learning effects or fatigue evenly across all variants.
Rather than running multiple separate studies, variant comparison allows you to test alternatives simultaneously with comparable participant groups, saving time and ensuring consistent testing conditions.