Suppose you have two personalizations, say a merchandising messaging experience and a social-proof urgency messaging experience. There are two ways you might set up your randomized test: you could run two separate A/B tests, one for each experience, or you could run one A/B/n test. If you run two A/B tests, your traffic is split into four groups: the control group, the group that sees only the merchandising messaging, the group that sees only the urgency messaging, and the group that sees both forms of messaging. If you instead run one A/B/n test with those same four groups as its control and three variants, then your A/B/n test is set up as a multivariate test (MVT).
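As a minimal sketch of how the two setups relate, the snippet below assigns each visitor an independent on/off flag per experience (the hash-based assignment function and user IDs here are illustrative assumptions, not Qubit's actual implementation). The cross-product of the two flags yields exactly the four groups described above, which is what makes the design "full factorial":

```python
import hashlib
import itertools

# The two on/off experiences from the example above.
EXPERIENCES = ["merchandising", "urgency"]

def assign(user_id: str) -> tuple:
    """Deterministically assign a user to one of the four MVT cells
    by flipping an independent ~50/50 coin per experience."""
    flags = []
    for exp in EXPERIENCES:
        digest = hashlib.sha256(f"{user_id}:{exp}".encode()).hexdigest()
        flags.append(int(digest, 16) % 2 == 1)  # True = experience shown
    return tuple(flags)

# The full factorial design: control (False, False), each experience
# alone, and the "both" cell that captures the interaction.
cells = list(itertools.product([False, True], repeat=2))
```

Because assignment is deterministic per user and independent per experience, running the two A/B tests separately produces the same four cells; the MVT framing simply makes all four explicit so each can be compared against control.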
Since multivariate tests are A/B/n tests, a good place to start is the article “In an A/B test, how many variations should I test?”. The specifics of MVT are discussed here.
MVT suffers from the same “decreased/diluted traffic” issue as any A/B/n test. If you think that the experiences you want to test concurrently have no significant interaction effects, then the full factorial design of an MVT is not helpful: you’ll spend a lot of traffic confirming that the interaction effect is negligible, traffic that could otherwise be spent studying the variations themselves. This is especially true of the “many small changes” typical of CRO.
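To put rough numbers on the dilution point (the visitor count here is an arbitrary illustration): with two on/off experiences and an even split, an MVT divides traffic four ways, whereas two overlaid A/B tests each split the full traffic in half.

```python
def traffic_per_cell(total_visitors: int, num_cells: int) -> int:
    """Even split of traffic across test cells."""
    return total_visitors // num_cells

# Two on/off experiences, 100,000 visitors:
# - As one MVT: four cells, so each cell gets 25,000 visitors.
# - As two overlaid A/B tests: each test splits all 100,000 visitors
#   in half, so each arm of each test sees 50,000 visitors.
mvt_cell_traffic = traffic_per_cell(100_000, 4)   # 25000
overlaid_arm_traffic = traffic_per_cell(100_000, 2)  # 50000
```

In other words, if the interaction effect really is negligible, the MVT halves the sample size behind each per-experience comparison for no gain.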
The statistical worry with MVT is, of course, that measuring all the interactions as separate variations of an A/B/n test inflates your exposure to false positives. Indeed, the number of “interaction” variants grows combinatorially with the number of “experience” variations: any sufficiently large MVT is essentially guaranteed to produce a false-positive result in at least one of these “interaction” variants.
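The combinatorial growth and the resulting false-positive exposure can be made concrete. Assuming independent comparisons each tested at a significance level of alpha (a standard simplification, not a claim about any particular testing product), the chance of at least one false positive across m comparisons is 1 − (1 − alpha)^m:

```python
from math import prod

def mvt_cells(variants_per_experience: list) -> int:
    """Full-factorial cell count: each experience contributes its
    variants plus an 'off' arm, and cells are all combinations."""
    return prod(v + 1 for v in variants_per_experience)

def familywise_fp_rate(num_comparisons: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive across independent
    comparisons, each tested at significance level alpha."""
    return 1 - (1 - alpha) ** num_comparisons

# Five on/off experiences: 2^5 = 32 cells, 31 non-control comparisons.
cells = mvt_cells([1, 1, 1, 1, 1])
comparisons = cells - 1
risk = familywise_fp_rate(comparisons)  # roughly 0.80 at alpha = 0.05
```

So even a modest five-experience MVT, analyzed naively, has about an 80% chance of flagging at least one spurious “winner” unless you correct for multiple comparisons.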
Qubit limits the number of variants in an A/B/n test to avoid these inefficient and ineffective uses of MVT. However, if you do expect significant interaction effects between two experiences, or if you have traffic to spare, then setting up a test to measure these interactions explicitly (i.e. setting up the A/B/n test as an MVT) is the most statistically rigorous thing you can do.
Can you run an MVT at the same time as other tests? Yes! Multivariate tests are still A/B/n tests, and because they are randomized tests, you can run them simultaneously with other A/B(/n) tests.