Qubit uses A/B testing as a framework to compare two versions of a website or mobile app. Your visitors are randomly bucketed into one of the two versions, control or variation, and we then observe their behavior over time. To maintain good frequentist properties, we recommend allowing experiments to run to completion, at which point we compare the performance of each version.
Visitors bucketed into your experience control will see your website or mobile app without any changes: no treatment, or the same treatment as before. In other words, we don't show the banner we are testing. The control serves as the baseline for comparison in A/B testing.
Conversely, the experience variation is a version of your website or mobile app where the treatment is applied, i.e. we do show the banner we are testing.
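The random bucketing described above is commonly implemented by hashing a visitor identifier together with an experiment identifier, so that the same visitor always lands in the same bucket. The sketch below illustrates the general technique only; the function and parameter names are ours, not Qubit's API.

```python
import hashlib

def bucket(visitor_id: str, experiment_id: str, n_buckets: int = 2) -> int:
    """Deterministically assign a visitor to a bucket (0 = control).

    Hashing the experiment id together with the visitor id means the
    same visitor can fall into different buckets in different experiments.
    """
    key = f"{experiment_id}:{visitor_id}".encode()
    digest = hashlib.sha256(key).hexdigest()
    return int(digest, 16) % n_buckets

# The same visitor always sees the same version of a given experiment.
assert bucket("visitor-42", "banner-test") == bucket("visitor-42", "banner-test")
```

Because the assignment is a pure function of the two identifiers, no per-visitor state needs to be stored to keep the experience consistent across visits.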
An A/B test compares one variation against a single control; an A/B/n test compares multiple variations against a single control. Whether to run one A/B/n test or a sequence of A/B tests may seem like a matter of preference, but the two approaches differ in efficiency.
It may seem faster to run an A/B test than an A/B/n test since your traffic will be less diluted amongst the variations. If you run a sequence of A/B tests, each A/B test in the sequence will complete faster than the A/B/n test that runs all the variations you want to test in parallel.
However, the A/B/n test will complete faster than the entire sequence of A/B tests. It's more efficient to run an A/B/n test, since the control group is shared by all the variations. A/B/n testing also has the benefit that data for all the variations is collected over the same period, so every variation in the A/B/n test is subject to the same seasonal effects.
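The efficiency argument can be made concrete with a back-of-the-envelope count. Assuming, purely for illustration, that every arm needs the same number of visitors to reach completion, a shared control saves one full control arm per additional variation:

```python
def visitors_needed_sequential(n_variations: int, per_arm: int) -> int:
    # Each A/B test in the sequence needs its own control arm
    # plus one variation arm.
    return n_variations * 2 * per_arm

def visitors_needed_abn(n_variations: int, per_arm: int) -> int:
    # One shared control arm plus n variation arms.
    return (n_variations + 1) * per_arm

# With 3 variations and 10,000 visitors per arm:
print(visitors_needed_sequential(3, 10_000))  # 60000
print(visitors_needed_abn(3, 10_000))         # 40000
```

At a constant traffic rate, needing fewer total visitors means the A/B/n test finishes sooner than the sequence, even though each individual A/B test in the sequence finishes sooner than the A/B/n test.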
That is not to say you should always run A/B/n tests with as many variations as possible: diluting traffic into variations that are, for all practical purposes, A/A tests is not an efficient use of traffic.