Forcing variations without breaking targeting
GrowthBook's native URL override bypasses every targeting rule. The QA preview tool I built keeps the rules in place — forcing only fires if the user already qualifies.
Every experimentation platform ships with a URL override for QA: punch a parameter into the address bar, see the variation. Convenient. The problem is that almost every implementation I’ve used does it wrong: the override skips the experiment’s targeting rules entirely. Mobile-only test? Forces on desktop. Page-specific test? Forces on every page. Attribute-gated test? Fires before the attributes it’s gated on have even loaded.
That’s not a preview. That’s a different experiment.
I ran into this hard enough on GrowthBook that I built a replacement. It’s been part of the Toolbox for a while now and it’s the single feature QA reaches for most often. The implementation details aren’t the interesting part — what matters is the principle that makes it useful, and that’s portable to any platform with URL-based overrides.
The gotcha with native overrides
GrowthBook’s `allowUrlOverrides` reads a query param and forces the variation directly. That bypasses:
- Device targeting (mobile-only experiments fire on desktop)
- URL/path filters (PDP-only experiments fire on the homepage)
- User-attribute gating (B2B experiments fire for B2C users)
- Attribute timing (the override fires before user attributes have loaded, so the gate never gets a chance to evaluate)
There’s no banner, no warning, no signal that you’ve stepped outside the actual experiment conditions. QA passes a build, it ships, real users hit the page on a phone, and the variation does something unexpected because the QA pass was never representative.
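To make the ordering concrete, here’s a deliberately naive sketch of how this style of override typically evaluates. This is illustrative TypeScript, not GrowthBook’s actual source; every name in it is mine. The only point is that the override is consulted before any gate runs:

```typescript
// Illustrative sketch: why a naive URL override bypasses targeting.
type Attributes = Record<string, unknown>;

interface Experiment {
  key: string;
  condition?: (attrs: Attributes) => boolean; // attribute targeting
  urlPattern?: RegExp;                        // page targeting
}

interface Result { inExperiment: boolean; variationId: number }

function readOverride(url: URL, key: string): number | null {
  const raw = url.searchParams.get(key);
  return raw === null ? null : Number(raw);
}

function naiveEvaluate(exp: Experiment, url: URL, attrs: Attributes): Result {
  // 1. The override wins immediately -- the gates below never execute.
  const forced = readOverride(url, exp.key);
  if (forced !== null) return { inExperiment: true, variationId: forced };

  // 2. Only organic traffic is ever filtered by the targeting rules.
  if (exp.urlPattern && !exp.urlPattern.test(url.pathname)) {
    return { inExperiment: false, variationId: -1 };
  }
  if (exp.condition && !exp.condition(attrs)) {
    return { inExperiment: false, variationId: -1 };
  }
  // (a real SDK hashes the user id here; hard-coded for the sketch)
  return { inExperiment: true, variationId: 0 };
}
```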
The fix has to keep the targeting rules in place.
Targeting-safe forcing
The whole guarantee is one rule applied at the right moment:
Check the gate first. Apply the override second. Never let a debug flag bypass real runtime conditions.
In practice: when the render callback would normally evaluate whether an experiment should run, it still does that evaluation first. Only if the user genuinely qualifies does the override get to swap the variation. If targeting says no, a warning lands in the trace log and the experiment doesn’t run, no matter what the URL flag asked for.
Two cases:
- GrowthBook would have run the experiment anyway → the override picks the variation.
- GrowthBook would not have run the experiment → log a warning and bail.
Mobile-only forced on desktop? Warning, no run. Wrong page? Warning, no run. The override can change the variation, never the targeting decision. That’s the whole guarantee.
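Here’s roughly what that looks like against the GrowthBook JS SDK. This is a minimal sketch, assuming the SDK’s documented `run()` and `forceVariation()`; the `gb-force-<key>` query-param convention and the `readForcedVariation` helper are inventions for illustration, not the Toolbox’s actual code:

```typescript
import { GrowthBook, Experiment, Result } from "@growthbook/growthbook";

// Hypothetical convention: "?gb-force-<key>=<index>" in the current URL.
function readForcedVariation(key: string): number | null {
  const raw = new URLSearchParams(window.location.search).get(`gb-force-${key}`);
  return raw === null ? null : Number.parseInt(raw, 10);
}

function runWithSafeForce<T>(gb: GrowthBook, exp: Experiment<T>): Result<T> {
  // 1. Evaluate the experiment normally -- every targeting rule applies.
  const organic = gb.run(exp);

  const forced = readForcedVariation(exp.key);
  if (forced === null) return organic; // no override requested

  if (!organic.inExperiment) {
    // 2. Targeting said no. The override does NOT get to flip that.
    console.warn(`[qa] refused to force "${exp.key}": targeting excludes this user/page`);
    return organic;
  }

  // 3. The user genuinely qualifies: only now swap the variation.
  gb.forceVariation(exp.key, forced);
  return gb.run(exp);
}
```

The design choice that matters is the order: the organic result is computed first and returned unchanged whenever targeting says no, so `forceVariation` is only ever reached by a user who genuinely qualifies.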
There’s a visible signal too: when forcing is active, a banner pins to the bottom of the page listing every currently-forced experiment, with a way to clear them. QA didn’t know they wanted a banner until they had one, but it makes it impossible to forget you’re in a forced state, and one click drops you back to the real targeting outcome. The state survives navigation within the tab and clears on tab close; otherwise QA forgets they’ve forced anything and starts filing bugs against the wrong reality.
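That lifecycle falls out of `sessionStorage` almost for free (it survives navigation within a tab and is gone when the tab closes). The sketch below is a stand-in for the banner behaviour, not the Toolbox’s implementation; the storage key, element id, and styling are all placeholders:

```typescript
// Sketch of the persistence + banner behaviour described above.
const STORE_KEY = "qa-forced-experiments"; // name is illustrative

function getForced(): Record<string, number> {
  return JSON.parse(sessionStorage.getItem(STORE_KEY) ?? "{}");
}

function setForced(forced: Record<string, number>): void {
  sessionStorage.setItem(STORE_KEY, JSON.stringify(forced));
  renderBanner(forced);
}

function renderBanner(forced: Record<string, number>): void {
  document.getElementById("qa-force-banner")?.remove();
  const keys = Object.keys(forced);
  if (keys.length === 0) return; // nothing forced, no banner

  const banner = document.createElement("div");
  banner.id = "qa-force-banner";
  banner.style.cssText =
    "position:fixed;bottom:0;left:0;right:0;z-index:99999;" +
    "background:#b45309;color:#fff;padding:8px;font:13px sans-serif";
  banner.textContent =
    "Forced experiments: " + keys.map((k) => `${k}=${forced[k]}`).join(", ") + " ";

  const clear = document.createElement("button");
  clear.textContent = "Clear all";
  clear.onclick = () => setForced({}); // back to real targeting on next evaluation
  banner.appendChild(clear);
  document.body.appendChild(banner);
}
```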
Why this pattern is portable
The targeting-safety idea isn’t really about GrowthBook. Anywhere you have URL-based overrides for gated execution (feature flags, A/B platforms, even auth shims), the rule is the same: check the gate first, apply the override second. Skip that order and QA stops being representative of production, and the bugs you’re hunting migrate from the variation you’re testing into the gate the override silently skipped. I’d rather see a warning in the console than ship a green QA build that lied about which device it was on.
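Stripped of anything GrowthBook-specific, the portable rule fits in a few lines. Every name here is illustrative; the shape is what matters:

```typescript
// A debug override may replace the *outcome*, never the *gate*.
function gatedOverride<T>(
  gatePasses: () => boolean,  // real runtime condition (targeting, auth, flag rules)
  organic: () => T,           // what would happen normally
  override: (() => T) | null, // QA's requested outcome, if any
  excluded: T,                // what a non-qualifying user gets
): T {
  if (!gatePasses()) {
    if (override) console.warn("[qa] override ignored: gate did not pass");
    return excluded; // the override never sees a user the gate rejected
  }
  return override ? override() : organic();
}
```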
It’s not a glamorous feature. It’s the kind of small fix that quietly removes a whole category of “but it worked in QA” tickets — which is exactly the kind of feature I want more of.
Happy forcing 🙂