How do I use an existing feature flag in an experiment?
We generally don't recommend this, since experiment feature flags need to be in a specific format (see below), otherwise they won't work.
However, if you insist on doing this (for example, because you don't want to make a code change), you can do it for multiple-variant feature flags only, by doing the following:
- Delete the existing feature flag you'd like to use in the experiment.
- Create a new experiment and give its feature flag the same key as the flag you deleted in the first step.
- Name the first variant in your new feature flag 'control'.
Note: Deleting a flag is equivalent to disabling it, so it stays off for however long it takes you to create the draft experiment. The flag is enabled again as soon as you create the experiment (you don't need to launch it).
How do I run a second experiment using the same feature flag as the first experiment?
This is similar to running an experiment using an existing feature flag. If you want to re-run an experiment (using the same feature flag key) while preserving the previous experiment results, delete the existing feature flag (not the experiment) and use the same key in the new experiment.
How can I run experiments with my custom feature flag setup?
See our docs on how to run an experiment without using feature flags.
How do I assign a specific person to the control/test variant in an experiment?
Once you create the experiment, go to its feature flag and scroll down to "Release Conditions". Each condition has an "Optional Override", which lets you pick a release condition and force everyone who matches it to receive the variant chosen in the override.
My `Feature Flag Called` events show `None`, an empty string, or `false` instead of my variant names
The `Feature Flag Response` property is `false` for users who called your feature flag but did not match any of the rollout conditions. `None` or an empty string indicates that the feature flag was disabled or failed to load, for example due to a network error, adblocking, or something else unexpected. An empty string also appears when some of the events for an experiment lack feature flag information.
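As a rough illustration of where these values come from, here is a minimal sketch assuming posthog-js in the browser; the flag key `my-experiment` and the event name are placeholders:

```ts
import posthog from 'posthog-js'

posthog.onFeatureFlags(() => {
  // For a multivariate experiment flag, getFeatureFlag returns the variant
  // name (e.g. 'control' or 'test') when the user matches a rollout condition,
  // `false` when they called the flag but matched no condition, and
  // `undefined` when flags are disabled or never loaded (network error,
  // adblocking, etc.). These are the cases that surface as `false`, `None`,
  // or an empty string on your `Feature Flag Called` events.
  const variant = posthog.getFeatureFlag('my-experiment')

  if (variant === undefined) {
    return // flags never loaded; any events captured now lack flag data
  }

  posthog.capture('experiment_goal_event', { variant })
})
```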
Why are my A/B test event numbers lower than when I create an insight directly?
Experiment results only count events that include the experiment's feature flag data. Sometimes, when experiment events are captured, the flags haven't loaded yet. This means those users don't see the experiment, their events don't include flag data, and they aren't included in the results calculation.
By default, insights count all the events, whether they include flag data or not. This is why they show a higher number. To confirm this, break down an insight by your experiment's flag and check the number of events with the value `None`.
A common situation where this happens is using pageviews as your goal metric. Because pageviews are captured as soon as PostHog loads, the flag data may not have loaded yet, especially for first-time users whose flags aren't cached. Thus, the pageview count in insights might be higher than in your experiment.
To fix this, make sure flags are available immediately on page load. There are two ways to do this (see the sketch after this list):
- Wait for feature flags to load before showing the page (low engineering effort, but slows page down by ~200ms).
- Use client-side bootstrapping (high engineering effort, but keeps the page blazing fast).
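Here is a minimal sketch of both options using posthog-js; the project key, API host, flag key, and server-supplied values are placeholders:

```ts
import posthog from 'posthog-js'

// Option 1: initialise as usual, then wait for flags before rendering the
// part of the page the experiment controls (adds roughly one round trip).
posthog.init('<your-project-api-key>', { api_host: 'https://us.i.posthog.com' })

posthog.onFeatureFlags(() => {
  const variant = posthog.getFeatureFlag('my-experiment') // placeholder flag key
  document.body.dataset.variant = String(variant) // render your variant here
})
```

With bootstrapping, you evaluate the flags on your server (for example with the posthog-node library) and pass the values into `posthog.init`, so they are available synchronously on the very first load:

```ts
import posthog from 'posthog-js'

// Option 2: bootstrap flag values already evaluated on your server, so no
// network request is needed before the experiment can render. In practice,
// your backend injects these values into the page.
const bootstrapFromServer = {
  distinctID: '<distinct-id-from-your-backend>', // same ID the server used to evaluate flags
  featureFlags: { 'my-experiment': 'test' },     // value computed server-side
}

posthog.init('<your-project-api-key>', {
  api_host: 'https://us.i.posthog.com',
  bootstrap: bootstrapFromServer,
})
```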