Understand Lift measurement statuses and metrics in Google Ads
Google Ads determines how much lift your campaign generated for each Brand Lift metric by comparing the number of positive survey responses from people who saw your ads with responses from people who were withheld from seeing your ads. Generally, more responses are required to accurately detect smaller amounts of absolute lift. Before lift is detected, you can view an estimate of it based on your response count.
When to expect detectable lift
View the following guidelines about how many responses are required to detect your lift:
- For high-performing campaigns, you can expect to detect lift once you receive about 2,000 responses per lift metric.
- At the recommended budget minimum, you can expect to detect lift once you receive about 4,100 responses per lift metric.
- If your campaign hasn't shown any lift after reaching 16,800 responses per metric, you may not be able to detect your lift.
Note: In general, you may start viewing lift results once your surveys reach 2,000 responses, but results may fluctuate throughout the duration of the study. It’s recommended to wait until the study is complete before reading the final report, because the results may change while the study is running. Learn more about your Brand Lift measurement data
Brand Lift study results
Brand Lift studies deliver results as quickly as possible, but campaign volume determines how fast results can be gathered. We allow up to 10 days for budget eligibility to be met, to give more flexibility with budget distribution and spend goals. Your Brand Lift study may continue collecting results for a short period, usually less than 14 days, due to the gap between when an impression is served and when the survey is shown to a user.
Criteria
Releasing results is based on the criteria mentioned in the "Measurement eligibility" section above. If your lift is higher (above 2%), we require fewer survey responses to report your results. If your lift is lower (less than 2%), we require more survey responses.
The response volume is reported in near real time. We look at the detectable lift and the number of responses you have to determine if we can provide you with results in Google Ads. While your Brand Lift study is running, you’ll be able to check the progress based on the following:
| Progress | What you view in Google Ads |
| --- | --- |
| Less than 50% | "Not enough data" |
| Between 50% and 100% | If there’s statistically significant, positive lift, we’ll report it. If not, we’ll report "Not enough data". |
| 100% | If there’s statistically significant, positive lift, we’ll report it. If not, we’ll report "No lift detected". |
Required total responses for measuring Brand Lift
In order to measure Brand Lift accurately at various levels, the total response count must be within a certain range. The smaller the absolute lift, the more survey responses are required to ensure accuracy. The table below shows the required total response count, given a detectable absolute lift:

| Detectable absolute lift | Required total response count |
| --- | --- |
| > 4% | 1,200 ~ 2,800 |
| 3% | 2,800 ~ 5,000 |
| 2% | 4,100 ~ 11,000 |
| 1.5% | 11,000 ~ 20,000 |
| 1% | 20,000 ~ 45,000 |
| 0.5% | 45,000 ~ 180,000 |
| < 0.5% | > 180,000 |
Example
For detectable absolute lift percentages not mentioned in the chart, you may need to estimate to find the total required response count.
Let’s say you have .75% absolute lift and want to know the number of responses you need to detect the absolute lift. 45,000 responses would be more than what you need (since the minimum requirement to detect .5% absolute lift is 45,000 responses), while 20,000 responses wouldn’t be enough (since the minimum requirement to detect 1% absolute lift is 20,000 responses).
Since .75% is halfway between 1% and .5%, you would need roughly midway between 20,000 and 45,000 responses (about 33,000 survey responses) to detect .75% absolute lift.
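This estimation can be sketched in code. The table values below are the published minimums from the table above; the linear interpolation is an approximation for illustration, not an official Google Ads formula:

```python
# Approximate the minimum responses needed to detect a given absolute lift
# by interpolating between the published table rows. This is a rough
# illustrative estimate, not an official Google Ads calculation.

# (detectable absolute lift %, minimum required responses) from the table
TABLE = [(0.5, 45_000), (1.0, 20_000), (1.5, 11_000),
         (2.0, 4_100), (3.0, 2_800), (4.0, 1_200)]

def estimate_required_responses(lift_pct: float) -> int:
    """Linearly interpolate the minimum response count for a lift value."""
    points = sorted(TABLE)  # ascending by lift
    lo_lift, lo_resp = points[0]
    for hi_lift, hi_resp in points[1:]:
        if lo_lift <= lift_pct <= hi_lift:
            frac = (lift_pct - lo_lift) / (hi_lift - lo_lift)
            return round(lo_resp + frac * (hi_resp - lo_resp))
        lo_lift, lo_resp = hi_lift, hi_resp
    raise ValueError("lift outside the table's range")

print(estimate_required_responses(0.75))  # halfway between 45,000 and 20,000 -> 32500
```

For .75% lift this returns 32,500, in line with the "roughly 33,000" estimate above.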
If your Brand Lift metric’s absolute lift approaches 0, more survey responses are required to measure it accurately. When there's only a small difference between the responses of people who have seen your ads and those of people who haven't, more responses are required to determine exactly what that difference is.
Statuses
"X% Lift"
An X% lift indicates that we've detected high enough lift based on the number of responses we received to generate a report. For example, a 5% increase in the Absolute brand lift column indicates that your ads influenced your audience's positive feelings towards your brand or product by +5%. Learn more about the different Brand Lift metrics
"Not enough data"
"Not enough data" means that based on the date range you’ve selected in your account, the number of Brand Lift survey responses received in that date range is below the minimum threshold required to surface results.
Fix "not enough data"
There could be multiple reasons for not getting enough data for your study or an individual slice. To fix it, make sure that:
- You spend your budget in full.
- The actual spend in your campaigns meets the minimums, and not just the budget.
If your campaigns are spending enough but are still not getting Brand Lift results, check for the following:
Is your CPV bid too low?
Low traffic can indicate that you're getting outbid. Raise your bid to win more impressions and generate traffic. However, keep in mind that if you raise your bid, you’ll spend your budget faster (assuming those impressions lead to views). When you use your budget faster, you’ll reach fewer unique viewers and have less potential for viewers to fill out a survey.
Recommendation: If your traffic is low despite broad targeting, consider raising your bid. If raising your bid means you are hitting your budget cap, consider raising your budget to accommodate the higher bid.
Is your campaign configuration negatively affecting the survey control group?
A Brand Lift study that uses campaigns targeting audiences who have already viewed the ad video currently can't build a control group.
For example, let’s say you create a study with Video A. Next, you create a second study in which you target a YouTube list of "Viewers who watched Video A as an ad". With this setup, you won’t be able to build a control group. You may have progress, but it will only be on the exposed side, so you can't expect results to post.
Another example is a Brand Lift study that uses campaigns targeting audiences who saw the first ad of a video ad sequence (VAS) campaign. With VAS campaign subtypes, you can create sequences of ads for users to view in a certain order (for example, show users Ad A, then Ad B, then Ad C). Let’s say you create a campaign for your Brand Lift study that targets the audience list "Viewers who watched Video B as an ad" and uses Ad C as your creative. Because every user who could see Ad C would have had to see Ad B first, your control group would be composed primarily of users who have already seen your ad within the VAS campaign.
Such configurations mean the study can’t build a control group, because the targeted users who would view your ad have already seen it. If the only viewers eligible to enter the study are blocked in this way, your control group won’t progress. In this case, you shouldn’t expect results to post.
Is the campaign targeting too narrow?
The following study and campaign setup configurations may sometimes reduce the number of survey responses that your study will be able to gather. The extent to which they slow down survey response collection varies depending on the degree to which they’re narrowing your targeting reach.
Audiences (particularly retargeting), placements, keywords, and topics
More restrictive targeting types, such as placements, keywords, and retargeting, reduce the number of eligible viewers and can lead to fewer impressions. Fewer impressions and fewer viewers in turn mean less potential for viewers to fill out a survey.
Small geography
Too small of a geography might limit unique viewers, which reduces your odds of getting enough responses. Ideally, studies are run at the country level, but you can also target smaller geographies, as long as there’s a large enough population of viewers.
Recommendation: Monitor your traffic closely as the study is progressing. If you aren’t spending in full, broaden any overly restrictive targeting by expanding geography or removing overly restrictive targeting types like placements or keywords.
Are you lowering your survey response rate by showing non-English surveys in all languages?
Your survey can only serve in one language. If you target multiple languages or "All languages", you’re serving your survey to viewers who don’t speak the survey's language, and those viewers are likely to dismiss it. Targeting multiple languages or "All languages" therefore isn't recommended, as it can lead to a negative experience for many viewers. If your survey is in English, depending on the country, you may be able to target "All languages", because English is a commonly spoken second language in many countries. Even in this case, it isn’t a recommended practice.
Recommendation: In your campaign targeting, have the language you target match the language of the survey. Avoid targeting multiple geographies that speak different languages unless you know there's a high number of bilingual users or if your survey is in English, which tends to be the most common second language of bilingual speakers.
Are there too many campaigns (or Video experiment arms) in the Lift Measurement Configuration (LMC)?
Too many campaigns (or Video experiment arms) in the LMC result in lower impressions per campaign/Video experiment arm. Use of Video experiments with many experiment arms may result in "Not Enough Data" at the campaign level if your campaign traffic isn’t large enough for each experiment arm.
Recommendation: If campaign-level data is important to you, be mindful of the number of experiment arms, and of the number of campaigns within each arm, that you add.
Additionally, including many campaigns in the same study (especially with overlapping targeting) may result in "Not Enough Data" at the campaign level, because you need enough responses per campaign or reporting slice (for example, device, demographic, or ad). If that level of reporting is a priority, keep this in mind when deciding how many campaigns to add to the same study.
If campaign-level reporting is a priority for you, avoid adding lots of campaigns to your study. Instead, consider running multiple studies with one campaign per study, or use Video experiments to ensure you don’t have cross-contamination across studies.
For reach-focused campaigns, are you showing multiple ads to the same viewer?
"No lift detected"
Sometimes a study that has ended with enough survey responses will still show "No lift detected". This happens when there was no statistically significant difference between the survey responses from viewers who watched your video ad and those who didn’t. If you don't have lift at the study level, check if you have lift in specific segments (for example, age, gender, campaign or device). Consider focusing on those segments with positive lift.
As with any media channel, some metrics are more difficult to move than others. Some audiences are more difficult to reach than others. It’s normal for video campaigns to have no lift on certain metrics and audiences.
Below are a few things you can do to improve your campaign’s set up, creative or targeting to increase the chances of seeing lift.
Set up your study correctly
- Select your competitor answer choices carefully
- A mismatch between a competitor's brand or product and your own might lead viewers to select the competitor more often than you. For example, if you’re a small beverage company and choose a globally recognized soda brand as a competitor answer choice in the Brand Lift survey, viewers might choose that brand more often, resulting in no lift for your brand.
- Ensure you entered your brand or product as the "Preferred Answer"
- If you didn’t enter the advertised brand or product as the "Preferred answer", the study ran with the wrong parameter. You can make edits and use Re-measurement to re-enable your study with the correct brand or product and competitors.
- If the creative is focused on a product, choose the right product category
- If your creative focuses on a specific product but you measured the impact on the brand, you’ll likely get "No lift detected". Unfortunately, the study ran with too large a scope. You should wait for the next campaign to measure the product's effectiveness. You’ve learned that this creative is too product-specific to move the overall brand.
Improve your creative
Quality of the creative plays a huge role in getting lift. Check if your ad is following the ABCDs of effective YouTube creative. Contact your account manager for detailed guidance on improving your creative.
For light-branded ads, if your brand or product name isn’t present, appears late in the ad, or is too subtle, the audience won’t attribute the creative back to the brand or product advertised. To correct this, consider adding branding, like an icon, watermark, or banner, earlier in the ad. You can also change the script to integrate the brand or product more clearly.
To lift lower funnel metrics, such as conversions, adding the branding early probably won’t be enough to cause a significant lift. The creative needs to be more persuasive. Consider moving the main argument to earlier in the creative, or include more arguments in the ad script.
Limit exposure to your creative outside the lift study
If a creative has been seen by viewers before the Brand Lift study launches, it’s possible that the control group (the group that doesn't view the ad) has been contaminated. Contamination makes the control group respond similarly to your exposed users, reducing measured lift. To minimize creative contamination:
- Avoid running your YouTube Video campaigns alongside the same creative on non-YouTube channels like TV and other ad platforms.
- Avoid multiple brand lift studies with the same or similar creative (unless using Video Experiments)
- Avoid excluding other video campaigns that use a similar creative from your Brand Lift study
Target the right viewers for your campaign
Understanding and interpreting Brand Lift metrics
Brand Lift measurement data is available in most tables in Google Ads, including "Campaign", "Ad Group", "Demographics", and more. You can also view results at the "Product" or "Brand" level in the "Lift Measurement" table.
Use the following steps to check your Brand Lift measurement data:
- Go to Lift measurement within the Goals menu.
- Select the columns icon.
- Select Modify columns.
- Select Brand Lift, then select Apply.
You can also segment your measurement data by a specific metric (such as "Ad recall", "Awareness", "Consideration", "Favorability", and "Purchase Intent"):
- Select the segment icon.
- Select Brand lift type to find the measurement data for your chosen metric.
Lifted users
This shows the estimated number of users in a sample survey whose perception of your brand changed as a result of your ads, extended to the overall reach of the campaign. It’s based on the difference in positive responses to your brand or product surveys between the group of users who saw your ad and the group who didn’t. For example, after seeing your ads, users could show a lift in consideration (or awareness, or ad recall) with regard to your brand or product.
The "lifted users" metric doesn’t necessarily measure unique users. A user may become lifted more than once during the course of your campaign.
Lifted users (co-viewed)
This metric is similar to the "lifted user" metric, but it also takes co-viewing into consideration. When multiple people watch YouTube on a connected TV (CTV) device together and view an ad at the same time, it could lead to more lifted users for your campaign. This metric includes lifted users from co-viewed impressions on CTV devices. Because there are no profiles available for co-viewers, they are treated as the same audience profile as the users who responded to surveys for the study.
Cost per lifted user
This shows the average cost for a lifted user who's now thinking about your brand after seeing your ads. Cost per lifted user is measured by dividing the total cost of your campaign by the number of lifted users. You can use this metric to understand the cost to change someone’s mind about your brand in terms of brand consideration, ad recall, or brand awareness.
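The calculation is a simple division; this sketch uses hypothetical cost and user figures for illustration:

```python
# Cost per lifted user = total campaign cost / number of lifted users.
# The figures below are hypothetical, for illustration only.
def cost_per_lifted_user(total_cost: float, lifted_users: int) -> float:
    """Average cost to lift one user's perception of the brand."""
    return total_cost / lifted_users

print(cost_per_lifted_user(10_000.0, 2_500))  # $10,000 / 2,500 users -> 4.0
```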
Absolute Brand Lift
This metric shows the difference in positive responses to brand or product surveys between the group of people who saw your ads (the exposed group) and the group withheld from seeing your ads (the baseline group). This metric is calculated by subtracting the positive response rate of the baseline group from the exposed group. Absolute Brand Lift measures how much your ads influenced your audience’s positive feelings towards your brand or product. For example, an increase from 20% to 40% in the positive survey responses between the 2 surveyed groups represents an absolute lift of 20%.
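The article's 20% to 40% example can be checked directly with this calculation:

```python
# Absolute Brand Lift = exposed positive response rate - baseline positive response rate.
def absolute_lift(exposed_rate: float, baseline_rate: float) -> float:
    """Difference in positive response rate between exposed and baseline groups."""
    return exposed_rate - baseline_rate

# The example above: baseline 20% positive, exposed 40% positive.
print(absolute_lift(0.40, 0.20))  # 0.40 - 0.20 = 0.20, i.e. 20% absolute lift
```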
Absolute Brand Lift and campaign performance
Absolute lift doesn’t necessarily reflect your overall Brand Lift performance. It's better to focus on a metric like cost per lifted user as the primary success metric for your campaign, because it factors in both reach and cost.
For example, looking at absolute lift only, Campaign 1 might seem to perform better than Campaign 2. But at the same cost, Campaign 2 drove 50% more lifted users, at a 66% lower CPM, and with a 33% more efficient cost per lifted user.
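A comparison like this can be worked through with hypothetical figures. The numbers below were chosen only to be consistent with the ratios described (same cost, 50% more lifted users, roughly 66% lower CPM, 33% lower cost per lifted user); they aren't real reported results:

```python
# Hypothetical campaign figures, for illustration only.
campaigns = {
    "Campaign 1": {"cost": 10_000, "impressions": 1_000_000, "lifted_users": 1_000},
    "Campaign 2": {"cost": 10_000, "impressions": 3_000_000, "lifted_users": 1_500},
}

results = {}
for name, c in campaigns.items():
    results[name] = {
        "cpm": c["cost"] / c["impressions"] * 1_000,  # cost per 1,000 impressions
        "cplu": c["cost"] / c["lifted_users"],        # cost per lifted user
    }
    print(name, results[name])
```

At the same $10,000 cost, Campaign 2's CPM ($3.33) is about 67% below Campaign 1's ($10.00), and its cost per lifted user ($6.67) is about 33% lower, even though Campaign 1 might show the higher absolute lift.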
Headroom Brand Lift
This measures the impact your ads had on increasing positive feelings towards your brand or product, relative to the remaining room for growth. This metric is calculated by dividing absolute lift by 1 minus the positive response rate of the baseline group. For example, an increase from 20% to 40% in positive survey responses between the exposed and baseline groups represents a headroom lift of 25%.
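The same 20% to 40% example works out as follows:

```python
# Headroom lift = absolute lift / (1 - baseline positive response rate).
def headroom_lift(exposed_rate: float, baseline_rate: float) -> float:
    """Absolute lift as a fraction of the baseline group's remaining headroom."""
    return (exposed_rate - baseline_rate) / (1 - baseline_rate)

# Baseline 20% positive, exposed 40% positive:
print(headroom_lift(0.40, 0.20))  # (0.40 - 0.20) / 0.80 = 0.25, i.e. 25%
```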
Relative Brand Lift
This describes the difference in positive responses to brand or product surveys between users who saw your ads, versus users who were stopped from viewing your ads. This difference is then divided by the number of positive responses from the group of users who didn’t view your ads. The result measures how much your ads influenced your audience’s positive perception of your brand. For example, an increase from 20% to 40% in the positive survey responses between the two surveyed groups represents a relative lift of 100%.
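Applying this formula to the 20% to 40% example:

```python
# Relative lift = (exposed positive rate - baseline positive rate) / baseline positive rate.
def relative_lift(exposed_rate: float, baseline_rate: float) -> float:
    """Absolute lift expressed relative to the baseline group's positive rate."""
    return (exposed_rate - baseline_rate) / baseline_rate

# Baseline 20% positive, exposed 40% positive:
print(relative_lift(0.40, 0.20))  # (0.40 - 0.20) / 0.20 = 1.0, i.e. 100%
```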
Since survey responses can’t be collected for the entire exposed and baseline groups, this data is calculated from the responses that have been collected, which gives you an estimated number within a certain range. Usually, the confidence interval is 90%, so you can expect that in 90% of the cases, the true lift number will be within that range (if you were to have reached everyone).
Baseline positive response rate
This defines how often users who were stopped from seeing your ads responded positively to your brand. Use this metric to better understand how positive responses to your brand were influenced by general media exposure and other factors, not by seeing the ads in your campaigns.
Exposed survey responses
This metric shows the number of survey responses from people who saw your ads.
Baseline survey responses
This metric describes the number of survey responses from people who were withheld from seeing your ads.
Exposed positive response rate
This defines how often users who saw your ads responded positively to your brand.
Confidence interval
When discussing lift metrics like absolute lift, the value usually referred to is the "point estimate", which is the most likely lift generated by the ad. However, in Google Ads you can also find a confidence interval for every Brand Lift metric: an estimated range in which your result could fall, defined by an upper and a lower bound, the highest and lowest values where your lift is likely to actually be. Lift results use 80% 2-sided confidence intervals, which means there's an 80% chance that the true lift is between the lower bound and the upper bound. This also means there's a 90% chance that the lift is greater than the lower bound. For example, your relative lift might be 35% (the point estimate) with a confidence interval from 30% to 40%: there's an 80% chance that the true lift is between 30% (the lower bound) and 40% (the upper bound), and a 90% chance that the lift is greater than 30%.
Certainty of lift
The certainty of lift is an important metric to understand the reliability of your lift results. It represents the likelihood that the measured lift is generated by your campaigns, and not due to chance. The certainty of lift is calculated as 1 - p-value and can sometimes be referred to as the "statistical significance" or the "confidence" of lift results. The p-value tells you how likely your lift results would be if the ads were actually ineffective. Thus, a high certainty, corresponding to a low p-value, indicates that results are unlikely to have happened purely by chance and is a strong indication that your ads generated lift. Learn more about certainty of lift.
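The relationship to the p-value is a simple complement; the p-value below is hypothetical, for illustration:

```python
# Certainty of lift = 1 - p-value. A hypothetical p-value is used here.
def certainty_of_lift(p_value: float) -> float:
    """Likelihood the measured lift was generated by the campaign, not chance."""
    return 1.0 - p_value

# A p-value of 0.05 corresponds to 95% certainty:
print(certainty_of_lift(0.05))  # 1 - 0.05 = 0.95
```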