

Conversion Rates

Jakob Nielsen

November 24, 2013

Summary: Increased conversion is one of the strongest ROI arguments for better user experience and more user research. Track over time, because it's a relative metric.

Defining Conversion Rates

Definition: The conversion rate is the percentage of users who take a desired action. The archetypical example of conversion rate is the percentage of website visitors who buy something on the site.

Example: An ecommerce site is visited by 100,000 people during the month of April. During that month, 2,000 users purchased something from the site. Thus, the site's conversion rate is 2,000/100,000 = 2%.

There is room to tighten the definition somewhat:

  • How do we count the baseline number of "users"? Only as unique visitors, or do we count a person for as many times as they visit during the measurement period? (If Bob visits 5 times and Alice visits once, does the site have 2 visitors or 6?) Either way of counting is appropriate, and you can pick whichever works best for your type of website as long as you're consistent and count the same way during all measurement periods.
  • How do we count users who take "desired actions"? That is, how do we count the conversion events? The same two options present themselves: count a specific person only once, no matter whether they buy once or several times during the period, or count each person as many times as they buy. (If Bob buys twice and Alice doesn't buy anything, did the site have one or two conversions?) It seems most appropriate to follow the same rule as determined for counting the baseline number of visitors, but either rule will work as long as you apply it consistently; both schemes are illustrated in the sketch after this list.
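
To make the two counting schemes concrete, here is a minimal sketch in Python (the event log and numbers are hypothetical, echoing the Bob-and-Alice example) that computes the conversion rate both ways:

    # Hypothetical event log for one measurement period.
    # Each entry: (user_id, event_type), where event_type is "visit" or "purchase".
    events = [
        ("bob", "visit"), ("bob", "visit"), ("bob", "visit"),
        ("bob", "visit"), ("bob", "visit"),
        ("bob", "purchase"), ("bob", "purchase"),
        ("alice", "visit"),
    ]

    def conversion_rate(events, unique=True):
        """Conversion rate for the period, counting each person once
        (unique=True) or once per visit/purchase (unique=False)."""
        visits = [u for u, e in events if e == "visit"]
        purchases = [u for u, e in events if e == "purchase"]
        if unique:
            return len(set(purchases)) / len(set(visits))
        return len(purchases) / len(visits)

    print(conversion_rate(events, unique=True))   # 1 buyer / 2 visitors = 0.5
    print(conversion_rate(events, unique=False))  # 2 purchases / 6 visits, roughly 0.33

Whichever scheme you pick, apply the same rule to both the numerator and the denominator in every measurement period, or the rates won't be comparable across periods.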

In this article, I mainly refer to "websites," but conversion rates can be measured for anything that has users and actions. Intranets, mobile apps, enterprise applications? All the same, in terms of being able to define and track conversion rates, though the exact conversion events of interest will obviously differ.

What's a Conversion Event?

While conversion rates are discussed most often for ecommerce sites, it's a concept that matters to everybody who cares about the value of design projects. Conversions don't have to be sales but can be any key performance indicator (KPI) that matters for your business. Examples include:

  • Buying something on an ecommerce site
  • Becoming a registered user
  • Allowing the site to store the user's credit-card information for easier checkout in the future
  • Signing up for a subscription (whether paid or free)
  • Downloading trial software, a whitepaper, or some other goodie that presumably will predispose people to progress in the sales funnel
  • Requesting more information about a consulting service or B2B product
  • Using a certain feature of an application — especially new or advanced features
  • Upgrading from one level of a service to a higher level — in this case, the baseline user count would only include those users who are already at the lower service level
  • Not just downloading a mobile app but actually using it; or continuing to use the app a week later
  • Spending a certain amount of time on the site or reading a certain number of articles
  • Returning to the site more than a certain number of times during the measurement period — in this case, it would make the most sense to define the user count as unique visitors
  • Anything else that can be unambiguously counted by a computer and that you want users to do

We can also count microconversions like simply clicking a link, watching a video, scrolling down past the page fold, or other secondary actions that may not be valuable in themselves but do indicate some level of engagement with the site. Such smaller actions can often be helpful for UX-oriented website analytics that attempt to track smaller design elements.

Why Conversion Rates Are Important

Of course, you want to track the absolute number of whatever user actions you value. But for the sake of managing your user-interface design and tracking the effectiveness of your UX efforts over time, the conversion rate is usually more important than the conversion count.

Even while keeping the design absolutely unchanged, the conversion count could explode if you run a strong advertising campaign that makes a lot of people interested in your product. Good job, marketing team. But since the increased site activity wasn't caused by any design changes, we can't quite give the same kudos to the design team.

The conversion rate measures what happens once people are at your website. Thus it's greatly impacted by the design and it’s a key parameter to track for assessing whether your UX strategy is working.

Lower and lower conversion rates? You must be doing something wrong with the design, even if great advertising campaigns keep driving lots of traffic.

Higher conversion rates? Now you can praise your designers. (Unless you're simply running a sale. Of course you can only ascribe improved business value to the design if other factors were unchanged.)

When Absolute Counts Beat Relative Measures

Even though it's usually better to track the ratio of users who convert, there are exceptions to this rule. (What did you expect? This is a user experience article, after all.)

If traffic is both highly variable and of widely varying quality, then ratios can become misleading.

A personal example: on April 1, 2013, I published a humorous article about usability for cats. That day the nngroup.com website got 4 times as much traffic as it does on a normal day. I assume the link had been forwarded among cat lovers, because most of these extra visitors didn’t buy a thing — sales remained at about the same absolute number as on an average day. In other words, our conversion rate plummeted to almost a quarter of its normal level.

Now, in any case one shouldn’t obsess over conversion rates to the extent of tracking them on a daily basis. But even for the full week, the conversion rate was about half of normal (because traffic was twice the norm). Was the site design suddenly half as good? Were our offerings suddenly being rejected by half the target audience? No, what happened was that there were a huge number of one-time visitors who wanted to read the cat article but who were not in the target audience for user-experience reports, courses, or consulting. (That’s all right — I don't mind giving these fine folks a free pageview.)

If you get a surge in traffic, consider the cause. Quite likely these new visitors are different from your normal users and won't convert at the same rate. Small fluctuations will be smoothed over in long-term statistics, but big ones need manual attention. One trick is to look at the absolute number of conversion events, and if that remains at the norm then it's likely that you got an inflow of people who are not in your target market.
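
This trick is easy to automate. Below is a minimal sketch, with hypothetical thresholds and data, that flags measurement periods where traffic surged well past the running norm while the absolute conversion count stayed flat:

    def flag_surges(periods, traffic_factor=2.0, conversion_tolerance=0.2):
        """Flag periods where visits far exceed the average of all prior
        periods while the absolute conversion count stays near that average,
        a hint that the extra visitors are outside the target audience.
        The thresholds are hypothetical, not a recommendation."""
        flagged = []
        for i, (visits, conversions) in enumerate(periods[1:], start=1):
            prior = periods[:i]
            norm_visits = sum(v for v, _ in prior) / len(prior)
            norm_conv = sum(c for _, c in prior) / len(prior)
            surge = visits > traffic_factor * norm_visits
            conv_flat = abs(conversions - norm_conv) <= conversion_tolerance * norm_conv
            if surge and conv_flat:
                flagged.append(i)
        return flagged

    # (visits, conversions) per week; week index 2 looks like the cat-article effect:
    weekly = [(10_000, 200), (11_000, 210), (40_000, 205), (10_500, 195)]
    print(flag_surges(weekly))  # [2], the surge week with flat conversions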

What Measurement Period Should Be Used?

If you want a single answer, then use a month as the period in which you measure the baseline user count and the number of conversion events.

Of course, there's no single answer to any such question. The real answer is that many different periods will work for different purposes. The key criteria for deciding on the period length are:

  • The measurement period should be short enough that you have time to track the conversion rate across multiple periods and still make an impact on the business. If you use a full year as the measurement period, you would get very solid numbers, but your company might be out of business before you could conclude anything to increase profitability.
  • The measurement period should be long enough that it's not susceptible to random fluctuations and also accommodates as many structural fluctuations as possible. For example, many B2B sites have substantially less traffic during weekends when their core users aren't in the office. If such sites tracked conversion rates on a daily basis, they would see big swings that had nothing to do with the site's real performance but were simply caused by the difference between weekdays and weekends. Using a full week as the measurement period would smooth over these irrelevant fluctuations.
  • The measurement period should align with your product-development cycles. For example, if you launch major design updates once a month, it would be problematic to use a full quarter as the measurement period: during each measurement period you would be measuring 3 different designs and thus you would not know which design changes truly made a difference to the conversion rate.

Almost no matter what measurement period you pick, you also have to consider seasonal variations that span the period. For example, most consumer sites experience increased sales during the December holiday shopping season, whereas we can tell you from experience that during those last hectic shopping days, activity on a site like our own falls to almost nothing.

For low-volume sites your measurement period needs to be long enough to achieve a decent level of statistical significance. If there's only a handful of conversion events within each period, then the estimated conversion rates will bounce all over the map for no real reason other than random fluctuations. Use standard statistical estimates of the confidence interval to make sure that you're actually measuring something real.
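
For that statistical check, one standard approach is a binomial confidence interval on the observed rate. A minimal sketch using the Wilson score interval (the counts are hypothetical):

    import math

    def wilson_interval(conversions, visitors, z=1.96):
        """95% Wilson score confidence interval for a conversion rate.
        More reliable than the naive normal approximation at small counts."""
        if visitors == 0:
            return (0.0, 0.0)
        p = conversions / visitors
        denom = 1 + z**2 / visitors
        center = (p + z**2 / (2 * visitors)) / denom
        margin = (z / denom) * math.sqrt(
            p * (1 - p) / visitors + z**2 / (4 * visitors**2))
        return (center - margin, center + margin)

    # 6 conversions out of 300 visitors: the interval is wide...
    print(wilson_interval(6, 300))       # roughly (0.9%, 4.3%)
    # ...while 200 out of 10,000 pins the rate down much more tightly.
    print(wilson_interval(200, 10_000))  # roughly (1.7%, 2.3%)

If the intervals for two periods overlap heavily, the apparent difference in their conversion rates may be nothing but noise.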

What's a Good Conversion Rate?

This is the question we get the most, but there is no single answer, other than to say that a good conversion rate for your site is one that's higher than what you had before. In other words, it's relative.

Some of the factors that impact conversion rate beyond the control of user-experience professionals:

  • The company's preexisting brand reputation: if people like the brand, conversion will be higher than if people hate the brand, even if everything else were to be held constant.
  • Price: cheap stuff is easier to sell than expensive stuff, so it's trivial to increase the conversion rate — have a sale.
  • Sales complexity: products that are impulse buys will tend to have higher conversion rates than complex services that require months of research and the approval of a committee before the contract is signed.
  • The required level of commitment: it's easier to get users to read 5 free articles than to get them to sign up for an email newsletter, because people don't feel that they need to commit to something simply to browse a website. Thus "read 5 articles" will tend to have a higher conversion rate than "subscribe to newsletter" even for the same website.

During the dot-com bubble around year 2000, ecommerce sites typically had average conversion rates around 1%. In 2013, ecommerce sites averaged around 3% conversion rates. This example also shows that expected conversion rates can vary over time, as users get more comfortable with taking your desired action.

Microconversions (users making incremental progress through the user interface) will hopefully have much higher conversion rates than the macroconversions (complete actions) I mainly discuss here.

Conversion vs. Usability Study Metrics

Depending on what you're counting, a good conversion rate is usually in the 1%–10% range. On the other hand, when we measure success rates in usability studies, websites often score around 80%. How to explain this wide disparity?

The difference is simple: in a usability study we ask the user to perform a task, so we measure whether it's possible for them to do so. Even if the price is too high, people will still "buy" when it's only a test because they engage according to the scenario. In other words, an 80% success rate means that 20% of the visitors are incapable of using the site. It doesn’t mean that 80% of prospects will become paying customers.

Another way of looking at these two different statistics is to consider a site with a 4% conversion rate and an 80% success rate in usability studies. If we could fix all the usability problems so that we didn’t turn away the 20% of prospects who currently fail, the site would have a conversion rate of 4% / 0.8 = 5%.
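
The arithmetic behind that estimate, as a small sketch (using the 4% and 80% figures from the example):

    def conversion_ceiling(conversion_rate, usability_success_rate):
        """Estimated conversion rate if usability problems stopped turning
        away the (1 - success_rate) share of prospects who currently fail."""
        return conversion_rate / usability_success_rate

    print(conversion_ceiling(0.04, 0.80))  # 0.05, i.e., 5%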

(The full-day course on Measuring User Experience goes into more detail on how to measure interactive behavior and how to relate behavioral metrics with business metrics.)

Should You Maximize the Conversion Rate?

This is a trick question, because the answer is no. You shouldn't maximize the conversion rate; you should optimize it. The difference is that sometimes it costs so much to increase conversions past a certain point that it's not worth doing.

Let's consider just one parameter: price. Depending on customers' price elasticity, the number of purchases may go up more or less as you drop the price:

  • Highly elastic price sensitivity means that most customers will only convert at dirt-cheap prices.
  • Inelastic price sensitivity means that many customers will continue to convert even at substantially increased prices.

Let’s say that you increase the price by 10%. If the number of sales drops by 10% you're about even in total revenue. If customers have high price elasticity, sales may drop by, say, 20%, and your revenue would be down. In the case of inelastic customers, sales might only drop by 5% and revenue would be up.
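
A sketch of that revenue arithmetic (the percentages are the ones from the example):

    def revenue_change(price_change, sales_change):
        """Relative revenue after a price change and the resulting change
        in unit sales, e.g. +10% price and -10% sales -> 1.1 * 0.9 = 0.99."""
        return (1 + price_change) * (1 + sales_change)

    print(revenue_change(0.10, -0.10))  # roughly 0.99: about even (1% down)
    print(revenue_change(0.10, -0.20))  # roughly 0.88: elastic customers, revenue down
    print(revenue_change(0.10, -0.05))  # roughly 1.045: inelastic customers, revenue up

Note that a matching 10% drop in sales actually leaves revenue about 1% lower (1.1 × 0.9 = 0.99), which is why "about even" is the honest summary.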

If your customers have low price elasticity, it might be more profitable to accept the slightly lower conversion rate that accompanies a beefy price increase.

(Lots of other business considerations apply as well and are beyond the scope of this article, including your marginal cost of serving incremental customers.)

Conversion Rate and User Experience

It's important to track conversion rates and align them with design changes to justify the cost of an organization's user-experience investment. As mentioned above, there are many non-UX parameters that impact conversion, but the actual design has a huge impact.

As a simple example, in our work on usability ROI, we have seen countless cases of hugely increased conversion rates for registration forms every time a form was simplified.

It’s a very safe bet to assume that removing any question from a form will result in a higher completion rate for the form and thus a higher conversion rate for the associated action. In real life, there's a reason for every extra question on the form: somebody thinks that it would be "interesting" to collect that data. If your only counterargument is that it’s well known that forms usability suffers as the form balloons, you may lose in the design meeting. On the other hand, if you put both versions online in an A/B test and measure the conversion rates for the shorter form vs. the longer form, you'll know exactly how much it costs the business to ask that extra question.
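
Before trusting the A/B result, it's worth checking that the measured difference between the two forms is larger than random noise. A minimal sketch of a two-proportion z-test (the visitor counts are hypothetical, chosen to match the sample calculation below):

    import math

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """z-statistic for the difference between two conversion rates.
        |z| > 1.96 indicates significance at the 95% level."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return (p_b - p_a) / se

    # Long form: 10% of 50,000 visitors; short form: 11% of 50,000 visitors.
    z = two_proportion_z(5_000, 50_000, 5_500, 50_000)
    print(z)  # about 5.2, far beyond 1.96, so the difference is real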

Sample calculation:

  • The conversion rate for the original form is 10%, and removing one question changes the conversion rate to 11%.
  • The baseline number of users who get to that form is 100,000 people per year.
  • Thus, removing the question causes 1,000 more people to complete the form.
  • If the average business value of each completion is 20,ドル then asking that extra question costs the company 20,000ドル per year.

Now what's the value of having that "interesting" data from the additional question? Maybe the company can make 100,000ドル based on that deeper knowledge about customers. If so, do keep the extra question and the less-usable form. But, in most cases we know of, the value is closer to zero because the "interesting" data simply sits in a report and is never acted upon. If so, clean up the form and watch conversion rates grow.
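
The tradeoff from the sample calculation, expressed as a sketch (all numbers are the hypothetical ones from above):

    def cost_of_extra_question(baseline_users, rate_without, rate_with,
                               value_per_completion):
        """Annual business value lost by keeping an extra form question,
        given the conversion rates measured with and without it."""
        lost_completions = baseline_users * (rate_without - rate_with)
        return lost_completions * value_per_completion

    cost = cost_of_extra_question(100_000, 0.11, 0.10, 20)
    print(round(cost))  # 20000 dollars per year
    data_value = 0  # in most cases the "interesting" data is never acted upon
    print("keep the question" if data_value > cost else "clean up the form")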

(These types of design tradeoffs are discussed further in our course on Analytics and User Experience.)
