
AB testing calculator

Are your results statistically significant?

[Interactive calculator: enter the conversion rates for variant A and variant B to check whether the difference between them is statistically significant. In this example, variant A converts at 1.00% and variant B at 1.14%.]

Example result: Variant B’s conversion rate (1.14%) was 14% higher than variant A’s conversion rate (1.00%). You can be 95% confident that variant B will perform better than variant A.

A two-sided test accounts for the possibility that your variant could have a negative impact on your result. The significance level is the level of confidence you can have that your results are not due to random chance.

In the context of AB testing experiments, statistical significance is how likely it is that the difference between your experiment’s control version and test version isn’t due to error or random chance.

For example, if you run a test with a 95% significance level, you can be 95% confident that the differences are real.

In business, it’s commonly used to measure how experiments affect conversion rates. In surveys, statistical significance helps confirm that you can trust your results. For example, if you asked people whether they preferred ad concept A or ad concept B in a survey, you’d want to make sure the difference in their answers was statistically significant before deciding which one to use.
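As a rough illustration of how a significance level turns into a decision rule: at a 95% significance level, you treat a result as significant when its p-value (explained further below) falls under 0.05. The short Python sketch below is purely illustrative; the function and example values are not part of SurveyMonkey’s product.

def is_significant(p_value, confidence_level=0.95):
    # alpha is the false-positive rate you are willing to accept.
    alpha = 1.0 - confidence_level
    return p_value < alpha

print(is_significant(0.03))        # True: significant at the 95% level
print(is_significant(0.03, 0.99))  # False: not significant at the 99% level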


The first step is to form a hypothesis. For any experiment there is a null hypothesis, which states that there’s no relationship between the two things you’re comparing, and an alternative hypothesis, which states that a relationship does exist and is the claim you’re trying to support. In conversion-rate AB testing, your hypothesis might be that adding a button, image, or new copy to a page will change its conversion rate. When you’re using surveys for concept testing, as in the example above, your hypothesis might be that people find one ad variant more appealing than the others.

After formulating null and alternative hypotheses, statisticians run tests to see which one the data supports. A z-score measures how far your observed result falls from what the null hypothesis predicts, in units of standard error. A p-value is the probability of seeing a difference at least as large as the one you observed if the null hypothesis were true; the smaller the p-value, the stronger your evidence that the alternative hypothesis is true.
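To make the z-score and p-value concrete, here is a minimal sketch of a two-sided, two-proportion z-test in Python. It reuses the 1.00% and 1.14% conversion rates from the calculator example above, but the sample sizes (100,000 visitors per variant) are assumed for illustration; this is not SurveyMonkey’s implementation.

from math import sqrt, erf

def normal_cdf(x):
    # Cumulative distribution function of the standard normal distribution.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    # Two-sided, two-proportion z-test; returns (z_score, p_value).
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of "no difference".
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    standard_error = sqrt(p_pool * (1.0 - p_pool) * (1.0 / visitors_a + 1.0 / visitors_b))
    z = (p_b - p_a) / standard_error
    p_value = 2.0 * (1.0 - normal_cdf(abs(z)))
    return z, p_value

# Hypothetical sample sizes; 1.00% vs 1.14% conversion rates as in the example above.
z, p = two_proportion_z_test(1_000, 100_000, 1_140, 100_000)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # p is well below 0.05: significant at the 95% level

With these assumed sample sizes, the 14% relative lift is statistically significant; with far fewer visitors, the same lift might not be.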

When running statistical significance tests, it’s useful to decide whether your test will be one-sided or two-sided (sometimes called one-tailed or two-tailed). A one-sided test assumes the effect can only run in one direction, for example that the variant can only improve your conversion rate, while a two-sided test also accounts for the possibility that the variant hurts your results. Generally, a two-sided test is the more conservative choice.
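For a z-test, the two are related in a simple way: the one-sided p-value (in the direction of the observed effect) is half the two-sided p-value, which is why a result can clear the 95% bar one-sided but miss it two-sided. A small, self-contained sketch with a hypothetical z-score:

from math import sqrt, erf

def normal_cdf(x):
    # Cumulative distribution function of the standard normal distribution.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

z = 1.80  # hypothetical z-score from an A/B test
p_one_sided = 1.0 - normal_cdf(abs(z))
p_two_sided = 2.0 * (1.0 - normal_cdf(abs(z)))

print(f"one-sided p = {p_one_sided:.4f}")  # about 0.036 -> significant at the 95% level
print(f"two-sided p = {p_two_sided:.4f}")  # about 0.072 -> not significant at the 95% level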

Even professional statisticians use statistical modeling software to calculate significance and the tests that back it up, so we won’t delve too deeply into it here. However, if you’re running an AB test, you can use the calculator at the top of the page to calculate the statistical significance of your results. If you’re trying to calculate the significance of your survey results, SurveyMonkey can do it for you automatically.
