What is statistical significance?

Significance in statistics is an important term when it comes to interpreting the results of statistical tests. It enables one to conclude that a result is not merely due to chance. To identify whether the result of a test is statistically significant, we often compare the significance level alpha with the p-value. We may also compare the critical value with the test statistic. Definitions are given in the sections below.

What is a statistical test?

A statistical test is a way to evaluate the evidence that the data, through a sample, provides against a hypothesis that we try to reject, called the null hypothesis. This hypothesis, often referred to as H0, implies that the data are generated by random processes alone. The null hypothesis is usually opposed to an alternative hypothesis, referred to as H1 or Ha, which is the hypothesis we are trying to prove by rejecting H0.

If the data does not provide enough evidence against the status quo (H0), then the null hypothesis is not rejected. On the other hand, if the data shows strong evidence against H0, then the null hypothesis is rejected and the alternative hypothesis is considered true, with a quantified risk of being wrong.

What is statistical significance?

A test is said to be statistically significant when the quantified risk of being wrong, also called the p-value, is lower than a given significance level, called alpha. To be more specific, the p-value is the probability, under the null hypothesis, of obtaining data at least as extreme as the data observed. The significance level acts as a risk threshold used to decide whether or not to reject the null hypothesis. This threshold must be chosen before interpreting the result of the test, and the conclusion may vary depending on the value chosen for alpha.

For example, a p-value of 0.02 would lead us to reject H0 when using a significance level alpha of 0.05. In other words, we would reject the null hypothesis of the test with a quantified risk of making an error lower than 5%. The interpretation would be different with a significance level of 0.01: in that case, we would not reject the null hypothesis.
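This decision rule can be written in a few lines of code. Below is a minimal Python sketch (using SciPy rather than XLSTAT) that applies the rule, first to the p-value of 0.02 from the example above and then to a p-value computed from a one-sample t-test on made-up sample values:

```python
from scipy import stats

def interpret(p_value, alpha):
    """Decision rule: reject H0 when the p-value is lower than alpha."""
    if p_value < alpha:
        return f"p-value {p_value:.3f} < alpha {alpha}: reject H0"
    return f"p-value {p_value:.3f} >= alpha {alpha}: do not reject H0"

# The example above: the same p-value of 0.02 leads to different conclusions
# depending on the significance level chosen beforehand.
print(interpret(0.02, alpha=0.05))  # reject H0
print(interpret(0.02, alpha=0.01))  # do not reject H0

# In practice the p-value comes from a test, e.g. a one-sample t-test of
# H0: population mean = 5.0 (the sample values below are hypothetical).
sample = [5.1, 4.8, 5.6, 5.3, 4.9, 5.7, 5.2, 5.4]
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
print(interpret(p_value, alpha=0.05))
```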

How to define the significance level in XLSTAT?

It is possible to set a significance level alpha in the various XLSTAT dialog boxes so that the software can automatically interpret the p-value of the test.

The option to set the significance level is usually available in the Options tab of the dialog box. The default value is 5%, but any value between 1 and 100 can be entered. See an example below for the one-sample variance test:

[Screenshot StatSigni1.PNG: significance level option in the one-sample variance test dialog box]

XLSTAT offers the possibility to display the test interpretation in the output sheet, just after the results table.

[Screenshot StatSigni2.PNG: test interpretation displayed in the output sheet]

The test interpretation helps us confirm whether or not we can reject the null hypothesis for a given alpha value.
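For readers who want to see what such a test computes outside XLSTAT, here is a minimal Python sketch of the classical chi-square test on a single variance, using a hypothetical sample and hypothesized variance; XLSTAT's own implementation and options (for example one-tailed alternatives) are not reproduced here.

```python
import numpy as np
from scipy import stats

def one_sample_variance_test(sample, sigma0_sq, alpha=0.05):
    """Chi-square test of H0: population variance = sigma0_sq (two-tailed)."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    s_sq = sample.var(ddof=1)                # unbiased sample variance
    chi2_stat = (n - 1) * s_sq / sigma0_sq   # test statistic under H0
    dist = stats.chi2(df=n - 1)
    # Two-tailed p-value: twice the smaller tail area, capped at 1.
    p_value = min(1.0, 2 * min(dist.cdf(chi2_stat), dist.sf(chi2_stat)))
    return chi2_stat, p_value, p_value < alpha

# Hypothetical data and hypothesized variance (illustrative values only).
stat, p, reject = one_sample_variance_test(
    [9.8, 10.4, 10.1, 9.5, 10.9, 10.2, 9.7, 10.6], sigma0_sq=0.10, alpha=0.05
)
print(f"chi2 = {stat:.2f}, p-value = {p:.4f}, reject H0: {reject}")
```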

Go further

Choice of statistical test

Read our guide Which statistical test should you use to learn how to choose the right statistical test for your analysis (e.g., linear regression, logistic regression, ANOVA) according to your question, the type of your variables (e.g., categorical, binary, continuous) and the distribution of your data.

Correction of p-values

Pairwise multiple comparison tests usually involve computing a p-value for each pair of compared levels. The more pairs we wish to compare, the more p-values are computed, and the greater the risk of detecting effects that are not significant in reality. With a significance level alpha of 5%, we would expect to find about 5 significant p-values by chance alone out of 100 computed p-values. Consequently, multiple pairwise comparison tools apply p-value corrections: p-values are penalized (i.e., their value is increased) as their number grows. Read more about multiple comparison tests.
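As an illustration of this penalization, here is a minimal Python sketch of the Bonferroni correction, one standard correction among several (the specific corrections offered by XLSTAT are not listed here); the raw p-values are hypothetical.

```python
# Hypothetical raw p-values from pairwise comparisons (illustrative only).
raw_p_values = [0.003, 0.020, 0.045, 0.120, 0.300]

# Bonferroni correction: each p-value is multiplied by the number of tests,
# so corrected p-values grow (are penalized) as the number of comparisons grows.
m = len(raw_p_values)
corrected = [min(1.0, p * m) for p in raw_p_values]

alpha = 0.05
for p, p_adj in zip(raw_p_values, corrected):
    decision = "significant" if p_adj < alpha else "not significant"
    print(f"raw p = {p:.3f} -> corrected p = {p_adj:.3f} ({decision})")
```

Other corrections, such as Holm or false-discovery-rate procedures, follow the same idea while being less conservative than Bonferroni.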
