Statistical Tests Selection
Selecting the proper statistical test is essential for analysing the data retrieved from an experiment. Depending on the type of data points, different tests can be more or less misleading or truthful. The division presented here is intended to make the selection process easier and more accurate.
The mean and expected value are used synonymously to refer to one measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution.
- Implementation in R:
mean(x)
- Implementation in Python:
numpy.mean(a, axis=None, dtype=None, out=None, keepdims=<no value>)
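For example, a minimal usage sketch in Python (the sample values are made up for illustration):
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])
np.mean(x)   # arithmetic mean, here 5.0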
The median is the value separating the higher half of a data sample, a population, or a probability distribution, from the lower half.
- Implementation in R:
median(x)
- Implementation in Python:
numpy.median(a, axis=None, out=None, overwrite_input=False, keepdims=False)
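A minimal usage sketch in Python (made-up values, chosen to show the median's robustness to an outlier):
import numpy as np

x = np.array([1.0, 3.0, 5.0, 7.0, 100.0])
np.median(x)   # 5.0, unaffected by the extreme value 100.0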
The standard deviation is a measure used to quantify the amount of variation or dispersion of a set of data values.
- Implementation in R:
sd(x)
- Implementation in Python:
numpy.std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=<no value>)
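A minimal usage sketch in Python (made-up values). Note that numpy.std defaults to the population formula (ddof=0), whereas R's sd(x) uses the sample formula with n - 1 in the denominator:
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
np.std(x)           # population standard deviation, here 2.0
np.std(x, ddof=1)   # sample standard deviation, matching R's sd(x)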
The Kaplan–Meier estimator is a non-parametric statistical method used to estimate the survival function from lifetime data.
- The event status should consist of two mutually exclusive and collectively exhaustive states: "censored" or "event".
- The time to an event or censorship (known as the "survival time") should be clearly defined and precisely measured.
- Where possible, left-censoring should be minimized or avoided.
- There should be independence of censoring and the event.
- There should be no secular trends (also known as secular changes).
- There should be a similar amount and pattern of censorship per group.
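A minimal sketch in Python, assuming the third-party lifelines package is available (the data below are made up for illustration):
from lifelines import KaplanMeierFitter

durations = [5, 6, 6, 2, 4, 4]        # made-up survival times
event_observed = [1, 0, 0, 1, 1, 1]   # 1 = event occurred, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=event_observed)
print(kmf.survival_function_)         # estimated survival curve over time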
The Kolmogorov–Smirnov test is a nonparametric statistical method for testing the equality of continuous, one-dimensional probability distributions; it can be used to compare a sample with a reference probability distribution, or to compare two samples.
- The sample is a random sample.
- The theoretical distribution must be fully specified.
- The theoretical distribution is assumed to be continuous.
- The sample distribution is assumed to have no ties.
- Implementation in R:
ks.test(x, y)
- Implementation in Python:
scipy.stats.kstest(rvs, cdf, args=(), N=20, alternative='two-sided', mode='approx')
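A minimal usage sketch in Python (the sample is randomly generated for illustration):
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
sample = rng.normal(loc=0.0, scale=1.0, size=100)

# one-sample test against a fully specified reference distribution (standard normal)
stats.kstest(sample, 'norm')
# two-sample variant, comparing two empirical samples
stats.ks_2samp(sample, rng.uniform(-1.0, 1.0, size=100))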
The Kruskal–Wallis test is a non-parametric statistical method for testing whether samples originate from the same distribution.
- The samples drawn from the population are random.
- The observations are independent of each other.
- The measurement scale for the dependent variable should be at least ordinal.
- Implementation in R:
kruskal.test(list(g1=a, g2=b, g3=c, g4=d))
- Implementation in Python:
scipy.stats.kruskal(*args, **kwargs)
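A minimal usage sketch in Python (the group measurements are made up for illustration):
from scipy import stats

g1 = [2.9, 3.0, 2.5, 2.6, 3.2]        # made-up independent groups
g2 = [3.8, 2.7, 4.0, 2.4]
g3 = [2.8, 3.4, 3.7, 2.2, 2.0]

stat, p = stats.kruskal(g1, g2, g3)   # small p suggests the groups differ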
The Pearson correlation is a statistical method used for testing the linear correlation between two variables X and Y.
- The data sets to be correlated should approximate the normal distribution.
- The data should be homoscedastic: the points lie equally on both sides of the line of best fit.
- The data follows a linear relationship.
- The data is continuous.
- Data are paired and come from the same population.
- The data must not contain outliers.
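A minimal usage sketch in Python with scipy.stats.pearsonr (made-up, roughly linear data; in R, cor(x, y) uses the Pearson method by default):
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])   # roughly linear in x

r, p = stats.pearsonr(x, y)               # r close to +1 for this data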
The Spearman correlation is a nonparametric statistical method used for testing the rank correlation (statistical dependence between the rankings of two variables).
- The data must be at least ordinal (ordinal, interval, or ratio scale).
- Data are paired and come from the same population.
- The relationship between the two variables should be monotonic.
- Implementation in R:
cor(df,method="spearman")
- Implementation in Python:
scipy.stats.spearmanr(a, b=None, axis=0)
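A minimal usage sketch in Python (made-up data, monotonic but nonlinear, so the rank correlation is perfect):
from scipy import stats

x = [10, 20, 30, 40, 50]
y = [1, 4, 9, 16, 25]                # y increases monotonically with x

rho, p = stats.spearmanr(x, y)       # rho = 1.0 for perfectly monotonic data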
The one-sample t-test is a statistical method used for testing the null hypothesis that the population mean is equal to a specified value mu_0.
- The dependent variable must be continuous (interval/ratio).
- The observations are independent of one another.
- The dependent variable should be approximately normally distributed.
- The dependent variable should not contain any outliers.
- Implementation in R:
t.test(a, mu=mu_0)
- Implementation in Python:
scipy.stats.ttest_1samp(a, popmean, axis=0)
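A minimal usage sketch in Python (the sample and mu_0 are made up for illustration):
from scipy import stats

a = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2]            # made-up measurements
mu_0 = 5.0                                     # hypothesised population mean

t_stat, p = stats.ttest_1samp(a, popmean=mu_0)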
The unpaired (two-sample) t-test is a statistical method used for testing the null hypothesis that the means of two populations are equal.
- The populations from which the samples have been drawn should be normal; appropriate statistical methods exist for testing this assumption (e.g. the Kolmogorov–Smirnov non-parametric test).
- The standard deviations of the populations are unknown but assumed equal; the equality of variances can be tested with the F-test.
- Samples have to be randomly drawn independent of each other.
- Implementation in R:
t.test(a,b, var.equal=TRUE, paired=FALSE)
- Implementation in Python:
scipy.stats.ttest_ind(a, b, axis=0, equal_var=True, nan_policy='propagate')
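A minimal usage sketch in Python (two made-up independent groups):
from scipy import stats

a = [20.1, 19.8, 21.2, 20.5, 19.9]            # group A
b = [22.3, 21.8, 22.9, 21.5, 22.0]            # group B

t_stat, p = stats.ttest_ind(a, b, equal_var=True)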
The paired t-test is a statistical method used for testing the null hypothesis that the difference between two responses measured on the same statistical unit has a mean value of zero.
- The dependent variable must be continuous (interval/ratio).
- The observations are independent of one another.
- The dependent variable should be approximately normally distributed.
- The dependent variable should not contain any outliers.
- Implementation in R:
t.test(a,b, paired=TRUE)
- Implementation in Python:
scipy.stats.ttest_rel(a, b, axis=0, nan_policy='propagate')
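A minimal usage sketch in Python (made-up before/after measurements on the same units):
from scipy import stats

before = [120, 132, 118, 125, 140]
after = [115, 128, 119, 121, 135]

t_stat, p = stats.ttest_rel(before, after)   # tests mean difference = 0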
The t-test for a regression slope is a statistical method used for testing whether the slope of a simple linear regression line differs significantly from 0.
- The observations are independent of one another.
- The dependent variable should be approximately normally distributed.
- The dependent variable should not contain any outliers.
- The data is continuous.
- The residuals should have approximately constant variance (homoscedasticity).
- Implementation in R:
summary(lm(y ~ x))
- Implementation in Python:
scipy.stats.linregress(x, y)
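A minimal usage sketch in Python with scipy.stats.linregress (made-up data); the returned p-value refers to the null hypothesis that the slope is 0:
from scipy import stats

x = [1, 2, 3, 4, 5, 6]                       # made-up predictor
y = [2.3, 4.1, 6.2, 7.8, 10.1, 12.0]         # made-up response

result = stats.linregress(x, y)
result.slope, result.pvalue                  # slope estimate and its p-value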
The Wilcoxon signed-rank test is a non-parametric statistical method used to compare two related samples, matched samples, or repeated measurements on a single sample to assess whether their population mean ranks differ.
- Data are paired and come from the same population.
- Each pair is chosen randomly and independently.
- The data are measured on at least an interval scale when, as is usual, within-pair differences are calculated to perform the test (though it does suffice that within-pair comparisons are on an ordinal scale).
- Implementation in R:
wilcox.test(a,b, paired=TRUE)
- Implementation in Python:
scipy.stats.wilcoxon(x, y=None, zero_method='wilcox', correction=False)
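A minimal usage sketch in Python (made-up paired measurements):
from scipy import stats

before = [125, 115, 130, 140, 142, 115, 140, 125]
after = [110, 122, 125, 120, 140, 124, 123, 137]

stat, p = stats.wilcoxon(before, after)      # signed-rank test on the pairs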
[1] Table summary is extracted from https://www.graphpad.com/support/faqid/1790/
For more information about the Triangle of Life concept visit http://evosphere.eu/.
_________________
/ Premature \
| optimization |
| is the root of |
| all evil. |
| |
\ -- D.E. Knuth /
-----------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||