58. Statistics: Chi-squared test

See https://en.wikipedia.org/wiki/Chi-squared_test

Without other qualification, ‘chi-squared test’ is often used as shorthand for Pearson’s chi-squared test. The chi-squared test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories.

Example chi-squared test for categorical data:
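A minimal sketch of such a test with scipy's `chi2_contingency`; the contingency table below is illustrative, not data from the original post:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative 2x3 contingency table of observed frequencies
# (rows = groups, columns = categories)
observed = np.array([[30, 14, 34],
                     [45, 22, 25]])

# chi2_contingency returns the test statistic, the p-value,
# the degrees of freedom and the table of expected frequencies
chi2, p, dof, expected = chi2_contingency(observed)

print('Chi-squared statistic: %.3f' % chi2)
print('P-value: %.3f' % p)
print('Degrees of freedom:', dof)
print('Expected frequencies:\n', expected)
```

A p-value below the chosen significance level (commonly 0.05) would indicate that observed and expected frequencies differ more than chance alone would explain.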

59. Statistics: Fisher’s exact test

Fisher’s exact test is similar to the chi-squared test, but is suitable for small sample sizes. As a rule of thumb it should be used if at least 20% of the expected frequencies are less than 5, or if any expected frequency is zero. Although in practice it is employed when sample sizes are small, it is valid for all sample sizes.

For example, consider a group of 16 people who may choose tennis or football. The group contains six boys and ten girls. The tennis group has one boy and eight girls; the football group has five boys and two girls. Is the proportion of boys and girls significantly different between the two sports?
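The example above can be tested with scipy's `fisher_exact`, arranging the counts as a 2×2 contingency table:

```python
from scipy.stats import fisher_exact

# 2x2 contingency table from the example:
#            boys  girls
# tennis        1      8
# football      5      2
table = [[1, 8],
         [5, 2]]

# fisher_exact returns the sample odds ratio and the
# (by default two-sided) p-value
odds_ratio, p = fisher_exact(table)

print('Odds ratio: %.3f' % odds_ratio)
print('P-value: %.3f' % p)
```

Here the p-value falls below 0.05, suggesting the sex balance does differ between the two sports.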

56. Statistics: Multiple comparison of non-normally distributed data with the Kruskal-Wallis test

For data that is not normally distributed, the equivalent of the ANOVA test (which assumes normally distributed data) is the Kruskal-Wallis test. This tests whether all groups are likely to come from the same population.
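A minimal sketch using scipy's `kruskal`, with illustrative data (not from the original post):

```python
from scipy.stats import kruskal

# Three illustrative samples; group_2 is clearly shifted upwards
group_1 = [12.1, 14.3, 11.8, 13.5, 12.9]
group_2 = [15.2, 16.8, 14.9, 17.1, 15.5]
group_3 = [12.5, 13.1, 12.0, 14.2, 13.8]

# kruskal returns the H statistic and a p-value; a small p-value
# suggests at least one group comes from a different population
h, p = kruskal(group_1, group_2, group_3)

print('H statistic: %.3f' % h)
print('P-value: %.3f' % p)
```

Note that, like ANOVA, a significant result says only that the groups are not all from the same population; it does not identify which groups differ.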

55. Statistics: Multi-comparison with Tukey’s test and the Holm-Bonferroni method

If an ANOVA test has indicated that not all groups belong to the same population, then post-hoc methods may be used to identify which groups are significantly different from each other.

Below are two commonly used methods: Tukey’s and Holm-Bonferroni.

These two methods assume that data is approximately normally distributed.
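Tukey's test is available ready-made (e.g. `pairwise_tukeyhsd` in statsmodels, or `tukey_hsd` in recent SciPy versions). The Holm-Bonferroni step-down correction is simple enough to sketch by hand over pairwise t-tests; the data and group names below are illustrative:

```python
import numpy as np
from scipy import stats

# Three illustrative, approximately normal groups
rng = np.random.default_rng(42)
groups = {'a': rng.normal(10, 2, 30),
          'b': rng.normal(12, 2, 30),
          'c': rng.normal(10.5, 2, 30)}
pairs = [('a', 'b'), ('a', 'c'), ('b', 'c')]

# Raw p-values from pairwise t-tests
p_values = [stats.ttest_ind(groups[x], groups[y]).pvalue for x, y in pairs]

# Holm-Bonferroni step-down: sort p-values ascending, multiply the
# k-th smallest (0-indexed) by (m - k), and enforce monotonicity
m = len(p_values)
order = np.argsort(p_values)
adjusted = np.empty(m)
running_max = 0.0
for k, idx in enumerate(order):
    running_max = max(running_max, (m - k) * p_values[idx])
    adjusted[idx] = min(running_max, 1.0)

for (x, y), p_raw, p_adj in zip(pairs, p_values, adjusted):
    print('%s vs %s: raw p = %.4f, Holm-adjusted p = %.4f'
          % (x, y, p_raw, p_adj))
```

Holm's procedure is uniformly more powerful than the plain Bonferroni correction while still controlling the family-wise error rate.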

47. Linear regression with scipy.stats

%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Set up x and y arrays

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
y = np.array([2.3, 4.5, 5.0, 8, 11.1, 10.9, 13.9, 15.4, 18.2, 19.5])
y = y + 10

# scipy linear regression

gradient, intercept, r_value, p_value, std_err = stats.linregress(x, y)

# Calculate fitted y

y_fit = intercept + gradient * x

# Plot data

plt.plot(x, y, 'o', label='original data')
plt.plot(x, y_fit, 'r', label='fitted line')

# Add text box and legend

text = 'Intercept: %.1f\nSlope: %.2f\nR-squared: %.3f' % (intercept, gradient, r_value ** 2)
plt.text(6,15,text)
plt.legend()

# Display plot

plt.show()

[plot_19: Linear regression with scipy.stats]