Module stikpetP.tests.test_wald_os

import pandas as pd
from statistics import NormalDist

def ts_wald_os(data, p0=0.5, p0Cat=None, codes=None, cc=None):
    '''
    One-sample Wald Test
    --------------------
     
    A one-sample Wald test can be used with binary data, to test if the two categories have significantly different proportions. It is an approximation of the binomial test that uses the standard normal distribution. Since the binomial distribution is discrete while the normal distribution is continuous, a so-called continuity correction can be applied.
    
    The null hypothesis is usually that the proportions of the two categories in the population are equal (i.e. 0.5 for each). If the p-value of the test is below the pre-defined alpha level (usually 5% = 0.05), the null hypothesis is rejected and the two categories are considered to differ significantly in proportion.
    
    The input for the function doesn't have to be a binary variable. A nominal variable can also be used, provided the two categories to compare are indicated.
    
    The significance (p-value) is the probability of obtaining a result as extreme as, or more extreme than, the one in the sample, if the null hypothesis is true.
    
    This function is shown in this [YouTube video](https://youtu.be/06q7qlTOs-s) and the test is also described at [PeterStatistics.com](https://peterstatistics.com/Terms/Tests/proportion-one-sample.html).
    
    Parameters
    ----------
    data : list or pandas data series 
        the data
    p0 : float, optional 
        hypothesized proportion for the first category (default is 0.5)
    p0Cat : optional
        the category to which p0 applies
    codes : list, optional 
        the two codes to use
    cc : {None, "yates"}, optional
        continuity correction to use. Default is None
        
    Returns
    -------
    pandas.DataFrame
        A dataframe with the following columns:
    
        - *n* : the sample size
        - *statistic* : test statistic (z-value)
        - *p-value (2-sided)* : two-sided significance (p-value)
        - *test* : test used
   
    Notes
    -----
    This test differs from the one-sample score test in the calculation of the standard error. For the ‘regular’ version this is based on the expected proportion, while for the Wald version it is done with the observed proportion.

    To decide which category is associated with p0 the following is used:
    * If codes is provided, the first code is assumed to be the category for p0.
    * If p0Cat is specified, that category is used for p0 and all other categories are treated as category 2. This means that if there are more than two categories, all categories besides p0Cat are merged into one large category.
    * If neither codes nor p0Cat is specified and the data has more than two categories, a warning is printed and no results are returned.
    * If neither codes nor p0Cat is specified and there are exactly two categories, p0 is assumed to apply to the category that best matches the p0 value (i.e. if p0 is above 0.5, the category with the highest count is assumed to be the one for p0).
    
    The formula used (Wald, 1943):
    $$z=\\frac{x - \\mu}{SE}$$
    
    With:
    $$\\mu = n\\times p_0$$
    $$SE = \\sqrt{x\\times\\left(1 - \\frac{x}{n}\\right)}$$
    
    *Symbols used:*
    
    * $x$ is the number of successes in the sample
    * $n$ is the sample size
    * $p_0$ the expected proportion (i.e. the proportion according to the null hypothesis)
    
    If the Yates continuity correction is used the formula changes to (Yates, 1934, p. 222):
    $$z_{Yates} = \\frac{\\left|x - \\mu\\right| - 0.5}{SE}$$
    
    Note that in this implementation the correction is applied to the count itself: the count used is increased by 0.5 before computing both the proportion and the standard error, which can give slightly different results than correcting only the numerator.
    
    The formula used in the calculation is the one from IBM (2021, p. 997). IBM refers to Agresti, most likely Agresti (2013, p. 10), who in turn refers to Wald (1943).
    
    Before, After and Alternatives
    ------------------------------
    Before running the test you might first want to get an impression using a frequency table:
    [tab_frequency](../other/table_frequency.html#tab_frequency)

    After the test you might want an effect size measure:
    * [es_cohen_g](../effect_sizes/eff_size_cohen_g.html#es_cohen_g) for Cohen g
    * [es_cohen_h_os](../effect_sizes/eff_size_cohen_h_os.html#es_cohen_h_os) for Cohen h'
    * [es_alt_ratio](../effect_sizes/eff_size_alt_ratio.html#es_alt_ratio) for Alternative Ratio
    * [r_rosenthal](../correlations/cor_rosenthal.html#r_rosenthal) for Rosenthal Correlation

    Alternatives for this test could be:
    * [ts_binomial_os](../tests/test_binomial_os.html#ts_binomial_os) for One-Sample Binomial Test
    * [ts_score_os](../tests/test_score_os.html#ts_score_os) for One-Sample Score Test
    
    References
    ----------
    Agresti, A. (2013). *Categorical data analysis* (3rd ed.). Wiley.
    
    IBM SPSS Statistics Algorithms. (2021). IBM.
    
    Wald, A. (1943). Tests of statistical hypotheses concerning several parameters when the number of observations is large. *Transactions of the American Mathematical Society, 54*(3), 426–482. doi:10.2307/1990256
    
    Yates, F. (1934). Contingency tables involving small numbers and the chi square test. *Supplement to the Journal of the Royal Statistical Society, 1*(2), 217–235. doi:10.2307/2983604

    Author
    ------
    Made by P. Stikker
    
    Companion website: https://PeterStatistics.com  
    YouTube channel: https://www.youtube.com/stikpet  
    Donations: https://www.patreon.com/bePatron?u=19398076
    
    Examples
    ---------
    >>> pd.set_option('display.width',1000)
    >>> pd.set_option('display.max_columns', 1000)
    
    Example 1: Numeric list
    >>> ex1 = [1, 1, 2, 1, 2, 1, 2, 1]
    >>> ts_wald_os(ex1)
       n  statistic  p-value (2-sided)                                 test
    0  8  -0.730297           0.465209  one-sample Wald (assuming p0 for 1)
    >>> ts_wald_os(ex1, p0=0.3)
       n  statistic  p-value (2-sided)                                 test
    0  8  -1.898772           0.057595  one-sample Wald (assuming p0 for 1)
    >>> ts_wald_os(ex1, p0=0.3, cc="yates")
       n  statistic  p-value (2-sided)                                                                  test
    0  8  -1.496663           0.134481  one-sample Wald with Yates continuity correction (assuming p0 for 1)
    
    Example 2: pandas Series
    >>> df1 = pd.read_csv('https://peterstatistics.com/Packages/ExampleData/GSS2012a.csv', sep=',', low_memory=False, storage_options={'User-Agent': 'Mozilla/5.0'})
    >>> ts_wald_os(df1['sex'])
          n  statistic  p-value (2-sided)                                      test
    0  1974  -4.570499           0.000005  one-sample Wald (assuming p0 for FEMALE)
    >>> ts_wald_os(df1['mar1'], codes=["DIVORCED", "NEVER MARRIED"])
         n  statistic  p-value (2-sided)                                    test
    0  709  -3.062068           0.002198  one-sample Wald (with p0 for DIVORCED)
    
    '''
    
    if isinstance(data, list):
        data = pd.Series(data)

    #remove missing values
    data = data.dropna()
        
    #Determine number of successes, failures, and total sample size
    if codes is None:
        #create a frequency table
        freq = data.value_counts()

        if p0Cat is None:
            #check if there were exactly two categories or not
            if len(freq) != 2:
                # unable to determine which category p0 would belong to, so print warning and end
                print("WARNING: data does not have two unique categories, please specify two categories using codes parameter")
                return
            else:
                #simply select the two categories as cat1 and cat2
                n1 = freq.values[0]
                n2 = freq.values[1]
                n = n1 + n2
                #determine p0 was for which category
                p0_cat = freq.index[0]
                if p0 > 0.5 and n1 < n2:
                    n1, n2 = n2, n1
                    p0_cat = freq.index[1]
                cat_used =  " (assuming p0 for " + str(p0_cat) + ")"
        else:
            n = sum(freq.values)
            n1 = sum(data==p0Cat)
            n2 = n - n1
            p0_cat = p0Cat
            cat_used =  " (with p0 for " + str(p0Cat) + ")"            
    else:        
        n1 = sum(data==codes[0])
        n2 = sum(data==codes[1])
        n = n1 + n2
        cat_used =  " (with p0 for " + str(codes[0]) + ")"
    
    minCount = n1
    ExpProp = p0
    if (n2 < n1):
        minCount = n2
        ExpProp = 1 - ExpProp
        
    #Wald approximation
    if cc is None:
        p = minCount / n
        se = (p * (1 - p) / n)**0.5
        Z = (p - ExpProp) / se
        sig2 = 2 * (1 - NormalDist().cdf(abs(Z)))
        testValue = Z
        testUsed = "one-sample Wald"
    elif (cc == "yates"):
        #Wald approximation with continuity correction
        p = (minCount + 0.5) / n
        se = (p * (1 - p) / n)**0.5
        Z = (p - ExpProp) / se
        sig2 = 2 * (1 - NormalDist().cdf(abs(Z)))
        testValue = Z
        testUsed = "one-sample Wald with Yates continuity correction"
    else:
        # unknown continuity correction requested, warn and end
        print("WARNING: unknown continuity correction, use None or 'yates'")
        return
        
    testUsed = testUsed + cat_used
    testResults = pd.DataFrame([[n, testValue, sig2, testUsed]], columns=["n", "statistic", "p-value (2-sided)", "test"])
    
    return testResults
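The formulas above can be checked by hand. The sketch below re-derives the docstring's Example 1 values (ex1 with p0 = 0.3) outside the function; it repeats the same arithmetic the function performs, both without and with the Yates correction:

```python
from statistics import NormalDist

# ex1 = [1, 1, 2, 1, 2, 1, 2, 1] has five 1s and three 2s.
# With p0 = 0.3 the function works with the minority count (the 2s)
# and the complementary expected proportion 1 - 0.3 = 0.7.
n = 8
x = 3                 # minority count
p0 = 0.7              # complementary expected proportion

# Plain Wald: the standard error uses the observed proportion
p_hat = x / n
se = (p_hat * (1 - p_hat) / n) ** 0.5
z = (p_hat - p0) / se
p2 = 2 * (1 - NormalDist().cdf(abs(z)))
print(round(z, 6), round(p2, 6))        # -1.898772 0.057595

# Yates version: 0.5 is added to the count before computing
# both the proportion and the standard error
p_cc = (x + 0.5) / n
se_cc = (p_cc * (1 - p_cc) / n) ** 0.5
z_cc = (p_cc - p0) / se_cc
p2_cc = 2 * (1 - NormalDist().cdf(abs(z_cc)))
print(round(z_cc, 6), round(p2_cc, 6))  # -1.496663 0.134481
```

Both pairs match the outputs of `ts_wald_os(ex1, p0=0.3)` and `ts_wald_os(ex1, p0=0.3, cc="yates")` shown in the docstring examples.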
