Friday, March 30, 2012

Concordance, Correlation, Agreement -- Statistics


Name: Lin’s Concordance Correlation
Stata Command: -concord-
Description/Function: According to Lin (1989), this index “evaluates the agreement between two readings (from the same sample) by measuring variation from the 45 degree line through the origin (the concordance line).”  Neither Lin nor the Stata Technical Bulletin (STB-43) insert suggests that this index can be used for categorical data, although neither explicitly forbids it.
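A minimal usage sketch, assuming the user-written -concord- (Steichen and Cox) is installed from SSC; the variable names rating1 and rating2 are hypothetical placeholders for the two paired readings:

    * install the user-written command (one time only)
    ssc install concord

    * rating1 and rating2 are hypothetical paired readings
    concord rating1 rating2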
Name: Cohen’s Kappa Coefficient
Stata Command: -kap-, -kappa-
Description/Function: Jacob Cohen’s (1960) measure of inter-rater agreement.  A value of zero denotes the amount of agreement expected by chance alone and a value of one denotes perfect agreement (negative values are possible when observed agreement falls below chance).  Although the statistic is grounded in assessing agreement between “raters,” could it be adapted to establish agreement between items on a questionnaire/survey?
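A sketch of both built-in commands using hypothetical variable names; -kap- expects one variable per rater, while -kappa- expects frequency-of-ratings data, with one variable per outcome category holding the number of raters who chose that category:

    * item1 and item2 are hypothetical variables, one per rater
    kap item1 item2

    * cat1-cat4 are hypothetical counts of raters choosing each of
    * four outcome categories, one observation per subject
    kappa cat1-cat4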
Name: Kendall’s Coefficient of Concordance / Kendall’s W / Friedman’s Test
Stata Command: -friedman-
Description/Function: Calculates Friedman’s non-parametric two-way analysis of variance and Kendall’s coefficient of concordance.  A single p-value is reported because the two tests are equivalent, although Kendall’s statistic may be easier to interpret since it is bounded by [0,1] and measures the agreement between rankings.  It is unclear whether this test is suitable for ordinal variables.
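A sketch assuming the user-written -friedman- (R. Goldstein) has been installed; the layout below, with one hypothetical variable per rater and one observation per ranked item, is an assumption and should be verified against the command’s help file:

    * locate and install the user-written command
    findit friedman

    * rater1-rater3 are hypothetical ranking variables; check the
    * expected row/column orientation in -help friedman-
    friedman rater1 rater2 rater3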
Name: Kendall’s Rank Correlation / Kendall’s Tau
Stata Command: -ktau-
Description/Function: Kendall’s tau-a and tau-b are calculated; the only difference between the two is in their denominators.  Tau-a uses the total number of pairs, whereas tau-b incorporates the number of tied values (tau-b will be larger if ties exist).  These statistics are closely related to Spearman’s rho and assess independence rather than agreement per se.  According to Conover (1999, p. 323), Spearman’s and Kendall’s statistics produce nearly identical results in most cases, although Spearman’s will tend to be larger in absolute value.  Since I’m more concerned with assessing agreement than independence -- rejection of an independence null is expected -- I question this test’s applicability.
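-ktau- is built into Stata; a sketch with hypothetical ordinal variables item1 and item2 (the output reports tau-a, tau-b, and a p-value for the test of independence):

    * item1 and item2 are hypothetical ordinal variables
    ktau item1 item2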
Name: McNemar’s Test (2x2); Bowker’s Test (KxK)
Stata Command: -symmetry-
Description/Function: For a 2x2 table the test reduces to McNemar’s test, whereas for a KxK table Bowker’s test for table symmetry and the Stuart-Maxwell test for marginal homogeneity are calculated.  The test assumes a one-to-one matching of cases and controls and is used to analyze matched-pair case-control data with multiple discrete levels of the outcome/exposure variable.  I’m not 100% sure whether this test is suitable for what I need, although if I can frame it such that the two instrument items play the roles of case and control, and the symmetry and marginal homogeneity tests are non-significant, then that would suggest a subject’s responses to the two items aren’t different.  Need to investigate this possibility.
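-symmetry- is built into Stata and takes the two paired variables in casevar/controlvar order; here item1 and item2 are hypothetical questionnaire items standing in for case and control:

    * item1 and item2 are hypothetical paired categorical responses
    symmetry item1 item2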

The research into a suitable method for assessing agreement between two items on a survey/questionnaire hasn't been as straightforward and unambiguous as I'd hoped.  (Although if it were, then perhaps the Ph.D. wouldn't be nearly as masochistic?)  Through a search of the literature, the Stata help files, and the Stata listserv, I've identified some test statistics that are good candidates for what I need.  I figure placing them in a table along with brief descriptions will help identify which, if any, is most appropriate (this assumes, of course, that the agreement/equivalence aspect of my research remains in place).  There are also graphical methods of assessing categorical, ordinal agreement, which I'll present in a forthcoming post.
