2011 hasn't been my best year of running. The Vienna Marathon started out promising enough, but once the shortsightedness of my decision to wear an ill-fitting minimalist shoe became painfully real, the race felt more like a death march to the finish. Then the onset of acute tendonitis in my lower left calf sidelined me for the latter half of June, all of July, and the first half of August, forcing my withdrawal from the Mountainman Ultra 80k in Lucerne, Switzerland, in early August. And then, in early September, I ran the Budapest Half Marathon with the last-minute plan to run sub-2:00. I've run a half marathon under two hours before and have a marathon PR of 3:39, so I was hopeful I'd cross the finish line in under two hours even though my training had been pretty casual and eschewed any speed work. But 'twas not to be: I finished in 2:00:56. I suppose I shouldn't have been too disappointed (it was unseasonably hot, the first water station was empty by the time the midpackers arrived, and my training was lackadaisical) but, alas, I was. Since Budapest, I've been running 3-5 days per week with no goal other than to get out of the apartment, maintain some semblance of fitness, and continue to rehab my left calf. I've started researching ultras for 2012 and have a couple in mind, although nothing is definitive. I'll update this blog once I decide on what race (or races) I'll run. In the meantime, here's to a better year of running in 2012!
Wednesday, November 30, 2011
Tuesday, November 29, 2011
Optimistic About Beer
In the spirit of prudence (I'm not sure what boundaries exist with respect to disclosure of my research topic to a larger, non-academic audience), I haven't described exactly what my research topic is. Hell, I haven't even given a general description. And aside from the title of this post, not much else will be disclosed. That being said, I'm not feeling terribly optimistic lately about the progress of my research. A timeline I charted a few months ago had my committee reviewing a first draft of my proposal at this point; I have yet to even assemble and write a first draft. The problem isn't the actual writing, per se, but the absence of a detailed and executable statistical analysis plan. In an ideal situation, the statistical analysis plan would follow naturally from the research question/objective/hypothesis and data structure (cohort, case/control, RCT, cross-sectional, survey, etc.), but alas, an ideal is just that: ideal and rarely realized. In my situation, I know where I need to go (Google Maps is cued up and ready) but I can't find my car. I'm not even sure I'm searching for it in the right parking lot. But I continue to search, albeit with the occasional distraction, the most recent being the writing of a Stata program that created a series of variables, each containing a random shuffling of the numbers [1,6], with no two variables sharing the same sequence. This exercise was for a beer tasting/testing party hosted by my wife and me, where the sequence of beers each person would taste would differ and be randomly generated (per the Stata random-number generator). In a previous blog post, I described my approach to verifying that no two variables shared the same number sequence; what follows is the entirety of the .do file I used to create and output the randomly shuffled number sequences.
capture log close _all
log using BeerTestRandomNumbers, name(log1) replace
// program: BeerTestRandomNumbers.do
// task: create list of beers for each person w/ list randomized for each person
// project: Team Clisa Beer Testing/Tasting Party
// author: cjt
// born on date: 20111024
// #0
// program setup
version 11.2
clear all
macro drop _all
set more off
// #1
// declare number of observations (beers to taste)
set obs 6
// #2
// set randomization seed to ensure reproducibility
set seed 20111119
// #3
// create format label for numbers...
label define beerf 1 "Ottakringer" 2 "Gosser" 3 "Stiegl" 4 "Puntigamer" ///
5 "Schwechater" 6 "Zipfer"
// #4
// create 28 variables containing numbers [1,6] randomly shuffled
foreach num of numlist 1/28 {
gen int seq`num' = _n
label values seq`num' beerf
gen rand`num' = runiform()
sort rand`num'
drop rand`num'
}
// #5
// generate 378 (28 choose 2) pairwise-comparison variables to verify that the
// randomization order isn't the same between any two variables.
// !!all randomization orders are different!!
forvalues i = 1(1)27 {
forvalues j = `i'(1)27 {
display ""
display "Variables being compared are seq`i' and seq`++j'"
gen var`i'_`j' = 1 if seq`i' == seq`j'
quietly sum var`i'_`j' if `i' != `j'
assert `r(sum)' < 6
drop var`i'*
}
}
*end
// #6
// label the variables w/ guest names (note that this code was adapted from a
// Stata Journal article, "Speaking Stata: Problems with lists", SJ 3(2))
local vars "28 PARTY GUEST NAMES INSERTED HERE"
forvalues i = 1/28 {
local v : word `i' of `vars'
rename seq`i' `v'
}
codebook, compact
// #7
// list beer tasting key for each person
capture log close log2
log using BeerKeys, name(log2) replace
foreach var of varlist z_Todd-z_Unknown2 {
list `var', sep(6)
}
log close _all
exit
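For readers without Stata, here is a rough Python analogue of the core idea (a sketch of mine, not part of the original .do file): draw a random shuffling of [1,6] for each of the 28 guests while guaranteeing no two orderings are identical. With only 6! = 720 possible orderings, a collision among 28 independent draws is far from rare, which is why the uniqueness check matters.

```python
import random

random.seed(20111119)  # fixed seed for reproducibility, like Stata's `set seed`

beers = ["Ottakringer", "Gosser", "Stiegl", "Puntigamer", "Schwechater", "Zipfer"]

# draw 28 tasting orders, re-drawing whenever a duplicate ordering appears
sequences = []
while len(sequences) < 28:
    seq = list(range(1, 7))
    random.shuffle(seq)
    if seq not in sequences:
        sequences.append(seq)

# the pairwise check the do-file performs with nested forvalues loops
for i in range(len(sequences)):
    for j in range(i + 1, len(sequences)):
        assert sequences[i] != sequences[j]

# map one guest's numeric sequence back to beer names
print([beers[n - 1] for n in sequences[0]])
```

The re-draw-on-collision approach sidesteps the need for an after-the-fact verification pass, though the do-file's explicit pairwise check has the virtue of documenting the result in the log.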
Monday, November 14, 2011
forvalues nested in forvalues
While not working on the statistical methods section of my proposal (I'm banging my head against my desk trying to figure out how to assess predictive value in the absence of a gold standard), I was playing around with Stata forvalues loops in a program I wrote that isn't even remotely related to my dissertation work. Nevertheless, I was pleased when I managed to get a nested forvalues loop working properly (even if the time spent working on it was a couple hours more than expected).
The problem: I created 27 variables, each containing a random ordering of the number sequence [1, 6], and I wanted to compare the first variable with the 26 following, the second variable with the 25 following, the third variable with the 24 following, and so on, to verify that no two variables contained equivalent number orderings. I wanted, essentially, 351 (27 choose 2) comparison variables to be created, with no variable pair repeated. For example, once the comparison of variable one and variable two was accomplished with the creation of the new variable, var1_2, it wasn't necessary to compare them again in the other direction, i.e., by creating var2_1. So what I needed was an outer forvalues loop that looped from 1 to 26 and an inner forvalues loop that looped from 2 to 27. After much conceptualizing, tweaking, and experimenting, I eventually found success with this:
forvalues i = 1(1)26 {
forvalues j = `i'(1)26 {
display ""
display "Variables being compared are seq`i' and seq`++j'"
gen var`i'_`j' = 1 if seq`i' == seq`j'
quietly sum var`i'_`j' if `i' != `j'
assert `r(sum)' < 6
drop var`i'*
}
}
Solution/Explanation: In the first iteration of the outer forvalues loop, i is set to one and j is initially set to one, but as soon as the looping begins, the j index is incremented by one in the display statement. The ++j syntax tells Stata to increment the macro before evaluating its value, thus changing it from one to two. Now when the comparison variable is generated, the macros evaluate to i=1 and j=2, creating var1_2 for the already-existing variables seq1 and seq2. After the first iteration of the inner loop completes, the j index is again incremented from two to three (the outer index is still one), generating another variable, var1_3, comparing seq1 to seq3. After the inner loop completes (26 generated variables later), the outer loop (index i) is incremented from one to two and the inner loop (index j) is reset to start at two; as soon as the looping begins, this value is incremented from two to three by way of the ++j macro call. This results in the generation of var2_3, var2_4, and so on up through the 27th variable, var2_27. This tandem looping continues up through the 26th variable, when only one comparator variable is created: var26_27.
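The set of pairs the nested loops walk through can be sketched outside Stata; here's an illustrative Python one-liner of mine using itertools.combinations, which enumerates exactly the unordered pairs described above:

```python
from itertools import combinations

# unordered pairs (i, j) with i < j over variables seq1 .. seq27 --
# the same pairs the nested forvalues loops with ++j produce
pairs = list(combinations(range(1, 28), 2))

print(len(pairs))            # 351, i.e., 27 choose 2
print(pairs[0], pairs[-1])   # first pair (1, 2), last pair (26, 27)
```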
This was a novel programming exercise for me for two reasons: (1) I haven't had much experience with forvalues loops in Stata (none with nesting them!); and (2) incrementing a macro at the point of the macro call (++j) is pretty damn powerful and a technique I plan to add to my Stata toolbox.
Tuesday, November 8, 2011
Diagnostic and Screening Test Validity
Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) constitute the four primary mechanisms for assessing the validity of a diagnostic or screening test. You see the words "sensitivity" and "specificity" bandied about in the medical literature and discussed (albeit briefly) in some epidemiology or biostatistics textbooks, but I had yet to encounter a description of these diagnostic tools as concise, well-written, and elegantly explained as the one in chapter eight of Trisha Greenhalgh's "How to Read a Paper: The Basics of Evidence-Based Medicine". (I had been considering a blog post on this topic for quite some time but hadn't gotten around to it, big surprise, until a recent email exchange with my adviser re: my difficulty in developing my statistical analysis plan and her subsequent clarification that my analysis should include a "predictive value" component. The "predictive value" I eventually settle on may have little to do with sensitivity, specificity, PPV, and NPV; nevertheless, these concepts form the foundation that the aforementioned will draw upon. This blog post borrows heavily from Greenhalgh's text.)
The introduction of sensitivity and specificity can range from the use of conditional probability statements to the drawing up of a "jury verdict versus true criminal status" 2x2 table. I learned sensitivity/specificity both ways and found that they complemented each other and enhanced my understanding of a concept that arises in the epi and medical literature with tremendous frequency. In the "jury verdict versus true criminal status" method, a table is drawn up such that all possible outcomes are presented in the four cells of a 2x2 table. In an ideal world, all murderers would be rightly convicted and the innocent would be rightly acquitted. But the ideal world is rarely realized, so we compute statistics to summarize the quality of a test, establish benchmarks, and either choose to ignore or use a test depending on its validity. In this example, the sensitivity is the proportion of murderers who were convicted, a/(a + c), whereas the specificity is the proportion of non-murderers acquitted, d/(b + d). The PPV is the probability that someone convicted of murder actually did it, and the NPV is the probability that a person acquitted is actually innocent.
                       True Criminal Status
                       Murderer (a + c)        Not murderer (b + d)
Jury Verdict
  Guilty (a + b)       rightly convicted (a)   wrongly convicted (b)
  Innocent (c + d)     wrongly acquitted (c)   rightly acquitted (d)
More formally, the definitions, along with their probability statements and mathematical calculations assuming a 2x2 table configuration, are presented below:
Sensitivity (alternative name: True Positive Rate)
  Central question: How good is the test at identifying those with the condition?
  Conditional probability statement*: P(T+|D+) = P(T+ ∩ D+) / P(D+)
  Formula**: a/(a + c)

Specificity (alternative name: True Negative Rate)
  Central question: How good is the test at excluding those without the condition?
  Conditional probability statement*: P(T−|D−) = P(T− ∩ D−) / P(D−)
  Formula**: d/(b + d)

Positive Predictive Value (PPV) (alternative name: Post-test Probability of a Positive Test)
  Central question: What is the probability of having the condition if the test is positive?
  Conditional probability statement*: P(D+|T+) = P(D+ ∩ T+) / P(T+)
  Formula**: a/(a + b)

Negative Predictive Value (NPV) (alternative name: Post-test Probability of a Negative Test)
  Central question: What is the probability of not having the condition if the test is negative?
  Conditional probability statement*: P(D−|T−) = P(D− ∩ T−) / P(T−)
  Formula**: d/(c + d)

* P denotes probability, T denotes test, D denotes disease, and the + and – indicate positivity or negativity.
** The letters a, b, c, & d correspond to the four cells of a 2x2 table where a is the upperleft, b is the upperright, c is the lowerleft, and d is the lowerright.
Symbolically, the data can (and ought to) be presented by way of a 2x2 table:
                       Reference Criterion/Condition/Disease
                       Diseased (a + c)       Not Diseased (b + d)
Test Result
  Positive (a + b)     True Positive (a)      False Positive (b)
  Negative (c + d)     False Negative (c)     True Negative (d)
Now consider the example presented by Greenhalgh to illustrate the calculation and interpretation of sensitivity, specificity, PPV, and NPV. The data are presented below with the calculations following:
                                    Gold Standard Glucose Test (2h OGTT)
                                    Diseased (6 + 21 = 27)   Not Diseased (7 + 966 = 973)
Glucose Test Result
  Glucose Present (6 + 7 = 13)                 6                         7
  Glucose Absent (21 + 966 = 987)             21                       966
Sensitivity: 6/27 = 22.2%
Specificity: 966/973 = 99.3%
Positive Predictive Value (PPV): 6/13 = 46.2%
Negative Predictive Value (NPV): 966/987 = 97.9%
In this scenario, the sensitivity is lousy but the specificity is quite good. That is, the test captures only about a fifth of those who are actually diseased, whereas it identifies nearly all of those who are actually disease-free. The PPV is the probability that the person is actually diseased given that glucose is present; a value this low would warrant a second or follow-up test. The NPV, although not 100%, indicates that the probability of not being diseased, given the negative test result, is quite high.
One final point, and another crucial distinction Greenhalgh makes between sensitivity/specificity and PPV/NPV that is sometimes glossed over in other texts:
"As a rule of thumb, the sensitivity or specificity tells you about the test in general, whereas the predictive value tells you about what a particular test result means for the patient in front of you. Hence, sensitivity and specificity are generally used more by epidemiologists and public health specialists whose daytoday work involves making decisions about populations" (pp. 103, emphasis in original text).