Tuesday, July 31, 2012

Mitt the Twit

The headline from the British newspaper The Sun.  (Credit:  http://underthemountainbunker.com/2012/07/27/mitt-the-twit-headline-from-the-london-sun-romneyshambles/)
Just when I think Mitt Romney can't be any more of a putz, he outdoes himself.  The latest comes from his recent trip to London --- a supposed opportunity for him to present himself in a presidential light and to cozy up to our closest allies --- where he instead insulted his hosts and their preparation for the Games.  Yes, there have been a couple of blunders in the lead-up to the opening ceremony (bus drivers getting lost, too few guards trained by the contractor hired to provide security), but you'd think a man of his experience and pedigree could come up with something, anything, that isn't directly insulting.  His insult came in response to a question about London's preparedness for the Games:

"There are a few things that were disconcerting. The stories about the private security firm not having enough people, the supposed strike of the immigration and customs officials—that obviously is not something which is encouraging."  

Bravo, Mitt, bravo.  This was clearly one of those lapses in judgment where you actually say what you're thinking, and, as with most of your other gaffes, your comments betray your outsized arrogance.  Honesty is, of course, a good and admirable characteristic, but Mitt's on-again, off-again relationship with honesty --- with himself, with other politicians, with the American people --- sets a new low for the already dishonest, pandering lot of politicians running America.  In this instance he was honest, but it came at the expense of diplomacy; when he's campaigning in the US, he panders at the expense of honesty.  I suppose the best politicians are the ones who are simultaneously honest and diplomatic, but Mitt is (consistently) neither.  Props to the Brits for calling Mitt what he is:  a twit.

Sunday, July 22, 2012

Should we be surprised?

Credit:  www.themoralliberal.com
No, not at all.  What happened early Friday morning in Aurora, Colorado is tragic and people should rally around the victims but, frankly, this will be old news in a week or so.  All the standard rhetoric will be spit up by the liberal and conservative camps:  the liberals will call for increased gun regulation (seems reasonable) whereas conservatives will claim that the vast majority of gun owners are responsible (I agree) and that, had the conceal-and-carry laws been more liberal, a heroic bystander would have gunned down the crazed madman before he shot dozens of people (unlikely, but anything is possible).  There is already a spate of newspaper articles and editorials pouring forth (NY Times, Washington Post, Slate, Fox News, etc.) and they are unlikely to abate over the next several days.  The pissing contest that inevitably follows a tragedy like this is simultaneously entertaining and depressing, but it seems so...hollow.  Even the least-informed among us will (hopefully?) concede that Americans love guns, prize our constitutional right to gun ownership ("A well-regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed"), and gleefully practice that right.  There are lawyers, constitutional scholars, and policy wonks far more intelligent than I am who debate the intent & meaning of the constitution at length, so I won't bother chiming in or pretending to know more than I do (admittedly not much), but I can't help surrendering to a sense of resignation and cynicism.

We, as a country, created and perpetuate an environment where procuring Arms is relatively easy and the consequences of irresponsibly using those Arms are relatively minor and short-lived, at least at the community and social level.  Sure, the gun-wielding madman who mows down a dozen patrons in a movie theater will probably spend the rest of his life in prison, maybe even be executed, but the laws and regulations on the books will largely remain unchanged (maybe even be loosened?).  Does anyone remember Gabby Giffords?  She was shot in the head on January 8, 2011, and although there were a few calls for reform and a tightening of some gun control laws, very little changed.  It's well known that the NRA is an enormously powerful lobby and that politicians don't have the balls to stand up to it to initiate change, but my more cynical opinion is that the at-large citizenry is unwilling to surrender its constitutional right to guns, 33-round magazines, and easy procurement of assault weapons, and until we loosen our grip, change will remain a distant, if not unrealistic, goal.

Before I'm pigeon-holed as yet another out-of-touch, pie-in-the-sky liberal who wants a big-government, communist-like society where individual liberty and freedom are trampled upon, let me re-emphasize that I'm not advocating for a revocation of the second amendment.  The United States was born as a "frontier" country where protection and self-sufficiency were an ingrained part of our identity and, to some extent, they still are, so the ability to take up Arms --- to own guns --- will always be a central part of who we are.  But that doesn't mean we can't exercise a little prudence.  Perhaps ban the sale of assault weapons?  Or the accessories that make them even more deadly than they already are?  I can't imagine a regulation like this would ruin the hunting experience (unless people are the hunted, in which case, well, we have bigger problems) or infringe on law-abiding folks' right to purchase and own a gun.  Until we (seriously) consider some degree of reform, what happened in Aurora should be viewed as a consequence of what we are.  If we want to make it easy for virtually anyone to "bear Arms" --- even if those Arms bear virtually no resemblance to what the framers of the constitution imagined --- then, well, we need to live with the consequences.  Those twelve deaths and 59 injuries are collateral damage for a freedom we hold dear.

Monday, July 16, 2012

Quote of the moment

In a previous post, I mentioned the creation of a personal Stata .ado file, -quote-, that, when invoked, returns a randomly selected quote from a dataset of quotes I keep on my desktop.  The program, though, uses the current date as the 'seed' for generating the list of randomized numbers, so repeated calls to the -quote- command on the same day return the same quote.  I wanted to change this so that during those episodes of non-productivity and distraction while working in Stata, repeated calls to the -quote- command would return (more often than not) a different quote.  I also wanted to modify the .ado file so that a call to the -quote- command would -preserve- any data in memory and then -restore- it.  The former I accomplished by changing the 'seed' from the current date (the number of days since 01/01/1960) to the current time (the number of milliseconds since 01/01/1960 00:00:00); the latter I fixed with the correct placement of -preserve- and -restore- in the .ado file.  The full content of the .ado file is pasted below.  Note that I retained the code for using the current date as the 'seed' but have commented it out (the line is preceded with an asterisk); the line following it establishes the current time as the 'seed'.

*! version 1.0 \ cjt 2011-08-04
*! version 2.0 \ cjt 2012-07-12

// program:  quote.ado
// task:  -read- in dataset of quotes from desktop, randomize them, then display one
// project:  n/a
// author:    cjt
// born on date:  04 August 2011
// updated:  (20120712) changed the system parameter used for the seed from date to time
//        such that the command can be called repeatedly w/ a different quote appearing each
//        time.  also added -preserve- and -restore- so that -quote- can be used when another
//        dataset is open w/out clearing the data.


// #0
// program setup

capture program drop quote
program define quote
version 11.2
preserve            // preserve data in memory, if applicable
clear               // -clear all- may not be called from within a program, so -clear- the data only
macro drop _all
set more off


// #1
// -read- in quote dataset
use "C:\Documents and Settings\cjt\Desktop\QuoteList.dta"


// #2
// assign observation number to quotes
gen obsno = _n
order obsno quote


// #3
// generate random numbers from the uniform distribution
* macro out the current date to an integer (# days since 01/01/1960) -- retained, but commented out
*local cdate = date(subinstr("$S_DATE" , " " , "" , .), "DMY")
* macro out the current time to an integer (# ms since 01/01/1960 00:00:00)
local ctime = clock(subinstr("$S_TIME" , " " , "" , .), "hms")
* set seed to current time (the date-based seed is commented out)
*set seed `cdate'
set seed `ctime'
* draw a random number from the uniform distribution for each quote
gen xselect = runiform()
order obsno xselect


// #4
// -sort- by random number then print first quote
sort xselect
* macro out first quote
local quote1 = quote[1]
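* (note:  in Stata 11, -local ... = exp- truncates strings longer than 244 characters)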

* display the quote
display "Quote of the moment:  " _newline _col(5) "`quote1'"

clear               // plain -clear- again; -restore- below brings back the user's data

restore        // restore data in memory, if applicable.

end 
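
To see the two candidate seeds side by side, something like the following can be run interactively (a minimal check --- the exact numbers depend on when you run it):

* the date-based seed:  # days since 01/01/1960 (constant all day)
display date(subinstr("$S_DATE" , " " , "" , .), "DMY")
* the time-based seed:  # ms since 01/01/1960 00:00:00 (changes every second)
display clock(subinstr("$S_TIME" , " " , "" , .), "hms")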


A sampling of consecutive calls to the -quote- program:

. quote
Quote of the moment: 
    After winning an argument with his wife, the wisest thing a man can do is apologize. (Anonymous)

. quote
Quote of the moment: 
    Denial ain't just a river in Egypt. (Mark Twain)

. quote
Quote of the moment: 
    You can't buy love, but you can pay heavily for it. (unattributed)

. quote
Quote of the moment: 
    We may have all come on different ships, but we're in the same boat now. (Martin Luther King, Jr.)

. quote
Quote of the moment: 
    Doubt is not an agreeable condition, but certainty is an absurd one. (Voltaire)

. quote
Quote of the moment: 
    Common sense is not so common. (Voltaire)

. quote
Quote of the moment: 
    To speak much is one thing, to speak well is another. (Sophocles)

. quote
Quote of the moment: 
    A bookstore is one of the only pieces of evidence we have that people are still thinking. (Jerry Seinfield)

. quote
Quote of the moment: 
    They've finally come up with the perfect office computer. If it makes a mistake, it blames another computer. (Milton Berle)

. quote
Quote of the moment: 
    Fashion is only the attempt to realize Art in living forms and social intercourse. (Oliver Wendell Holmes)

Thursday, July 5, 2012

Going the Distance: Plotting Cumulative Time

I recently ran an ultra marathon, the Trail Peñalara 60k, in Navacerrada, Spain (about 30 miles outside of Madrid) --- you can read my race report here --- and although neither the running nor the writing up of the experience has anything to do with the pursuit of my Ph.D., the graphical presentation of the checkpoint times seemed rigorous and interesting enough to label as a "PhD" blog post.  (And since I was working on this instead of the power calculation for my dissertation, I figured I may as well legitimize the time spent and add a blog post about it.)

This race, unlike any other I've run in the United States, required that each runner wear an electronic chip bracelet that had to be scanned at the start, the finish, and seven control points in between.  I can only speculate as to why our race chips were scanned as many times as they were --- to deter cheating? for data collection? --- but no matter:  the race organizers posted the data on their website, and I thought it would be interesting to identify the fastest, slowest, middle, and average times for each aid station/checkpoint (including the finish) and then compare those times to my times at the respective points.  Based on the finishing times --- I was 39th among 147 finishers --- I knew I was faster than the middle (median) and average (mean) runners, but I was curious as to how my times stacked up at the intermediate checkpoints.  The short explanation of how I did this:  I imported the data into Stata; identified the fastest (minimum), slowest (maximum), middle (median), and average (mean) times at each point; and created a new dataset containing these summary statistics.  I then merged my times into that summary-statistic dataset such that the resultant dataset contained nine observations and six variables.  The first variable in this dataset is the aid station/checkpoint and the remaining five variables contain the slowest/fastest/average/middle/my times recorded at each checkpoint.  The summary dataset is presented below:

  +-----------------------------------------------------------------------------+
  |                   as     min_as     max_as    mean_as     mdn_as     cjt_as |
  |-----------------------------------------------------------------------------|
  |    Start (Rascafria)   00:00:00   00:00:00   00:00:00   00:00:00   00:00:00 |
  |     El Reventon Pass   01:02:12   02:23:08   01:44:04   01:43:26   01:31:44 |
  |             Penalara   01:58:58   04:44:30   03:18:15   03:14:29   02:54:09 |
  |            La Granja   02:49:44   07:02:42   05:00:20   04:56:00   04:25:27 |
  |     Casa de la Pesca   04:17:27   10:16:42   07:26:50   07:18:47   06:45:11 |
  |             Fuenfria   05:00:54   11:41:58   08:34:15   08:26:47   07:41:40 |
  |     Navacerrada Pass   05:40:14   12:53:42   09:38:06   09:27:47   08:41:14 |
  |          La Barranca   06:23:54   14:37:36   10:48:46   10:35:31   09:39:31 |
  | Finish (Navacerrada)   06:45:43   15:34:39   11:29:04   11:15:00   10:15:21 |
  +-----------------------------------------------------------------------------+

The code I used from start to finish is presented below.

capture log close
log using tp60k_graph, replace
display "$S_DATE $S_TIME"    // stamp the log with the run date and time

// program:  tp60k_graph.do
// task:  graph times from TP60k
// project:  drivel
// author:    cjt
// born on date:  20120705


// #0
// program setup

version 11.2
clear all
macro drop _all
set more off


// #1
// insheet .csv
insheet using "C:\Documents and Settings\cjt\Desktop\TP60k\TP60k_times.csv", comma


// #2
// convert string time variables to numeric variables
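// clock() returns milliseconds since 01jan1960 00:00:00; a -double- is
// needed to hold these values without precision loss, and the %tcHH:MM:SS
// format displays them as elapsed hours:minutes:seconds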
foreach var of varlist as1_reventon as2_penalara as3_lagranja as4_cpesca as5_fuenfria ///
as6_navac as7_barranca finishtm {
  gen double `var'_temp = clock(`var', "hms")
  drop `var'
  rename `var'_temp `var'
  format `var' %tcHH:MM:SS
}
*end;

* **add start time variable
gen as0_start = 0
format as0_start %tcHH:MM:SS


// #3
// -save- entire dataset
save tp60k, replace


// #4
// extract my times from the dataset
keep if place==39
keep as* finishtm
* **prefix time variables w/ my initials, cjt
rename as0_start cjt_as0
rename as1_reventon cjt_as1
rename as2_penalara cjt_as2
rename as3_lagranja cjt_as3
rename as4_cpesca cjt_as4
rename as5_fuenfria cjt_as5
rename as6_navac cjt_as6
rename as7_barranca cjt_as7
rename finishtm cjt_as8
* **save dataset for later merge
save tp60k_cjt, replace


// #5
// recall earlier, fuller dataset
use tp60k, clear


// #6
// -collapse- data for graph
collapse (min) min_as0=as0_start min_as1=as1_reventon min_as2=as2_penalara min_as3=as3_lagranja ///
 min_as4=as4_cpesca min_as5=as5_fuenfria min_as6=as6_navac min_as7=as7_barranca ///
 min_as8=finishtm ///
 (max) max_as0=as0_start max_as1=as1_reventon max_as2=as2_penalara max_as3=as3_lagranja ///
 max_as4=as4_cpesca max_as5=as5_fuenfria max_as6=as6_navac max_as7=as7_barranca ///
 max_as8=finishtm ///
 (mean) mean_as0=as0_start mean_as1=as1_reventon mean_as2=as2_penalara mean_as3=as3_lagranja ///
 mean_as4=as4_cpesca mean_as5=as5_fuenfria mean_as6=as6_navac mean_as7=as7_barranca ///
 mean_as8=finishtm ///
 (median) mdn_as0=as0_start mdn_as1=as1_reventon mdn_as2=as2_penalara mdn_as3=as3_lagranja ///
 mdn_as4=as4_cpesca mdn_as5=as5_fuenfria mdn_as6=as6_navac mdn_as7=as7_barranca ///
 mdn_as8=finishtm

 
// #7
// -merge- in the cjt data
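// both datasets contain exactly one observation, so a 1:1 merge on _n
// simply appends the cjt_* variables to the collapsed summary row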
merge 1:1 _n using tp60k_cjt
drop _merge


// #8
// -reshape- from wide to long
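// the placeholder -index- variable satisfies reshape's i() requirement;
// j(as) peels the trailing checkpoint number (0-8) off each stub name,
// turning the single wide observation into nine long ones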
gen index = .
reshape long min_as max_as mean_as mdn_as cjt_as, i(index) j(as)
drop index


// #9
// -label- values of "as" variable
label define aid 0 "Start (Rascafria)" 1 "El Reventon Pass" 2 "Penalara" 3 "La Granja" ///
4 "Casa de la Pesca" 5 "Fuenfria" 6 "Navacerrada Pass" 7 "La Barranca" 8 "Finish (Navacerrada)"
label values as aid
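
* [assumption] a -list- along these lines reproduces the summary table shown
* earlier in this post; the %tc display format is re-applied first since
* -collapse- does not necessarily carry it over
format min_as max_as mean_as mdn_as cjt_as %tcHH:MM:SS
list as min_as max_as mean_as mdn_as cjt_as, noobs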


// #10
// generate -graph-
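// reminder:  %tc values are stored in milliseconds, so the y-axis ticks
// step by 7,200,000 ms (2 hours) up to 57,600,000 ms (16 hours)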
line min_as as || line max_as as || line mean_as as || line mdn_as as || line cjt_as as, ///
ytick(0(7200000)57600000) ylabel(0 "0h" 7200000 "2h" 14400000 "4h" 21600000 "6h" 28800000 "8h" ///
36000000 "10h" 43200000 "12h" 50400000 "14h" 57600000 "16h", angle(horizontal)) ///
ytitle("Cumulative Time") xtitle("Aid Stations/Check Points") xtick(0(1)8) xlabel(0(1)8, ///
valuelabel angle(45)) legend(col(1) pos(3) lab(1 "Minimum") lab(2 "Maximum") lab(3 "Mean") lab(4 "Median") ///
lab(5 "My Time")) legend(subtitle("Times")) title("Trail Penalara 60k") subtitle("Control Post Times") ///
note("Data downloaded from http://www.grantrail.es/index.asp") scheme(s1color)


// #11
// -export- graph
gr export tp60k_graph.png, replace


log close
exit



The resulting graph:  
As is evident from the graph, the median and mean run times were nearly the same over the course of the race, suggesting a non-skewed distribution of times.  My times were only slightly better than the mean and median times up until Casa de la Pesca, but I managed to increase my lead from there, as shown by the widening gap between my time and the mean/median times after that checkpoint.