I am going to do my best to be civil, since I really am here to learn--and I still hold out the hope that you may have something to teach. But in order for me to learn, I must actually probe what you are saying. If I "learn" that I should accept what you say because you understand statistics and science and your views are shared by very smart people from Harvard and Yale and MIT and Brown and the like, I will be none the wiser.
If, after accepting your views, I was faced with someone asking the questions I am asking now, and if I had no answer but that a guy on the internet said so (and that some smart guys from Harvard, Yale... believed it too), I would not be able to respect myself.
293. Kleck, again...
It seems only fair to point out why I am talking about Kleck. You brought Kleck up and suggested that I consider his study (post 262):
For example, think about the big influential study on the pro-gun side, which is a telephone survey asking people whether they used a gun defensively. What, exactly, makes a sociologist so much better at conducting phone surveys than economists or epidemiologists? In this case, if anything, the epidemiologists would be in a better position, because Kleck's primary flaw is something that epidemiologists would be familiar with: if you try to measure something very rare like DGU, your results are extremely sensitive to false positives. To take an extreme example, if we did a similar phone survey to estimate how many people contacted aliens in the last year, we would likely come up with a similar estimate of around 1% or 2.5M, just like DGU (and maybe even more), because a small but positive percentage of survey respondents will say yes to anything. If you are measuring something where the true rate is closer to 50%, then you don't get this same problem, but when you are dealing with percentages near either 0% or 100%, things work out differently.
Your irritation with the subject matter seems strange.
Statistics is not "my thing", despite your attempt to pigeonhole me. But I do understand statistics, and I do have a scientific background, so I can read this stuff and separate the wheat from the chaff. The usual accusation that I'm placing too much weight on the technical matters reflects your preference for softer and more subjective territory where you can peddle your theories of bias or conspiracy or whatever without running into any hard data.
As stupid as it feels to type this, I will type it anyway; it might facilitate communication. I have a technical background. I apply scientific and technical principles daily. I have patents for inventing technology. I am not afraid of science, technology or hard data.
Regarding your points on Kleck's study: First off, by any measure DGUs are very rare. Contrary to your insinuation, this is not part of some circular logic of mine; it is something everyone agrees on: even if we accept Kleck's DGU numbers, we still only get a rate of 1%....
Granted, DGUs are very rare, if by "very rare" you mean "in the range of 1% or less" (accepting your numbers for the sake of discussion). But this is not what Hemenway is saying. He is saying that DGUs are very rare, and by "very rare" he means "so much lower than 1% that they invalidate Kleck's numbers and the corroborating evidence generated by over a dozen other studies." That application of the "rareness" argument is what makes Hemenway's point circular.
...The question is whether the annual DGU rate per person is 1% like Kleck says or 0.05% like NCVS says.
Ok.
Also, the reason it is significant that DGUs are rare has nothing to do with whatever all-caps fallacy you claimed to have caught me in. It is because when you get down in the range of 1% or lower, you face a different set of statistical issues than you would face if you were measuring something in the more normal range of 10%-90%. And my point is that Kleck fails to adequately account for this fact, and if you had read my post honestly and tried to understand what I was saying, rather than take the "very rare" out of context to score some cheap points, I wouldn't have to explain this...
I don't see you explaining anything here. All I see is a restatement of your earlier point--the point you made in the second quote box in this post. I'll repeat the salient part:
...if you try and measure something very rare like DGU, your results are extremely sensitive to false positives.
I accepted that in post 288:
Kleck's alleged blind spot is no secret. I am far from an expert in statistics, but we covered false positives in my class.
The fact that, if you try to measure something very rare, your results are extremely sensitive to false positives has been accepted and accounted for. Repeating that fact now is not explaining anything. I knew about the false-positive problem before the class, and it was explained clearly and demonstrated with numbers in class. But as technical matters go, it is a very simple thing, at least conceptually. Your own point—"if the true positive rate is 1%, this survey would be 99X as sensitive to the false positive rate as it is to the false negative rate"—is a demonstration of that simplicity. That is elementary or junior-high math. It certainly bears no comparison to calculus, differential equations or linear algebra. And I have no idea how you think I took anything out of context.
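In fact, the whole point fits in a few lines of arithmetic. Here is a minimal sketch (my own illustration; the 1% rate and the 99X ratio are the figures already quoted in this thread, and every other number is hypothetical):

# Illustration only: how a rare-event survey estimate responds to
# false positives vs. false negatives. Specific numbers are hypothetical.

def measured_rate(true_rate, fp_rate, fn_rate):
    # Observed "yes" rate: true positives kept, plus true negatives flipped.
    return true_rate * (1 - fn_rate) + (1 - true_rate) * fp_rate

p = 0.01  # the 1% true DGU rate under discussion

# Sensitivities (derivatives of the measured rate w.r.t. each error rate):
sens_fp = 1 - p   # a point of FP rate moves the estimate by 0.99 points
sens_fn = p       # a point of FN rate moves it by only 0.01 points
print(sens_fp / sens_fn)  # 99.0 -- the "99X" figure

# Even a tiny FP rate swamps a large FN rate at these levels:
print(measured_rate(p, 0.005, 0.10))  # 0.01395: 0.5% FPs add more than 10% FNs remove

As the last line shows, a 0.5% false-positive rate inflates a 1% true rate more than a 10% false-negative rate deflates it. That is the entire technical content of the point, which is why I call it elementary.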
If you are making some deeper point, you are being too subtle. You don't have to get into the nitty gritty if you don't have time, just point the way. Is there some specific methodology or principle, say Miller's technique or Thompson's theorem (made-up names to illustrate how you could point me in the right direction), that is used in these cases? Just say so!
Then you imply that I claimed that Kleck didn't know about false positives, and how "telling" it was that I would "blithely imply" that he didn't know something you learned in your intro stats class, and so I must not have read Kleck's paper at all, and blah blah blah... Please. Can we be serious? I was under the impression you wanted to have a real discussion and maybe learn something. Obviously, I didn't suggest Kleck didn't know that there was such a thing as false positives.
What happens is that when the positive response rate is near 1%, a survey like this becomes extremely sensitive to false positives; it is this heightened sensitivity that is the biggest weakness in Kleck's methodology, and it is also something he did not take into account. And by the way, again, the sensitivity to FPs is a technical thing: for example, if the true positive rate is 1%, this survey would be 99X as sensitive to the false positive rate as it is to the false negative rate, since there are so many true negatives that a small change in the false positive rate results in a relatively large change in the overall number of positive responses. In any case, if you disagree with me, and you do think Kleck took this FP sensitivity into account and actually gave some kind of quantifiable evidence that the FP rate was well below 1% (which it would have had to be if his estimate is accurate), then show me. And I mean evidence, not empty assertions about how they had "up to 19 questions," etc. Extraordinary claims require extraordinary evidence, and an FP rate this low is certainly an extraordinary claim.
Here's the paper, you go ahead and find me the evidence that he was even aware of this FP sensitivity issue, much less that he was able to demonstrate that he had it controlled:
http://www.guncite.com/gcdgklec.html

With all due respect to the knowledge that I am still hoping you are able to impart, it seems you are still implying that Kleck had no knowledge of false positives, as well as saying that he did nothing to provide quantifiable evidence that they were accounted for.
The request for quantifiable evidence on the false positives is interesting. How would you, understanding statistics, show that the error rate was less than 1%? What would have been an acceptable approach that would have demonstrated the survey's validity?
You also seem to really like the germ story despite the fact that I pointed out that the Pub Health people often study injuries and safety, and that a lot of the research is statistical in nature anyway. I guess that, since we actually are talking about safety in a lot of cases, it's easier just to ignore that inconvenient fact and stick with the germs. As to your example that they shouldn't have coded people with nearby guns as carrying, this is truly laughable, and a tiny technical point. You understand that they say "quickly available", not in a car halfway down the block.
Neither the OP nor the study you cited are about injuries or safety, per se. They are not about gun accidents or wound analysis, for example. As to why I focus on epidemiology, read the list of credentials of the people responsible for the study you cited:
Charles C. Branas and Douglas J. Wiebe are with the Department of Biostatistics and Epidemiology, Firearm and Injury Center at Penn, University of Pennsylvania School of Medicine, Philadelphia. Therese S. Richmond is with the Division of Biobehavioral and Health Sciences, Firearm and Injury Center at Penn, and University of Pennsylvania School of Nursing, Philadelphia. Dennis P. Culhane is with the Cartographic Modeling Laboratory, University of Pennsylvania School of Social Policy and Practice, Philadelphia. Thomas R. Ten Have is with the Department of Biostatistics and Epidemiology, University of Pennsylvania School of Medicine, Philadelphia.
Once again, I am following your lead—and getting blamed for it. Charles C. Branas, Douglas J. Wiebe and Thomas R. Ten Have have backgrounds in epidemiology. Therese S. Richmond has a background in nursing. Dennis P. Culhane has a background in cartography. I focused on epidemiology. I hardly see how focusing on nursing or cartography would have made a difference.
The epidemiologists’, nurse’s and mapmaker’s study apparently lacked someone with expertise in human criminal behavior and criminal-victim interaction.
On top of that, what you miss is the fact that by erring on the side of coding more controls rather than fewer as carrying, they actually weakened the ultimate result of the study, so this is an example of erring on the side of caution, making sure that they didn't miss any controls who might have had access to a gun.
I missed no such thing. I saw that and its implications. Read my words again and think about them. I not only knew that this acted against their premise—I said so:
(And yes, I am aware that they only said explicitly that they counted guns in nearby cars for the controls--the people who didn't get shot. So they weren't cheating. They were either hurting their own side or being evenhandedly flawed. IMO).
What do those words mean if I didn’t understand? It seems only fair to point out that if you “read my post honestly and tried to understand what I was saying” I wouldn’t have had to repeat that.
If they were being cautious and conservative, I would expect them to say so. The fact that they didn’t say so strongly implies that they didn’t realize that if a gun in a car is accessible enough to matter then the car itself is accessible, and that the car being accessible is a really big deal. Not realizing that goes to competence.
Laughable and tiny as you think my point is, it has far-reaching implications. Why should they be trusted to rate issues involving criminal-victim interactions, given their frail grasp of such matters?
Each case’s chance-to-resist status was assigned after being independently rated by 2 individuals (initial κ = 0.64, indicating substantial agreement) who then reconciled differential ratings.
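For anyone unfamiliar with that statistic: the κ quoted there is Cohen's kappa, a standard chance-corrected measure of agreement between two raters. A minimal sketch of how it is computed, with invented counts rather than the study's actual data:

# Cohen's kappa for two raters on a binary coding task.
# The counts below are made up purely to illustrate the formula.

def cohens_kappa(both_yes, both_no, only_a_yes, only_b_yes):
    n = both_yes + both_no + only_a_yes + only_b_yes
    p_observed = (both_yes + both_no) / n
    # Chance agreement, from each rater's marginal "yes"/"no" rates:
    a_yes = (both_yes + only_a_yes) / n
    b_yes = (both_yes + only_b_yes) / n
    p_chance = a_yes * b_yes + (1 - a_yes) * (1 - b_yes)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical counts that happen to land near the study's reported 0.64:
print(round(cohens_kappa(both_yes=62, both_no=562, only_a_yes=28, only_b_yes=26), 2))  # 0.65

A kappa of 0.64 means agreement well above chance, but far from perfect.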
There is also their apparent belief that gun take-aways occur in statistically significant numbers:
Alternatively, an individual may bring a gun to an otherwise gun-free conflict only to have that gun wrested away and turned on them.
The nurse, cartographer and epidemiologists should have consulted a criminologist.
Compare that to the flaw in the Kleck study, which is not some minor nitpick but a serious methodological oversight regarding FP. As is your tendency, you took me out of context and suggested that I was implying Kleck knew less statistics than you. But now that you know better, and given that this DGU issue is a pretty big point of contention, I'll ask you again and maybe get an answer this time. What is it about a sociologist's background that gives him an upper hand over economists or epidemiologists in analyzing this kind of phone survey correctly? And if the answer is "nothing", which it is, then the next question is: given the statistical nature of the field, might it not be wise to reconsider your stubborn insistence on judging research by the field listed on the PhD rather than the content? Because this is a real meaty issue. The coding of guns in nearby cars is a tiny speck, which they probably handled correctly anyway.
It still appears that you are implying Kleck knew less about statistics than I do. I addressed that above. Maybe you are making some other subtle point, but it is unclear.
Kleck did address the issue with his questions and probably in other ways not mentioned in the report. Marvin Wolfgang (who I am sure saw the details that we don’t) was quite impressed:
I am as strong a gun-control advocate as can be found among the criminologists in this country. If I were Mustapha Mond of Brave New World, I would eliminate all guns from the civilian population and maybe even from the police. I hate guns--ugly, nasty instruments designed to kill people….
What troubles me is the article by Gary Kleck and Marc Gertz. The reason I am troubled is that they have provided an almost clearcut case of methodologically sound research in support of something I have theoretically opposed for years, namely, the use of a gun in defense against a criminal perpetrator. Maybe Franklin Zimring and Philip Cook can help me find fault with the Kleck and Gertz research, but for now, I have to admit my admiration for the care and caution expressed in this article and this research….
The Kleck and Gertz study impresses me for the caution the authors exercise and the elaborate nuances they examine methodologically. I do not like their conclusions that having a gun can be useful, but I cannot fault their methodology.
Source:
Title: A tribute to a view I have opposed (response to the article by Gary Kleck and Marc Gertz in the same issue, p. 150; Guns and Violence Symposium)
Author: Marvin E. Wolfgang
Publication: Journal of Criminal Law and Criminology (refereed)
Publisher: Northwestern University School of Law
Date: September 22, 1995
Volume: 86, Issue: 1, Pages: 188-192
This is the mea culpa of “the most influential criminologist in the English-speaking world,” according to the British Journal of Criminology. He is definitely making a confession against his position and interests. And it would be hard to characterize him as a so-called “gun militant.” What was his problem? Did he not understand statistics? Was this “pioneer of quantitative and theoretical criminology” incompetent? Did he miss something that is taught in introductory statistics classes, something as simple as false positives? Or was it the fact that he wasn’t at Harvard, Yale or one of the other approved schools? And why, if he was that incompetent, would a British criminology journal consider him, an American, more prominent than any British criminologist?
Then there was the survey expert Kleck hired to help his team get the technical issues right. Here’s the NY Times on him:
Dr. Sudman was an expert in survey sampling and the design of survey questionnaires. He wrote scores of articles on the subject, and was the author or co-author of nearly 20 books.
Some are classic textbooks for students and lay readers trying to grapple with statistics and survey writing. Among them are "Applied Sampling" (1976), "Asking Questions: A Practical Guide to Questionnaire Design" (1982) and "Polls and Surveys" (1988).
Source:
http://www.nytimes.com/2000/05/08/us/seymour-sudman-71-expert-in-survey-design.html

Yes, I know, he wasn’t at Harvard or Yale; he was a University of Illinois professor. But he wrote classic textbooks on survey design explaining, among other things, statistics in surveys. So this guy is less skilled in FPs than a professor of health policy?! Why, because the professor of health policy teaches at Harvard?
You keep talking about hard data and facts; supposedly I’m afraid of them. So let’s look at some data and facts. I’ll use mostly your data and reasoning.
1) 1% or 2.5M is an estimate that can easily result from false positives (source: your post 262).
2) The highest annual estimate of criminal gun use as of the time of Kleck’s study was 847,652 (source: NCVS, as cited by Kleck).
3) If 2,500,000 = 1%, then 847,652 = 0.34%.
That is smaller by almost exactly a factor of 3. So let’s apply your (and Hemenway’s) logic:
What happens is that when the positive response rate is near {or lower than} 1%, a survey like this becomes extremely sensitive to false positives; it is this heightened sensitivity that is the biggest weakness in {the NCVS} methodology, and it is also something {they} did not take into account. And by the way, again, the sensitivity to FPs is a technical thing: for example, if the true positive rate is {0.34}%, this survey would be {293}X as sensitive to the false positive rate as it is to the false negative rate, since there are so many true negatives that a small change in the false positive rate results in a relatively large change in the overall number of positive responses.
In any case, if you disagree with me, and you do think {the NCVS} took this FP sensitivity into account and actually gave some kind of quantifiable evidence that the FP rate was well below {0.34}% (which it would have had to be if {their} estimate is accurate), then show me. And I mean evidence, not empty assertions {or ignoring the point, as Hemenway and others who accept his arguments do}. Extraordinary claims require extraordinary evidence, and an FP rate this low is certainly an extraordinary claim. {Based on the published NCVS results}, you go ahead and find me the evidence that {they were} even aware of this FP sensitivity issue, much less that {they were} able to demonstrate that {they} had it controlled{.}
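Anyone who doubts the substituted numbers can recompute them. A minimal sketch, using only the round figures already in this thread:

# Re-running the FP-sensitivity arithmetic for the NCVS figure.
# Baseline from post 262: 2,500,000 DGUs = 1%, implying ~250M people.

population = 2_500_000 / 0.01              # 250,000,000
ncvs_rate = 847_652 / population           # highest NCVS criminal-gun-use estimate
print(round(ncvs_rate * 100, 2))           # 0.34 (percent)

# The same sensitivity ratio that produced the "99X" figure at a 1% true rate:
print(round((1 - ncvs_rate) / ncvs_rate))  # 294; the 293X above uses the rounded 0.34%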
Here are some questions that I hope you will actually answer.
1. Have you ever questioned the gun crime data on the same logical basis that you question Kleck’s data? If not, why not?
2. Same questions for Hemenway and other similar researchers.
3. Are crimes committed by people with guns numerically comparable, statistically speaking, with alien encounters?
4. Do you actually believe that Kleck, Marvin Wolfgang (“a pioneer of quantitative and theoretical criminology” and “the most influential criminologist in the English-speaking world”), and Dr. Sudman (“expert in survey sampling and the design of survey questionnaires” and author or co-author of nearly 20 books, including “classic textbooks for students and lay readers trying to grapple with statistics”) all missed your very elementary statistics point regarding false positives?
5. Do you actually believe that an elementary statistics point that got by Wolfgang and Sudman was caught by a professor of health policy?
6. Do you believe that your point legitimately, and conveniently, applies only to DGUs and not to crimes committed by people with guns?
Perhaps Hemenway’s facile critique is not as honest as you think.
I appreciate your wanting to be scientifically informed without getting too deep, but, though not all science is statistical, the science of gun violence is, however much you would like to think that you can simply intuit your way to the truth without looking at any hard data or facts on the ground. Because you really ought to watch out for this tendency to think that you, despite not really understanding what is going on, have found neat little logical inconsistencies in the thinking of large numbers of scientists who do understand the science down to the nitty gritty. This almost never happens, except in bumblebee lala-land.
It's funny that you say you "weren't impressed" by Hemenway's take on the false positives two sentences after you concede that you aren't actually able to fully understand the science. It's a debate about a technical scientific point that you don't understand, and you even know that you don't understand it. What do you expect to be impressed about, word choice? Font selection? As I pointed out above, the "circular logic" you think you found was actually a concrete technical flaw regarding FP sensitivity. Yes, superficially it sounds like he's using the fact that DGUs are rare to prove that DGUs are rare. But this is sophomoric nonsense, which quickly becomes apparent if you scratch the surface even slightly.
I’ll mostly skip this for now. I am trying to be civil and I think what I’ve written above actually addresses most of this (snark aside).
I will, however, speak to the bumblebee point that you so love. I heard that from a professor of fluid dynamics at one of the top ten schools in the US in my discipline. There were no creationist websites (or any other kind, as we know them now) in existence. Believe what you wish. In any event, I backed down immediately, as I could not be sure that I was correctly remembering an off-the-cuff remark from many years ago. Apparently you have a hard time accepting admissions of mistakes.
The point I was trying to make is still sound and I stand by it: observation trumps scientific theory. That is true whether the subject is guns and crime or planetary orbits. It is quite possible that, despite my backing down, the professor said exactly what I quoted him as saying. Scientists do say things like that:
Now, for the first time, it [NASA's Kepler mission] has found a planet in orbit around a double star. Laurance Doyle from the SETI Institute in California says these twin stars are 200 light-years away from us in the constellation Cygnus, and each one has a slightly different hue.
"You have an orange star that's 69 percent the mass of the sun, and it is basically dancing with a 20-percent-the-mass-of-the-sun red star," he says. "And they go around each other every 41 days."
…
One of the biggest conundrums is understanding the very existence of planets around twin stars, "because they shouldn't be there," says Dave Charbonneau, a professor of astronomy at Harvard University. He says the standard story is that planets form from a pancake-shaped disk of material that's left over after a singleton star coalesces.
Source:
http://www.vpr.net/npr/140499991/

Saying that the planet shouldn’t be there is functionally identical to saying that bumblebees shouldn’t be able to fly, as far as my argument was concerned. Go ahead, say that the professor of astronomy at Harvard is a creationist hack.
My point stands. Observation trumps theory. It would have stood even if a Harvard professor hadn’t validated it last week. Refute my point if you can, but going on about the bumblebee is silly, especially after I admitted that I might have misspoken.
And then there's the fact that, while you don't want to get too close to the actual science, you also don't trust the scientific consensus. The researchers from Harvard, Yale, Stanford, JHopkins, UCDavis, UPenn, UChicago, Duke, etc., that stuff you mostly toss out.
See above. Did I get close enough to the science, or rather the statistics? (Math isn’t science; it is a tool of science.)
As for tossing out the “scientific consensus,” all I said was this:
… I would throw out ALL research funded by the Joyce Foundation and by the NRA on gun issues. I would throw out ALL research by Philip Morris on tobacco. I would look with a very jaundiced eye at research on a product funded by its producer or industry organization.
If you are admitting that most of the researchers from Harvard, Yale, Stanford, JHopkins, UCDavis, UPenn, UChicago, Duke, etc., are funded by the Joyce Foundation, then yes, I throw out most of the research from those places. What of it? Why should I accept that “science” when you wouldn’t accept a “scientific consensus” paid for by the NRA?
Stop dodging this point and answer, yes or no: would you accept science funded by the NRA as legitimate? How about the tobacco industry?
On the other hand, the stuff coming from the criminology department at FSU and the "Independence Institute" you think is golden. I'm sure you've got some way of justifying it all to yourself, but I must say you do seem to have yourself fenced off pretty well from reality.
Reality equals using one methodological standard for DGUs and another for everything else? Reality equals using studies by people who have no clue about the criminal-victim interaction and who demonstrate it in their studies by speculating about statistically minuscule gun take-aways? Reality equals admitting the obvious logical error I pointed out about the car and then speculating that “actually they probably handled {the coding} correctly,” without one shred of math, science, hard data or anything else besides trust in the situational expertise of people with expertise in epidemiology, nursing and maps?!
I do not say this to offend you—if you have something substantive and enlightening to say I don’t want you to stalk off without saying it—but if that’s reality, I have fenced myself off from it, and I’m proud to have done so. (On the other hand, if I am mistaken, I want to be corrected. Better to be embarrassed today than to be wrong the rest of my life.)
If I point out that you are ignoring the bulk of the mainstream research, you can insist this is an appeal to authority, and then retreat into talk of germs, the Joyce Foundation, bumblebees. If I point out specific technical flaws, you say that I'm putting too much emphasis on the statistics, and you start looking for ways to misleadingly quote either me or Hemenway or whoever to try and find a superficial logical inconsistency. Wouldn't it be easier to just look directly at the actual science, with a fair and open mind, and figure out what's going on?
I believe that I have looked at the science with a fair and open mind. If I have misleadingly quoted you, I apologize in advance in the hope that you will point out the error and I will be able to acknowledge my error in fact and not in faith in your good word.
I still think you overestimate the importance of the statistics. Where the statistical analysis is pointed (and where it isn’t) can be much more important than how mathematically sound it is. I think I have demonstrated that.
So please, rip my arguments to shreds if you can. And I will be the first to thank you if you succeed.