.380 ACP the Red Haired Stepchild?

Gotcha. I don't doubt you're correct on that.

I'm not saying I prefer to believe those particular statistics you're referring to, just that I personally would prefer to base my decisions on accurate and true statistical information rather than projecting the results of a simulation.

The ballistic gelatin tests are the most accepted means at this time. No question about that.


(And, for the record, I don't carry a .380, my smallest caliber CCW is a 9mm.)
 
Gotcha. I don't doubt you're correct on that.

I'm not saying I prefer to believe those particular statistics you're referring to, just that I personally would prefer to base my decisions on accurate and true statistical information rather than projecting the results of a simulation.

It's important to realize that the modern wound ballistics community has never been the one to assign mathematical "statistics" to bullet performance in shootings with an eye toward predicting performance.

Statistics about "one shot stops" are meaningless, because how one person reacts to being shot, can be totally different that another's, despite being hit the same. Nor is each person's physiology the same. My ribs may be harder than yours, your muscle denser than mine, etc. Some people take unsurvivable wounds and kill the person who shot them, while others take superficial hits and immediately cease their hostile actions. Hits to the cranium with .32 ball, stop people as well as 10mm 1400 fps triple wazoo bullets to the head do. That's the problem with "one shot stop" statistics

Reproducible testing of a particular bullet's expansion and penetration in a known tissue simulant, one that was scientifically correlated to actual shootings many years ago, can give us a baseline for what we hope a bullet can accomplish, if placed well. It doesn't accurately predict exact outcomes. All handgun projectiles do is poke holes. There's no magic juju to one bullet over another. The goal is choosing a bullet that will poke a hole deep enough while expanding as much as possible. But that in no way guarantees any specific percentage of outcomes. Anyone who claims otherwise is a fraud.
 
The reason the "real world" "one shot stop" statistics went away, is that it was all fiction.
People have certainly collected actual statistics on outcomes of real world shootings. The problem is that these outcomes tend to show a lot of anomalies when compared to terminal performance test results because there are other factors that affect outcomes far more strongly than terminal performance differences.

It's not so much that they are fiction as that it's not really productive to use them to help select between calibers if the assumption is that terminal performance is what's making the difference in the outcomes.
Replaced by scientific, reproducible testing that gives a reasonable simulation of what a particular projectile will do in human tissue.
Right. It's fairly easy to compare terminal performance differences with controlled testing.
It doesn't accurately predict exact outcomes.
It's not that it doesn't accurately predict exact outcomes, it's that it really provides very little useful information at all in terms of predicting actual gunfight outcomes. If you want to know which bullet expands more or penetrates deeper, this testing is great. If you want information about whether one bullet will result in a faster stop than another in a real world gunfight, the testing is next to useless.
 
What works better than the slip ons is stippling. Easy to do, and you get the right texture right where you want and need it.
 
What I like about the slip on is that it adds a little girth on a smaller grip to accommodate larger hands.
Stippling can too, to some extent anyway. It tends to "fluff" the polymer a bit, depending on how you do it and the texture you want.

Where it really shines, though, is that the more aggressive texture locks the gun in your hand nicely and stops it from squirming around while you shoot, especially if your hands are calloused and/or sweaty.
 
People have certainly collected actual statistics on outcomes of real world shootings. The problem is that these outcomes tend to show a lot of anomalies when compared to terminal performance test results because there are other factors that affect outcomes far more strongly than terminal performance differences.

There certainly have been legitimate attempts by some LE organizations to collect data on shootings, to try to understand terminal effects. But in no case have they ever issued numerical grades for various projectiles' abilities to achieve "one shot stops", because of those anomalies you mention and the lack of accurate, cohesive data collected in those shootings.

The problem is others who have claimed to have made such studies, and have published such percentages. Those are meaningless numbers at best, and discredited as outright fraud in some cases.

It's not so much that they are fiction as that it's not really productive to use them to help select between calibers if the assumption is that terminal performance is what's making the difference in the outcomes.

Right. It's fairly easy to compare terminal performance differences with controlled testing.

It's not that it doesn't accurately predict exact outcomes, it's that it really provides very little useful information at all in terms of predicting actual gunfight outcomes. If you want to know which bullet expands more or penetrates deeper, this testing is great. If you want information about whether one bullet will result in a faster stop than another in a real world gunfight, the testing is next to useless.

After 40+ years of different legitimate researchers studying the issue, we have a pretty good idea of what makes people fall down, when they don't want to, when struck with handgun projectiles. And it's boringly simple. It's putting holes in things that cause the central nervous system to cease to function. It's not "energy dump", it's not "hydrostatic shock", or any of the other gunwriter/ammo company excrement that folks keep lapping up. And we've learned that a couple hundredths of an inch, or a couple hundred feet per second, doesn't make as big of a difference as gun magazines told us they did.

The FBI protocol with gelatin testing is far from perfect, all-encompassing testing for predicting bullet terminal performance. But it's leagues ahead of gunwriter "real world" fantasy/fiction/fraud that so many want to believe is the better answer.


 
The FBI protocol with gelatin testing is far from perfect, all-encompassing testing for predicting bullet terminal performance.
The intent of the gel is to allow one to compare different types of rounds in one medium and compare the results. It doesn't prove that one style of ammo is better than another; you draw your own conclusions from what you see in the test. It just levels the playing field.
 
But in no case have they ever issued numerical grades for various projectiles' abilities to achieve "one shot stops", because of those anomalies you mention and the lack of accurate, cohesive data collected in those shootings.
Attempting to reduce real-world handgun stopping power to a single number is never going to produce useful results. That doesn't mean there's nothing to be learned from the data that's collected. In fact, one very important thing we can learn is that if we try to rank outcomes as a function of terminal effect differences the results don't make sense. Which tells us that in the real world, there are factors that affect gunfight outcomes so strongly that differences due to terminal effect are extremely difficult to detect.
And we've learned that a couple hundredths of an inch, or a couple hundred feet per second, doesn't make as big of a difference as gun magazines told us they did.
Right. And from real-world shooting data we've learned that there are other factors that affect the outcome so strongly--so much more strongly than terminal effect differences--that things can actually work out just the reverse of what the gelatin testing results suggest should happen. Which means that in terms of predicting real-world outcomes, the gelatin testing results have surprisingly little utility.

What you want from gelatin testing is assurance that the bullet will expand and still penetrate deeply enough to reach the vitals. The temptation is to look at that data and say that this bullet penetrates X more deeply and expands Y larger, and therefore it is going to produce better real-world results in terms of stopping someone who is trying to kill you. Those differences do have an effect on the outcome of real-world shootings, but the effect is so small compared to other factors that it is very difficult to detect. Urey Patrick, the author of Handgun Wounding Factors and Effectiveness, characterized those kinds of differences as being detectable only when a "very large number of shootings" were analyzed. Other factors that have nothing to do with terminal effect differences dominate.
 
Regarding the OSS data:

Certainly there are variables in the real world that cannot be accounted for. There will always be anomalies that will affect the data sets and cloud conclusions to some degree. But with a large enough sample size, this issue can be minimized. A sample size of 1,000 instances is better than only 100 instances, which is better than 10 instances.
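To put rough numbers on why sample size matters, here is a minimal sketch (my own illustration, not from any study) of how wide the statistical uncertainty around an observed "stop" percentage is at different sample sizes, using a standard normal-approximation confidence interval; the 88% rate and the sample sizes are hypothetical placeholders.

```python
# A minimal sketch (my illustration, not from any study): the 95% confidence
# interval around an observed stop rate narrows roughly with the square root
# of the sample size. The 88% rate and the sample sizes are hypothetical.
import math

def approx_95ci(rate, n, z=1.96):
    # Normal approximation to the binomial confidence interval; crude for small n.
    margin = z * math.sqrt(rate * (1 - rate) / n)
    return max(0.0, rate - margin), min(1.0, rate + margin)

for n in (10, 100, 1000):
    lo, hi = approx_95ci(0.88, n)
    print(f"n={n:4d}: an observed 88% stop rate is consistent with roughly {lo:.0%}-{hi:.0%}")
```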

Assigning a numerical value to the data sets is the only useful way to compare them. Otherwise the data sets are useless. But claiming one load has an "88% chance to cause a one shot stop" based on a data set that only includes instances where the target/subject/suspect was hit one time would be incorrect. That cartridge very likely has a much lower percentage chance of an OSS, because all the cases with multiple hits (not shots, but hits) have been discounted. But it does not mean the data sets or the percentages are useless.

On the contrary. With a large enough sampling, and data sets equally based on specific criteria (which we could discuss), the differences between the individual data sets can tell us something. In the case of the M&S OSS data, it can give us a numerical comparison between two loadings, to tell us which may be more likely to result in a fast incapacitation/stop.

It doesn't tell us how that incapacitation happened. It doesn't tell us what percentage chance a single hit from your gun or mine will have at rapid incapacitation. But it can tell us that Load A is less likely to cause a rapid stop than Load B. Which means a person can choose between a 9mm cartridge at the top of the chart, and one at the bottom of it, because there actually is a chart based on real world instances.

If that information is of no interest to you, so be it. But autopsy results and gelatin testing don't tell us what happened in the 5-10 second period immediately after the first hit occurred. And that 5-10 seconds is far more important than the 20 minutes following it.
 
Certainly there are variables in the real world that cannot be accounted for. There will always be anomalies that will affect the data sets and cloud conclusions to some degree. But with a large enough sample size, this issue can be minimized. A sample size of 1,000 instances is better than only 100 instances, which is better than 10 instances.
Let's say I add 5 numbers together and provide only the result. Each of the 5 numbers are random and range from -10 to 10. I do that 100,000 times, each time with different random numbers, but in the same range. Then I go back and add in a 6th number to some of the totals. This 6th number ranges from -2 to 2.

Your task is to go through those 100,000 sums and tell me which ones had the 6th number added in. You might get lucky very, very rarely and end up with a sum that is greater than 50 or less than -50, when all of the first 5 numbers were right at their maximum/minimum and the 6th number also had the same sign and near maximum magnitude and so adding in the 6th caused the sum to go over 50 or under -50. You can play around with statistics and make something more than a wild guess at which of the sums might contain the extra number summed in, but nothing you do is going to provide any reasonable level of accuracy.

It's a roughly similar problem to what we're talking about. There are factors that affect the outcome much more strongly than terminal effect. So no matter how many shootings you look at, these other factors are still going to dominate. Once in awhile you might get lucky and be able to tell that terminal effect difference due to caliber was a deciding factor, but the vast majority of the time it's going to be impossible. Just as it is impossible to "unsum" a number to determine which numbers were added together to make the sum.
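For what it's worth, the analogy is easy to simulate. A minimal sketch, assuming the ranges stated above (five uniform numbers in -10..10 plus one in -2..2), counting how often the total alone can prove the small sixth contribution is present:

```python
# A quick Monte Carlo of the analogy above (my own sketch, not from the post).
# The sixth number is only provably present when the total exceeds what the
# first five numbers could reach on their own, i.e. |sum| > 50.
import random

random.seed(1)
TRIALS = 100_000
provable = 0
for _ in range(TRIALS):
    big_factors = sum(random.uniform(-10, 10) for _ in range(5))
    small_factor = random.uniform(-2, 2)
    if abs(big_factors + small_factor) > 50:
        provable += 1

print(f"{provable} of {TRIALS} totals unambiguously reveal the sixth number")
# Typically prints 0 -- the small contributor is swamped by the large ones.
```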
Assigning a numerical value to the data sets is the only useful way to compare them.
It's a very useful way to compare data sets, but the fact that it's useful doesn't imply that it's always possible to assign a single, accurate, representative numerical value to a data set. This is one of the situations where it does not appear to be possible.
With a large enough sampling, and data sets equally based on specific criteria (which we could discuss), the differences between the individual data sets can tell us something. In the case of the M&S OSS data, it can give us a numerical comparison between two loadings, to tell us which may be more likely to result in a fast incapacitation/stop.
It does provide a numerical comparison. It's not clear that it provides any useful information about stopping power differences in real-world shootings. Nor is it clear that their data collection methodology was statistically proper.
 
Let's say I add 5 numbers together and provide only the result. Each of the 5 numbers are random and range from -10 to 10. I do that 100,000 times, each time with different random numbers, but in the same range. Then I go back and add in a 6th number to some of the totals. This 6th number ranges from -2 to 2.

Your task is to go through those 100,000 sums and tell me which ones had the 6th number added in. You might get lucky very, very rarely and end up with a sum that is greater than 50 or less than -50, when all of the first 5 numbers were right at their maximum/minimum and the 6th number also had the same sign and near maximum magnitude and so adding in the 6th caused the sum to go over 50 or under -50. You can play around with statistics and make something more than a wild guess at which of the sums might contain the extra number summed in, but nothing you do is going to provide any reasonable level of accuracy.

It's a roughly similar problem to what we're talking about. There are factors that affect the outcome much more strongly than terminal effect. So no matter how many shootings you look at, these other factors are still going to dominate. Once in awhile you might get lucky and be able to tell that terminal effect difference due to caliber was a deciding factor, but the vast majority of the time it's going to be impossible. Just as it is impossible to "unsum" a number to determine which numbers were added together to make the sum.

But they weren't trying to "unsum" numbers. The criteria are included with the data and are clearly explained. Here is a summary from my copy of Stopping Power:

They included only instances where the subject was hit somewhere in the torso just one time, and not in any other body part. The results were sorted into two groups. Either the attacker was "stopped" or they weren't. A "stop" was defined as follows: if the subject was shooting, he stopped shooting, period. If the subject ran after being hit, he ran no more than 10 feet. If the attacker stopped shooting or ran less than 10 feet, the load was considered to have stopped the attack, which is the purpose of police defensive ammo.
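Just to make the selection and scoring criteria concrete, here's a rough sketch of them as code (my own paraphrase of the summary above, not the authors' actual method; the field names are hypothetical):

```python
# A paraphrase of the criteria described above, not M&S's actual procedure.
def qualifies_for_dataset(torso_hits, other_hits):
    # Only incidents with exactly one torso hit and no hits anywhere else were counted.
    return torso_hits == 1 and other_hits == 0

def one_shot_stop(was_shooting, stopped_shooting, feet_ran_after_hit):
    # A "stop": if the subject was shooting, he stopped shooting, and in any
    # case he ran no more than 10 feet after being hit.
    if was_shooting and not stopped_shooting:
        return False
    return feet_ran_after_hit <= 10
```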

So I guess you could say sometimes a person ran 11 feet but it was written up as 9 feet in the report. And I suppose in some cases a subject may have stopped shooting because they were out of ammo or their gun jammed, rather than because they were "stopped" by the shot. But as those things would seem to be equally likely to happen regardless of the cartridge or the particular load used to shoot the subject, it does not seem terribly relevant.

One could also argue that the torso is a large target area, and where a bullet hits within it can make a big difference to how likely it is that a person would be "stopped" (as per Marshall's definition). But again, those differences are likely to be seen across all data sets fairly equally.

One could also argue that some or even many of the subjects gave up voluntarily after being shot, in what we call a psychological stop. Now perhaps that has something to do with increased muzzle blast, or because a larger energy transfer is more likely to be immediately noticed, which would mean they would know they were shot much quicker. However, if the goal is to stop the attack, psychological stops still count as stops, because that's the goal. So I'm still not seeing a big problem there.

So yes there are variables, but I don't think it's nearly as complicated as you're trying to make it out to be.
 
They included only instances where the subject was hit somewhere in the torso just one time, and not in any other body part. The results were sorted into two groups. Either the attacker was "stopped" or they weren't. A "stop" was defined as follows: if the subject was shooting, he stopped shooting, period. If the subject ran after being hit, he ran no more than 10 feet. If the attacker stopped shooting or ran less than 10 feet, the load was considered to have stopped the attack, which is the purpose of police defensive ammo.
I've read the book. Used to have a copy. Their assumption was that differences in terminal effect would be detectable once they collected enough data and looked at it carefully enough. That assumption was not valid. There are too many contributors (numbers summed in) that are more significant effects (larger numbers) and that aren't affected or are affected very little by the factor they are interested in to allow them to see (unsum) the effect due to the very small contribution of terminal effect differences due to caliber (small number(s) summed in).
And I suppose in some cases a subject may have stopped shooting because they were out of ammo or their gun jammed, rather than because they were "stopped" by the shot.
Yes. Large contributor that has nothing to do with the caliber of the defender.
One could also argue that the torso is a large target area, and where a bullet hits within it can make a big difference to how likely it is that a person would be "stopped"...
Yes. Large contributor that has very little to do with the caliber used by the defender.
One could also argue that some or even many of the subjects gave up voluntarily after being shot, in what we call a psychological stop.
Yes. Huge contributor--much larger than any of the others.
However, if the goal is to stop the attack, psychological stops still count as stops, because that's the goal.
But there's no reason to assume that the frequency of psychological stops has anything to do with the caliber of the defender, which means that the largest contributor to the "sum" isn't based on terminal effect differences due to caliber.

So now you have three large contributors that are essentially noise. They contribute nothing of value and, in fact, actually work to obscure any differences in outcome due to terminal effect from caliber selection. This kind of thing is why it's nearly impossible to see the effects due to terminal effect from caliber differences in the results of shooting outcomes. This is why when you look at Ellifritz's data the results indicate that some of the pocket pistol calibers significantly outperform the .45ACP. It's not that those calibers offer better terminal effect than the .45ACP, it's because there are contributors which affect the outcomes much more strongly than terminal effect due to caliber and they are dominating the results.
 
I've read the book. Used to have a copy. Their assumption was that differences in terminal effect would be detectable once they collected enough data and looked at it carefully enough. That assumption was not valid. There are too many contributors (numbers summed in) that are more significant effects (larger numbers) into their results to be able to see (unsum) the effect due to the very small contribution of terminal effect differences due to caliber (small number summed in).

Proved by whom, and how? This is the root issue I see with all the arguments that turn back to Fackler's assertions about handgun bullet efficacy and mechanisms and gelatin testing; no one seems to be actually disproving any of the data produced that provides findings to the contrary.

If all the data sets collected from various places lined up with Fackler's hypothesis that only penetration and expansion matter, I'm sure the data would be looked upon as excellent examples of how the real world results match the "scientific research" (of shooting gelatin). But that's not what all the other data sets show us, and that's where the problem lies, isn't it?
 
This is the root issue I see with all the arguments that turn back to Fackler's assertions about handgun bullet efficacy and mechanisms and gelatin testing...
Fackler's analysis is similarly flawed and for the same reason. Because terminal effect differences due to caliber selection have very little effect on the outcomes of real world shooting, it makes no sense to dissect what are, in effect, very small contributors to the desired outcome. The gel testing lets you see easily measured differences in terminal effect performance, but then when you try to make sense of how those differences relate to the outcome of a real-world defensive encounter, things break down in a huge way.
Proved by whom, and how?
It's proved by the data. Look at Ellifritz's data set and explain how .380ACP outperformed .45ACP in some categories and outperformed the 9mm in virtually every category. Are we really supposed to believe that .380ACP is superior to 9mm in terminal effect? Of course not. What the data is telling us is that there are other contributors that aren't affected by terminal effect due to caliber that are dominating the results.

For decades, large amounts of effort and money have been expended trying to tie terminal effect differences due to caliber to the actual outcomes of real-world shootings, but without being able to do so. Here we are 4 decades after the Miami shootout, the FBI has gone back to 9mm, and there's still no definitive, scientifically acceptable result out there that anyone can hang their hat on. Is it because there have been no shootings to analyze? Obviously not.

At some point it's time to realize that the basic premise (terminal effect differences due to caliber selection are a significant contributor to the outcome of defensive shootings) is flawed and to move on.
 
Fackler's analysis is similarly flawed and for the same reason. Because terminal effect differences due to caliber selection have very little effect on the outcomes of real world shooting, it makes no sense to dissect what are, in effect, very small contributors to the desired outcome. The gel testing lets you see easily measured differences in terminal effect performance, but then when you try to make sense of how those differences relate to the outcome of a real-world defensive encounter, things break down in a huge way.

It's proved by the data. Look at Ellifritz's data set and explain how .380ACP outperformed .45ACP in some categories and outperformed the 9mm in virtually every category. Are we really supposed to believe that .380ACP is superior to 9mm in terminal effect? Of course not. What the data is telling us is that there are other contributors that aren't affected by terminal effect due to caliber that are dominating the results.

Well, at least we're on the same page about Fackler's research.

Regarding the Ellifritz data versus the M&S, there are some massive differences between the two. The most important one being that Ellifritz included shots to the head. Obviously a shot that hits the cranial vault will have a very different effect than one to the torso. That's the first failure of his data. The second is that he failed to separate data sets by loading. We all know a 9mm 115gr FMJ will perform differently than a 9mm 115gr JHP that expands as designed. He makes no attempt to discriminate between projectile designs, weights, or velocities. That too is a huge failure. The Ellifritz and the M&S data sets aren't on the same level, and that is immediately obvious to anyone who understands this paragraph.
 
The Ellifritz and the M&S data sets aren't on the same level...
Sure, I agree with that, but probably not for the same reasons you do.
The most important one being that Ellifritz included shots to the head.
Shots to the head happen in real-world shootings. So we should exclude them because a shot to the head tends to be very effective regardless of caliber? It's extremely poor form to exclude real-world data simply because it doesn't agree with one's premise.

It's going to be very difficult to prove that M&S data is better than Ellifritz's because they threw away more relevant data than he did on the front end of their process. That's not how you do science or statistics.

Remember this:
WrongHanded said:
However, if the goal is to stop the attack, psychological stops still count as stops, because that's the goal.
If headshots still count as stops, and that's the goal, how do you justify throwing them out of the data set? The simple answer is that it can't be justified.
That's the first failure of his data.
As far as I can see, the only failure of his data is that it doesn't provide the results you think it should. Similarly, the only success of M&S's is that it does.

Go back to the very beginning and start looking at this problem without a starting premise. It gets very clear if one does that. There's no longer any difficulty resolving the apparent contradiction between gelatin testing and real-world shootings data. Ellifritz's data all makes sense. M&S data makes sense as do the complaints of their critics.

The only thing confusing about this topic is if one starts with a particular premise and refuses to accept any data that suggests that premise might not be valid.
 
By the way, I'm not claiming that ALL handgun calibers are equal or that there shouldn't be ANY selection criteria related to caliber.

The FBI's minimum penetration figure is something that should be kept in mind. Not as a hard and fast threshold, because calibers that can't reliably meet it with expanding rounds still can put up impressive performance numbers in the real world, but rather to help one understand the limitations of the system they are relying on for self-defense.

What I'm saying is that when comparing apples to apples*, it's very unlikely that success or failure in a defensive encounter is going to be related to terminal effect differences due to caliber choice. Other factors are going to be much more likely to have an effect and are going to have much more significant effects.

*Apples to apples means two very roughly similar loadings within a given performance class**. Say, comparing an expanding round from one service pistol caliber to another expanding round from a different service pistol caliber. Or a non-expanding round from one pocket pistol caliber to a non-expanding round from another pocket pistol caliber.

**With the understanding that there aren't really hard boundaries between handgun performance classes. :D For example, .22LR is clearly in a different performance class than .44Mag, but, regardless of convention, it might be reasonable to put .38SPL and .380ACP in the same performance class for the purposes of some comparisons even though one is typically thought of as a service pistol caliber and one is typically thought of as a pocket pistol caliber.
 
Shots to the head happen in real-world shootings. So we should exclude them because a shot to the head tends to be very effective regardless of caliber? It's extremely poor form to exclude real-world data simply because it doesn't agree with one's premise.

The pragmatic answer to this question would be to separate these instances out and have separate data sets for single hits to the head, not include them in the data sets for hits to the torso, for reasons which should be obvious.

And you are absolutely right that data should not be excluded because it "doesn't agree with one's premise". But it's perfectly acceptable to exclude data that does not fit the testing criteria, provided there is an unbiased and objective reason for those criteria. This is what M&S did by excluding shootings with more than one hit, or with a hit anywhere but the torso.

It's going to be very difficult to prove that M&S data is better than Ellifritz's because they threw away more relevant data than he did on the front end of their process. That's not how you do science or statistics.

Did they? By not including shots to the head? Well...

Remember this: If headshots still count as stops, and that's the goal, how do you justify throwing them out of the data set?

Headshots can and should be discounted because a shot to the head seems to be far more effective than a shot to the body at causing immediate incapacitation. And the particulars of the projectile seem to matter far less due to the nature of brain tissue and how even minor damage to it can cause incapacitation or death. A .22lr will immediately kill a horse or cow if the bullet is placed into the brain, so clearly caliber is not a major factor for headshots, provided they penetrate into the brain. Headshots are not the point of such data collection, and are better excluded than included, since including them only causes obvious inconsistencies such as suggesting a .32acp is better than a service caliber. This seems obvious to me.

As far as I can see, the only failure of his data is that it doesn't provide the results you think it should. Similarly, the only success of M&S's is that it does.

Then you see wrong. The criteria and separation methods of the data sets in Ellifritz's study are obviously flawed in the ways I have laid out. And the M&S data doesn't tell me what I think it should. It tells me what it tells me. Such as, up to 2001, the best 10mm load was less effective than the best 9mm load. That's not what I had expected, nor what I'd prefer be true. And yet that's what it says.

Go back to the very beginning and start looking at this problem without a starting premise. It gets very clear if one does that. There's no longer any difficulty resolving the apparent contradiction between gelatin testing and real-world shootings data. Ellifritz's data all makes sense. M&S data makes sense as do the complaints of their critics.

The only thing confusing about this topic is if one starts with a particular premise and refuses to accept any data that suggests that premise might not be valid.

It may surprise you to know that all of this professional debate originally happened before I had ever fired a gun (though I started later than most). I didn't see it unfold from the beginning. I actually came upon it after even the Courtneys had finished their papers. And I came upon it with precious little in the way of preconceptions about what might be more effective and why.

The only preexisting premise I have had is that I was raised by a parent with a PhD who was actively engaged in experimentation as a profession. So I was taught the scientific method early on, and it was reinforced constantly. When I see assertions made about the limits of a handgun bullet's potential effect on the human body, I question the validity of such definite assumptions unbacked by any scientific testing. When evidence is presented suggesting a different concept, and is then dismissed but not disproved, I ask: what is the scientific basis for such dismissal? There is none forthcoming.

What is clear, is how much remains unclear.
 
Such as, up to 2001, the best 10mm load was less effective than the best 9mm load. That's not what I had expected, nor what I'd prefer be true. And yet that's what it says.
This is exactly what I meant when I said it's proved by the data and that you discount data that doesn't agree with your starting premise.

Ok. You agree we can't exclude this result because that's not how science works. That means we are left with the following.

Either what we all know to be true about 10mm vs. 9mm terminal performance is false, the OSS results are not providing useful information, your starting premise that terminal effects due to caliber differences are a significant contributor to real world "stopping power" is invalid, or some combination of those things must be true.

If the scientific method is really your guide, now that you've tested your starting premise against data and found the results are inconsistent with your starting premise you only have the following options. The scientific method says that at this point you must either discard the OSS results, or modify your starting premise or do both and begin again.

But that's not what you are going to do. You are going to keep trying to make the data fit your idea of what you think it should tell you.

The 10mm vs 9mm data in the OSS results is an anomaly--we don't need to re-evaluate our starting premise in spite of the obvious contradiction. The Ellifritz data doesn't tell us what it should. Throw it out. Headshots don't fit the starting premise--throw them out. In fact, throw out any CNS hits--same thing, caliber doesn't matter with a CNS hit. And so on and so on.
When evidence is presented suggesting a different concept, and is then dismissed but not disproved I ask, what is the scientific basis for such dismissal?
Try starting from the beginning instead of trying to uphold or disprove some existing theory of stopping power.

What evidence is there that terminal performance effects due to caliber difference (within a given performance class) have a significant effect on the outcome of real-world shootings?

If the effect is significant, it should be easy to prove that it exists. If you find that it is not the case, then that is pretty solid proof that, at best, it is insignificant.
 
This is exactly what I meant when I said it's proved by the data and that you discount data that doesn't agree with your starting premise.

Ok. You agree we can't exclude this result because that's not how science works. That means we are left with the following.

Either what we all know to be true about 10mm vs. 9mm terminal performance is false, the OSS results are not providing useful information, your starting premise that terminal effects due to caliber differences are a significant contributor to real world "stopping power" is invalid, or some combination of those things must be true.

If the scientific method is really your guide, now that you've tested your starting premise against data and found the results are inconsistent with your starting premise you only have the following options. The scientific method says that at this point you must either discard the OSS results, or modify your starting premise or do both and begin again.

But that's not what you are going to do. You are going to keep trying to make the data fit your idea of what you think it should tell you.

Firstly, I don't think you'll find any trace of me ever suggesting that all 10mm loads are better than all 9mm loads, because I've never made that assertion. I have, however, made generalizations based on the capabilities of a cartridge from the perspective of MAP, case volume, and projectile diameter.

So what exactly DO we know about the terminal performance difference between 9mm and 10mm? That depends on the load in question, right? We can certainly see (in the M&S data) a difference between those 10mm loads and the top-of-the-chart 9mm loads. Are the 10mm loads full pressure? Unknown. The 9mm load in question is a +P+ with a 115gr JHP. Could those things be relevant? Perhaps. Many .40S&W cartridges also come in ahead of the 10mm cartridges in the data sets, even at similar grain weights.

But you're not giving me enough credit. I got this book a week ago, and based on what I am seeing, I do have to change my way of thinking. I can't simply discount this data because it doesn't fit what I expected to see. There's much more that I may be able to learn from it.
 