
My Golf Spy Ball Test - General Discussion


rkelso184


As I approach retirement age, I am contemplating the adage "youth is wasted on the young". It's like right when the prospect of playing with the old guys 4-5 days a week is in sight, my back and legs start to kill me every time I play two days in a row. D'Oh!

 

But yeah, I'm lucky to get to play 3x/week most weeks year round. Sometimes a bit more.


Yes. There is a golf course I can see from my office. And I know one day I’ll be able to play more... and I pray I’m not too tired to do it.

Ping G430 10k - 9* - Ventus TR Black 6x

Callaway Apex UW - 19* - Ventus Black 7x

PXG 0311P Gen6 - 5i-GW - DG x100

Vokey SM9 - 52.12F, 56.14F - DG x100ss

Vokey SM9 - 60.08M - KBS Hi-Rev 2.0

Callaway PM Grind 64 - KBS C-Taper 130x

L.A.B Link.1
Callaway Chrome Soft X LS
Vessel Player III - Citrine/White/Black (Riding)
Vessel VLS DXR - Grey/Orange (Walking/half-bag)

> @caloge said:

> The problem with this test, and most of what Metal Gear Solid does, is the results are just there. They like to play scientist, but don't know what it means to produce a study and the results of that study. It is sort of strange that they would put all this time into testing and analysis but end up doing half a job. The idea for the test is great. The results are interesting. However, the study is half finished. At this point, they should have sat down and said why are we getting these results that are entirely inconsistent with what we would expect? Why this large gap distance that does not match ballspeed/spin gaps? Are there additional variables that need to be eliminated? Until they can reconcile that with an explanation, the results of the study at that macro level are worthless. Not to say that all data itself is not worthwhile. It is a handy resource for spin and ballspeed #s. It is also interesting to hear about tangible results, such as the Cut balls durability issues.

>

> Lastly, I will also say that i question the "QC" results. The suggestion that some balls are flying 20 yards off line and this is because of a misaligned core or a ball that isnt perfectly round seems strange. Were these balls cut to examined to determine what was off about them? If only a handful of balls fall into the excellent or very good category of QC, how can any of the numbers produced by the other balls be reliable (spin, ballspeed, anything). If this is the case, Metal Gear Solid should have basically said we tested 33 balls, but 20+ were of such unreliable quality that the results could not be accepted as demonstrative of their expected performance. But that wasnt the case. The actual numbers must have been repeatable or else the data is useless... how is that reconciled?

 

You nailed every issue I had with this test. And lmao at metal gear solid.


> @caloge said:

> The problem with this test, and most of what Metal Gear Solid does, is the results are just there. They like to play scientist, but don't know what it means to produce a study and the results of that study. It is sort of strange that they would put all this time into testing and analysis but end up doing half a job. The idea for the test is great. The results are interesting. However, the study is half finished. At this point, they should have sat down and said why are we getting these results that are entirely inconsistent with what we would expect? Why this large gap distance that does not match ballspeed/spin gaps? Are there additional variables that need to be eliminated? Until they can reconcile that with an explanation, the results of the study at that macro level are worthless. Not to say that all data itself is not worthwhile. It is a handy resource for spin and ballspeed #s. It is also interesting to hear about tangible results, such as the Cut balls durability issues.

>

> Lastly, I will also say that i question the "QC" results. The suggestion that some balls are flying 20 yards off line and this is because of a misaligned core or a ball that isnt perfectly round seems strange. Were these balls cut to examined to determine what was off about them? If only a handful of balls fall into the excellent or very good category of QC, how can any of the numbers produced by the other balls be reliable (spin, ballspeed, anything). If this is the case, Metal Gear Solid should have basically said we tested 33 balls, but 20+ were of such unreliable quality that the results could not be accepted as demonstrative of their expected performance. But that wasnt the case. The actual numbers must have been repeatable or else the data is useless... how is that reconciled?

 

It's almost like they are doing this stuff for views and money.


> @caloge said:

> The problem with this test, and most of what Metal Gear Solid does, is the results are just there. They like to play scientist, but don't know what it means to produce a study and the results of that study. It is sort of strange that they would put all this time into testing and analysis but end up doing half a job. The idea for the test is great. The results are interesting. However, the study is half finished. At this point, they should have sat down and said why are we getting these results that are entirely inconsistent with what we would expect? Why this large gap distance that does not match ballspeed/spin gaps? Are there additional variables that need to be eliminated? Until they can reconcile that with an explanation, the results of the study at that macro level are worthless. Not to say that all data itself is not worthwhile. It is a handy resource for spin and ballspeed #s. It is also interesting to hear about tangible results, such as the Cut balls durability issues.

>

> Lastly, I will also say that i question the "QC" results. The suggestion that some balls are flying 20 yards off line and this is because of a misaligned core or a ball that isnt perfectly round seems strange. Were these balls cut to examined to determine what was off about them? If only a handful of balls fall into the excellent or very good category of QC, how can any of the numbers produced by the other balls be reliable (spin, ballspeed, anything). If this is the case, Metal Gear Solid should have basically said we tested 33 balls, but 20+ were of such unreliable quality that the results could not be accepted as demonstrative of their expected performance. But that wasnt the case. The actual numbers must have been repeatable or else the data is useless... how is that reconciled?

 

agree with this. in their mind it's reconciled by being snarky dicks to everyone who comments on their instagram who doesn't take what they do as gospel. they're turds.


> @mattdubya said:

> > @caloge said:

> > The problem with this test, and most of what Metal Gear Solid does, is the results are just there. They like to play scientist, but don't know what it means to produce a study and the results of that study. It is sort of strange that they would put all this time into testing and analysis but end up doing half a job. The idea for the test is great. The results are interesting. However, the study is half finished. At this point, they should have sat down and said why are we getting these results that are entirely inconsistent with what we would expect? Why this large gap distance that does not match ballspeed/spin gaps? Are there additional variables that need to be eliminated? Until they can reconcile that with an explanation, the results of the study at that macro level are worthless. Not to say that all data itself is not worthwhile. It is a handy resource for spin and ballspeed #s. It is also interesting to hear about tangible results, such as the Cut balls durability issues.

> >

> > Lastly, I will also say that i question the "QC" results. The suggestion that some balls are flying 20 yards off line and this is because of a misaligned core or a ball that isnt perfectly round seems strange. Were these balls cut to examined to determine what was off about them? If only a handful of balls fall into the excellent or very good category of QC, how can any of the numbers produced by the other balls be reliable (spin, ballspeed, anything). If this is the case, Metal Gear Solid should have basically said we tested 33 balls, but 20+ were of such unreliable quality that the results could not be accepted as demonstrative of their expected performance. But that wasnt the case. The actual numbers must have been repeatable or else the data is useless... how is that reconciled?

>

> agree with this. in their mind it's reconciled by being snarky dicks to everyone who comments on their instagram who doesn't take what they do as gospel. they're turds.

 

I agree! I was really surprised by their tone.


PING G425 9* Tour 65 X - PING G400 Stretch 13* HZRDUS Yellow - PING G425 17* Hybrid Tour 85 X
PING I210 4-U Project X 6.5 - PING Glide 2.0 Stealth 55 & 60* Project X 6.0
PING VALOR CB 35"


> @jamie said:

> > @mattdubya said:

> > > @caloge said:

> > > The problem with this test, and most of what Metal Gear Solid does, is the results are just there. They like to play scientist, but don't know what it means to produce a study and the results of that study. It is sort of strange that they would put all this time into testing and analysis but end up doing half a job. The idea for the test is great. The results are interesting. However, the study is half finished. At this point, they should have sat down and said why are we getting these results that are entirely inconsistent with what we would expect? Why this large gap distance that does not match ballspeed/spin gaps? Are there additional variables that need to be eliminated? Until they can reconcile that with an explanation, the results of the study at that macro level are worthless. Not to say that all data itself is not worthwhile. It is a handy resource for spin and ballspeed #s. It is also interesting to hear about tangible results, such as the Cut balls durability issues.

> > >

> > > Lastly, I will also say that i question the "QC" results. The suggestion that some balls are flying 20 yards off line and this is because of a misaligned core or a ball that isnt perfectly round seems strange. Were these balls cut to examined to determine what was off about them? If only a handful of balls fall into the excellent or very good category of QC, how can any of the numbers produced by the other balls be reliable (spin, ballspeed, anything). If this is the case, Metal Gear Solid should have basically said we tested 33 balls, but 20+ were of such unreliable quality that the results could not be accepted as demonstrative of their expected performance. But that wasnt the case. The actual numbers must have been repeatable or else the data is useless... how is that reconciled?

> >

> > agree with this. in their mind it's reconciled by being snarky dicks to everyone who comments on their instagram who doesn't take what they do as gospel. they're turds.

>

> I agree! I was really surprised by their tone.

 

I'd really like to see an in-depth study, along with methodology, by someone reputable. It's not that a certain website isn't reputable; rather, I feel like what they did was more of a conversation starter than anything. The problem is that golf is a mental game, and now it's in a lot of people's heads. I have three or four dozen Snell MTBs sitting on the shelf and a dozen Chrome Softs that I likely won't use now. It's not that they're bad balls, and it might well be that they're actually great and the study is to blame in some way, but now that it's in my head, it's bothering me. As all of us know, when you step on that tee, you have to have 100% confidence in the equipment, because goodness knows this game is hard enough with all of the other variables. Sighs.

 

Edit to add: I've not noticed shorter distance (Callaway) or poor dispersion (Snell), but in a real-world setting, it's hard to tell whether it's you or the ball. I know that everyone on here is a plus 4, but honestly, how many of us are good enough to notice the difference, or would we simply think it was swing-related? Lots of questions without easy answers.

 

Perhaps the Golfwrx guys could do a podcast discussing balls (insert your own joke, here).


Where can I buy these Metal Gear Solid golf balls?! I will pay anything!

Driver (9.0) - Cobra LTDx Aldila Rogue Silver 70 S, 44.5"
Wood (14.5) - Ping G425 MAX Alta CB 65 Slate S

Wood (17.5) - Ping G425 MAX Alta CB 65 Slate S
Driving Iron (20) - Srixon U65 Project X 5.5
Irons (5-6) - Srixon Z565 Project X 5.5
Irons (7-P) - Srixon Z765 Project X 5.5
Wedges - Vokey SM-7 Jet Black / 50.08 F / 54.08 M / 58.08 M DG S300
Putter - Edel E-1
Ball - Titleist Prov1x
ZGrip Midsized Grips


I heard a speaker at a conference one time talking about trial results. His quote was, "In God I trust; everybody else, show me your data."


Driver: Titleist  Tsi2
FW's: Callaway Rogue 4w
Hybrids: X2 Hot 3&4
Irons: Ping G410 Reg Graphite 5-UW
Wedges: Callaway MD3 56
Handicap 9
Putter: Zing2 BeCu among others


> @mattdubya said:

> > @caloge said:

> > The problem with this test, and most of what Metal Gear Solid does, is the results are just there. They like to play scientist, but don't know what it means to produce a study and the results of that study. It is sort of strange that they would put all this time into testing and analysis but end up doing half a job. The idea for the test is great. The results are interesting. However, the study is half finished. At this point, they should have sat down and said why are we getting these results that are entirely inconsistent with what we would expect? Why this large gap distance that does not match ballspeed/spin gaps? Are there additional variables that need to be eliminated? Until they can reconcile that with an explanation, the results of the study at that macro level are worthless. Not to say that all data itself is not worthwhile. It is a handy resource for spin and ballspeed #s. It is also interesting to hear about tangible results, such as the Cut balls durability issues.

> >

> > Lastly, I will also say that i question the "QC" results. The suggestion that some balls are flying 20 yards off line and this is because of a misaligned core or a ball that isnt perfectly round seems strange. Were these balls cut to examined to determine what was off about them? If only a handful of balls fall into the excellent or very good category of QC, how can any of the numbers produced by the other balls be reliable (spin, ballspeed, anything). If this is the case, Metal Gear Solid should have basically said we tested 33 balls, but 20+ were of such unreliable quality that the results could not be accepted as demonstrative of their expected performance. But that wasnt the case. The actual numbers must have been repeatable or else the data is useless... how is that reconciled?

>

> agree with this. in their mind it's reconciled by being snarky dicks to everyone who comments on their instagram who doesn't take what they do as gospel. they're turds.

 

I came here to post this. The world must look mighty small from their high horse. They even go so far as to slander TXG, whose results (ball speed and spin) largely corroborate their own.


> @Doyouevenblade said:

> > @mattdubya said:

> > > @caloge said:

> > > The problem with this test, and most of what Metal Gear Solid does, is the results are just there. They like to play scientist, but don't know what it means to produce a study and the results of that study. It is sort of strange that they would put all this time into testing and analysis but end up doing half a job. The idea for the test is great. The results are interesting. However, the study is half finished. At this point, they should have sat down and said why are we getting these results that are entirely inconsistent with what we would expect? Why this large gap distance that does not match ballspeed/spin gaps? Are there additional variables that need to be eliminated? Until they can reconcile that with an explanation, the results of the study at that macro level are worthless. Not to say that all data itself is not worthwhile. It is a handy resource for spin and ballspeed #s. It is also interesting to hear about tangible results, such as the Cut balls durability issues.

> > >

> > > Lastly, I will also say that i question the "QC" results. The suggestion that some balls are flying 20 yards off line and this is because of a misaligned core or a ball that isnt perfectly round seems strange. Were these balls cut to examined to determine what was off about them? If only a handful of balls fall into the excellent or very good category of QC, how can any of the numbers produced by the other balls be reliable (spin, ballspeed, anything). If this is the case, Metal Gear Solid should have basically said we tested 33 balls, but 20+ were of such unreliable quality that the results could not be accepted as demonstrative of their expected performance. But that wasnt the case. The actual numbers must have been repeatable or else the data is useless... how is that reconciled?

> >

> > agree with this. in their mind it's reconciled by being snarky dicks to everyone who comments on their instagram who doesn't take what they do as gospel. they're turds.

>

> I came here to post this. The world must look mighty small from their high horse. Even to a degree slandering TXG who's results (ball speed and spin) largely collaborate with their own.

 

The TXG guys don't push anything or make ridiculous claims. The skinny Italian-looking one actually knows more about golf clubs than golf club designers do. I'd listen to what TXG says any day of the week; they are polite and rational.


> @4puttJohnny said:

> Really amazing how few people there are who understand anything about interpreting test results. Most simply try to convert it to a personal affront based on emotion if it doesn't say the ball THEY play is the best! Too funny.

 

Well, or folks like me. I have no idea what the hell I’m looking at. At least I’ll admit it.


> @GDTBATH said:

> > @4puttJohnny said:

> > Really amazing how few people there are who understand anything about interpreting test results. Most simply try to convert it to a personal affront based on emotion if it doesn't say the ball THEY play is the best! Too funny.

>

> Well, or folks like me. I have no idea what the **** I’m looking at. At least I’ll admit it.

 

I’m in the same camp. I looked at ball speed, launch, spin, peak height and descent. All the other numbers make me go cross eyed!

Ping G430 Max 10K 10.5° driver - Diamana GT 60S

Ping G430 Max 15° #3 fairway - Diamana TB 70S

Ping G430 Max 21° #7 fairway - Diamana TB 80S

Ping G430 Max 26° #5 hybrid - MMTh 90S

Mizuno Pro 243 4-PW irons - MMT 105S

Mizuno T24 Raw 48°-10S wedge - MMT 105S

Mizuno T24 Raw 54°-10S and 60°-06X wedges - MMT Scoring Wedge 105S

Ping PLD Ally Blue 4

Titleist Pro V1x


> @noodle3872 said:

> > @GDTBATH said:

> > > @4puttJohnny said:

> > > Really amazing how few people there are who understand anything about interpreting test results. Most simply try to convert it to a personal affront based on emotion if it doesn't say the ball THEY play is the best! Too funny.

> >

> > Well, or folks like me. I have no idea what the **** I’m looking at. At least I’ll admit it.

>

> I’m in the same camp. I looked at ball speed, launch, spin, peak height and descent. All the other numbers make me go cross eyed!

Agree. Distance and shot area are important to me. I want a ball that goes where it is supposed to. Too many numbers that no one really understands or wants to accept.

Driver _____ Ping G400 Max
Woods ____ Ping G410 3 & 5, Cleveland XL HALO 7
Hybrids ___ Titleist 818H1 5H
Irons ______ Titleist T300 6-GW
Wedges ___ Titleist Vokey SM9 52.08F & 56.10S
Putter _____ Odyssey Dual Force Rossie 2 or Rife 2-Bar w/ Nickel Putter Golf Ball Pick-Up
Ball _______  Titleist ProV1 Yellow
Distance __ GPS:  Bushnell Phantom 2,  Rangefinder:  Precision Pro NX7 Pro
GHIN ______ HCP floats between 10 and 12


Valid, legitimate, and unbiased results? I have my doubts, and it sounds like a good many here do as well.

 

Re: the Kirkland 3-piece golf ball as it is mentioned in the *** buyer's guide - " _While that's perfectly fine for a Mickey Mouse ball, with the price holding at 2/$30, the next Costco Kirkland Signature Golf Ball is shaping up to be a hell of a lot less exciting and impactful than the original._ " That article appeared on *** in August 2017, authored by the **same** person who authored the current *** golf ball buyer's guide. If you want to read the article, google *** and Kirkland; I would read it to get the full gist of how the author felt about this ball. (Dear Moderators, I'm only listing the link in case it's OK: https://Not allowed because of spam.com/2018-costco-signature-golf-ball/ )

 

AND then, I found this to be the oddest thing about the buyer's guide: how, on one hand, do you diss the Kirkland golf ball and then later list it as a "Top Performers - Value" pick at #2? Not only that, but it's singled out several times in the buyer's guide as "shorter" than most, yet it's still the #2 value ball. So on one hand the author thought this was a Mickey Mouse ball, but on the other hand... you fill in the blank.

 

OP/ED: the choice of Kirkland (I've gamed this ball and it's a good ball) as **#2** is not an accident; it was a well-thought-out and **conceived** marketing plan. When you read it all and put it together, the #1 ball looks like a no-brainer?

 

I don't think so; your heart and eyes can be fooled, but never your brain...

 


> @arbeck said:

> Was the test perfect? No. Was the test really good? Yes. They could have done the test all with GC Quad. If so, they would have received similar results to TXG. However, that wouldn't tell them anything about aerodynamic differences in the ball and/or manufacturing defects that the balls had. Those two things are important and you need full radar testing to do that. Of course then you have the elements affecting the balls. You can minimize this by doing the test on the calmest day possible, randomizing the shot order, and taking enough shots that the environmental effects normalize across the sample of shots.

>

> All that being said, no test is going to be perfect, and unless you are doing this test in a glass bubble, you are going to have to deal with the elements. And even then, isn't how the ball deals with wind a valid part of the test?

>

> That being said, lets talk about the Chrome Soft X specifically. Nothing in it's numbers suggest it losing 18 yards of carry distance to the MTB-X. But there's some interesting things about the ball. Not only is the Chrome Soft X shorter than the MTB-X at high swing speeds. It's shorter than the regular Chrome Soft at both high and low swing speeds. Assuming they didn't just test all the Chrome Soft X balls back to back at both swing speeds, something weird is up with that ball. It could be that it just got really unlucky and had more shots into a puff of wind than other balls. It could be that it's aerodynamics don't handle shots into the wind as other balls as well and those two things combine to make it drop off in distance. It could be manufacturing flaws in the dimple pattern of a few balls that were hurting distance.

>

> If you take a look at all the X balls you'll see that their launch conditions are all almost identical. The Chrome Soft X does have the lowest ball speed but everything is within 3 miles an hour. All the spin numbers are very close save the Mizuno RB Tour X which is a few hundred RPM higher. Still, all these balls should be landing fairly close to each other. I'd expect maybe 5-6 yards of carry distance between the best and worst just based on launch conditions. There are three basic outliers though. The Snell MTB-X, Taylormade TP5X, and Chrome Soft X. The Taylormade and Chrome Soft are both shorter than you would expect given launch conditions, and the Snell was longer. If you look closely, those three balls also have the largest land area. So again, this points to either something environmental effecting these balls more or some flaws in the manufacturing of these three. It could also be a combination of these things. We know that the Snell had at least some defects (as shown by the ball that went way offline). It's not crazy to think that the other two balls had some defects as well. It also could be that the Snell on average got a couple of puffs of wind more behind it and the Taylormade and Chrome Soft had a couple of puffs of wind into.

>

> Ideally you'd want to do another test of just those balls to figure out what caused the outliers. I'd probably hit 100 shots which each ball from at least 4 or 5 boxes of balls. I'd randomize the shot order. Any ball that showed up as an outlier would be marked and the shot would be noted. On subsequent shots with that ball you'd check to see if it behaved the same. That would hopefully normalize the environmental conditions and let you find any flawed balls.

>

> None of that of course means the test was flawed. It could have done better to drill down to find why those outliers existed. But that's not usually how studies like this work. Big studies like this expect outliers and further work should be done to dig into why they are there. What people seem to be missing (and the authors are guilty of publicizing the number too much) is that carry distance with the driver is probably the least important thing in the test. Balls speed, launch angle, and spin are all more important things with the driver. 7i spin is probably a more important number than those. And wedge spin is probably more important than that. If it was just driver distance problems with the Chrome Soft balls you wouldn't dock them that much. But they have worse ball speed with the driver. They all have less 7i spin than their competitors. They have less wedge spin than their competitors. That doesn't mean it is a bad ball for everyone, but it doesn't compare great to the other balls in it's class.

 

This is such a great post. Very well said!

Titleist TSR 1 GD Di 5  Stiff

Titleist TSR 1  15 & 18* Aldila Speed Mesh R

Titleist TSR 1  21* Hybrid Kuro Kage R 
Titleist T350 6-P 43 SteelFiber i80
Vokey
SM 46/54/58  Scotty Cameron Special Select 5.5 Flowback 35"

> @arbeck said:

> Was the test perfect? No. Was the test really good? Yes. They could have done the test all with GC Quad. If so, they would have received similar results to TXG. However, that wouldn't tell them anything about aerodynamic differences in the ball and/or manufacturing defects that the balls had. Those two things are important and you need full radar testing to do that. Of course then you have the elements affecting the balls. You can minimize this by doing the test on the calmest day possible, randomizing the shot order, and taking enough shots that the environmental effects normalize across the sample of shots.

>

> All that being said, no test is going to be perfect, and unless you are doing this test in a glass bubble, you are going to have to deal with the elements. And even then, isn't how the ball deals with wind a valid part of the test?

>

> That being said, lets talk about the Chrome Soft X specifically. Nothing in it's numbers suggest it losing 18 yards of carry distance to the MTB-X. But there's some interesting things about the ball. Not only is the Chrome Soft X shorter than the MTB-X at high swing speeds. It's shorter than the regular Chrome Soft at both high and low swing speeds. Assuming they didn't just test all the Chrome Soft X balls back to back at both swing speeds, something weird is up with that ball. It could be that it just got really unlucky and had more shots into a puff of wind than other balls. It could be that it's aerodynamics don't handle shots into the wind as other balls as well and those two things combine to make it drop off in distance. It could be manufacturing flaws in the dimple pattern of a few balls that were hurting distance.

>

> If you take a look at all the X balls you'll see that their launch conditions are all almost identical. The Chrome Soft X does have the lowest ball speed but everything is within 3 miles an hour. All the spin numbers are very close save the Mizuno RB Tour X which is a few hundred RPM higher. Still, all these balls should be landing fairly close to each other. I'd expect maybe 5-6 yards of carry distance between the best and worst just based on launch conditions. There are three basic outliers though. The Snell MTB-X, Taylormade TP5X, and Chrome Soft X. The Taylormade and Chrome Soft are both shorter than you would expect given launch conditions, and the Snell was longer. If you look closely, those three balls also have the largest land area. So again, this points to either something environmental effecting these balls more or some flaws in the manufacturing of these three. It could also be a combination of these things. We know that the Snell had at least some defects (as shown by the ball that went way offline). It's not crazy to think that the other two balls had some defects as well. It also could be that the Snell on average got a couple of puffs of wind more behind it and the Taylormade and Chrome Soft had a couple of puffs of wind into.

>

> Ideally you'd want to do another test of just those balls to figure out what caused the outliers. I'd probably hit 100 shots which each ball from at least 4 or 5 boxes of balls. I'd randomize the shot order. Any ball that showed up as an outlier would be marked and the shot would be noted. On subsequent shots with that ball you'd check to see if it behaved the same. That would hopefully normalize the environmental conditions and let you find any flawed balls.

>

> None of that of course means the test was flawed. It could have done better to drill down to find why those outliers existed. But that's not usually how studies like this work. Big studies like this expect outliers and further work should be done to dig into why they are there. What people seem to be missing (and the authors are guilty of publicizing the number too much) is that carry distance with the driver is probably the least important thing in the test. Balls speed, launch angle, and spin are all more important things with the driver. 7i spin is probably a more important number than those. And wedge spin is probably more important than that. If it was just driver distance problems with the Chrome Soft balls you wouldn't dock them that much. But they have worse ball speed with the driver. They all have less 7i spin than their competitors. They have less wedge spin than their competitors. That doesn't mean it is a bad ball for everyone, but it doesn't compare great to the other balls in it's class.

 

Great post.


I have asked a few times for them to clarify whether their anomalous shots were taken out of the datasets or left in. The study says they were removed, but logic says they were left in - otherwise there is simply no way to create some of these shot dispersion areas on a robot test. How else do you get a 40-yard by 40-yard shot box? That's insanely big.

 

By way of comparison, I went back to my driver fitting data from a couple of months ago. There were only 4-6 recorded shots per combo after we removed the really bad ones, but my "shot areas" were mostly in the 1,200-1,600 sq yd range.

 

I am guessing the bad shots were left in to add to the story, and because they create those shot areas that are so big. Otherwise, why not release the shot-by-shot data?

 

Oh, and for what it's worth, my questions were not only not answered - they were deleted altogether.

 

BTW if you are "surprised" at the level of snark they bring to the table, you must be new here lol.


RIP TM Stealth2 10.5*  Welcome back OG SIM 10.5* - Ventus Black 6x

BRNR 13.5 / Kaili Red 75s // TM Qi10 5W/ 7W Ventus Blue 6s

Irons TM P7MC 5-7 / P7MB 8-P // PXG Gen6 XP
Vokey SM8 50*/54*/58*

Cody James custom / TM Spider // Left Dash


 

> @caloge said:

> The problem with this test, and most of what Metal Gear Solid does, is the results are just there. They like to play scientist, but don't know what it means to produce a study and the results of that study. It is sort of strange that they would put all this time into testing and analysis but end up doing half a job. The idea for the test is great. The results are interesting. However, the study is half finished. At this point, they should have sat down and said why are we getting these results that are entirely inconsistent with what we would expect? Why this large gap distance that does not match ballspeed/spin gaps? Are there additional variables that need to be eliminated? Until they can reconcile that with an explanation, the results of the study at that macro level are worthless. Not to say that all data itself is not worthwhile. It is a handy resource for spin and ballspeed #s. It is also interesting to hear about tangible results, such as the Cut balls durability issues.

>

> Lastly, I will also say that i question the "QC" results. The suggestion that some balls are flying 20 yards off line and this is because of a misaligned core or a ball that isnt perfectly round seems strange. Were these balls cut to examined to determine what was off about them? If only a handful of balls fall into the excellent or very good category of QC, how can any of the numbers produced by the other balls be reliable (spin, ballspeed, anything). If this is the case, Metal Gear Solid should have basically said we tested 33 balls, but 20+ were of such unreliable quality that the results could not be accepted as demonstrative of their expected performance. But that wasnt the case. The actual numbers must have been repeatable or else the data is useless... how is that reconciled?

 

Conceptually that sounds like a good idea, but I doubt they possess the know-how or equipment to go that far. I wouldn't be surprised if significant ball asymmetry is less than 0.001". You can't measure that by cutting a ball in half with a pipe cutter; you would need pretty specialized equipment and processes.

 

 

 

Hitting some balls with a machine and applying some light statistics is a vastly different scope from dissecting balls as if you're a subject matter expert.


> @4puttJohnny said:

> Really amazing how few people there are who understand anything about interpreting test results. Most simply try to convert it to a personal affront based on emotion if it doesn't say the ball THEY play is the best! Too funny.

 

Well, I guess it's a given that you are one of the enlightened few who understand anything about interpreting test results. Amirite?


I know Crossfield isn't the most popular here, but I went and dug up an old video where he talked to the aerodynamics guy at Titleist. The entire video is worth watching, but if you start at about 7:10 you can see tests where they work with a ball whose dimples on one side are less than a thousandth of an inch different from the dimples on the other side. The Titleist guy claims that they have found balls that exhibit these characteristics in the wild. They could be made by something as simple as an uneven paint spray, but depending on how you tee it up, the ball will fly higher, lower, left, or right. To be fair, it's all really an advertisement for the QC and manufacturing tolerances at Titleist, but given the MGS test and how Titleist performed, there might be something to that.

 

I can't say with certainty that this caused most of, or even any of, the carry distance and ball flight oddities, but I wager it's probably a good shout. Combine that with cores being off by thousandths of an inch or balls not being perfectly round, and you can see how tolerances could really come into play. And for all the accolades the Snell MTB-X got for being the longest, if you go with this hypothesis, it was also the ball with some of the worst tolerances.

 

Even if we knew with 100% certainty that the carry distance and flight discrepancies were caused by QC problems, we can't really say much about ball brands from that. If every brand had (and this number is pulled out of my butt) 1 ball that was off in every 5 dozen, in a test like this you would expect certain brands to show no effects while another might show 1 or 2 balls that were off, just through random distribution. Depending on how prevalent the problem was with a brand, you might need hundreds of boxes before you got an idea of the number of balls that were really off.

 

However, keep this in mind. If you tee up a ball and it goes 20 yards right, and the next time you tee up the ball it goes 20 yards left (which could happen if you aligned bad dimples in opposite ways), how many of you would think it was your swing and not the ball? As long as the ball maker can keep this occurrence rare enough that you think it's you and not the ball, it might not make sense for them to enforce the tolerances that would keep it from happening. In my mind, that's why I'm hesitant to play balls like the Q-Star Tour and Wilson Duo Pro now. Those balls aren't going to be cheaper to make than the Z-Star and FG Tour, so how can they sell them at a lower price? Most likely the tolerances are looser. They're also marketed to less skilled golfers who might not notice the effects of the balls and would assume their swing was the cause. For the online brands, how strict can their tolerances be if they still need to undercut the big guys? Costco probably can be strict, because they probably aren't making any money on the ball and don't care. Snell might be, because he has enough ball experience to enforce tolerances through the line, and my guess is he is selling his ball at pretty close to the price Titleist charges wholesalers. Other than that, though, I don't know. What about bigger brands that might be willing to loosen their tolerances to pad their profit margin just a bit? Again, we don't know, but it's worth thinking about.

 

If I were going to do a follow-up test, I would like the same test done but only using these balls: Tour BX, Z-Star XV, TP5x, Pro V1x, Chrome Soft X, and Snell MTB-X. That basically gives us all the retail 'X' balls that are played on tour, plus throws in the best direct-to-consumer ball. I'd get 100 boxes of each ball. I'd only hit 115 MPH driver swings (my thinking is you will see more of an effect of these ball tolerance issues at high swing speeds). Every ball would be marked so it could be identified. They would all be thrown in one large bucket and hit in a random order. Any ball that exhibited a large variance in carry distance or went off line would be pulled aside. Repeat the test with the balls that showed no differences and again pull the outliers aside. At this point you should have over 2,000 shots with each ball brand. I'd then hit all of the outliers at least 3 more times and collect that as a separate data set. If a ball behaved normally during this retest, remove its outlier from the first data set. If it continued to show weird behavior, leave the data in the first data set and keep the ball in the bad-ball pile. At the end, you should have a number of balls from each brand that are probably out of tolerance, and you'd have removed any crazy environmental outliers from your original data set. You would also see whether the number of balls in the bad-ball pile for any one brand exceeded the margin of error. Depending on what you find, you might be able to explain some or most of the discrepancies in the original test.


> @arbeck said:

> I know Crossfield isn't the most popular here, but I went and dug up this [old video](

"old video") where he talked to the aerodynamics guy at Titleist. The entire video is worth a watching but if you start at about 7:10 you can see tests where they work with a ball that has dimples on one side that are less than a thousandth of an inch different than the dimples on the other side. The Titleist guy claims that they have found balls that exhibit these characteristics in the wild. They could be made by something as simple as an uneven paint spray. But depending on how you tee it up it will fly higher, lower, left, or right. It's all really an advertisement for the QC and manufacturing tolerances at Titleist to be fair, but give the **** test and how Titleist performed, there might be something to that.

>

> I can't say with certainty that this caused most of or even any of the carry distance and ball flight oddities, but I wager it's probably a good shout. Combine that with cores being off by thousands of an inch or balls not being perfectly round, and you could see how tolerances could really come in to play. And for all the accolades the Snell MTB-X got for being the longest, if you go with this hypothesis, it was also the ball with some of the worst tolerances.

>

> Even if we knew with 100% certainty that the carry distance and flight discrepancies were caused by QC problems, we can't really say much about that in terms of ball brands. If every brand had (and this number is pulled out of my butt) 1 ball that was off in every 5 dozen, in a test like this you would expect certain brands to not show any effects but another might show 1 or 2 balls that were off just through a random distribution. Depending on how prevalent the problem was with a brand, you might need to hundreds of boxes before you got an idea of the number of balls that were really off.

>

> However, keep this in mind. If you tee up a ball and it goes 20 yards right and the next time you tee up the ball it goes 20 yards left (which could happen if you aligned bad dimples in opposite ways), how many of you would think it was your swing and not the ball? As long as the ball maker can keep this occurrence low enough for you to think it's you and not the balls it might not make sense for them to enforce the tolerances that would keep it from happening. In my mind that's why I'm hesitant to play balls like the Q-Star Tour and Wilson Duo Pro now. Those balls aren't going to be cheaper to make than the Z-Star and FG Tour. So how can they sell them at a lower price? Most likely the tolerances are weaker. They're also marketed to a less skilled golfer who might not notice the effects of the balls and assume their swing was the cause. For the online brands, how strict can their tolerances be and still make them unable to undercut the big guys? Costco probably can because they probably aren't making any money on the ball and they don't care. Snell might be because he has enough ball experience to enforce tolerances through the line and my guess is he is selling his ball at pretty close to the price Titleist does to wholesalers. Other than that though, I don't know. What about bigger brands that might be willing to weaken their tolerances to pad their profit margin just a bit? Again we don't know, but they are worth thinking about.

>

> If I were going to do a follow up test, I would like the same test done but only using these balls Tour BX, Z Star XV, TP5x, Pro V1x, Chrome Soft X, and Snell MTB-X. That basically gives us all the retail 'X' balls that are played on tour, plus throws in the best Direct to Consumer ball. I'd get 100 boxes of each ball. I'd only hit 115MPH driver swings (my thinking is you will see more of an effect of these ball tolerance issues at high swing speeds). Every ball would be marked to be unique. They would all be thrown in one large bucket and hit in a random order. Any ball that exhibited large variance in carry distance or going off line would be pulled aside. Repeat the test with the balls that showed no differences and again pull outliers aside. At this point you should have over 2000 shots with each ball brand. I'd then hit all of the outliers at least 3 more times and collect that as a different data set. If the ball behaved normally during this test, remove the outlier from the first data set. If it continued to show weird behavior, leave the data in the first data set and keep it in the bad ball pile. At the end, you should have a number of balls from each brand who are probably out of tolerance and removed any crazy environmental outliers from your original data set. You would also see if your balls in the bad ball pile for anyone brand exceeded the number of the margin of error. Depending on what you find, you might be able to explain some or most of the discrepancies in the original test.

 

The point most are making is that this was not a scientific test, and results that were so obviously in error should not have been published. You have to take statistical samples of lots; these may be in the hundreds of balls each. Callaway's ball production is equivalent to Titleist's; I've been to both factories and Bridgestone's. I've helped engineer golf balls, specifically the ball CAD models and dimple patterns. It's not the dimples that would cause the deviation; it's always the ball consistency. The cavities of the mold have run rates and wear rates, which differ depending on temp and material. Paint and finish can be a factor. The dimple design and "aerodynamics" are not.

 

These guys are wannabe pseudo-scientists without a basic engineering degree. People who try to run tests like this without an engineering background have issues interpreting data. The golden rule is that whenever you think you've discovered something, or the data suggests something unusual, you re-test, then ask for peer-review testing or confirmation. These guys avoid that at all costs. Their equipment is shoddy and their basic understanding of engineering is next to crap.


> @Pittknife said:

> The point most are making is that this was not a scientific test and the results should not of been published that were so obviously in error. You have to take statistical sampling of lots, these may be in the 100's of balls per each. Callaway's ball production is equivalent to Titleist, I've been to both factories and Bridgestone's. I've helped to engineer golf balls, specifically the ball CAD models and dimple patterns. It's not the dimples that would cause the deviation, it's always the ball consistency. The cavities of the mold have run rates and wear rates, different depending on temp and material. Paint & finish can be a factor. The dimple design and "aerodynamics" is not.

>

> These guys are wannabe pseudo scientific guys without a basic engineering degree. People that try to run test like this that don't have an engineering background have issues with interpreting data. The golden rule is whenever you think you discover something or the data suggests something unusual you re-test, then ask for pier review testing or confirmation. These guys avoid that at all cost. Their equipment is shotty and their basic understanding of engineering is next to ****.

 

I don't think I expected a rigorous scientific test from non-scientists, and anyone claiming it was one is crazy. That doesn't mean the entire test is stupid and invalid. The test did show us a number of things.

 

First, the dry numbers for the balls are good, and a list like that has never really been published before. Having the ball speed, launch, and spin for multiple swing speeds off the driver and 7i is a really good thing. Knowing wedge spin for all these balls is also good. That's something we didn't have before, and it's great we have it now. These numbers line up really well with tests I've seen elsewhere, so I have no reason to doubt them.

 

Second, it really did debunk the idea that a soft golf ball would be better for slow swing speeds. I've heard enough people I trust say in interviews that it wasn't true to have already doubted it, but you have tons of marketing out there trying to fit swing speed to compression. I don't think you can argue against this test pretty much proving that idea is bunk. The relationship between compression and ball speed isn't exactly linear, but it's fairly close. Now, whether or not you are willing to give up a small amount of speed for feel is a personal decision, but at least we aren't fooling ourselves into thinking we're gaining when we're not.

 

Third, it did shed light on quality control issues and tolerances with balls. This test wasn't rigorous enough to say that any one brand is going to be less consistent than another, but we do know inconsistencies are out there. We know what the actual compressions of these balls are now. We can see that compression can vary a ton just within a single box of balls. I already knew that small inconsistencies in balls could have big effects, but I wasn't aware of how prevalent those inconsistencies actually are. It actually makes me wonder about the times I've hit what felt like really great drives and seen weird ball flight. I'm curious enough about this that I'd really love to see a test done on the consistency of the different brands. It really could be that they are all pretty much the same. You imply that this is the case, and it very well may be. But the idea of a company cutting corners (especially on balls at a lower price point) wouldn't shock me at all.

 

I don't know if you're going to get a better test than this from anyone. A swing robot and Trackman are about the best we can expect anyone not connected to the industry to be able to use. A very rigorous test would be expensive and, without funding from someone, is likely never to happen. I still think you could do the more rigorous test on consistency that I described above, but even in that case, you'd be talking about spending $25K or more just on the balls in the test.

 

All that being said, this is way better than most of the anecdotal testing we see here, where someone says a ball goes farther after playing 9 holes with it. Just concentrating on the dry ball numbers and ignoring any differences after they leave the club face can get most anyone to choose a ball that would suit their game quite well.

 

 


I found the results shocking. I'm the typical low handicap who looks for his golf ball when he hits one off line - if I find a tour-level ball I keep it; anything else gets left beside the fairway, along the tree line, etc. Wherever I find them, I toss them out so someone else can collect them if they want balls. I only keep the Pro Vs, Chromes, TP5s, Vices, Snells, all the X versions of those, etc. - basically, if a tour player uses them and they're good enough for him, then I'd keep them. So I end up with maybe 10 different kinds of balls in the bag. WELL, this test shows balls vary widely, and due to compression and quality control, even balls in the same sleeve can vary. So I'm making a conscious effort to stock only the Pro V1 and Pro V1x, and, although I haven't tried it yet, if I come across a Bridgestone Tour B X I'll try it. Those three are IT for me. I'm giving away all the others or using them for shag-bag material.

 

Remember: Soft = Slow. Since I am not hitting it 350 whenever I want to, I'll use a harder ball that still spins plenty. I'd like a drop-and-stop instead of a rip-back. I have no issues stopping balls on greens; most of the time I have issues with them sitting too hard. So I'll take the advice of MGS and use just those 3 exclusively - they're all pretty similar in terms of compression, spin, distance, etc. Titleist and Bridgestone have the most control over the supply chain and the highest quality standards. I'm sad because TaylorMade and Callaway (I thought) were right up there, but trust me, we're not getting the same balls the pros play; the quality control on the pros' balls is spot on, not so much for the balls on the shelf.

Ben S
Hailing from N Aurora IL

WITB:
Putter:
Mizuno by Bettinardi BC1 w/SuperStroke MidSlim 2.0 Flamed finish (1 Degree)
Driver:
Ping G – Mitsubishi Diamana Blue 73 X (10.5 Degree)
3 Wood: Callaway Epic Flash – Mitsubishi Tensei AV Blue 75 S (15.5 Degree)
3 Hybrid:
Tour Edge CBX 119 – Project X EvenFlow Black 85 S (18 Degree)
3 Hybrid:
Ping G – Mitsubishi Tensei CK Pro Blue HY 86 S (19 Degree)
4 – GW:
Ping i210 - Oban CT-115 X (22.5 - 50 Degrees)
SW:
Titleist SM7 S Grind - Tour Chrome - Stock S200 (54 Degree)
LW:
Titleist SM7 D Grind - Tour Chrome - Stock S200 (58 Degree)
All Grips: 
Winn Dri-Tec Midsize - Gray/Blue w/ 2 extra wraps low hand
Customizing:
Lime Green/Hot Pink Custom Paintfill - all clubs
White ferrules with Blue Stripes from Cell-Parts.net
Irons fitted & built by True Spec Golf
Custom Headcovers from Sunfish Golf
PING White DLX Cart Bag


I've been blocked on their Twitter feed (a couple of years now) for simply mentioning golfwrx as the starting point of the original Kirkland ball hype. There were over 1,000 posts here already before they "shut down the internet with the Pro V1 test comparison article," lol. They will not accept criticism or anything that doesn't line up with how they think. Um, no comment, lol.

Srixon ZX7/Cobra Radspeed XB

ZX 3w Riptide/Cobra radspeed motore F3

ZX 19/22 hy Riptide 

Cobra FT 5-Pw,Aw KBS/ Cobra F9 5-pw,aw 

Cbx ZC 54/Cbx FF 58 Dgold

Cobra sport 45/Cleveland elevado

Srixon QST/Titleist Pro V1 or whatever urethane balls I may find 

 


> @arbeck said:

> > @Pittknife said:

> > The point most are making is that this was not a scientific test and the results should not of been published that were so obviously in error. You have to take statistical sampling of lots, these may be in the 100's of balls per each. Callaway's ball production is equivalent to Titleist, I've been to both factories and Bridgestone's. I've helped to engineer golf balls, specifically the ball CAD models and dimple patterns. It's not the dimples that would cause the deviation, it's always the ball consistency. The cavities of the mold have run rates and wear rates, different depending on temp and material. Paint & finish can be a factor. The dimple design and "aerodynamics" is not.

> >

> > These guys are wannabe pseudo scientific guys without a basic engineering degree. People that try to run test like this that don't have an engineering background have issues with interpreting data. The golden rule is whenever you think you discover something or the data suggests something unusual you re-test, then ask for pier review testing or confirmation. These guys avoid that at all cost. Their equipment is shotty and their basic understanding of engineering is next to ****.

>

> I don't think I expected a rigorous scientific test by non scientists, and anyone claiming it was is crazy. That doesn't mean the entire test is stupid and invalid. The test did show us a number of things.

>

> First, the dry numbers for the balls are good and a list like that has never really been published before. Having the balls speed, launch, and spin for multiple swing speeds off the driver and 7i is a really good thing. Knowing wedge spin for all these balls is also good. That's something we didn't have before and it's it great we have it now. These numbers line up really well with tests I've seen elsewhere, so I have no reason to doubt them.

>

> Second, it really did debunk the idea that a soft golf ball would be better for slow swing speeds. I've heard enough people I trust say it wasn't true in interviews to have already doubted it, but you have tons of marketing out there trying to fit swing speed to compression. I don't think you can argue that this test pretty much proved that the idea of that is bunk. It isn't exactly linear with compression to ball speed, but it's fairly close. Now whether or not you are willing to give up a small amount of speed for feel is a personal decision, but at least we aren't fooling ourselves into thinking we're gaining when we're not.

>

> Third, it did shed a light on quality control issues and tolerances with balls. This test wasn't rigorous enough to say that any one brand is going to be less consistent than another. But we do know inconsistencies are out there. We know what the actual compression of these balls are now. We can see that compression can vary a ton just in a single box of balls. I already knew that small issues in the consistency of balls could have big effects, but I wasn't aware of how prevalent those inconsistencies actually are. It actually makes me wonder about the times I've hit what felt like really great drives and seen weird ball flight. I'm curious enough about this that I'd really love to see a test done on the consistency of the different brands. It really could be that they are all pretty much the same. You imply that this is the case, and it very well may be. But the idea of a company cutting corners (especially on the balls at a lower price point) wouldn't shock me at all.

>

> I don't know if you're going to get a better test than this from anyone. A swing robot and a Trackman are about the best we can expect anyone not connected to the industry to be able to use. A very rigorous test would be expensive, and without funding from someone it's unlikely ever to happen. I still think you could do the more rigorous consistency test I described above, but even in that case you'd be talking about spending $25K or more just on the balls for the test.

>

> All that being said, this is way better than most of the anecdotal testing we see here, where someone says a ball goes further after playing 9 holes with it. Just concentrating on the dry ball numbers, and ignoring any differences after the ball leaves the club face, can get most anyone to choose a ball that would suit their game quite well.

>

>
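To make the "fairly close to linear" claim in the quoted post concrete, here is a minimal sketch of what checking it might look like: fit an ordinary least-squares line to compression vs. ball speed and look at R². The numbers below are made-up illustrative values, not data from the test.

```python
# Minimal sketch: is ball speed roughly linear in compression?
# The (compression, ball_speed) pairs below are made-up illustrative
# numbers, NOT values taken from the ball test.
compression = [45, 60, 75, 90, 100]
ball_speed = [158.0, 159.5, 161.0, 162.0, 163.5]  # mph

n = len(compression)
mean_x = sum(compression) / n
mean_y = sum(ball_speed) / n

# Ordinary least-squares slope and intercept
sxx = sum((x - mean_x) ** 2 for x in compression)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(compression, ball_speed))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# R^2 tells us how close the relationship is to a straight line
predictions = [intercept + slope * x for x in compression]
ss_res = sum((y - p) ** 2 for y, p in zip(ball_speed, predictions))
ss_tot = sum((y - mean_y) ** 2 for y in ball_speed)
r_squared = 1 - ss_res / ss_tot

print(f"slope: {slope:.3f} mph per compression point, R^2: {r_squared:.3f}")
```

With the published driver numbers dropped in for the made-up values, a high R² would support "fairly close to linear"; a low one would not.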

 

1. A swing robot is not always accurate; ask the guys at The Kingdom at TaylorMade about that. Over time, due to the gears, pulleys, or whatever it uses, it will start to drift from its settings; it's a lot of force. The base must be fixed, otherwise you will get different dispersion.

2. These guys present this data like it's gospel, or even valid, and it's not. A lot of people read it and took it at face value. Any reputable research is published with all data points and the testing method, and any challenges are answered and debated.

3. Any invalidation of any data point should cause the whole test to be invalidated until it is run again. That's why real testing labs will never publish anything until they are almost certain and have repeatable data to prove it, data that can be peer reviewed.

4. As I have said, I worked with Dean Snell for 5 years. Dean will tell you his factory isn't making more consistent balls than Callaway's; it's impossible. Callaway owns its factories and has complete control. Direct-to-market companies contract out to an OS2 or OS3, and they have very little control over what those factories do. Contract factories have goals that compete with their customers': they want to reduce cost and make more money, even at the expense of quality if they can get away with it. When you own the factory, the goals are in alignment.

 

I understand these guys' mentality: they want to be the "aha" guys who found something and be the first to publish it. It is reckless to post findings like that.
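On the quoted point about sampling lots: here is a minimal sketch of the back-of-the-envelope sample-size math, assuming ball-to-ball compression within a lot is roughly normally distributed. The standard deviation and margin of error below are placeholder assumptions, not measured factory values; with these particular numbers the estimate lands in the low hundreds of balls per lot, which is the ballpark the quoted post describes.

```python
import math

# Minimal sketch: how many balls per production lot would you need to measure
# to estimate the lot's mean compression to a chosen margin of error?
# sigma and margin are placeholder assumptions, not measured factory values.
sigma = 3.0    # assumed ball-to-ball std. dev. of compression within a lot
margin = 0.5   # want the lot mean pinned down to +/- 0.5 compression points
z = 1.96       # ~95% confidence

# Standard sample-size formula for estimating a mean: n >= (z * sigma / margin)^2
n = math.ceil((z * sigma / margin) ** 2)
print(f"balls to sample per lot: {n}")  # 139 with these assumed numbers
```

Tighten the margin or assume more lot-to-lot variability and the required sample grows quickly, which is the point the quoted poster is making about testing a handful of boxes.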

Link to comment
Share on other sites
