
My Golf Spy Ball Test - General Discussion


Recommended Posts

As I approach retirement age, I am contemplating the adage "youth is wasted on the young". It's like right when the prospect of playing with the old guys 4-5 days a week is in sight, my back and legs start to kill me every time I play two days in a row. D'Oh!

 

But yeah, I'm lucky to get to play 3x/week most weeks year round. Sometimes a bit more.

  • Replies 757

Popular Posts

My goodness there is a lot of hate for the golf spy guys. Why can't you just take their data as a starting point and call it what it is - a very good test with very good data. It was hit with a robo…

People are grumbling about Mygolfespionage tests, but if you have looked at their results from the last several years of “most wanted” they haven’t shown any brand favoritism. They also include a…

My guess is they found large differences in the quality and consistency with some companies products.

Yes. There is a golf course I can see from my office. And I know one day I’ll be able to play more... and I pray I’m not too tired to do it.

TaylorMade SIM - 9.75* - GD Tour ADTP 6x
TaylorMade M5 - 14.25* - Diamana Limited S+ 70x

TaylorMade P790 UDI - PX Hzrdus Black 100g 6.5 (or 3h)
Callaway Apex 3h - 20* - PX Catalyst 6.5 (or UDI)

Callaway Apex 4h - 23* - Rogue Silver 110msi X (or 4i)
PXG 0311p Gen 2 - 4-PW - Dynamic Gold AMT x100

PXG 0311t Gen 3 - GW  - Dynamic Gold x100
Ping Glide 3.0 - 54* (52* bent) - Dynamic Gold x100
Vokey Spin Milled - 62* - Dynamic Gold s400
Scotty Cameron Phantom X 8.5
Titleist Pro V1 Custom #38 with #MLB2MD
Titleist Staff Stand Bag

> @caloge said:

> The problem with this test, and most of what Metal Gear Solid does, is the results are just there. They like to play scientist, but don't know what it means to produce a study and the results of that study. It is sort of strange that they would put all this time into testing and analysis but end up doing half a job. The idea for the test is great. The results are interesting. However, the study is half finished. At this point, they should have sat down and asked: why are we getting results that are entirely inconsistent with what we would expect? Why is there a large distance gap that does not match the ball speed/spin gaps? Are there additional variables that need to be eliminated? Until they can reconcile that with an explanation, the results of the study at that macro level are worthless. That's not to say the data itself is not worthwhile. It is a handy resource for spin and ball speed numbers. It is also interesting to hear about tangible results, such as the Cut balls' durability issues.

>

> Lastly, I will also say that I question the "QC" results. The suggestion that some balls are flying 20 yards off line because of a misaligned core or a ball that isn't perfectly round seems strange. Were these balls cut open and examined to determine what was off about them? If only a handful of balls fall into the excellent or very good category of QC, how can any of the numbers produced by the other balls be reliable (spin, ball speed, anything)? If this is the case, Metal Gear Solid should have basically said: we tested 33 balls, but 20+ were of such unreliable quality that the results could not be accepted as demonstrative of their expected performance. But that wasn't the case. The actual numbers must have been repeatable or else the data is useless... how is that reconciled?

 

You nailed every issue I had with this test. And lmao at metal gear solid.


> @caloge said:

> [quote snipped]

 

It's almost like they are doing this stuff for views and money


> @caloge said:

> [quote snipped]

 

agree with this. in their mind it's reconciled by being snarky dicks to everyone who comments on their instagram who doesn't take what they do as gospel. they're turds.


> @mattdubya said:

> > @caloge said:

> > [quote snipped]

>

> agree with this. in their mind it's reconciled by being snarky dicks to everyone who comments on their instagram who doesn't take what they do as gospel. they're turds.

 

I agree! I was really surprised by their tone.


PING G400 9* Tour 65 X - PING G400 Stretch 13* HZRDUS Yellow - PING G410 17* Hybrid Tour 85 X
PING I210 4-U Project X 6.5 - PING Glide 2.0 Stealth 55 & 60* Project X 6.0
PING VALOR CB 35"


> @jamie said:

> > @mattdubya said:

> > > @caloge said:

> > > [quote snipped]

> >

> > agree with this. in their mind it's reconciled by being snarky dicks to everyone who comments on their instagram who doesn't take what they do as gospel. they're turds.

>

> I agree! I was really surprised by their tone.

 

I'd really like to see an in-depth study, along with methodology, by someone reputable. It's not that a certain website isn't, rather I feel like what they did was more of a conversation starter than anything. The problem is that golf is a mental game, and now it's in a lot of people's heads. I have three or four dozen Snell MTBs sitting on the shelf and a dozen Chrome Softs that I likely won't use now. It's not that they're bad balls and it might well be that they're actually great and the study is to blame in some way, but now that it's in my head, it's bothering me. As all of us know, when you step on that tee, you have to have 100% confidence in the equipment, because goodness knows this game is hard enough with all of the other variables. Sighs.

 

Edit to add: I've not noticed shorter distance (Callaway) or poor dispersion (Snell), but in a real-world setting, it's hard to tell whether it's you or the ball. I know that everyone on here is a plus-4, but honestly, how many of us are good enough to notice the difference, or would we simply think it was swing-related? Lots of questions without easy answers.

 

Perhaps the Golfwrx guys could do a podcast discussing balls (insert your own joke, here).


Where can I buy these Metal Gear Solid golf balls?! I will pay anything!

Driver (9.0) - Cobra F9 Aldila Rogue Silver 70 S, 44.5"
Wood (14.5) - Cobra RADspeed Aldila Rogue Silver 110msi 70S

Wood (17.5) - G425 MAX Alta CB 65 Slate S
Driving Iron (20) - Srixon U65 Project X 5.5
Irons (5-6) - Srixon Z565 Project X 5.5
Irons (7-P) - Srixon Z765 Project X 5.5
Wedges - Vokey SM-7 Jet Black / 50.08 F / 54.08 M / 58.08 M DG S300
Putter - Edel E-1
Ball - Titleist Pro V1x
ZGrip Midsized Grips


> @mattdubya said:

> > @caloge said:

> > [quote snipped]

>

> agree with this. in their mind it's reconciled by being snarky dicks to everyone who comments on their instagram who doesn't take what they do as gospel. they're turds.

 

I came here to post this. The world must look mighty small from their high horse. They've even gone so far as to slander TXG, whose results (ball speed and spin) largely corroborate their own.


> @Doyouevenblade said:

> > @mattdubya said:

> > > @caloge said:

> > > [quote snipped]

> >

> > agree with this. in their mind it's reconciled by being snarky dicks to everyone who comments on their instagram who doesn't take what they do as gospel. they're turds.

>

> I came here to post this. The world must look mighty small from their high horse. They've even gone so far as to slander TXG, whose results (ball speed and spin) largely corroborate their own.

 

TXG guys don’t push anything or make ridiculous claims. The skinny Italian-looking one actually knows more about golf clubs than golf club designers do. I’d listen to what TXG says any day of the week; they are polite and rational.


> @4puttJohnny said:

> Really amazing how few people there are who understand anything about interpreting test results. Most simply try to convert it to a personal affront based on emotion if it doesn't say the ball THEY play is the best! Too funny.

 

Well, or folks like me. I have no idea what the hell I’m looking at. At least I’ll admit it.


> @GDTBATH said:

> > @4puttJohnny said:

> > Really amazing how few people there are who understand anything about interpreting test results. Most simply try to convert it to a personal affront based on emotion if it doesn't say the ball THEY play is the best! Too funny.

>

> Well, or folks like me. I have no idea what the **** I’m looking at. At least I’ll admit it.

 

I’m in the same camp. I looked at ball speed, launch, spin, peak height and descent. All the other numbers make me go cross-eyed!

Callaway Epic Speed 9° driver Tensei 1K Pro White 60S

Callaway Epic Flash 15° fairway wood Tensei Pro Blue 70S

Callaway Epic Flash 21° hybrid MMT Hybrid 80S

Mizuno JPX 921 Forged 4-PW & GW Modus3 Tour 120 S flex

Mizuno T20 Raw Wedges 50-07 Modus3 Tour 120 S flex

Mizuno T20 Raw Wedges 55-13 and 60-06 Modus3 125 Wedge

Odyssey RSX Milled V-Line Fang

Callaway Chrome Soft X


> @noodle3872 said:

> > @GDTBATH said:

> > > @4puttJohnny said:

> > > Really amazing how few people there are who understand anything about interpreting test results. Most simply try to convert it to a personal affront based on emotion if it doesn't say the ball THEY play is the best! Too funny.

> >

> > Well, or folks like me. I have no idea what the **** I’m looking at. At least I’ll admit it.

>

> I’m in the same camp. I looked at ball speed, launch, spin, peak height and descent. All the other numbers make me go cross-eyed!

Agree. Distance and shot area are important to me. I want a ball that goes where it is supposed to. Too many numbers that no one really understands or wants to accept.

Driver _____ Ping G400 Max
Woods ____ Ping G410 3 & 5 
Hybrids ___ Ping  G410 4H & Titleist 818H1 5H
Irons ______ Titleist 718 AP1 6-W
Wedges ___ Titleist Vokey SM8 52.08F & 56.08S
Putter _____ Rife 400 Mid Mallet
Ball _______ Snell MTB Black - Yellow / Srixon Q-Star Tour - Yellow
Bag _______ Datrek Lite Rider
Distance __ GPS:  Bushnell NEO Ghost,  Rangefinder:  Precision Pro NX7 Pro
GHIN ______ HCP floats between 10 and 12


Valid, Legitimate and Unbiased results? I have my doubts, and it sounds like a good many here do as well.

 

re: the Kirkland 3pc golf ball as it is mentioned in the *** buyers guide - " _While that’s perfectly fine for a Mickey Mouse ball, with the price holding at 2/$30, the next Costco Kirkland Signature Golf Ball is shaping up to be a hell of a lot less exciting and impactful than the original._ " That is from an article which appeared on *** in August 2017, authored by the **same** person that authored the current *** golf ball buyer's guide. If you want to read the article, Google *** and Kirkland; I would read it to get the full gist of how the author felt about this ball. (Dear Moderators, I'm only listing the link in case it's OK: https://Not allowed because of spam.com/2018-costco-signature-golf-ball/ )

 

AND then, I found this to be the oddest thing about the buyer's guide: how, on one hand, do you diss the Kirkland golf ball and then later list it as a "Top Performers - Value" pick at #2? Not only that, but it's singled out several times in the buyer's guide as "shorter" than most, yet still the #2 value ball. So on one hand the author thought this was a Mickey Mouse ball, but on the other hand... you fill in the blank.

 

OP/ED: the choice of Kirkland (I've gamed this ball and it's a good ball) as **#2** is no accident; it was a well-thought-out and **conceived** marketing plan. When you read it all and put it together, the #1 ball looks like a no-brainer, right?

 

I don't think so, your heart and eyes can be fooled but never your brain...

 

Link to post
Share on other sites

> @arbeck said:

> Was the test perfect? No. Was the test really good? Yes. They could have done the test all with GC Quad. If so, they would have received similar results to TXG. However, that wouldn't tell them anything about aerodynamic differences in the ball and/or manufacturing defects that the balls had. Those two things are important and you need full radar testing to do that. Of course then you have the elements affecting the balls. You can minimize this by doing the test on the calmest day possible, randomizing the shot order, and taking enough shots that the environmental effects normalize across the sample of shots.

>

> All that being said, no test is going to be perfect, and unless you are doing this test in a glass bubble, you are going to have to deal with the elements. And even then, isn't how the ball deals with wind a valid part of the test?

>

> That being said, let's talk about the Chrome Soft X specifically. Nothing in its numbers suggests it losing 18 yards of carry distance to the MTB-X. But there are some interesting things about the ball. Not only is the Chrome Soft X shorter than the MTB-X at high swing speeds, it's shorter than the regular Chrome Soft at both high and low swing speeds. Assuming they didn't just test all the Chrome Soft X balls back to back at both swing speeds, something weird is up with that ball. It could be that it just got really unlucky and had more shots into a puff of wind than other balls. It could be that its aerodynamics don't handle shots into the wind as well as other balls', and those two things combine to make it drop off in distance. It could be manufacturing flaws in the dimple pattern of a few balls that were hurting distance.

>

> If you take a look at all the X balls you'll see that their launch conditions are almost identical. The Chrome Soft X does have the lowest ball speed, but everything is within 3 miles an hour. All the spin numbers are very close save the Mizuno RB Tour X, which is a few hundred RPM higher. Still, all these balls should be landing fairly close to each other. I'd expect maybe 5-6 yards of carry distance between the best and worst just based on launch conditions. There are three basic outliers though: the Snell MTB-X, TaylorMade TP5x, and Chrome Soft X. The TaylorMade and Chrome Soft are both shorter than you would expect given launch conditions, and the Snell was longer. If you look closely, those three balls also have the largest landing areas. So again, this points to either something environmental affecting these balls more or some flaws in the manufacturing of these three. It could also be a combination of these things. We know that the Snell had at least some defects (as shown by the ball that went way offline). It's not crazy to think that the other two balls had some defects as well. It also could be that the Snell on average got a couple of puffs of wind behind it and the TaylorMade and Chrome Soft had a couple of puffs of wind into them.

>

> Ideally you'd want to do another test of just those balls to figure out what caused the outliers. I'd probably hit 100 shots with each ball model, drawn from at least 4 or 5 boxes of balls, and randomize the shot order. Any ball that showed up as an outlier would be marked and the shot would be noted. On subsequent shots with that ball you'd check to see if it behaved the same. That would hopefully normalize the environmental conditions and let you find any flawed balls.

>

> None of that means the test was flawed. They could have done better at drilling down to find why those outliers existed, but that's not usually how studies like this work. Big studies like this expect outliers, and further work should be done to dig into why they are there. What people seem to be missing (and the authors are guilty of publicizing the number too much) is that carry distance with the driver is probably the least important thing in the test. Ball speed, launch angle, and spin are all more important with the driver. 7-iron spin is probably a more important number than those, and wedge spin is probably more important than that. If it was just driver distance problems with the Chrome Soft balls, you wouldn't dock them that much. But they have worse ball speed with the driver, less 7-iron spin than their competitors, and less wedge spin than their competitors. That doesn't mean it is a bad ball for everyone, but it doesn't compare well to the other balls in its class.
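For what it's worth, the re-test protocol in the quoted post (randomized shot order drawn from several boxes, flag outliers, re-hit the flagged balls) is easy to sketch. Everything below — the model names, box counts, carry numbers, and the MAD cutoff — is hypothetical illustration, not MyGolfSpy's actual procedure:

```python
import random
import statistics

def build_schedule(ball_models, boxes_per_model=5, shots_per_model=100, seed=1):
    """Randomized shot order: each shot is tagged (model, box) so a
    flagged outlier can be traced back to a specific sleeve and re-hit."""
    schedule = [(model, i % boxes_per_model)
                for model in ball_models
                for i in range(shots_per_model)]
    # Shuffling interleaves the models, so wind gusts average out across
    # all balls instead of punishing whichever model happened to be up.
    random.Random(seed).shuffle(schedule)
    return schedule

def flag_outliers(carries, k=3.0):
    """Return indices of shots more than k scaled MADs from the median carry."""
    med = statistics.median(carries)
    mad = statistics.median(abs(c - med) for c in carries) or 1e-9
    return [i for i, c in enumerate(carries)
            if abs(c - med) / (1.4826 * mad) > k]

schedule = build_schedule(["MTB-X", "TP5x", "Chrome Soft X"])
print(len(schedule))  # 300 shots in randomized order

carries = [275, 276, 274, 277, 275, 240, 276]  # one hypothetical flyer
print(flag_outliers(carries))  # [5] -> the 240-yd shot is marked for re-test
```

Median/MAD is used instead of mean/standard deviation so the flyer itself doesn't inflate the cutoff that is supposed to catch it.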

 

This is such a great post. Very well said!

:smilie_titty: TSi2 10.5 GD IZ5 
:smilie_ping: G410 5 & 7 FW Alta CB 65g
:smilie_ping: G410 4 hybrid Alta CB
:smilie_titty:T100S/T200 Jet Black Combo AMT Black R300 
Vokey SM 7 54/58

Vokey SM8 46*
TM Custom My Spider 35" 

 

ProV1X No. 12
 


> @arbeck said:

> [quote snipped]

 

Great post.


I have asked a few times for them to clarify whether their anomalous shots were taken out of the datasets or left in. The study says they were removed. But logic says they were left in - otherwise there is simply no way to create some of these shot dispersion areas on a robot test. How else do you get a 40 yard by 40 yard shot box? That's insanely big.

 

By way of comparison I went back to my driver fitting data from a couple months ago. There were only 4-6 recorded shots per combo after we removed the really bad ones but my "shot areas" were mostly in the 1200-1600 sq yd range.
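For comparison's sake, a "shot area" like those can be approximated as a simple bounding box around the shot coordinates. A minimal sketch with made-up numbers (not my actual fitting data):

```python
# Hypothetical (offline yds, carry yds) pairs for one driver/ball combo.
shots = [(-4.0, 268.0), (2.5, 271.0), (-1.0, 265.5), (5.0, 274.0), (0.5, 270.0)]

def shot_box_area(shots):
    """Area of the axis-aligned box containing every shot, in square yards."""
    xs = [x for x, _ in shots]
    ys = [y for _, y in shots]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

print(shot_box_area(shots))  # 9.0 yds wide x 8.5 yds deep = 76.5 sq yds
```

A 40 yard by 40 yard box works out to 1,600 sq yds on its own, which shows how outsized those dispersion areas are next to a typical fitting session.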

 

I am guessing the bad shots were left in to add to the story and because it does create those shot areas that are so big. Otherwise - why not release the shot by shot data?

 

Oh, and for what it's worth, my questions were not only not answered - they were deleted altogether.

 

BTW if you are "surprised" at the level of snark they bring to the table, you must be new here lol.

  • Like 1

SIM Max 12* - HZRDUS Smoke Green 6.5 / SIM 15* 3W - Diamana 75s 
SIM 2i UDI Diamana Thump / 5i Steelfiber 95s
P790 6,7i / P770 8i-GW Steelfiber 95s
Vokey SM8 Raw 54* / 58* / 62*
Putter / TM TP5 Pix


 

> @caloge said:

> The problem with this test, and most of what Metal Gear Solid does, is the results are just there. They like to play scientist, but don't know what it means to produce a study and the results of that study. It is sort of strange that they would put all this time into testing and analysis but end up doing half a job. The idea for the test is great. The results are interesting. However, the study is half finished. At this point, they should have sat down and said why are we getting these results that are entirely inconsistent with what we would expect? Why this large gap distance that does not match ballspeed/spin gaps? Are there additional variables that need to be eliminated? Until they can reconcile that with an explanation, the results of the study at that macro level are worthless. Not to say that all data itself is not worthwhile. It is a handy resource for spin and ballspeed #s. It is also interesting to hear about tangible results, such as the Cut balls durability issues.

>

> Lastly, I will also say that I question the "QC" results. The suggestion that some balls are flying 20 yards off line because of a misaligned core or a ball that isn't perfectly round seems strange. Were these balls cut open and examined to determine what was off about them? If only a handful of balls fall into the excellent or very good category of QC, how can any of the numbers produced by the other balls be reliable (spin, ballspeed, anything)? If this is the case, Metal Gear Solid should have basically said we tested 33 balls, but 20+ were of such unreliable quality that the results could not be accepted as demonstrative of their expected performance. But that wasn't the case. The actual numbers must have been repeatable or else the data is useless... how is that reconciled?

 

Conceptually that sounds like a good idea, but I doubt they possess the know-how or equipment to go that far. I wouldn't be surprised if an asymmetry of less than 0.001" is significant. You can't measure that by cutting a ball in half with a pipe cutter. You would need pretty specialized equipment and processes.

 

 

 

Hitting some balls with a machine and applying some light statistics is a vastly different scope from dissecting balls as if you're a subject matter expert.


> @4puttJohnny said:

> Really amazing how few people there are who understand anything about interpreting test results. Most simply try to convert it to a personal affront based on emotion if it doesn't say the ball THEY play is the best! Too funny.

 

Well, I guess it's a given that you are one of the enlightened few who understand anything about interpreting test results. Amirite?


I know Crossfield isn't the most popular here, but I went and dug up an old video where he talked to the aerodynamics guy at Titleist. The entire video is worth watching, but if you start at about 7:10 you can see tests where they work with a ball whose dimples on one side are less than a thousandth of an inch different from the dimples on the other side. The Titleist guy claims that they have found balls that exhibit these characteristics in the wild. They could be made by something as simple as an uneven paint spray. But depending on how you tee it up, the ball will fly higher, lower, left, or right. It's all really an advertisement for the QC and manufacturing tolerances at Titleist, to be fair, but given the MGS test and how Titleist performed, there might be something to that.

 

I can't say with certainty that this caused most of or even any of the carry distance and ball flight oddities, but I wager it's probably a good shout. Combine that with cores being off by thousandths of an inch or balls not being perfectly round, and you could see how tolerances could really come into play. And for all the accolades the Snell MTB-X got for being the longest, if you go with this hypothesis, it was also the ball with some of the worst tolerances.

 

Even if we knew with 100% certainty that the carry distance and flight discrepancies were caused by QC problems, we can't really say much about that in terms of ball brands. If every brand had (and this number is pulled out of my butt) 1 ball that was off in every 5 dozen, in a test like this you would expect certain brands to not show any effects while another might show 1 or 2 balls that were off, just through a random distribution. Depending on how prevalent the problem was with a brand, you might need hundreds of boxes before you got an idea of the number of balls that were really off.
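To put that made-up 1-in-5-dozen defect rate in perspective, a simple binomial model shows how easily a modest sample misses the problem entirely. The rate and sample size below are assumptions for illustration, not anything from the test:

```python
import math

p_bad = 1 / 60   # assumed defect rate: 1 bad ball per 5 dozen

def prob_at_least_k_bad(n_balls, k, p=p_bad):
    """P(at least k defective balls among n), assuming independent defects."""
    return 1 - sum(math.comb(n_balls, i) * p**i * (1 - p)**(n_balls - i)
                   for i in range(k))

# With only ~3 dozen balls of a brand in a test, you'd miss the problem
# entirely more often than not:
print(round(prob_at_least_k_bad(36, 1), 3))   # roughly 0.454
```

So a brand with this hypothetical defect rate still shows zero bad balls in over half of such tests, which is why you'd need far bigger samples to say anything about consistency by brand.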

 

However, keep this in mind. If you tee up a ball and it goes 20 yards right, and the next time you tee up the ball it goes 20 yards left (which could happen if you aligned bad dimples in opposite ways), how many of you would think it was your swing and not the ball? As long as the ball maker can keep this occurrence low enough for you to think it's you and not the balls, it might not make sense for them to enforce the tolerances that would keep it from happening. In my mind that's why I'm hesitant to play balls like the Q-Star Tour and Wilson Duo Pro now. Those balls aren't going to be cheaper to make than the Z-Star and FG Tour. So how can they sell them at a lower price? Most likely the tolerances are looser. They're also marketed to a less skilled golfer who might not notice the effects of the balls and assume their swing was the cause. For the online brands, how strict can their tolerances be if they're still able to undercut the big guys? Costco probably can manage it because they probably aren't making any money on the ball and they don't care. Snell might be able to because he has enough ball experience to enforce tolerances through the line, and my guess is he is selling his ball at pretty close to the price Titleist charges wholesalers. Other than that though, I don't know. What about bigger brands that might be willing to weaken their tolerances to pad their profit margin just a bit? Again we don't know, but these questions are worth thinking about.

 

If I were going to do a follow-up test, I would like the same test done but only using these balls: Tour BX, Z Star XV, TP5x, Pro V1x, Chrome Soft X, and Snell MTB-X. That basically gives us all the retail 'X' balls that are played on tour, plus throws in the best direct-to-consumer ball. I'd get 100 boxes of each ball. I'd only hit 115 MPH driver swings (my thinking is you will see more of an effect of these ball tolerance issues at high swing speeds). Every ball would be marked to be unique. They would all be thrown in one large bucket and hit in a random order. Any ball that exhibited large variance in carry distance or went off line would be pulled aside. Repeat the test with the balls that showed no differences and again pull outliers aside. At this point you should have over 2000 shots with each ball brand. I'd then hit all of the outliers at least 3 more times and collect that as a different data set. If a ball behaved normally during this test, remove its outlier from the first data set. If it continued to show weird behavior, leave the data in the first data set and keep it in the bad ball pile. At the end, you should have a number of balls from each brand that are probably out of tolerance, and you'd have removed any crazy environmental outliers from your original data set. You would also see if the number of balls in the bad ball pile for any one brand exceeded the margin of error. Depending on what you find, you might be able to explain some or most of the discrepancies in the original test.


> @arbeck said:

> I know Crossfield isn't the most popular here, but I went and dug up an old video where he talked to the aerodynamics guy at Titleist. The entire video is worth watching, but if you start at about 7:10 you can see tests where they work with a ball whose dimples on one side are less than a thousandth of an inch different from the dimples on the other side. The Titleist guy claims that they have found balls that exhibit these characteristics in the wild. They could be made by something as simple as an uneven paint spray. But depending on how you tee it up, the ball will fly higher, lower, left, or right. It's all really an advertisement for the QC and manufacturing tolerances at Titleist, to be fair, but given the **** test and how Titleist performed, there might be something to that.

>

> I can't say with certainty that this caused most of or even any of the carry distance and ball flight oddities, but I wager it's probably a good shout. Combine that with cores being off by thousandths of an inch or balls not being perfectly round, and you could see how tolerances could really come into play. And for all the accolades the Snell MTB-X got for being the longest, if you go with this hypothesis, it was also the ball with some of the worst tolerances.

>

> Even if we knew with 100% certainty that the carry distance and flight discrepancies were caused by QC problems, we can't really say much about that in terms of ball brands. If every brand had (and this number is pulled out of my butt) 1 ball that was off in every 5 dozen, in a test like this you would expect certain brands to not show any effects while another might show 1 or 2 balls that were off, just through a random distribution. Depending on how prevalent the problem was with a brand, you might need hundreds of boxes before you got an idea of the number of balls that were really off.

>

> However, keep this in mind. If you tee up a ball and it goes 20 yards right, and the next time you tee up the ball it goes 20 yards left (which could happen if you aligned bad dimples in opposite ways), how many of you would think it was your swing and not the ball? As long as the ball maker can keep this occurrence low enough for you to think it's you and not the balls, it might not make sense for them to enforce the tolerances that would keep it from happening. In my mind that's why I'm hesitant to play balls like the Q-Star Tour and Wilson Duo Pro now. Those balls aren't going to be cheaper to make than the Z-Star and FG Tour. So how can they sell them at a lower price? Most likely the tolerances are looser. They're also marketed to a less skilled golfer who might not notice the effects of the balls and assume their swing was the cause. For the online brands, how strict can their tolerances be if they're still able to undercut the big guys? Costco probably can manage it because they probably aren't making any money on the ball and they don't care. Snell might be able to because he has enough ball experience to enforce tolerances through the line, and my guess is he is selling his ball at pretty close to the price Titleist charges wholesalers. Other than that though, I don't know. What about bigger brands that might be willing to weaken their tolerances to pad their profit margin just a bit? Again we don't know, but these questions are worth thinking about.

>

> If I were going to do a follow-up test, I would like the same test done but only using these balls: Tour BX, Z Star XV, TP5x, Pro V1x, Chrome Soft X, and Snell MTB-X. That basically gives us all the retail 'X' balls that are played on tour, plus throws in the best direct-to-consumer ball. I'd get 100 boxes of each ball. I'd only hit 115 MPH driver swings (my thinking is you will see more of an effect of these ball tolerance issues at high swing speeds). Every ball would be marked to be unique. They would all be thrown in one large bucket and hit in a random order. Any ball that exhibited large variance in carry distance or went off line would be pulled aside. Repeat the test with the balls that showed no differences and again pull outliers aside. At this point you should have over 2000 shots with each ball brand. I'd then hit all of the outliers at least 3 more times and collect that as a different data set. If a ball behaved normally during this test, remove its outlier from the first data set. If it continued to show weird behavior, leave the data in the first data set and keep it in the bad ball pile. At the end, you should have a number of balls from each brand that are probably out of tolerance, and you'd have removed any crazy environmental outliers from your original data set. You would also see if the number of balls in the bad ball pile for any one brand exceeded the margin of error. Depending on what you find, you might be able to explain some or most of the discrepancies in the original test.

 

The point most are making is that this was not a scientific test and the results should not have been published when they were so obviously in error. You have to take statistical samplings of lots, and these may be in the hundreds of balls each. Callaway's ball production is equivalent to Titleist's; I've been to both factories and Bridgestone's. I've helped to engineer golf balls, specifically the ball CAD models and dimple patterns. It's not the dimples that would cause the deviation, it's always the ball consistency. The cavities of the mold have run rates and wear rates, different depending on temp and material. Paint & finish can be a factor. The dimple design and "aerodynamics" is not.

 

These guys are wannabe pseudo-scientific guys without a basic engineering degree. People that try to run tests like this without an engineering background have issues with interpreting data. The golden rule is that whenever you think you've discovered something, or the data suggests something unusual, you re-test, then ask for peer-review testing or confirmation. These guys avoid that at all cost. Their equipment is shoddy and their basic understanding of engineering is next to crap.


> @Pittknife said:

> The point most are making is that this was not a scientific test and the results should not have been published when they were so obviously in error. You have to take statistical samplings of lots, and these may be in the hundreds of balls each. Callaway's ball production is equivalent to Titleist's; I've been to both factories and Bridgestone's. I've helped to engineer golf balls, specifically the ball CAD models and dimple patterns. It's not the dimples that would cause the deviation, it's always the ball consistency. The cavities of the mold have run rates and wear rates, different depending on temp and material. Paint & finish can be a factor. The dimple design and "aerodynamics" is not.

>

> These guys are wannabe pseudo-scientific guys without a basic engineering degree. People that try to run tests like this without an engineering background have issues with interpreting data. The golden rule is that whenever you think you've discovered something, or the data suggests something unusual, you re-test, then ask for peer-review testing or confirmation. These guys avoid that at all cost. Their equipment is shoddy and their basic understanding of engineering is next to ****.

 

I didn't expect a rigorous scientific test from non-scientists, and anyone claiming it was one is crazy. That doesn't mean the entire test is stupid and invalid. The test did show us a number of things.

 

First, the dry numbers for the balls are good, and a list like that has never really been published before. Having the ball speed, launch, and spin for multiple swing speeds off the driver and 7i is a really good thing. Knowing wedge spin for all these balls is also good. That's something we didn't have before and it's great we have it now. These numbers line up really well with tests I've seen elsewhere, so I have no reason to doubt them.

 

Second, it really did debunk the idea that a soft golf ball would be better for slow swing speeds. I've heard enough people I trust say it wasn't true in interviews to have already doubted it, but you have tons of marketing out there trying to fit swing speed to compression. I don't think you can argue with the fact that this test pretty much proved that idea is bunk. The relationship between compression and ball speed isn't exactly linear, but it's fairly close. Now whether or not you are willing to give up a small amount of speed for feel is a personal decision, but at least we aren't fooling ourselves into thinking we're gaining when we're not.
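That "fairly close to linear" relationship is the kind of thing an ordinary least-squares fit makes easy to eyeball. The compression/ball-speed pairs below are invented for illustration and are not the test's data:

```python
# Made-up (compression, ball speed mph) pairs for a mid-speed swing.
data = [(60, 162.0), (75, 163.5), (85, 164.8), (95, 165.9), (105, 167.2)]

# Ordinary least-squares slope/intercept via the textbook formulas.
n = len(data)
sx = sum(c for c, _ in data)
sy = sum(v for _, v in data)
sxx = sum(c * c for c, _ in data)
sxy = sum(c * v for c, v in data)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

print(round(slope, 3))  # about 0.116 mph of ball speed per compression point
```

With numbers like these, going from a 60-compression ball to a 105-compression ball is worth roughly 5 mph of ball speed, which matches the "firmer is faster" takeaway.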

 

Third, it did shed light on quality control issues and tolerances with balls. This test wasn't rigorous enough to say that any one brand is going to be less consistent than another. But we do know inconsistencies are out there. We know what the actual compression of these balls is now. We can see that compression can vary a ton just within a single box of balls. I already knew that small issues in the consistency of balls could have big effects, but I wasn't aware of how prevalent those inconsistencies actually are. It actually makes me wonder about the times I've hit what felt like really great drives and seen weird ball flight. I'm curious enough about this that I'd really love to see a test done on the consistency of the different brands. It really could be that they are all pretty much the same. You imply that this is the case, and it very well may be. But the idea of a company cutting corners (especially on balls at a lower price point) wouldn't shock me at all.

 

I don't know if you're going to get a better test than this from anyone. A swing robot and Trackman are about the best we can expect anyone not connected to the industry to be able to use. A very rigorous test would be expensive and, without funding from someone, is likely never to happen. I still think you could do the more rigorous test on consistency that I described above. But even in that case, you'd be talking about spending $25K or more just on the balls in the test.

 

All that being said, this is way better than most of the anecdotal testing we see here, where someone says a ball goes further after playing 9 holes with it. Just concentrating on the dry ball numbers and ignoring any differences after they leave the club face can get most anyone to choose a ball that would suit their game quite well.

 

 

  • Like 2

I found the results shocking. I'm the typical low handicap who looks for his golf ball when I hit one off line - if I find a tour-level ball I keep it; anything else gets left beside the fairway, along the tree line, etc. Wherever I find them I toss them out so someone else can collect them if they want balls. I only keep the ProV's, Chromes, TP5's, Vices, Snells, all the X versions of those, etc. - basically, if a tour player uses a ball and it's good enough for them, then I'd keep it. So I end up with maybe 10 different kinds of balls in the bag. WELL, this test proves balls vary widely, and due to compression and quality control even balls in the same sleeve can vary. So I'm making a conscious effort to only stock ProV1 and ProV1x, and though I haven't tried one yet, if I come across a Bridgestone Tour B X I'll try it. Those three are IT for me. I'm giving away or using all the others for shag bag material.

 

Remember: soft = slow. Since I am not hitting it 350 whenever I want to, I'll use a harder ball that still spins plenty. I'd like a drop-and-stop instead of a rip back. I have no issues stopping balls on greens, and most of the time my issue is them sitting too hard. So I'll take the advice of MGS and use just those 3 exclusively - they're all pretty similar in terms of compression, spin, distance, etc. Titleist and Bridgestone have the most control over the supply chain and the highest quality standards. I'm sad because Taylormade and Callaway (I thought) were right up there, but trust me, we're not getting the same balls the pros play - the quality control on the pros' balls is spot on. Not so much for the balls on the shelf.

Ben S
Hailing from N Aurora IL

WITB:
Putter:
Mizuno by Bettinardi BC1 w/SuperStroke MidSlim 2.0 Flamed finish (1 Degree)
Driver:
Ping G – Mitsubishi Diamana Blue 73 X (10.5 Degree)
3 Wood: Callaway Epic Flash – Mitsubishi Tensei AV Blue 75 S (15.5 Degree)
3 Hybrid:
Tour Edge CBX 119 – Project X EvenFlow Black 85 S (18 Degree)
3 Hybrid:
Ping G – Mitsubishi Tensei CK Pro Blue HY 86 S (19 Degree)
4 – GW:
Ping i210 - Oban CT-115 X (22.5 - 50 Degrees)
SW:
Titleist SM7 S Grind - Tour Chrome - Stock S200 (54 Degree)
LW:
Titleist SM7 D Grind - Tour Chrome - Stock S200 (58 Degree)
All Grips: 
Winn Dri-Tec Midsize - Gray/Blue w/ 2 extra wraps low hand
Customizing:
Lime Green/Hot Pink Custom Paintfill - all clubs
White ferrules with Blue Stripes from Cell-Parts.net
Irons fitted & built by True Spec Golf
Custom Headcovers from Sunfish Golf
PING White DLX Cart Bag


I've been blocked on their Twitter feed (a couple years now) for simply mentioning GolfWRX as the starting point of the original Kirkland ball hype. There were over 1,000 posts here already before they "shut down the internet with the prov 1 test comparison article" lol. They will not accept criticism or anything that doesn't line up with how they think. Um, no comment lol.


> @arbeck said:

> > @Pittknife said:

> > The point most are making is that this was not a scientific test and the results should not have been published when they were so obviously in error. You have to take statistical samplings of lots, and these may be in the hundreds of balls each. Callaway's ball production is equivalent to Titleist's; I've been to both factories and Bridgestone's. I've helped to engineer golf balls, specifically the ball CAD models and dimple patterns. It's not the dimples that would cause the deviation, it's always the ball consistency. The cavities of the mold have run rates and wear rates, different depending on temp and material. Paint & finish can be a factor. The dimple design and "aerodynamics" is not.

> >

> > These guys are wannabe pseudo-scientific guys without a basic engineering degree. People that try to run tests like this without an engineering background have issues with interpreting data. The golden rule is that whenever you think you've discovered something, or the data suggests something unusual, you re-test, then ask for peer-review testing or confirmation. These guys avoid that at all cost. Their equipment is shoddy and their basic understanding of engineering is next to ****.

>

> I didn't expect a rigorous scientific test from non-scientists, and anyone claiming it was one is crazy. That doesn't mean the entire test is stupid and invalid. The test did show us a number of things.

>

> First, the dry numbers for the balls are good, and a list like that has never really been published before. Having the ball speed, launch, and spin for multiple swing speeds off the driver and 7i is a really good thing. Knowing wedge spin for all these balls is also good. That's something we didn't have before and it's great we have it now. These numbers line up really well with tests I've seen elsewhere, so I have no reason to doubt them.

>

> Second, it really did debunk the idea that a soft golf ball would be better for slow swing speeds. I've heard enough people I trust say it wasn't true in interviews to have already doubted it, but you have tons of marketing out there trying to fit swing speed to compression. I don't think you can argue with the fact that this test pretty much proved that idea is bunk. The relationship between compression and ball speed isn't exactly linear, but it's fairly close. Now whether or not you are willing to give up a small amount of speed for feel is a personal decision, but at least we aren't fooling ourselves into thinking we're gaining when we're not.

>

> Third, it did shed light on quality control issues and tolerances with balls. This test wasn't rigorous enough to say that any one brand is going to be less consistent than another. But we do know inconsistencies are out there. We know what the actual compression of these balls is now. We can see that compression can vary a ton just within a single box of balls. I already knew that small issues in the consistency of balls could have big effects, but I wasn't aware of how prevalent those inconsistencies actually are. It actually makes me wonder about the times I've hit what felt like really great drives and seen weird ball flight. I'm curious enough about this that I'd really love to see a test done on the consistency of the different brands. It really could be that they are all pretty much the same. You imply that this is the case, and it very well may be. But the idea of a company cutting corners (especially on balls at a lower price point) wouldn't shock me at all.

>

> I don't know if you're going to get a better test than this from anyone. A swing robot and Trackman is about the best we can expect anyone not connected to the industry to be able to use. A very rigorous test would be expensive and without funding from someone is likely never to happen. I still think you could do the more rigorous test on consistency that I described above. But even in that case, you'd be talking about spending $25K or more just on the balls in the test.

>

> All that being said, this is way better than most of the anecdotal testing we see here, where someone says a ball goes further after playing 9 holes with it. Just concentrating on the dry ball numbers and ignoring any differences after they leave the club face can get most anyone to choose a ball that would suit their game quite well.

>

>

 

1. A swing robot is not always accurate; ask the guys at the Kingdom from Taylormade about that. Over time, due to the gears, pulleys, or whatever it uses, it will start to drift from its settings - it's a lot of force. The base must be fixed, otherwise you will have different dispersion.

2. These guys present this data like it's gospel or even valid; it's not. There are a lot of people that read it and took it at face value. Any reputable research is published with all data points and the testing method, and any challenges are answered and debated.

3. Any invalidation of any data point will cause the whole test to be invalidated until it is run again. That's why real testing labs will never publish anything until they are almost certain and have repeated data to prove it, data that can be peer reviewed.

4. As I have said, I worked with Dean Snell for 5 years. Dean will tell you his factory isn't making more consistent balls than Callaway's. It's impossible. Callaway owns its factories and has complete control. Direct-to-market companies contract out to an OS2 or OS3; they have very little control over what these factories do. Contract factories have goals that compete with their customers': they want to reduce cost and make more money, even at the cost of quality if they can get away with it. When you own the factory, the goals are in alignment.

 

I understand these guys' mentality: they want to be the "a-ha" guys that found something and be the first to publish it. It is reckless to post findings like that.

