I got this as part of a note the other day from someone after looking at the Shuttle board:
I read your review of this board and I must admit that I am
disappointed. . . . I really can’t help but wonder if you knew what you were doing? This
board has received nothing but great reviews. Could everyone that reviewed
this board be wrong? . . . I can’t help but question your knowledge.
This person appears to have taken two very common approaches towards evaluating the product:
The Family Feud approach basically says, “If most people say really nice things about this
board, it must be pretty nice.”
The “Thumbs Up, Thumbs Down” approach says, “You don’t need to read the whole review, just look
at an item or two, and the concluding comments.”
I’m sure most of you have taken this approach at least on occasion. I know I do for items I don’t
really need to know all the details about initially.
However, should I need to look deeper into a matter, I read everything available for the factual details.
Before deciding to buy and review the board, I did read all the reviews of the Shuttle. I also looked at a number of Epox 8KHA+ reviews, and enough
about the Soyo K7V Dragon Plus to form an opinion. I would do the same were I to buy it for myself.
In the case of the Shuttle, most reviewers didn’t actually test the nonfunctional voltage settings, but a few did, and at least one of them mentioned all the significant issues I raised in my review.
From other comments the person made, it seemed like he went to the Shuttle site, looked at the reviews featured there, skimmed some benchmarks or maybe just the comments about them, and then proceeded to buy the board. I don’t think he even compared other boards to it.
From other comments he made, he didn’t seem too clear on exactly what I did say, outside of the tone
being nowhere near as praiseworthy as the others. Since I was the odd man out, obviously there was something wrong with me.
The point is that how he went about doing this is probably pretty typical of how many people decide what to buy, and this is something manufacturers take advantage of.
Don’t Look A Gift Horse In The Mouth?
The manufacturers are starting to get shrewd in their attempts to manipulate opinion.
Over the past few months, they’ve been treating hardware reviews like movie reviews, taking the most praiseworthy blurbs and plopping them onto their websites.
Awards have proliferated over the past few months, and, like Will Rogers, some of these places never met a free piece of equipment they didn’t like. In at least some cases, the quid pro quo is “rave review or no more reviews.”
Whether it’s implicit or explicit, at the very least, review sites can just go to the manufacturer’s website to see what the manufacturer wants, and many are inclined to give it to them.
Some a bit more conscientious may point out some flaws, but tend to minimize them, and try to put the best spin possible on a flawed product. Others will point out major flaws, then turn around and say “What a great product!” Anything to get the blurb the manufacturer wants.
Some of the major manufacturers have organized what can only be called PR offensives, arranging to have a dozen reviews all timed to hit at once.
What’s rather dubious about this is that these are often short-term loaners. Places get equipment for twenty-four hours or less.
This does not exactly lend itself to in-depth testing, and as a lot of married people can tell you, the first twenty-four hours is not necessarily proof of a match made in heaven. 🙂
The less time you have to test, the less likely it is you’re going to find faults. It’s just that simple.
Setting You Up
So the manufacturers make sure they have a bunch of places pretty much guaranteed to look kindly upon the product, or at least to say something very nice about it that the manufacturer can use.
If you use the Family Feud and Thumbs Up approaches, that pretty much leaves you a dead duck, doesn’t it?
The Emperor Is Wearing Very Fine Clothes, Isn’t He?
There’s an added bonus to all this. Once a manufacturer arranges to get favorable critical mass, anybody who says otherwise runs the risk of being considered somehow stupid and incompetent.
Nor does it necessarily help if you precisely document exactly what the problems are and why you think the way you do. The Thumbs Up people either don’t or can’t actually read all of what you said; they just read the end.
The Family Feud folks just look at the results of the survey, and you’re on the short end of that stick, too.
It’s pretty ironic that what was supposed to be a means of ascertaining the truth of the matter turns into a peer pressure exercise.
What’s A Poor Fellow To Do?
First, let’s call these practices what they are. You and I aren’t being smart by doing this; we’re being lazy.
Now if it’s something you don’t have much interest in, being lazy makes sense. If you use these approaches to see whether or not you should look further into something, that’s OK.
But you shouldn’t buy something solely based on either approach. You should read what the review has to say, keeping in mind what you want or don’t want in the product. I’ve had so many people mess themselves up because they bought first and looked later.
That takes care of the “Thumbs Up, Thumbs Down” approach, but what about those who use the Family Feud approach?
Time To Start Judging the Judgers?
Right now, all the pressure on a reviewer is in the direction of a rave review. People have short memories; they don’t keep track of how many rave as opposed to non-rave reviews a site writes up.
Maybe that’s what is needed now.
Maybe somebody needs to keep track of this, to distinguish between those who call everything wonderful and those who don’t. Let there be some kind of counterpressure against cheerleader reviews.
My initial thought on the matter is to take maybe fifty websites, look at their reviews over the last three to six months, and determine what percentage of products they deemed stellar, what percentage were OK, and what percentage weren’t very good.
If you asked me, from my experience, very, very roughly, I’d say 10% were stellar, 70% were OK for at least somebody, and 20% weren’t particularly good at all.
Now I’m not saying those percentages are etched in stone by any means, but if a place found 80% of what they reviewed stellar, 20% OK, and nothing bad, I would think that website is a little too easy, wouldn’t you?
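The scorecard described above amounts to nothing more than a simple tally per site. As a minimal sketch (the verdict labels and the sample numbers here are hypothetical, not real data from any site):

```python
from collections import Counter

def verdict_breakdown(verdicts):
    """Given a list of review verdicts ("stellar", "ok", "bad"),
    return each verdict's share as a percentage of all reviews."""
    counts = Counter(verdicts)
    total = len(verdicts)
    return {v: round(100 * counts[v] / total, 1) for v in ("stellar", "ok", "bad")}

# Hypothetical six months of reviews from one very generous site:
# 8 raves, 2 OKs, and nothing judged bad.
site_reviews = ["stellar"] * 8 + ["ok"] * 2
print(verdict_breakdown(site_reviews))  # {'stellar': 80.0, 'ok': 20.0, 'bad': 0.0}
```

A site whose breakdown looks like this stands out immediately next to one closer to the 10/70/20 split above, which is the whole point of publishing the numbers.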
I’m sure some places will go stark raving mad over this, but I don’t see any good reason why they should. This isn’t going to judge individual reviews, it will merely be a recording of their judgments for the benefit of the reading audience.
If places say “it’s none of your business what we do,” aren’t they really saying that to you?
What do you think about this idea? Drop me a note.