Sunday, August 21, 2011

It's Amazing What $5 Can Buy You

Is a five-star review really better than a four-star one? What does "excellent" service (as opposed to "very good") mean anyway? And by how much does a rave review on Yelp affect your restaurant choices?

I wrote recently about "anonymous" Amazon reviewers, and about the gifts some receive for positive reviews. If you found that troubling, as I did, you'll find an article in yesterday's New York Times by David Streitfeld even more troubling. It appears that "[as] online retailers increasingly depend on reviews as a sales tool, an industry of fibbers and promoters has sprung up to buy and sell raves for a pittance."

Reviewers-for-hire may receive as little as $5 or $10 for a positive review, a "pittance" indeed, but if you churn out enough of them you can make enough to get by -- without wasting the time, effort, and money of actually reading the book or staying at the hotel.

Streitfeld continues,
The boundless demand for positive reviews has made the review system an arms race of sorts. As more five-star reviews are handed out, even more five-star reviews are needed. Few want to risk being left behind.
Does this sound familiar? We are all experiencing this "arms race" every day. I take my car in for a regular service appointment and am told that the service team needs an "excellent" rating on my "anonymous" customer satisfaction survey call, or they will all be at risk of losing key bonuses. A friend, at the end of a cruise-ship vacation, is told that anything less than an "excellent" rating will ruin the careers of crew members. And, most egregiously, schoolteachers in Atlanta (and elsewhere) have been encouraged to "correct" their students' standardized test answers in order to meet and exceed the No Child Left Behind goals.

Almost anything can be measured. Some things should be measured. But before you go crazy on the measurement side, you might want to think about the unintended consequences. The more importance you place on a test, the more likely you are to see this kind of behavior. I am not excusing cheating, but I am explaining it. We should not be surprised by it. And maybe there are other ways of measuring that would provide more useful data -- though no doubt they'd be more expensive.

Consider book reviews, for example. Fewer and fewer newspapers and magazines these days have reviewers on staff, but I value those reviewers' comments. Over time, I get to know their idiosyncrasies ("Oh, no wonder it's a negative review -- she hates hard science fiction.") and so know how to weight a positive or negative verdict. And I know that the critic's newspaper or magazine has paid for the book, not the reviewer herself. An Amazon reviewer, by contrast, could be the author in disguise, the author's best friend, the editor's husband, the publisher's enemy, or, in fact, a disinterested reader -- I don't have enough information to judge the worth of the review. I don't even know whether the reviewer has actually read the book in question (I don't know that about the newspaper or magazine reviewer either, but I consider it far more likely).

"The whole system falls apart if made-up reviews are given the same weight as honest ones," says a Cornell University researcher quoted by Streitfeld, who is part of a team working on an algorithm to detect fake reviews.

All those "personal" reviews give us a false sense of community. A book recommendation from a friend who knows me well is qualitatively different from a book recommendation in the New York Review of Books. A book recommendation from Amazon purports to be more like a friend's review... but in fact, it's not. Maybe "the whole system" should fall apart.
