Why this study?

Think back to the last time you checked out online reviews. Maybe it was while you were hunting for a great restaurant on Google Maps or deciding whether to buy something on Vinted or Amazon.

Did you feel uneasy while reading those reviews? I did. Choosing one place over another just because its rating is 0.1 higher can feel frustrating, especially when you know some reviews are fake or written by people with very different standards. There are countless reasons why relying on these reviews can be unfair to options that might be just as good, if not better, and can leave us with nothing more than a false sense of control.

So, I decided to delve into why reviews don’t always work. I started with the basics—why we read them, why we write them—and then explored what goes wrong at different levels: the reviewer, the reader, the business, and society as a whole.

Let’s kick it off

The main goal of online reviews is to build trust in businesses. However, for this trust to be well-founded, we also need to trust the review process itself. And that's where the cracks begin to show:

95% of people in the US look at online reviews before making a purchase $^1$, but only 50% have left a review for one of their last 10 purchases, and a mere 2% do it for every purchase $^2$. Outside the US, these figures are estimated to be even lower. Reviews are heavily relied upon, yet only a small subset of customers actually contributes, often providing minimal context and information.
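To make that gap concrete, here is a toy simulation (a minimal sketch in Python; every number in it is an invented assumption for illustration, not a figure from the sources cited above). If only a small, self-selected minority of customers reviews, and that minority leans toward strong opinions, the average shown to readers can drift noticeably away from how customers actually felt:

```python
import random

random.seed(42)

# Hypothetical "true" satisfaction of 10,000 customers (1-5 stars):
# most are moderately happy, few are thrilled or furious.
population = random.choices([1, 2, 3, 4, 5],
                            weights=[5, 10, 30, 40, 15], k=10_000)

# Assumption: unhappy (1-2 stars) and delighted (5 stars) customers
# are far more likely to bother posting a review than the satisfied
# but unremarkable majority.
review_probability = {1: 0.30, 2: 0.20, 3: 0.02, 4: 0.03, 5: 0.15}
reviews = [s for s in population if random.random() < review_probability[s]]

true_mean = sum(population) / len(population)
shown_mean = sum(reviews) / len(reviews)

print(f"True average satisfaction : {true_mean:.2f} stars")
print(f"Average of posted reviews : {shown_mean:.2f} stars "
      f"(from only {len(reviews)} of {len(population)} customers)")
```

With these made-up numbers, only about 7% of customers end up reviewing, and the displayed average lands roughly 0.4 stars below the true one. The exact direction and size of the drift depend entirely on who self-selects, which is precisely the problem.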

So, what’s going wrong here?

<aside> 🔑 This meta-analysis provides a comprehensive overview of the principles behind online reviews: what works, what doesn’t, and what could be done about it. It draws from books, articles, research studies, and statistics—and, of course, my own interpretations and sentiments.

This document is open to everyone, and it is a work in progress: please feel free to comment, react, share your opinions, offer confirmations or contradictions, and contribute any other articles that can enrich this exploration. The aim is to create a dynamic and evolving reference.

I’ll be using the 5-star rating system as the default throughout this document. While other systems exist (NPS, CSAT, Like/Dislike…), the 5-star system is the most prevalent, especially on public platforms, making it the logical focus for this analysis.

</aside>

Table of contents

Introductory principles

Why do we look at online reviews?

Why do we leave online reviews?

Why do businesses invest in online reviews?

When requesting a review, the question asked matters

Expectations, subjectivity, standards & risks

What needs to be fixed: at the reviewer level

Unclear scale: when reviewers don’t know what score to choose

Categorization: reviewers should evaluate on specific criteria

Ratings don’t cover the depth and nuances of an experience

People are more likely to express an “extreme” opinion

Customers may feel bad reviewing other people

Some people don’t send reviews out of fear of retaliation

Data privacy is a concern to reviewers

Fake or corrupted reviews flood the web

A review should be quick to give… while remaining informative

Businesses often request reviews at the wrong time

Review requests are invasive

People don’t know what to say when asked for a review

‘Will they read it anyway?’: People are concerned their review may be useless

Deciding not to give a review somehow still constitutes a review

What needs to be fixed: at the reader level

How rating thresholds shape our purchasing choices

How many reviews are enough? Quantity generates trust

Few reviews lead to unstable ratings

Not all reviews count the same

About unrepresentative reviewers and biased feedback

Sentiment distribution among reviewers matters to readers

The impact of suggestive reviews

Too many reviews to read leads to overload & doubt

Satisfaction vs. Performance: understand the difference

What needs to be fixed: at the business level