When choosing a restaurant on Kakao Map, the first thing most people look at is the star rating. If a place has a 4.8 rating and 500 reviews, most diners decide to visit without much hesitation.

Naver abolished restaurant star ratings in 2023, and Kakao has introduced payment verification and on-site photo verification. It is clear that both platforms recognize the limits of star ratings.

So how accurate are Kakao Map ratings in reality? We tested their reliability by conducting a full analysis of 1.75 million reviews from 84,736 restaurants in Seoul and Gyeonggi.

1. 98.9% of Kakao-rated 4.8+ restaurants were classified as great restaurants

For this analysis, we introduced the concept of a 'Gold reviewer': a Kakao Map user who has written at least 50 reviews and whose average rating falls between 2.5 and 4.2, making them a balanced evaluator who is neither overly generous nor excessively harsh. Using these Gold reviewers as the benchmark, we assigned reliability weights to all reviewers and calculated a weighted positive rate for each restaurant. A rate of 75% or higher was classified as 'great restaurant', 50–75% as 'decent', 30–50% as 'average', and below 30% as 'caution'.
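The classification described above can be sketched as follows. The Gold-reviewer criteria and the verdict thresholds come from the article; the reliability-weighting scheme is an assumption for illustration, and the 30–50% 'average' band is inferred from the chart categories.

```python
# Sketch of the article's verdict logic. The weighting scheme below is
# an illustrative assumption, not Kakao's or the analysts' actual formula.

def is_gold(review_count, avg_rating):
    """Gold reviewer: 50+ reviews, average rating between 2.5 and 4.2."""
    return review_count >= 50 and 2.5 <= avg_rating <= 4.2

def weighted_positive_rate(reviews):
    """reviews: list of (is_positive, weight) pairs, where weight is the
    reviewer's reliability (e.g. 1.0 for Gold-like reviewers, lower otherwise).
    Returns a percentage, or None when there is no usable data."""
    total = sum(w for _, w in reviews)
    if total == 0:
        return None  # indeterminate case, excluded from the charts
    return 100 * sum(w for pos, w in reviews if pos) / total

def verdict(rate):
    """Map a weighted positive rate (%) to the article's verdict bands."""
    if rate >= 75:
        return "great restaurant"
    if rate >= 50:
        return "decent"
    if rate >= 30:
        return "average"
    return "caution"
```

A restaurant whose weighted positive rate lands at 80% would thus be classified as a great restaurant regardless of its raw Kakao star average.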

Applying this standard to 8,248 restaurants with a Kakao rating of 4.8 or higher, 98.9% were classified as great restaurants. Only 1.1% were rated decent, and just 0.02% fell into average or below. All 2,995 restaurants with a perfect 5.0 rating were classified as great restaurants.

98.9%
Share of 'great restaurant' verdicts among places with a Kakao rating of 4.8+
Chart 1
Actual verdict distribution of highly rated Kakao restaurants
Verdicts based on weighted positive rate · excluding indeterminate cases · 69,069 restaurants

Kakao rating   Great restaurant   Decent   Average + caution
4.5+           91.1%              8.5%     0.39%
4.8+           98.9%              1.1%     0.02%
5.0            100%               0%       0%

The 'average + caution' column is the combined disappointment rate. Even once a Kakao rating exceeds 4.5, the probability of disappointment was only 0.39%.

If you choose a restaurant with a Kakao rating of 4.5 or higher, the probability of it being judged average or below is 0.39%. At 4.8, that drops to 0.02%. By the numbers alone, Kakao star ratings appear to be highly accurate.

So does that mean there is no deception in star ratings at all? A deeper look at the data tells a different story. The problem was not the star rating itself, but the quality of the reviewers who make up that rating.

2. More than the review count, it is the number of Gold reviewers that determines accuracy

In general, people assume that the more reviews a place has, the more accurate its rating must be. Intuitively, 500 reviews should be more trustworthy than 30. But the data shows a different pattern.

We compared the share of cases where the gap between the Kakao rating and the weighted analysis score exceeded 0.5 points—in other words, the 'rating error rate'—by grouping restaurants not by total review count, but by the number of Gold reviewers.

Chart 2 — Key finding
Rating error rate by number of Gold reviewers
Share of cases where the gap between Kakao rating and weighted analysis score is ≥ 0.5 points
Gold reviewers   Restaurants   Error rate (gap ≥ 0.5 pts)
0                4,474         33.6%
1–2              16,918        30.0%
3–4              15,486        21.6%
5–9              21,141        15.7%
10–19            6,408         7.4%
20+              4,642         1.4%
If a restaurant has zero Gold reviewers, Kakao ratings are off by 0.5 points or more in 1 out of 3 cases. Once there are 20 or more Gold reviewers, the error rate converges to 1.4%.
33.6%
Rating error rate when Gold = 0
1.4%
Rating error rate when Gold = 20+

Among restaurants with not a single Gold reviewer, Kakao ratings were off by 0.5 points or more in 1 out of 3 cases (33.6%). By contrast, among restaurants with 20 or more Gold reviewers, that figure fell to just 1.4%. This quantitatively shows that what determines rating accuracy is not the quantity of reviews, but the quality of the reviewers.

In fact, even restaurants with 500 reviews showed low rating reliability if they had only one or two Gold reviewers, while places with just 50 reviews but 10 or more Gold reviewers had markedly lower error rates.
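The error-rate computation behind Chart 2 can be sketched roughly like this. The bucket boundaries and the 0.5-point threshold match the chart; the sample data and the field layout are hypothetical.

```python
# Group restaurants by Gold-reviewer count and measure how often the
# Kakao rating differs from the weighted score by 0.5 points or more.

def gold_bucket(n):
    """Bucket a Gold-reviewer count the way Chart 2 does."""
    if n == 0:
        return "0"
    if n <= 2:
        return "1-2"
    if n <= 4:
        return "3-4"
    if n <= 9:
        return "5-9"
    if n <= 19:
        return "10-19"
    return "20+"

def error_rate_by_bucket(restaurants):
    """restaurants: iterable of (gold_count, kakao_rating, weighted_score).
    Returns {bucket: error rate in %}."""
    counts, errors = {}, {}
    for gold, kakao, weighted in restaurants:
        b = gold_bucket(gold)
        counts[b] = counts.get(b, 0) + 1
        if abs(kakao - weighted) >= 0.5:
            errors[b] = errors.get(b, 0) + 1
    return {b: 100 * errors.get(b, 0) / counts[b] for b in counts}

# Toy sample: one of the two Gold-0 places is off by more than 0.5 points.
sample = [(0, 4.5, 3.2), (0, 4.6, 4.5), (25, 4.3, 4.2), (25, 4.0, 4.1)]
```

On this toy sample the Gold-0 bucket shows a 50% error rate and the 20+ bucket 0%, mirroring the direction of the real finding.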

3. Kakao actually underrates 'clean' restaurants

In this analysis, the level of rating inflation was divided into three stages. If the share of non-discriminating reviews exceeded 40%, or if the gap between the Kakao rating and the weighted score exceeded 0.5 points, the restaurant was labeled 'caution'. If those figures exceeded 20% or 0.3 points respectively, it was labeled 'suspicious'. Everything else was classified as 'clean'.
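The three-stage labeling can be written as a simple rule. The thresholds are the ones stated above; the input names are assumptions for illustration.

```python
# Bubble-grade labeling as described in the article. Inputs are assumed:
# non_discriminating_share is a fraction (0-1) of non-discriminating
# reviews, rating_gap is Kakao rating minus the weighted analysis score.

def bubble_grade(non_discriminating_share, rating_gap):
    if non_discriminating_share > 0.40 or rating_gap > 0.5:
        return "caution"      # severe inflation
    if non_discriminating_share > 0.20 or rating_gap > 0.3:
        return "suspicious"   # mild inflation
    return "clean"
```

Note that either condition alone is enough to trigger a label, so a restaurant with a modest rating gap can still be flagged if its review pool is dominated by non-discriminating reviews.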

What stands out is the data for restaurants in the 'clean' category.

Chart 3
Kakao rating distortion and disappointment rate by bubble grade
Kakao − weighted score gap + share judged 'average + caution'
Grade        Inflation   Kakao − weighted gap   Average + caution share   Restaurants
Clean        Normal      −0.39                  19.0%                     39,743
Suspicious   Mild        −0.15                  15.5%                     20,823
Caution      Severe      +0.48                  55.0%                     8,503

For 'clean' restaurants, Kakao ratings are actually 0.39 points lower than reality. In the 'caution' grade, Kakao overestimates by +0.48 points, with 55% rated average or below.

For restaurants in the 'clean' category, Kakao ratings were found to be 0.39 points lower than the weighted analysis score. In other words, the tougher, more discerning reviewers were actually more generous than Kakao's simple average. A 'clean' restaurant with a Kakao rating of 3.9 could in reality be closer to a 4.3.

By contrast, in the 'caution' category, Kakao was overestimating by 0.48 points, and the share judged average + caution reached 55%. So even if two restaurants both have a Kakao rating of 4.3, their actual quality can be completely different depending on their bubble grade.

4. There are restaurants with a Kakao rating of 4.5 whose actual score is 1.38

There were 772 restaurants with a Kakao rating of 4.0 or higher that were nevertheless judged average or below. When we break down the gap between their Kakao rating and weighted analysis score by verdict, a consistent pattern emerges.

Chart 4
Kakao inflation and Gold reviewer count by verdict
Kakao 4.0+ restaurants · excluding indeterminate cases
Verdict            Kakao − weighted gap   Average Gold count   Restaurants
Great restaurant   −0.07                  6.1                  26,009
Decent             +0.08                  7.7                  10,492
Average            +1.10                  2.8                  712
Caution            +1.32                  1.7                  60

Restaurants judged 'average' or 'caution' have only 1.7–2.8 Gold reviewers on average, less than half that of great restaurants.

Restaurants judged to be great showed a gap of just −0.07 points between the Kakao rating and the weighted score—essentially the same level. Even the decent category was within the margin of error at +0.08 points.

By contrast, restaurants judged 'average' showed Kakao overestimating by 1.1 points, while those judged 'caution' showed an overestimation of 1.32 points. In other words, a restaurant with a Kakao rating of 4.2 could in reality be closer to 2.9. What these places had in common was a near absence of Gold reviewers, averaging only 1.7 to 2.8.

Looking at specific cases makes the pattern even clearer.

Restaurant name Kakao Weighted score Gap Gold Verdict
양*** 삼성역점 (Yang*** Samseong Station Branch) 4.5 1.38 +3.12 1 Average
남****** 양주덕계점 (Nam****** Yangju Deokgye Branch) 4.6 2.24 +2.36 1 Average
바******* 별가람역점 (Ba******* Byeolgaram Station Branch) 4.6 2.67 +1.93 3 Average
당**** 신촌점 (Dang**** Sinchon Branch) 4.6 3.14 +1.46 5 Average
판** 수내직영점 (Pan** Sunae Direct Branch) 4.5 3.33 +1.17 2 Average

In the case of 양*** 삼성역점 (Yang*** Samseong Station Branch), the Kakao rating was 4.5, but the weighted analysis score was 1.38—a huge gap of 3.12 points. It had only one Gold reviewer. 판** 수내직영점 (Pan** Sunae Direct Branch) had plenty of data with 573 reviews, but only two Gold reviewers. Even when the number of reviews is high, the meaning of the star rating becomes diluted if there are no trustworthy reviewers behind it.

The common pattern among the 772 restaurants judged average or below is clear: an average of 2.4 Gold reviewers, and a Kakao overestimation of +1.1 points. It reveals a structural problem—the absence of balanced evaluators capable of validating the rating.

5. More important than the rating is the makeup of the people creating it

We found that Kakao star ratings themselves are not inherently deceptive. The problem lies in a structure where all reviews are reflected equally in the rating. A first-time user with three reviews and an experienced reviewer with more than 200 reviews are contributing to the average with exactly the same weight.
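A toy calculation makes the structural problem concrete. The ratings, reviewer counts, and weights below are invented for illustration; Kakao's actual rating is a plain mean, which is the first number computed here.

```python
# Equal-weight averaging vs. a reliability-weighted average.
# All data and the 0.2 novice weight are illustrative assumptions.

ratings = [
    # (stars, reviewer's total review count)
    (5.0, 3),    # near-first-time users rating generously
    (5.0, 2),
    (5.0, 1),
    (3.0, 240),  # experienced reviewers rating critically
    (3.5, 180),
]

# Kakao-style simple mean: every review counts equally.
simple_avg = sum(r for r, _ in ratings) / len(ratings)

# Assumed weighting: experienced reviewers (50+ reviews) count fully,
# everyone else at a fifth of that weight.
weights = [1.0 if n >= 50 else 0.2 for _, n in ratings]
weighted_avg = sum(r * w for (r, _), w in zip(ratings, weights)) / sum(weights)

print(f"simple: {simple_avg:.2f}, weighted: {weighted_avg:.2f}")
```

With these numbers the simple mean is 4.30 while the weighted mean drops to about 3.65, which is exactly the kind of gap the 'caution' restaurants exhibit.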

Naver's decision to abolish star ratings, and Kakao's introduction of payment verification and on-site photo verification, appear to stem from the same recognition. The core limitation is not the rating itself, but the absence of a mechanism to evaluate the quality of the reviewers who make up that rating.

Practical checklist for choosing a great restaurant

Bubble grade 'clean' + at least 5 Gold reviewers
A zone where Kakao ratings are actually underestimated. Even a 3.9 rating may in reality be closer to 4.3.
Bubble grade 'suspicious' or fewer than 3 Gold reviewers
Check the weighted positive rate as well. It is better to look at the Gold count than the total number of reviews.
Bubble grade 'caution'
Subtract 0.5 points from the Kakao rating when judging it. In this zone, 55% turned out to be average or below.
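The checklist can be condensed into a rough adjustment heuristic. The +0.4 and -0.5 offsets echo the average gaps reported above (the −0.39 underestimation for 'clean' and the 55% disappointment zone for 'caution'), but this is a reading aid, not a formula from the analysis.

```python
# Heuristic reading of a Kakao rating given bubble grade and Gold count.
# Offsets are rounded from the article's averages; treat as a rule of thumb.

def adjusted_rating(kakao_rating, grade, gold_count):
    if grade == "clean" and gold_count >= 5:
        return kakao_rating + 0.4   # Kakao underrates this zone (~0.39 pts)
    if grade == "caution":
        return kakao_rating - 0.5   # 55% of this zone was average or below
    # 'suspicious' or too few Gold reviewers: the star rating is weak
    # evidence either way; check the weighted positive rate instead.
    return kakao_rating
```

Under this rule, a 'clean' 3.9 reads as roughly 4.3 and a 'caution' 4.3 reads as roughly 3.8, matching the examples in the sections above.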

The conclusion from analyzing 1.75 million reviews is clear. A Kakao rating of 4.8 was almost always accurate. The distortion did not come from the rating number itself, but from restaurants with high ratings but no Gold reviewers behind them.

For consumers, what matters is not just the rating number itself, but also the makeup of the reviewers who produced it. Instead of simply counting reviews, checking how many experienced and balanced reviewers are included may be the most practical way to avoid being misled by star ratings.