How Wine Scores Work, and Where They Don’t

Understanding Sparkling Wine Scores and the Reviewers Behind Them.


The little numbers associated with your bottle of bubbly can have a huge impact on price and prestige– but should they? Let’s break down the system behind wine scores and officially demystify the numbers that drive the industry. 

The Purpose of Wine Scores

We usually can’t taste a bottle before we buy, so we rely on the opinions of trusted experts who have tasted it before pulling the trigger on a purchase. (Ahem, like, say, us!) Wine scores are a simple way for critics to convey their opinion of a wine’s quality to the public in a way that transcends the bounds of language. 

But here’s the deal – every wine critic has a different palate and a different opinion. Not to mention that wines are often tasted at various stages of production: some are tasted straight from the barrel, others in their perfect state of maturity. The point here is that wine is alive – while the best producers maintain strictly measured consistency, there will always be slight differences from bottle to bottle (and even within the same bottle at different times of tasting). This is why wine scores simply can’t be the end-all-be-all for the value and taste of a wine; there’s just no way to encapsulate all that nuance and complexity. The scoring system is simply a benchmark for critics to communicate their thoughts on the quality of a particular wine. 

The first iteration of a wine scoring system was developed in 1959 by Dr. Maynard A. Amerine, chairman of the UC Davis Department of Viticulture and Enology. This was a 20-point system intended for California winemakers to grade the production quality of a wine based on its clarity, flavor, and stability. 

The 100-point rating system was developed in the late 1970s by esteemed wine critic Robert Parker while he was writing for his own publication, The Wine Advocate. Parker needed a standard to base his judgements on. Though his systematic approach has received some serious flak from wine professionals, it has become the somewhat-standard system that most critics use today. 

How Wine is Rated

Here’s a common misunderstanding: wine is scored according to its deliciousness. This, of course, isn’t the case – though shopping for wine would be as easy as pie if that kind of rating were possible! Alas, it’s just not how scoring works.

Critics score wines based on their production quality and typicity, which describes how much a wine (or sparkling wine) exemplifies the traits of its style and region. Because of the emphasis on typicity, wine scores are often compared to dog shows. Dog show judges evaluate dogs based on the expectations of their breed. A dog with physical traits uncharacteristic of his breed will probably not win first prize, regardless of how good of a boy he may be. 

The same idea applies to wine. A Napa Cabernet Sauvignon that does not possess the typical qualities of its region will probably not have an exceptional rating, regardless of how drinkable it may be.

Wine scoring is carried out by both professional critics and publications. Unbiased tastings should be blind, but they often are not. Oftentimes, a reviewer is tasting at the estate with the winemaker and vintner. Sometimes wines are opened (and even decanted) before a critic arrives. The door to bias is always open, and some critics are infamous for their apparent quid-pro-quo evaluations. 

While tasting, most professionals use a standard scorecard as a checklist to formulate their 100-point score. The scorecard calls on the reviewer to examine the wine’s appearance, aroma, flavors, consistency, finish, and varietal representation. This still leaves room for subjectivity, but respected reviewers take care to be as objective as possible. 

The 100-Point Scale

When you hear about 90-something point wines, you’re hearing about wines rated with the 100-point system developed by Robert Parker. At its conception, the 100-point system was only used by Parker himself, but it has since been adopted as the standard by most wine professionals. 

Here’s how the 100-point scoring guide generally breaks down. 

  • 100–96 – Classic wines of superior quality; ranking at this level is a true achievement.
  • 95–90 – Excellent, highly recommended wines that offer refined complexity and character.
  • 89–85 – Very good and often well-priced wines that represent the qualities of their style and region.
  • 84–80 – Solid wines suitable for everyday enjoyment, often well-priced. 
  • 79–75 – Drinkable for everyday consumption, but lacking complexity or any remarkable qualities.
  • 74–50 – Not recommended; mediocre, and often bearing some critical flaw.  

The score you see when shopping for a bottle can be one of two things: a single critic’s score, or the average of several critics’ scores. Some wine shops choose to display the average to account for the variance between different critics’ reviews. 
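For the curious, the averaging a shop might do behind the scenes is nothing fancy. Here’s a minimal sketch – the scores and the `average_score` helper are hypothetical, purely for illustration:

```python
def average_score(scores):
    """Average several critics' 100-point scores, rounded to the nearest whole point."""
    return round(sum(scores) / len(scores))

# Hypothetical scores for one cuvée from three different critics
critics = {"Critic A": 92, "Critic B": 88, "Critic C": 90}
print(average_score(list(critics.values())))  # prints 90
```

Note how a 92 and an 88 collapse into a single 90 – a tidy number that hides exactly the critic-to-critic variance it’s meant to smooth over.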

When wines get into the 90+ point range, critics tend to disagree. This disagreement ultimately boils down to preference – which, of course, is not an intended aspect of wine scoring, just one that is nearly impossible to sever from the experience of tasting wine. 

An Imperfect Model

Try as we might, the business of wine scoring will always, to a certain degree, be subjective. There’s simply no way of completely separating a critic’s bias from their wine evaluation. This is true for any wine rating system, but there are some inconsistencies to the 100-point scoring guide in particular that can be addressed. 

The first and most glaring inconsistency is the weight of points– let’s unpack it.

The 100-point scale starts at 50 instead of at a neutral zero – and even so, the midpoint of 75 is anything but a neutral score for most publications. In fact, Wine Enthusiast’s current scale doesn’t even include scores below 80. Of the wines that are rated, very few receive scores below 85, and scores below 80 are rarely published. Scores in the 50–60 point range? Almost unheard of. 

The wine score bell curve peaks at about 88 points and nears the x-axis again at around 79 points– which tells us that the wine industry just doesn’t use the full scale. This creates a narrow range for wines to be placed in, and some suggest that this narrow scope devalues a high score. 

It can be argued that these issues change the overall meaning of the 100-point scale. Instead of acting as a tool to measure a wine’s quality, it has become a method to measure a wine’s excellence. 

Bubble Trouble – Issues with Scoring Champagne

Those immersed in the Champagne scene may have already noticed the lack of scores on their bubbly. Submitting scores for Champagne can be tricky for a few reasons. 

Though the numbers change from year to year, around 95% of Champagnes (and many other sparkling wines) are non-vintage. Non-vintage, or NV, Champagnes are made from a blend of various years and are constantly evolving with every batch. Different batches of the same house’s NV can have different aging and bottling times, and even different dosage levels. But here’s the kicker: despite all of these differences from batch to batch, the label of a house’s non-vintage cuvée remains the same. 

So two identically labeled bottles of a house’s non-vintage can be wildly different from each other. Yes, the goal of a non-vintage is to show the essence of the estate with consistency across bottlings – but that’s impossible to achieve at 100 percent. 

Even though a house’s non-vintage bottles evolve, a critic’s score is fixed to the specific bottle they tasted – yet it’s presented as a rating of the non-vintage as a whole, not of that particular batch and blend. With a score alone, there’s no telling which batch the critic received and how different it could be from the bottle you have… problematic, right? But this isn’t the only problem Champagne and sparkling wine have with wine scores. 

Small Champagne houses have had a tough history with critics, who have often been unfairly harsh with their scores despite the wines’ exceptional quality. Don’t let the term “small house” mislead you – these houses were producing small-batch, stellar sparklers in direct competition with the big houses. So they stopped sending their wines for review with the big critics, opting for lesser-known (but often more trustworthy) critics instead. 

With all of this in mind, we have to ask whether wine scores actually matter. The answer? Yes, as long as you use them right. 

How to Use Wine Scores to Your Advantage

Wine ratings are a guide map, not a prophecy. You aren’t destined to love a wine just because of its stellar reviews – and you aren’t destined to hate a wine just because a critic didn’t like it. 

The best course of action is to lead with your palate and find a critic or publication that tends to think like you. Start with tasting, tasting, tasting – explore different styles and regions and, you guessed it, taste even more! Our recommendation is to pull up the tasting notes and written reviews only after you’ve formed your own opinion of your glass. Keep this up and you’ll notice patterns and find your special critic (how is there not an app for wine-critic matching yet?!). Once you’ve found them, you can use their ratings to help you shop for sparkling wines you’re likely to love. 

To give you a head start, here are some notable sparkling wine critics and publications to keep tabs on: Tom Stevenson, James Suckling, Robert Parker Wine Advocate, and Champagne Club by Richard Juhlin.

But remember, even your most beloved critic’s word is not gospel! When all else fails, filter out the noise and enjoy your bubbly the way YOU like it. 

Cheers!

Staff Writer


We'd love to hear your thoughts! Comments can be sent by email to cheers@lastbubbles.com!