How We Score

The Verge reviews: how we test and review products

Product reviews can be tricky. Every reviewer has a different style and a different way of assessing a product, which is why no two reviews of the same product will ever read the same. At The Verge we have built a reviews program that strives to standardize our reviews without abandoning the uniqueness of each of our reviewers. Below is a short guide to our methods and an explanation of our rating practices.

General reviewing

Our reviews are first and foremost centered on real-life experience with the product. They are firmly based on the reviewer's experience of using the product for a substantial amount of time; they are absolutely never written from a spec sheet or from a fleeting experience with the product. Whatever the product is — a phone, laptop, TV, app, etc. — we strive to work it into our everyday lives and give the reader a picture of the product in the real world.

Benchmarks and The Verge Battery Test

In many cases we combine that anecdotal experience with systematic (or synthetic) benchmarks, especially in the realms of performance and battery life. Most of the tests we run are used industry-wide (PCMark Vantage, 3DMark Vantage, SunSpider, etc.); the one exception is our own benchmark, The Verge Battery Test.

The Verge Battery Test runs in any browser, and thus across operating systems. With brightness set to 65 percent (unless otherwise noted), it cycles through a series of 100 websites, downloading a high-resolution image every six sites. It is no doubt a taxing test, but in the absence of a good industry benchmark we decided to take matters into our own hands with a test we feel simulates real-world usage. Shockingly, many of the battery benchmarks out there don't use Wi-Fi or the browser at all.
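
The actual test page is not public, so the following is only a minimal sketch of the kind of loop that description implies, assuming a browser context; the site list, image URL, dwell time, and function names are all placeholder assumptions.

```typescript
// Hypothetical sketch of a browser-driven battery rundown loop in the
// spirit of the test described above. The real site list and image are
// not published; everything below is a placeholder.

const SITES: string[] = [
  "https://example.com/page-1", // placeholders; the real test uses 100 sites
  "https://example.com/page-2",
  // ...
];
const IMAGE_URL = "https://example.com/high-res.jpg"; // placeholder image
const DWELL_MS = 15_000; // assumed time spent on each site before moving on

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function runCycle(frame: HTMLIFrameElement): Promise<void> {
  for (let i = 0; i < SITES.length; i++) {
    frame.src = SITES[i]; // load the next site in the rotation
    if ((i + 1) % 6 === 0) {
      // Every sixth site, also pull down a large image to keep the
      // network stack busy, per the description above.
      await fetch(`${IMAGE_URL}?bust=${Date.now()}`);
    }
    await sleep(DWELL_MS);
  }
}

async function main(): Promise<void> {
  const frame = document.createElement("iframe");
  document.body.appendChild(frame);
  for (;;) {
    // Loop until the battery dies; elapsed wall-clock time is the result.
    await runCycle(frame);
  }
}

void main();
```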

Ratings

Every reviewed product (unless otherwise noted) is given an editor's rating. We rate against a series of predetermined category criteria (display, speakers, performance, etc.), and in most cases the final score is the average of those subscores — yes, even down to the decimal point. However, since it is not a weighted average, the editor reserves the right to adjust the score to better reflect the overall assessment of the product, including how the price factors in. We assume the ten-point scale is relatively straightforward, but below is a short guide to how we view the numbers, followed by a quick sketch of the averaging.

  1. Utter garbage.

  2. Slightly better than garbage, but still incredibly bad.

  3. Not a complete disaster, but not something we’d recommend.

  4. Mediocre, and likely has outstanding issues.

  5. Just okay.

  6. Good. There are issues, but also redeeming qualities.

  7. Very good. A solid product with some flaws.

  8. Excellent. A superb product with minor flaws.

  9. Nearly perfect.

  10. The best of the best. Perfect.
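
To make the arithmetic concrete, here is a minimal sketch of the unweighted averaging described above; the category names and scores are invented for illustration.

```typescript
// Illustrative only: the categories and subscores below are made up.
const subscores: Record<string, number> = {
  display: 8,
  speakers: 7,
  performance: 9,
  battery: 6,
};

const values = Object.values(subscores);
const average = values.reduce((sum, s) => sum + s, 0) / values.length;

// "Down to the decimal point": (8 + 7 + 9 + 6) / 4 = 7.5
console.log(average.toFixed(1)); // "7.5"
```

Because that average is unweighted, this is also where the editor's discretion comes in: a 7.5 might be nudged up or down when the plain average misrepresents the product as a whole.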

Last updated 10.31.2011 by Joanna Stern