How We Test: Our exclusive 5-point rating system explained

In the interest of transparency, we want to disclose how we select weather stations and gadgets and how we test these devices. Many review sites don’t test the products; they rely on customer reviews to form their opinions. We do things a bit differently.

How We Select What to Test

Knowing how we test helps consumers make informed decisions about the weather stations they choose, which is why we explain in detail how we evaluate every product we review.

Weather stations and gadgets selected for testing meet our requirements for a standard feature set and often have standout features. We do accept devices that manufacturers offer for testing, but we do not receive monetary compensation from manufacturers in exchange for reviews. Manufacturers have no say in how we test and are informed of our process ahead of time. Occasionally, manufacturers might purchase advertising through our ad partners or directly through us. We do not actively solicit companies to advertise alongside their reviews, nor do any ad purchases affect the outcome of those reviews.

Every product we consider goes through our rigorous testing process.

Our privacy policy includes more information on our advertising policies.

How We Select Products for Our Roundups

Occasionally, we must include products we have not had personal experience with, as it’s impossible to test every device. These frequently appear in our product roundups. When these products are included, we rely on reviews on retailer websites such as Amazon. Sometimes, we may write a review based on our knowledge of a similar model. For example, Ecowitt stations, for the most part, aren’t sold in the US. However, since Ambient Weather and Ecowitt use the same hardware, we can write a review with a high degree of accuracy even though the product isn’t in our hands.

Most retailer sites mark reviews from actual customers as a “verified purchase,” and these are the only reviews we use to judge products for selection in a product roundup. We do not use non-verified reviews in our product selections.


How We Test

Our new rating formula reflects how we test and the importance of accuracy and reliability.

Weather station testing is far from an exact science. However, we do follow a standard method.

All stations tested are placed on the same mount in the exact same location at our testing site. Sensors are compared with analog instruments where possible, and when an analog instrument isn’t available, a nearby NOAA weather observing station is used.

The typical test lasts about 2-4 weeks, depending on the weather variability during the testing period. Select stations remain installed past the initial review period, allowing us to provide long-term reviews of these stations. For example, our Davis Vantage Vue has been continuously operating since our initial test in September 2016!

We regularly review our testing process to ensure that our evaluations are thorough and fair, ultimately benefiting the consumer.

We do not retest updated versions of a product that we deem substantially similar to the previous version. For example, our WS-2902 review is based on the “A” model, but the station’s operation generally did not change in successive revisions. Our manufacturer policy details this and the reasons why we’ve made this change.

Our New Rating Formula

In the past, we rated devices subjectively. However, we wanted to standardize our ratings to make them more objective. The result is a new rating formula, which took effect across our network on June 1, 2022. As a result, it’s become considerably more challenging to get a “five-star rating,” which should be reserved only for the genuinely top-tier devices in each category.

So how are our ratings determined? We weight each category based on what we think is most important, with accuracy/performance and affordability together making up half of the rating. Durability is another crucial factor in a highly rated device. Finally, we consider the feature set and ease of use before calculating the overall rating, which is the star rating you see.

Since its creation, we’ve adjusted the weighting twice: first in 2023 to account for inflation (affordability became value) and then in December 2024 to give more weight to the feature set.

Here is the current weighting as of December 2024:

Accuracy/Performance: 25%
Value: 25%
Durability: 20%
Feature set: 20%
Ease of use/usability: 10%
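To illustrate how these weights combine into the overall score, here is a minimal sketch in Python. The weights are the December 2024 values listed above; the category scores, the overall_rating function name, and the rounding to the nearest half star are hypothetical assumptions for this example only, not a description of our actual tooling.

```python
# Hypothetical illustration of the December 2024 weighting.
# The weights come from the list above; everything else is assumed for the example.
WEIGHTS = {
    "accuracy_performance": 0.25,
    "value": 0.25,
    "durability": 0.20,
    "feature_set": 0.20,
    "ease_of_use": 0.10,
}

def overall_rating(scores):
    """Weighted average of 1-5 category scores, rounded to the nearest half star
    (the half-star rounding is an assumption made for this sketch)."""
    weighted = sum(WEIGHTS[category] * scores[category] for category in WEIGHTS)
    return round(weighted * 2) / 2

# A hypothetical station scoring 5, 4, 5, 4, and 3 across the five categories:
example = {
    "accuracy_performance": 5,
    "value": 4,
    "durability": 5,
    "feature_set": 4,
    "ease_of_use": 3,
}
print(overall_rating(example))  # 0.25*5 + 0.25*4 + 0.2*5 + 0.2*4 + 0.1*3 = 4.35 -> 4.5
```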

We will regularly review our weighting and adjust our model as necessary. Of course, if our weighting changes, we’ll let you know. However, right now, accuracy/performance and cost are two areas on which most of our readers base their purchase decisions.

Accuracy (or performance, as applicable) (25%)

Judged based on the accuracy or performance of the device. To score high here, a device must be highly accurate or perform exceptionally well. A device must score average or better in this category to qualify for our “best of” lists.

5 – Pro-grade accuracy or performance (generally suitable for scientific or mission-critical applications).
4 – Above average
3 – Average
2 – Below average 
1 – Poor accuracy or performance

Value (25%)

Judged based on the price and the number of features. A score of three or higher is required for award eligibility, and a perfect score is obviously required for the Best Value award.

5 – Outstanding Value
4 – Good value
3 – Average value
2 – Fair value
1 – Poor value

Our affiliate programs do not influence how we test; the integrity of our testing process remains paramount.

Durability (20%)

Most devices we accept for review are extremely durable and well-constructed, so they often score high in this category. However, a device must score at least three stars here to be included in our “best of” lists.

5 – Solid construction, appears very durable
4 – Generally good construction
3 – Average durability
2 – Below average durability, weak construction in parts
1 – Poor or deficient construction

Feature Set (20%)

Among the various types of devices we review, there is typically a standard set of features that most devices in that category share. Having more than the basic feature set helps a device score high here.

5 – Basic features + additional standard features, expandability
4 – Basic feature set but expandable
3 – Basic feature set (for weather stations: temperature, humidity, wind speed and direction, and rainfall), but not expandable
2 – Some basic features missing
1 – Minimal feature set

Ease of Use/Usability (10%)

Simply put, how easy is it to use? We also call this a usability score.

5 – Best-in-class, intuitive, well-designed UI
4 – Easy to use with few UI issues
3 – Somewhat difficult to use but manageable
2 – Difficult to use
1 – Very difficult to use

Affiliate Disclosure

While The Weather Station Experts does not solicit or accept monetary contributions for reviews, we may sometimes be compensated for purchases made through links on our site. This may occur through a relationship with a retailer offering the product or directly from the manufacturer.

This relationship does not influence how we test or affect our opinions on a product. For more, read our affiliate link policy.

U.S. Federal Trade Commission guidelines require us to disclose this relationship, and where such links are present, we include a disclosure in a prominent location visible to the reader.
