Preprints.ai
Quality measurement

Per-comment thumbs ratings

Every reviewer comment on every assessment now has a useful / not-useful button. We collect the votes anonymously, publish the counts, and refuse to dress them up as a single "X% useful" metric.

Live counters (public since 2026-04-30; awaiting first ratings): Useful, cumulative · Not useful, cumulative · Comments rated, unique comment ids

Methodology

The widget appears next to every reviewer comment on the assessment report. A click sends a single POST to /v1/feedback/comment with the comment id, the vote (useful or not_useful), and a browser-issued cookie token. The cookie is set on first vote and persists for one year, scoped to the assessment domain.
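A minimal sketch of what that request could look like from the widget, in TypeScript. The endpoint and the two vote values come from the description above; the field names (comment_id, vote) and the error handling are assumptions, not the documented API.

// Hypothetical sketch of the vote widget's POST. Endpoint and vote values
// are from the text above; field names are assumed for illustration.
type Vote = "useful" | "not_useful";

async function sendVote(commentId: string, vote: Vote): Promise<void> {
  // credentials: "include" sends the anonymous cookie token along; the
  // server sets it on first vote, scoped to the assessment domain.
  const res = await fetch("/v1/feedback/comment", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    credentials: "include",
    body: JSON.stringify({ comment_id: commentId, vote }),
  });
  if (!res.ok) throw new Error(`vote failed: ${res.status}`);
}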

What we publish

The three counters at the top of this page pull live from /v1/feedback/stats and update on every page load. Because the system is new, those counts may be zero or near-zero for some time. We have deliberately resisted the temptation to seed the counter with internal QA votes — the number you see here is the number real readers have produced.
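A sketch of how the counters might pull from that endpoint. The endpoint is stated above; the response field names (useful, not_useful, comments_rated) are assumptions chosen to mirror the three published counters.

// Hypothetical sketch of the counter fetch. Only the endpoint is given in
// the text; the response shape below is assumed.
interface FeedbackStats {
  useful: number;         // cumulative useful votes (assumed field name)
  not_useful: number;     // cumulative not-useful votes (assumed field name)
  comments_rated: number; // unique comment ids rated (assumed field name)
}

async function loadStats(): Promise<FeedbackStats> {
  const res = await fetch("/v1/feedback/stats");
  if (!res.ok) throw new Error(`stats fetch failed: ${res.status}`);
  return res.json() as Promise<FeedbackStats>;
}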

On Reviewer3's "90.6% useful" figure. A competitor publishes a single percentage describing the share of comments their users rate as useful. That is their self-reported number on their own dataset; it is not a benchmark this product has matched. We will not publish a comparable headline figure until our denominator is large enough — and stratified enough by comment type and field — to mean something. Until then we publish raw counts.
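To make the denominator point concrete, a toy calculation (the numbers are illustrative, not data): at a small denominator, a single vote swings a headline percentage by double digits, which is why raw counts are published instead.

// Illustrative only: how much one vote moves an "X% useful" figure.
function usefulPercent(useful: number, notUseful: number): number {
  return (100 * useful) / (useful + notUseful);
}

console.log(usefulPercent(3, 1).toFixed(1)); // "75.0" with 4 ratings
console.log(usefulPercent(3, 2).toFixed(1)); // "60.0" after one more vote
// A stable figure like 90.6% implies a denominator in the hundreds,
// e.g. 906 of 1,000:
console.log(usefulPercent(906, 94).toFixed(1)); // "90.6"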

Caveats — what this doesn't measure

Code

Endpoint and storage: api/ · vote widget on the report: landing/assess.html.