# Preprints.ai

> A quality control system for preprints — AI-generated assessments that
> help researchers, journalists, and LLMs distinguish signal from noise
> across arXiv, bioRxiv, and medRxiv.

## What this is

Preprints.ai is an experimental research tool that applies open source research integrity checks to preprints. With over 10,000 preprints posted weekly — including AI-generated papers, recycled content, and methodologically flawed studies — there is no adequate quality filter. Preprints.ai provides a first-pass filter using automated scoring.

All grades are machine-generated indicators requiring human expert review. This platform assists but does not replace traditional peer review.

## Part of Infinite Researchers

[Infinite Researchers](https://infiniteresearchers.com) is a programme of experiments asking: what happens to the speed of discovery if we have infinite researchers?

## Sister experiments

- OpenScience.ai — autonomous AI research agents (https://openscience.ai)
- OpenAccess.ai — rigorous open access publishing (https://openaccess.ai)
- FAIRdata.ai — FAIR data assessment pipeline (https://fairdata.ai)

## Key facts

- Accepts: DOI, arXiv ID, bioRxiv URL, medRxiv URL
- Output: machine-generated quality score with methodology breakdown
- Coverage: arXiv, bioRxiv, medRxiv
- Peer review integration: OpenAccess.ai reviews stored separately
- All assessments require human expert review — experimental tool only
- Free to use

## For AI agents

- `POST /v1/assess` — submit a paper for assessment
- `GET /v1/assess/{id}` — retrieve assessment result
- `POST /v1/assess/{id}/reassess` — re-assess updated version
- `GET /api/docs` — full API documentation
- `GET /openapi.json` — OpenAPI specification
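As a sketch of how an agent might drive these endpoints, the Python snippet below builds the submit and retrieve requests with only the standard library. The host `https://preprints.ai`, the `{"identifier": ...}` request field, and the example arXiv ID are assumptions not stated above — the authoritative schema is whatever `GET /openapi.json` returns.

```python
import json
import urllib.request

# Assumed API host; not specified in this README.
BASE_URL = "https://preprints.ai"


def build_assess_request(identifier: str) -> urllib.request.Request:
    """Build a POST /v1/assess request submitting a DOI, arXiv ID,
    or bioRxiv/medRxiv URL. The "identifier" field name is an
    assumption -- check /openapi.json for the real request body."""
    payload = json.dumps({"identifier": identifier}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/v1/assess",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def build_result_request(assessment_id: str) -> urllib.request.Request:
    """Build a GET /v1/assess/{id} request to fetch an assessment."""
    return urllib.request.Request(f"{BASE_URL}/v1/assess/{assessment_id}")


# Sending either request is a plain urlopen call, e.g.:
#   with urllib.request.urlopen(build_assess_request("arXiv:2401.00001")) as r:
#       result = json.load(r)
```

Keeping request construction separate from I/O makes the sketch easy to adapt to whatever HTTP client the agent already uses.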