Moderation Pipeline
Every report passes through three stages: automated pre-screening before publication, human review where needed, and ongoing monitoring once live. The stages aren't optional - they run in sequence.
Automated Pre-Screening
Each submission is checked against six criteria: spam content, profanity, personally identifying information (names, addresses, government ID numbers), hate speech, duplicate content, and guideline violations. Clear all six and the report publishes immediately; fail one or more and it's held for human review.
Most submissions clear automated screening without issue. The subset that doesn't includes reports flagged by pattern detection, as well as anything caught up in an active dispute.
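The gate logic above can be sketched as an all-or-hold check. This is a minimal illustration, not the production implementation - the check names and report shape are assumptions:

```python
# Hypothetical sketch of the pre-screening gate: a report publishes only
# if it clears all six checks; any failure queues it for human review.

CHECKS = [
    "spam", "profanity", "personal_info",
    "hate_speech", "duplicate", "guideline_violation",
]

def screen(report: dict) -> str:
    """Return 'publish' if every check passes, else 'hold'."""
    failed = [c for c in CHECKS if report.get(c, False)]
    return "hold" if failed else "publish"

# A clean report publishes immediately; a single flagged check holds it.
print(screen({}))              # -> publish
print(screen({"spam": True}))  # -> hold
```

The key property is that failures are never auto-resolved: a held report always reaches a human, matching the human-precedence rule described below.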
Human Review
Moderators handle everything the automated system flags, plus any report that's been disputed. When a moderator's call differs from what the system decided, the human takes precedence - borderline cases don't get resolved by algorithm alone.
Ongoing Monitoring
Publication isn't the end of the process. Reports stay open to review, and are removed or corrected when they violate community guidelines, contain identifying information, or are overturned through a validated dispute. Numbers that suddenly attract a spike in submissions get automatically escalated for human review.
For details on the automated systems involved in moderation, see the AI Transparency Statement.
False Positive Handling
A false positive occurs when a legitimate number accumulates negative reports - not because it has done anything wrong, but because of circumstances the reporting system can't always distinguish on its own. Four patterns come up regularly:
- A reporter misidentifies the nature of a call
- A number has been spoofed by a third party using it as a caller ID mask
- A number was previously assigned to a different entity (reallocation)
- An organisation's outbound calls are unfamiliar or unexpected to recipients
Three things in the platform design help reduce the impact of false positives:
- Mixed classification display - numbers with both positive and negative reports show the full distribution, not a single verdict, so readers can assess the context themselves
- Report volume context - report counts are displayed alongside classifications so readers can judge confidence levels
- Dispute process - affected parties can request review through the corrections process described below
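The mixed-classification display can be illustrated with a small sketch. The categories and counts below are hypothetical; the point is that the page surfaces the full distribution plus total volume, rather than a single verdict:

```python
from collections import Counter

def classification_display(reports: list[str]) -> dict:
    """Return the full report distribution and total volume for a number,
    so readers can judge context and confidence themselves."""
    counts = Counter(reports)
    total = len(reports)
    return {
        "total_reports": total,
        "distribution": {label: round(n / total, 2) for label, n in counts.items()},
    }

# Hypothetical number with mixed feedback: readers see 60/40, not "scam".
result = classification_display(["scam"] * 6 + ["legitimate"] * 4)
print(result)
# -> {'total_reports': 10, 'distribution': {'scam': 0.6, 'legitimate': 0.4}}
```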
Corrections & Disputes
If a number has been mischaracterised, or its data is outdated after a reallocation, correction and removal requests can be submitted through the contact page. Each request is assessed on four criteria:
- Evidence provided (e.g., business registration, carrier reallocation notice)
- Accuracy of the reported information
- Relevance and context within the dataset
- Compliance with community guidelines
Requests get a response within 30 days. Reports found to be inaccurate or in breach of guidelines are corrected or removed. When a reallocation is confirmed, a contextual note goes on the record to reflect the ownership change.
The full data rights process is documented in the Privacy Policy.
Data Refresh Frequency
Different data types update on different schedules. Community reports move fast; external allocation data moves on government release cycles.
- Community reports - processed and published within minutes of submission, subject to the moderation pipeline
- Aggregate classifications - recalculated each time a new report is received for a number
- AI-assisted summaries - regenerated monthly, or earlier when report volume for a number changes significantly
- ACMA allocation data - refreshed quarterly as updated public records become available
- Aggregation pages (state, prefix, service type) - updated as underlying records change
Data Retention
Reports stay in the public database unless removal is requested or legally required. The reason is practical: scam operations frequently reuse numbers over months or years. A record that only reflects the last few weeks misses that pattern - cumulative data is more useful than a rolling window.
That said, older data shouldn't carry the same weight indefinitely. Reports beyond 24 months are downweighted in classification calculations. Australia's number reallocation cycle typically runs 12–18 months, so the 24-month threshold captures the full reallocation lifecycle while letting genuinely stale signals fade. This is consistent with the methodology on the Methodology Overview page.
Downweighting only affects how older reports factor into classifications - they remain visible in the full record for historical reference. Keeping a report in the record isn't an endorsement of its specific claims.
Data retention policies are documented in the Privacy Policy.
Abuse Detection Safeguards
The value of community data depends on it being genuine. Four safeguards are in place to protect against coordinated abuse of the reporting system:
- Rate limiting on report submissions prevents high-volume coordinated attacks
- De-duplication logic identifies and collapses repeated submissions of substantially identical content
- Pattern detection flags suspicious submission activity for human review - the focus is on submission behaviour, not what's being reported about a number
- Spike escalation - numbers that suddenly attract a disproportionate volume of reports are automatically escalated to human moderator review
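The spike-escalation safeguard can be sketched as a ratio test against a number's recent baseline. The window, threshold, and minimum-volume values here are illustrative assumptions, not the platform's actual tuning:

```python
def should_escalate(recent_count: int, baseline_daily_avg: float,
                    spike_factor: float = 5.0, min_reports: int = 10) -> bool:
    """Escalate a number to human review when today's report volume is
    disproportionate to its historical daily average."""
    if recent_count < min_reports:
        return False  # too little volume to call a spike
    baseline = max(baseline_daily_avg, 1.0)  # avoid divide-by-zero on quiet numbers
    return recent_count / baseline >= spike_factor

print(should_escalate(recent_count=40, baseline_daily_avg=2.0))  # -> True
print(should_escalate(recent_count=3, baseline_daily_avg=0.5))   # -> False
```

Note the outcome is escalation to a human, not automatic removal - consistent with the pipeline's rule that borderline cases aren't resolved by algorithm alone.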
Platform Limitations
Understanding what this data can and can't tell you matters. Four inherent limitations apply to everything on Reverseau:
- Moderation cannot verify the factual accuracy of every individual report
- Automated screening misses edge cases that require human judgment
- Community data reflects self-selected reporters - not a random or representative sample
- Historical data may not reflect current number usage after reallocation
Reverseau doesn't make legal determinations about fraud, misconduct, or criminal activity. What gets published is community-reported data - organised, moderated, and contextualised, but not an investigative finding or legal assessment.
Interpretation boundaries are documented in detail on the Data Limitations page.
Contact
Enquiries regarding methodology, moderation, data integrity, or corrections may be submitted via the contact page.
Related Documentation
- Community Reporting & Processing Model - submission and processing pipeline
- Data Limitations - interpretation boundaries
- AI Transparency Statement - automated systems disclosure
- Privacy Policy - data rights and retention