The Disclosure Index scores official records against 437 fields defined by DisclosureOS, evaluating evidence quality across five categories with additional analysis against six scientific observables.
Each record is evaluated across five weighted categories. The weights reflect how critical each dimension is to overall evidence quality, with provenance weighted highest. A sketch of how individual fields map to these categories follows the category summaries below.
Provenance (25%). Origin and custody chain of evidence sources. Tracks who collected the data, classification history, declassification dates, FOIA documentation, and source credibility assessments. This is the highest-weighted category because evidence without verified provenance cannot be trusted.
Fields: source references, classification levels, declassification dates, data entry provenance, verification records, Hynek/Vallée/AARO classifications, event typing
Investigation (20%). Official inquiry status and corroboration. Covers which agencies investigated, their conclusions, confidence assessments, whether the case was corroborated by independent sources, and any related investigations or official inquiries.
Fields: investigating bodies, investigation status, official conclusions, confidence levels, corroboration indicators, quality and completeness scores
Observational (20%). Object characteristics, movement behavior, and witness data. Captures what was seen (shape, size, color, emissions, speed, maneuvers, formation behavior) along with detailed witness profiles, including credibility assessments, professional backgrounds, and testimony consistency.
Fields: location data, temporal data, object shape/size/color/emissions, movement patterns, aircraft interactions, witness profiles, credibility ratings, testimony records
Scientific (20%). Sensor readings, detection methods, physical evidence, and analysis results. This category measures whether the event generated data that can be independently tested: radar tracks, photographs, material samples, radiation readings, soil analysis, and chain-of-custody documentation for physical evidence.
Fields: radar confirmation, photo/video evidence, sensor anomalies (radar, radio, GPS, electronics), physical evidence (landing traces, burn marks, soil changes, debris), material analysis, evidence chain of custody
Documentation (15%). Narrative completeness, media, environmental context, and relational data. Covers the surrounding context that gives an event meaning: weather conditions, aviation data, response/impact details, physiological effects, related events, and media attachments.
Fields: summary/description, media attachments, temporal detail, environmental conditions, aviation context, response/impact data, physiological effects, related events and legislation
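The category summaries above imply a field registry in which each of the 437 fields carries a category, a base weight, and an importance level. The entries below are a minimal, hypothetical sketch of that structure: the field names are drawn from the summaries, while the base weights and importance assignments are assumptions for illustration, not the DisclosureOS schema.

```python
# Hypothetical field-registry entries (illustrative only, not the real schema).
# Each of the 437 fields would carry a category, a base weight, and an importance level.
FIELD_REGISTRY = {
    "declassification_date": {"category": "provenance",    "base_weight": 1.0, "importance": "critical"},
    "investigating_body":    {"category": "investigation", "base_weight": 1.0, "importance": "high"},
    "object_shape":          {"category": "observational", "base_weight": 1.0, "importance": "medium"},
    "radar_confirmation":    {"category": "scientific",    "base_weight": 1.0, "importance": "high"},
    "weather_conditions":    {"category": "documentation", "base_weight": 1.0, "importance": "low"},
}
```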
The AATIP framework defines six observable characteristics that would constitute evidence of anomalous technology. The scoring engine evaluates each record against these observables by checking whether relevant fields are present, whether the record's text references the observable, whether the data supports measurement, and whether it could support scientific testing.
Anti-gravity lift. Object exhibits lift or hovering without conventional aerodynamic surfaces, propulsion exhaust, or visible means of support, defying known gravitational constraints.
Keywords: hovering, stationary flight, no propulsion, no exhaust, silent hover, levitation, no wings or rotors
Sudden and instantaneous acceleration. Object accelerates, decelerates, or changes direction at rates far exceeding known aerospace technology, implying forces that would destroy conventional airframes.
Keywords: sudden acceleration, right-angle turns, extreme g-forces, immediate reversal, split-second maneuvers
Hypersonic velocity without signatures. Object travels at speeds exceeding Mach 5 without producing sonic booms, exhaust plumes, heat signatures, or other expected physical effects.
Keywords: hypersonic speed, no sonic boom, silent speed, no exhaust, no heat signature, extreme velocity
Low observability. Object evades or confounds radar, infrared, or other sensor systems despite being visually observed, suggesting active or passive signature management.
Keywords: radar evasion, sensor malfunction, stealth, disappeared from radar, jamming, signature management
Trans-medium travel. Object transitions between air, water, and/or space without apparent change in performance characteristics or observable deceleration.
Keywords: water-to-air transition, submerged objects, ocean entry, USO, underwater sighting, emergence from water
Biological effects. Witnesses or nearby organisms experience physiological effects (burns, radiation symptoms, temporary paralysis, nausea) correlated with proximity to the object.
Keywords: burns, radiation, nausea, paralysis, headache, skin effects, medical attention, cellular damage
Each observable is scored on four criteria, as sketched below:
Field presence: percentage of observable-relevant fields present
Keyword match: observable keywords found in text fields
Measurability: enough data present to support measurement
Testability: data sufficient for scientific hypothesis testing
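A minimal sketch of that four-part check, assuming each record is a flat dictionary of field values. The function name, the 50% measurability threshold, and the testability proxy are assumptions for illustration, not the production scoring logic.

```python
def score_observable(record: dict, relevant_fields: list[str], keywords: list[str]) -> dict:
    """Check one observable on four criteria: field presence, keyword match,
    measurability, and testability (thresholds here are illustrative)."""
    present = [f for f in relevant_fields if record.get(f) not in (None, "")]
    field_presence = len(present) / len(relevant_fields) if relevant_fields else 0.0

    # Keyword match: does the record's free text reference the observable?
    text = " ".join(v for v in record.values() if isinstance(v, str)).lower()
    keyword_hits = [k for k in keywords if k.lower() in text]

    measurable = field_presence >= 0.5                # assumed "enough data" threshold
    testable = measurable and len(keyword_hits) > 0   # assumed proxy for hypothesis testing

    return {"field_presence": field_presence, "keyword_hits": keyword_hits,
            "measurable": measurable, "testable": testable}
```

Using the keyword sets above, the trans-medium check, for instance, would look for terms such as "ocean entry" or "USO" in the record's narrative fields alongside its location and movement data.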
The final score for each record is a weighted composite of its five category scores. Each field has an importance level that multiplies its weight: critical fields count four times as much as low-importance fields. The numeric score then maps to a letter grade (see the sketch after the importance levels below).
Critical (4x): must-have fields for any credible evidence record
High: strongly expected for thorough documentation
Medium: adds meaningful context and detail
Low (1x): supplementary data that enriches the record
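A minimal sketch of the importance multipliers and the grade mapping. The 4x (critical) and 1x (low) values come from the text above; the intermediate multipliers and the letter-grade cutoffs are assumptions for illustration.

```python
IMPORTANCE_MULTIPLIER = {
    "critical": 4,  # stated: counts four times as much as a low-importance field
    "high": 3,      # assumed intermediate value
    "medium": 2,    # assumed intermediate value
    "low": 1,       # stated baseline
}

def letter_grade(score: float) -> str:
    """Map a 0-100 composite score to a letter grade (cutoffs are assumed)."""
    for cutoff, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return grade
    return "F"
```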
For each of the 437 fields, the engine checks whether the value is present, partial (object exists but incomplete), or missing. Present fields earn their full weight; partial fields earn half. The weight of each field is its base weight multiplied by its importance level (1x–4x).
Each category score is the earned weight divided by the total possible weight for that category, scaled to 0–100. The overall score is the weighted sum of the five category scores using the category weights (Provenance 25%, Investigation 20%, Observational 20%, Scientific 20%, Documentation 15%).
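Put together, the computation described in the last two paragraphs can be sketched as below, assuming each field record carries its presence state, base weight, importance multiplier, and category. Function names and the record shape are illustrative, not the DisclosureOS API.

```python
CATEGORY_WEIGHTS = {"provenance": 0.25, "investigation": 0.20, "observational": 0.20,
                    "scientific": 0.20, "documentation": 0.15}

def earned_weight(state: str, base_weight: float, multiplier: int) -> float:
    """Present fields earn full weight, partial fields half, missing fields none."""
    full = base_weight * multiplier
    return {"present": full, "partial": 0.5 * full, "missing": 0.0}[state]

def overall_score(fields: list[dict]) -> float:
    """Category score = earned / possible * 100; overall = weighted sum of categories."""
    earned = {c: 0.0 for c in CATEGORY_WEIGHTS}
    possible = {c: 0.0 for c in CATEGORY_WEIGHTS}
    for f in fields:
        cat, full = f["category"], f["base_weight"] * f["multiplier"]
        possible[cat] += full
        earned[cat] += earned_weight(f["state"], f["base_weight"], f["multiplier"])
    category_scores = {c: (100.0 * earned[c] / possible[c]) if possible[c] else 0.0
                       for c in CATEGORY_WEIGHTS}
    return sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())
```

Under these weights, a record that is fully complete on provenance but empty in every other category would score at most 25 overall.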
The scoring engine is built on DisclosureOS, a structured evidence framework designed for open source release. All scoring logic is fully transparent and reproducible.
No widely adopted quantitative scoring framework exists for declassified government records. Archival appraisal (NARA) is qualitative and disposition-focused; classification reviews (ISOO) are compliance-focused; FOIA screening is exemption-focused. The Disclosure Index is, to our knowledge, the first system to apply a structured, field-level completeness rubric to government UAP releases.
Our framework draws on principles from established evidence-quality systems and published UAP assessment scholarship, adapted to the unique requirements of evaluating declassified disclosure records.