How we audit
A five-stage process that produces reproducible, version-controlled scores — built on the principle that no vendor pays us anything.
Tool registration & selection
We continuously survey the AI tool market, classify candidates by category, and collect baseline information before audit kickoff.
Real-environment testing
We exercise the tool in its actual production environment using a published test protocol — capturing quantitative data on response time, success rate, UI behavior, and Japanese language handling.
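To make the captured metrics concrete, here is a minimal sketch of the kind of record a single protocol run could produce and how repeated runs might be averaged before scoring. The field names and the aggregation helper are illustrative assumptions, not the actual Aixis test harness.

```typescript
// Illustrative shape of one protocol run's quantitative result.
// Field names are hypothetical; they mirror the metrics listed above.
interface ProtocolRunResult {
  toolId: string;
  responseTimeMs: number;        // measured wall-clock latency per request
  successRate: number;           // fraction of protocol steps completed (0..1)
  uiChecksPassed: number;        // UI behavior checks that passed
  uiChecksTotal: number;
  japaneseHandlingScore: number; // 0..100, from the Japanese-language test set
  measuredAt: string;            // ISO 8601 timestamp
}

// Hypothetical aggregation of repeated runs into the averages used for scoring.
function aggregateRuns(runs: ProtocolRunResult[]): {
  avgResponseTimeMs: number;
  avgSuccessRate: number;
} {
  const n = runs.length || 1;
  return {
    avgResponseTimeMs: runs.reduce((sum, r) => sum + r.responseTimeMs, 0) / n,
    avgSuccessRate: runs.reduce((sum, r) => sum + r.successRate, 0) / n,
  };
}
```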
AI quality analysis
Qualities that resist purely quantitative measurement — UX coherence, documentation depth, support responsiveness, security posture — are evaluated through AI-driven analysis with human checklists for cross-validation.
Score calculation & grading
Quantitative test results (60%) and qualitative AI analysis (40%) are combined into per-axis scores. The five axes are averaged into an overall score and mapped to an S–D grade.
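The arithmetic behind this step is simple enough to show directly. The sketch below applies the 60/40 weighting per axis, averages the five axes, and maps the result to a letter grade; the grade thresholds are illustrative assumptions, since the actual S–D cut-offs are not stated here.

```typescript
// Minimal sketch of the scoring arithmetic described above.
interface AxisInputs {
  quantitative: number; // test-protocol result, normalized to 0..100
  qualitative: number;  // AI-analysis result, normalized to 0..100
}

// 60% quantitative, 40% qualitative per axis.
const axisScore = (a: AxisInputs): number =>
  0.6 * a.quantitative + 0.4 * a.qualitative;

// The five axis scores are averaged into the overall score.
const overallScore = (axes: AxisInputs[]): number =>
  axes.map(axisScore).reduce((sum, x) => sum + x, 0) / axes.length;

// Hypothetical S–D grade boundaries, for illustration only.
function grade(score: number): "S" | "A" | "B" | "C" | "D" {
  if (score >= 90) return "S";
  if (score >= 80) return "A";
  if (score >= 70) return "B";
  if (score >= 60) return "C";
  return "D";
}
```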
Database publication
Scores, grades, and detailed analysis are published to the public audit database. Tools are re-audited every 90 days, and additionally after major version releases or security incidents.
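A rough sketch of the re-audit trigger logic follows. The 90-day interval and the two event types come from the text above; the type and function names are illustrative assumptions.

```typescript
// Events that force a re-audit ahead of the regular cycle.
type AuditEvent = { kind: "major-version-bump" } | { kind: "security-incident" };

const REAUDIT_INTERVAL_DAYS = 90;

// Re-audit on the 90-day cycle, or immediately when a major version bump
// or a security incident affecting the tool has been recorded.
function needsReaudit(lastAuditedAt: Date, now: Date, events: AuditEvent[]): boolean {
  const daysSince = (now.getTime() - lastAuditedAt.getTime()) / 86_400_000;
  return daysSince >= REAUDIT_INTERVAL_DAYS || events.length > 0;
}
```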
Methodology versioning
Every Aixis score is tied to an explicit methodology version. Major changes (axis additions, weight redistribution) trigger a full re-audit; minor changes touch only the affected category. The full revision history is published in Japanese on our score changelog page, with the same major/minor/patch contract you'd expect from semantic versioning.
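As a sketch of how a methodology version change maps to re-audit scope: major and minor behavior follows the text above, while treating patch-level changes as requiring no re-audit is an assumption made for illustration.

```typescript
// Methodology version under the major/minor/patch contract.
interface MethodologyVersion { major: number; minor: number; patch: number; }

type ReauditScope =
  | { kind: "full" }                        // all axes, all tools
  | { kind: "category"; category: string }  // only the affected category
  | { kind: "none" };

function reauditScope(
  prev: MethodologyVersion,
  next: MethodologyVersion,
  affectedCategory: string,
): ReauditScope {
  if (next.major > prev.major) return { kind: "full" };
  if (next.minor > prev.minor) return { kind: "category", category: affectedCategory };
  return { kind: "none" }; // assumption: patch changes do not trigger a re-audit
}
```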