Audit Process

How we audit

A five-stage process that produces reproducible, version-controlled scores — built on the principle that no vendor pays us anything.


01

Tool registration & selection

We continuously survey the AI tool market, sort candidates into categories, and collect baseline information before audit kickoff.

Market research · Categorization · Baseline metadata
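
As a rough illustration, the kind of baseline record this stage could produce is sketched below; all field names are hypothetical, not Aixis's actual schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ToolRegistration:
    # Baseline metadata captured before audit kickoff (illustrative fields only).
    name: str                 # product name as published by the vendor
    category: str             # e.g. "code assistant", "image generation"
    vendor: str
    registered_on: date
    baseline_notes: dict = field(default_factory=dict)  # docs URL, pricing tier, etc.

candidate = ToolRegistration(
    name="ExampleTool",
    category="code assistant",
    vendor="Example Inc.",
    registered_on=date.today(),
    baseline_notes={"docs_url": "https://example.com/docs"},
)
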
02

Real-environment testing

We exercise the tool in its actual production environment using a published test protocol — capturing quantitative data on response time, success rate, UI behavior, and Japanese language handling.

Live execution · Latency measurement · Localization checks
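
A minimal sketch of how response time and success rate could be captured against a live endpoint is shown below; the URL, payload, and attempt count are placeholders, not the published test protocol.

import statistics
import time

import requests

def run_probe(url: str, payload: dict, attempts: int = 20) -> dict:
    # Fire repeated requests and record latency and success for each attempt.
    latencies, successes = [], 0
    for _ in range(attempts):
        start = time.monotonic()
        try:
            resp = requests.post(url, json=payload, timeout=30)
            if resp.ok:
                successes += 1
        except requests.RequestException:
            pass  # a failed call still counts toward the attempt total
        latencies.append(time.monotonic() - start)
    return {
        "median_latency_s": statistics.median(latencies),
        "success_rate": successes / attempts,
    }
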
03

AI quality analysis

Qualities that resist purely quantitative measurement — UX coherence, documentation depth, support responsiveness, security posture — are evaluated through AI-driven analysis with human checklists for cross-validation.

AI analysis · Manual checklist · Security review
04

Score calculation & grading

Quantitative test results (60%) and qualitative AI analysis (40%) are combined into per-axis scores. The five axes are averaged into an overall score and mapped to an S–D grade.

Grade scale: D · C · B · A · S
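
The weighting described above reads as a simple formula: per axis, score = 0.6 × quantitative + 0.4 × qualitative, and the overall score is the mean of the five axes. The sketch below assumes 0-100 scores; the grade boundaries are illustrative, since they are not published in this section.

def axis_score(quantitative: float, qualitative: float) -> float:
    # Published weighting: 60% quantitative test results, 40% qualitative AI analysis.
    return 0.6 * quantitative + 0.4 * qualitative

def overall_grade(axis_scores: list[float]) -> tuple[float, str]:
    # Average the five axes, then map to an S-D grade (boundaries are assumptions).
    overall = sum(axis_scores) / len(axis_scores)
    for threshold, grade in [(90, "S"), (80, "A"), (70, "B"), (60, "C")]:
        if overall >= threshold:
            return overall, grade
    return overall, "D"

scores = [axis_score(q, a) for q, a in [(85, 78), (92, 88), (74, 70), (81, 84), (66, 72)]]
print(overall_grade(scores))  # -> approximately (79.1, 'B')
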
05

Database publication

Scores, grades, and detailed analysis are published to the public audit database. Tools are re-audited on a 90-day cycle, and additionally after major version bumps and security incidents.

Public database · Comparison tools · Quarterly re-audit
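
The re-audit trigger described above can be expressed as a small scheduling check; the function and argument names below are illustrative, assuming the 90-day cycle plus event-driven triggers.

from datetime import date, timedelta

REAUDIT_INTERVAL = timedelta(days=90)

def needs_reaudit(last_audit: date, major_version_bump: bool,
                  security_incident: bool, today: date | None = None) -> bool:
    # Event-driven triggers (version bump, security incident) short-circuit the cycle.
    today = today or date.today()
    if major_version_bump or security_incident:
        return True
    # Otherwise, re-audit once 90 days have elapsed since the last audit.
    return today - last_audit >= REAUDIT_INTERVAL
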

Methodology versioning

Every Aixis score is tied to an explicit methodology version. Major changes (axis additions, weight redistribution) trigger a full re-audit; minor changes touch only the affected category. The full revision history is published in Japanese on our score changelog page, with the same major/minor/patch contract you'd expect from semantic versioning.
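
Read as a rule, the contract above maps a version bump to a re-audit scope; the sketch below assumes simple "major.minor.patch" version strings and illustrative return labels.

def reaudit_scope(old_version: str, new_version: str) -> str:
    # Compare semantic-version components to decide how much re-auditing is needed.
    old_major, old_minor, _ = (int(p) for p in old_version.split("."))
    new_major, new_minor, _ = (int(p) for p in new_version.split("."))
    if new_major > old_major:
        return "full re-audit"            # e.g. axis added or weights redistributed
    if new_minor > old_minor:
        return "affected category only"
    return "no re-audit"                  # patch: published scores stand

print(reaudit_scope("2.1.0", "3.0.0"))  # -> full re-audit
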

See it in action

Browse the public audit database to view real reports.