Methodology

How We Review AI Tools

Every Auldrin review is scored across six weighted criteria using a consistent testing process. Here’s exactly how we do it.

Scoring Criteria

All tools are scored 0–10 across six criteria. The overall score is a weighted average. We chose these weights to reflect what matters most to practitioners building with AI tools day-to-day.

Ease of Use (15%)

Onboarding speed, UI clarity, quality of documentation, and how quickly a non-expert can achieve results.

Feature Depth (20%)

Breadth and quality of core features. Depth of configuration options. Does it do what it claims?

AI Capability (25%)

Quality of AI outputs, benchmark scores where available, reliability, and performance on real tasks.

Value for Money (20%)

Pricing vs. capability vs. alternatives. Free tier generosity. Total cost of ownership.

Reliability & Support (10%)

Uptime, API stability, response time to support tickets, quality of the community.

Integration Ecosystem (10%)

Native integrations, API quality, webhook support, and how well it plays with other tools.

Scores are rounded to one decimal place. A score of 8.0+ is “Excellent”, 6.0–7.9 is “Good”, 4.0–5.9 is “Fair”, and below 4.0 is “Poor”.
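The weighted average and the rating bands above can be sketched in code. This is an illustrative sketch, not Auldrin's actual scoring tooling: the criterion keys, function names, and example scores are assumptions; the weights and band cut-offs come from the methodology above.

```python
# Sketch of the overall-score formula: a weighted average of six
# 0-10 criterion scores, rounded to one decimal place. Weights and
# band thresholds are taken from the methodology; everything else
# (names, example scores) is illustrative.

WEIGHTS = {
    "ease_of_use": 0.15,
    "feature_depth": 0.20,
    "ai_capability": 0.25,
    "value_for_money": 0.20,
    "reliability_support": 0.10,
    "integration_ecosystem": 0.10,
}

def overall_score(scores: dict[str, float]) -> float:
    """Weighted average of the six criterion scores, one decimal place."""
    total = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    return round(total, 1)

def rating_band(score: float) -> str:
    """Map a rounded overall score to its published band."""
    if score >= 8.0:
        return "Excellent"
    if score >= 6.0:
        return "Good"
    if score >= 4.0:
        return "Fair"
    return "Poor"

# Hypothetical example review:
example = {
    "ease_of_use": 8,          # 0.15 * 8 = 1.20
    "feature_depth": 7,        # 0.20 * 7 = 1.40
    "ai_capability": 8,        # 0.25 * 8 = 2.00
    "value_for_money": 6,      # 0.20 * 6 = 1.20
    "reliability_support": 8,  # 0.10 * 8 = 0.80
    "integration_ecosystem": 7 # 0.10 * 7 = 0.70
}
# Weighted sum = 7.3, which falls in the "Good" band (6.0-7.9).
```

Note that the weights sum to 1.0, so a tool scoring the same value on every criterion keeps that value as its overall score.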

Testing Process

We follow a standardised six-step process for every tool review. No shortcuts, no vendor briefings substituting for real testing.

1. Sign up & explore

We create a real account (no vendor-provided access) and follow the documented onboarding flow.

2. Build a real workflow

We complete a representative task, such as drafting an article, building an automation, or solving a code-generation challenge, to test real-world capability.

3. Run benchmark tests

For LLMs and coding tools, we run standardised prompt suites and compare outputs against competitors.

4. Score independently

Two reviewers score independently across all six criteria. Scores are averaged; large disagreements trigger a third review.

5. Verify pricing

We check the live pricing page on the day of publication and record the date. Pricing changes are the fastest-moving part of any review.

6. Editorial review

A senior editor reads the final review for accuracy, balance, and readability before publication.
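Step 4 above can be sketched as a small reconciliation routine. Note the 1.5-point threshold is an assumption for illustration only: the methodology says "large disagreements trigger a third review" without stating a number, and the function name is hypothetical.

```python
# Illustrative sketch of step 4 (independent dual scoring).
# The disagreement threshold is an ASSUMPTION; the methodology
# does not publish the exact cut-off.

DISAGREEMENT_THRESHOLD = 1.5  # assumed, not stated in the methodology

def reconcile(score_a: float, score_b: float) -> tuple[float, bool]:
    """Average two reviewers' scores for one criterion and flag
    whether the gap is large enough to trigger a third review."""
    needs_third_review = abs(score_a - score_b) > DISAGREEMENT_THRESHOLD
    return round((score_a + score_b) / 2, 1), needs_third_review
```

For example, scores of 8.0 and 7.0 would simply average to 7.5, while 9.0 against 5.0 would average to 7.0 and flag the criterion for a third reviewer.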

Update Policy

Every review displays a “Last Verified” date. Pricing sections are re-checked and updated quarterly at minimum. When a tool ships a major version update, we re-run the full testing process.

If you notice outdated information, email us at corrections@auldrin.ai — we’ll review and update within 48 hours.

Affiliate Disclosure

Auldrin earns affiliate commissions when readers sign up for tools using our links. This is how we fund the publication. Every review that includes an affiliate link is labelled with an Affiliate Disclosure banner at the top of the page.

Affiliate relationships do not influence our editorial scores. We have reviewed and given low scores to tools we have affiliate agreements with, and high scores to tools where we have no financial relationship.

We do not accept payment for positive reviews, sponsored content disguised as editorial, or any other form of pay-to-play coverage.
