Ether Solutions

AI Tool Taxonomy by Job


This note defines AI tools at the level of the jobs they perform so the project does not become anchored to a specific vendor list.

For tool-choice criteria and a lightweight decision path, use AI Tool Selection Framework.

Why this note exists

Tool names and market leaders will change.

The project needs a more durable layer that answers:

Stable category set

1. Conversational reasoning partners

Primary job:

Typical workflow fit:

Verification expectation:

2. Artifact drafting assistants

Primary job:

Typical workflow fit:

Verification expectation:

3. IDE copilots and inline completion tools

Primary job:

Typical workflow fit:

Verification expectation:

4. Repository-aware engineering assistants

Primary job:

Typical workflow fit:

Verification expectation:

5. Agentic coding and task-execution tools

Primary job:

Typical workflow fit:

Verification expectation:

Working stance:

6. Retrieval and knowledge access tools

Primary job:

Typical workflow fit:

Verification expectation:

Working stance:

7. Quality, test, and evaluation helpers

Primary job:

Typical workflow fit:

Verification expectation:

8. DevOps, platform, and infrastructure assistants

Primary job:

Typical workflow fit:

Verification expectation:

Working stance:

9. Observability and incident-analysis assistants

Primary job:

Typical workflow fit:

Verification expectation:

Main risk:

10. MLOps and model-lifecycle assistants

Primary job:

Typical workflow fit:

Verification expectation:

Working stance:

11. Planning, meeting, and synthesis tools

Primary job:

Typical workflow fit:

Verification expectation:

Main risk:

12. Local and private model setups

Primary job:

Typical workflow fit:

Verification expectation:

13. Governance, observability, and policy support tools

Primary job:

Typical workflow fit:

Verification expectation:

Analysis dimensions

Working stance

Current product examples can be named as temporary instances of a category, but the category should matter more than the vendor.

Examples can help orientation, but they should not become the taxonomy itself.
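The category-over-vendor stance can be expressed as a small data structure. The following is an illustrative sketch only: the category IDs echo this note's numbered taxonomy, and the vendor names in the mapping are hypothetical placeholders, not real product recommendations. The point is that categories are the durable keys, while market examples live in a replaceable mapping that points into them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCategory:
    """A stable 'job' that outlives any particular vendor product."""
    category_id: str   # durable key, e.g. "ide-copilot"
    primary_job: str

# The taxonomy itself: keyed by durable category, never by vendor.
TAXONOMY = {
    "conversational-partner": ToolCategory(
        "conversational-partner", "reasoning and exploration in dialogue"),
    "ide-copilot": ToolCategory(
        "ide-copilot", "inline completion inside the editor"),
}

# Vendor products are dated illustrations that point INTO the taxonomy;
# when the market shifts, only this mapping changes, not the categories.
MARKET_EXAMPLES = {
    "SomeVendorChat": "conversational-partner",   # hypothetical name
    "SomeVendorPilot": "ide-copilot",             # hypothetical name
}

def category_for(tool_name: str) -> ToolCategory:
    """Resolve a (possibly obsolete) product name to its stable category."""
    return TAXONOMY[MARKET_EXAMPLES[tool_name]]
```

Under this shape, a vendor disappearing or rebranding is a one-line change to `MARKET_EXAMPLES`; nothing that references a `category_id` needs to move.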

Rule for market examples

Current market examples can add value if they are treated as dated illustrations rather than canonical knowledge.

If examples are added later, they should:

How the categories map to lifecycle work

How oversight readiness changes tool fit

Typical meta toolset for a real team

A modern software delivery team will often need a compact stack rather than a single platform:

The right question is not "which AI tool wins?" but "which small set of tool jobs improves this workflow without creating unsafe hidden work elsewhere?"

Current companion guidance

Optional future extension