Category: Tool Use & Environment (emerging)

Agent-First Tool Discovery

Build search indexes designed for agent consumers, returning structured tool metadata ranked by agent-relevant signals instead of human SEO metrics.

By Shane Cheek (@unitedideas, affiliated with Not Human Search)

Cite This Pattern

Shane Cheek (@unitedideas, affiliated with Not Human Search). (2026). Agent-First Tool Discovery. In *Awesome Agentic Patterns*. Retrieved April 24, 2026, from https://agentic-patterns.com/patterns/agent-first-tool-discovery
01

Problem

Individual services can declare their agent-readiness via static manifests (llms.txt, ai-plugin.json, OpenAPI specs). But an agent that needs a new capability at runtime has no way to search across services to find, compare, and select the best match. Static manifests describe one service; they do not solve cross-service discovery.

Today, tool catalogs are hardcoded into system prompts, manually curated in static lists, or require human-mediated searches through documentation designed for humans. When an agent needs a capability it does not have -- say, a calendar API or a code review tool -- there is no programmatic search that returns structured, verified results ranked by agent-relevant signals.

02

Solution

Build or use a search index specifically designed for agent consumers. The index catalogs tools, APIs, and MCP servers with structured metadata that agents can parse without HTML scraping or natural-language interpretation. Key components:

  1. Machine-readable search API: A REST or MCP endpoint that returns structured JSON with tool name, description, endpoint URL, protocol, authentication type, and capability tags.

  2. Agentic scoring: Rank results by agent-relevant signals rather than SEO metrics -- API uptime, documentation completeness, MCP compliance, response latency, schema availability.

  3. Protocol-native access: Expose the search itself via the same protocols agents already speak (MCP JSON-RPC, REST with OpenAPI spec, llms.txt), so discovery does not require a different integration path than usage.

  4. Verification layer: Actively probe indexed services to confirm they respond correctly, support claimed protocols, and return valid schemas -- not just trust self-reported metadata.
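
The agentic scoring in step 2 can be reduced to a single rank value per tool. A minimal sketch, assuming illustrative signal names and weights (a real index would tune these empirically against agent success rates):

```python
def agentic_score(tool: dict) -> int:
    """Rank a tool record by agent-relevant signals on a 0-100 scale.

    The weights below are illustrative assumptions, not a published formula.
    """
    score = 0.0
    score += 30 * tool.get("uptime", 0.0)            # fraction of successful probes
    score += 20 * tool.get("doc_completeness", 0.0)  # fraction of endpoints documented
    score += 25 * (1.0 if tool.get("mcp_verified") else 0.0)
    score += 15 * (1.0 if tool.get("schema_available") else 0.0)
    # Latency signal: full marks at or under 100 ms, decaying to zero at 2 s.
    latency_ms = tool.get("p50_latency_ms", 2000)
    if latency_ms <= 100:
        score += 10
    else:
        score += 10 * max(0.0, 1.0 - (latency_ms - 100) / 1900)
    return round(score)

record = {"name": "cal-service", "uptime": 0.99, "doc_completeness": 0.9,
          "mcp_verified": True, "schema_available": True, "p50_latency_ms": 80}
print(agentic_score(record))  # → 98
```

Weighting verification and uptime heavily reflects the agent's priorities: a slightly worse-documented tool that reliably answers is worth more to an autonomous caller than a well-documented one that times out.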

An example discovery flow:

agent_needs("calendar integration")
  → query tool_discovery_index("calendar API", filters={protocol: "mcp"})
  → receive [{name: "cal-service", url: "...", auth: "api_key", mcp_verified: true, score: 92}]
  → agent evaluates candidates by score, protocol match, and auth requirements
  → agent connects to the top candidate directly

The workflow replaces the human loop of "search Google → read docs → evaluate → integrate" with a single programmatic query that returns agent-ready results.

03

How to use it

Best for:

  • Autonomous agents that need to acquire new capabilities at runtime without human guidance
  • Agent orchestrators that route tasks to specialized tools based on capability matching
  • Development environments where agents suggest or auto-configure integrations

Implementation considerations:

  • Index should catalog at minimum: service name, description, base URL, supported protocols, authentication method, and a machine-parseable capability schema
  • Active verification (probing endpoints, validating MCP handshakes) dramatically improves result quality over passive catalog approaches
  • Expose discovery via the same protocol the tools use -- if indexing MCP servers, offer discovery as an MCP tool itself
  • Include llms.txt and OpenAPI specs at well-known URLs so agents can discover the discovery service
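
Active verification of an indexed MCP server can start with a protocol-level handshake check. A minimal sketch that builds an MCP JSON-RPC `initialize` request and validates the shape of a reply (transport is omitted, and the sample response is illustrative; a production prober would also check capabilities and tool schemas):

```python
import json

def initialize_request(request_id: int = 1) -> str:
    """Build an MCP JSON-RPC initialize request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "discovery-prober", "version": "0.1"},
        },
    })

def handshake_ok(raw_response: str) -> bool:
    """Check that a server's reply is well-formed JSON-RPC and reports a
    protocol version -- the minimum bar for marking it mcp_verified."""
    try:
        msg = json.loads(raw_response)
    except json.JSONDecodeError:
        return False
    result = msg.get("result", {})
    return msg.get("jsonrpc") == "2.0" and "protocolVersion" in result

sample = ('{"jsonrpc": "2.0", "id": 1, "result": {"protocolVersion": '
          '"2024-11-05", "capabilities": {}, '
          '"serverInfo": {"name": "cal-service", "version": "1.0"}}}')
print(handshake_ok(sample))                   # → True
print(handshake_ok("<html>not json</html>"))  # → False: HTML error page, not MCP
```

The HTML case matters in practice: a dead service often still returns a 200 with an error page, which self-reported metadata would never reveal but a probe catches immediately.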


04

Trade-offs

Pros:

  • Removes the human from the tool-selection loop entirely
  • Structured results eliminate HTML parsing and prompt-injection risks from web scraping
  • Verification layer filters out dead or non-compliant services before the agent wastes calls
  • Protocol-native access means zero additional integration work for agents already using MCP or REST

Cons:

  • Requires a maintained index with active crawling and verification -- not free to operate
  • Coverage depends on index breadth; niche or private tools may not be indexed
  • Trust model: agents must trust the index operator's scoring and verification methodology
  • Adds a dependency -- if the discovery service is down, agents cannot find new tools