Overview

Use the Crawl MCP server from your ATXP-powered agent to search and extract information from the web. The Crawl MCP server can be used to:
  • crawl up to a specified maximum number of pages
  • extract information from websites

Usage

1. Define the Crawl service

Create a reusable service configuration that points to the MCP server and standardizes how you pass arguments and read results. This lets your agent interact with the Crawl tools in a consistent way.
// Reusable configuration for the Crawl MCP server
const crawlService = {
  mcpServer: 'https://crawl.mcp.atxp.ai/',
  scrapeToolName: 'crawl_scrape',
  description: 'ATXP Crawl MCP server',
  // Build the tool arguments from a target URL
  getArguments: (url: string) => ({ url }),
  // Parse the JSON payload from the tool's text response
  getResult: (result: any) => {
    const jsonResult = result.content[0].text;
    return JSON.parse(jsonResult);
  },
};
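For reference, the fields read from the parsed result in step 3 suggest a result shape like the one below. This is a type inferred from this guide's usage, not a published schema from the Crawl MCP server:
// Assumed shape of the parsed crawl_scrape result, inferred from the
// fields used in step 3; the actual tool may return additional fields.
interface CrawlScrapeResult {
  status: number | string; // fetch status reported by the tool (type assumed)
  html: string;            // raw HTML of the scraped page
}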

2. Create an ATXP client

Import the ATXP client SDK and create a client using your ATXP account details.
// Import the ATXP client SDK
import { atxpClient, ATXPAccount } from '@atxp/client';

// Read the ATXP account details from environment variables
const atxpConnectionString = process.env.ATXP_CONNECTION;
if (!atxpConnectionString) {
  throw new Error('ATXP_CONNECTION environment variable is not set');
}

// Create a client using the `atxpClient` function
const client = await atxpClient({
  mcpServer: crawlService.mcpServer,
  account: new ATXPAccount(atxpConnectionString),
});
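If you keep the connection string in a local .env file rather than exporting it in your shell, you can load it before reading process.env. This sketch assumes the dotenv package is installed; it is not required by the ATXP SDK:
// Optional: load variables from a local .env file (npm install dotenv).
// Place this import before any code that reads process.env.
import 'dotenv/config';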

3. Use the Crawl service in your agent

Call the Crawl tool by passing the target URL to the getArguments method, then read the response using the getResult method.
const url = 'https://docs.atxp.ai';

try {
  // Call the scrape tool with the service's argument helper
  const rawResult = await client.callTool({
      name: crawlService.scrapeToolName,
      arguments: crawlService.getArguments(url),
  });
  // Parse the tool response into a usable object
  const result = crawlService.getResult(rawResult);
  console.log('Status:', result.status);
  console.log('HTML:', result.html);
} catch (error) {
  console.error(`Error with ${crawlService.description}:`, error);
  process.exit(1);
}
You should see the status and HTML of the scraped page printed in your console.
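The same client can be reused for additional pages. As a minimal sketch, the helper below loops over several URLs sequentially; the name scrapeAll is illustrative and not part of the ATXP SDK:
// Illustrative helper (hypothetical name): scrape several URLs
// sequentially with the same client and service configuration.
async function scrapeAll(urls: string[]) {
  const pages = [];
  for (const url of urls) {
    const raw = await client.callTool({
      name: crawlService.scrapeToolName,
      arguments: crawlService.getArguments(url),
    });
    pages.push(crawlService.getResult(raw));
  }
  return pages;
}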