Set up your application to interact with an LLM.
Set up your account
Visit your ATXP account dashboard and copy your account connection string into an environment variable. The best way to do this is to create a .env file in the root of your project and add the following line:

ATXP_CONNECTION=https://accounts.atxp.ai?connection_token=<random_string>&account_id=<random_string>
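As a sketch, the setup can be done from a shell session. The connection string below is the illustrative placeholder from above; paste the real one from your dashboard:

```shell
# Write the connection string into a .env file at the project root
# (placeholder values shown; use the string from your ATXP dashboard)
cat > .env <<'EOF'
ATXP_CONNECTION=https://accounts.atxp.ai?connection_token=<random_string>&account_id=<random_string>
EOF

# Keep the file out of version control
echo '.env' >> .gitignore
```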
Never commit your .env file to version control. Add .env to your .gitignore file to prevent it from being committed.

Use the gateway
JS OpenAI SDK
JS Vercel SDK
Python OpenAI SDK
HTTP
Install the OpenAI SDK in your project.

npm install openai

Create a client using the OpenAI SDK and use it.

// Import the OpenAI SDK
const { OpenAI } = await import('openai');

// Create the client
const openai = new OpenAI({
  apiKey: process.env.ATXP_CONNECTION,
  baseURL: 'https://llm.atxp.ai/v1'
});

// Chat with the LLM
const result = await openai.chat.completions.create({
  model: 'gpt-4.1',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is 2+2?' }
  ],
});

// Process the result
console.log(result.choices[0].message.content);
Install the Vercel AI SDK and openai-compatible package in your project.

npm install ai @ai-sdk/openai-compatible

Create an OpenAI-compatible client using Vercel's AI SDK and use it.

// Import Vercel's AI SDK and the openai-compatible adapter
const { createOpenAICompatible } = await import('@ai-sdk/openai-compatible');
const { generateText } = await import('ai');

// Create the client
const atxp = createOpenAICompatible({
  name: 'atxp-llm',
  apiKey: process.env.ATXP_CONNECTION,
  baseURL: 'https://llm.atxp.ai/v1'
});

// Chat with the LLM
const { text } = await generateText({
  model: atxp('gpt-4.1'),
  prompt: 'What is the capital of France?',
});

// Process the result
console.log(text);
Install the OpenAI SDK and requests package in your project.

pip install openai requests

Create an OpenAI client and use it.

# Import the OpenAI SDK
import os

from openai import OpenAI

# Create the client
client = OpenAI(
    api_key=os.environ.get("ATXP_CONNECTION"),
    base_url="https://llm.atxp.ai/v1",
)

# Chat with the LLM
completion = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 2+2?"}
    ]
)

# Process the result
print(completion.choices[0].message.content)
Call the API directly via HTTP requests. The structure is the same as the OpenAI chat completions API.

curl -X POST "https://llm.atxp.ai/v1/chat/completions" \
  -H "Authorization: Bearer $ATXP_CONNECTION" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "messages": [
      {"role": "user", "content": "Say hello!"}
    ],
    "stream": false
  }'
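Since the response follows the OpenAI chat-completions shape, you can pull out just the assistant's reply with jq. A minimal sketch, using a saved sample response (the file name and content value are illustrative, not actual API output):

```shell
# Sample response body in the chat-completions shape (content is illustrative)
cat > response.json <<'EOF'
{"choices": [{"message": {"role": "assistant", "content": "Hello!"}}]}
EOF

# Extract only the assistant's message text
jq -r '.choices[0].message.content' response.json
```

In a real pipeline you would pipe the curl output straight into jq instead of saving it to a file.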