Set up your application to interact with an LLM.
1. Set up your account

Visit your ATXP account dashboard and copy your account connection string. The easiest way to make it available to your application is to create a .env file in the root of your project and add the following line:
.env
ATXP_CONNECTION=https://accounts.atxp.ai?connection_token=<random_string>&account_id=<random_string>
Never commit your .env file to version control. Add it to your .gitignore to prevent accidental commits:
echo .env >> .gitignore
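If your runtime does not load .env files automatically, load the connection string at startup. Here is a minimal sketch that assumes the dotenv package (npm install dotenv), which is not otherwise part of this guide:
// Load variables from .env into process.env
import 'dotenv/config';

// Fail fast if the connection string is missing
if (!process.env.ATXP_CONNECTION) {
  throw new Error('ATXP_CONNECTION is not set; copy it from your ATXP account dashboard into .env');
}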
2. Use the gateway

Install the OpenAI SDK in your project.
npm install openai
Create a client with the OpenAI SDK, pointing it at the ATXP gateway and using your connection string as the API key:
// Import the OpenAI SDK
const { OpenAI } = await import('openai');

// Create the client, using the ATXP connection string as the API key
// and the ATXP LLM gateway as the base URL
const openai = new OpenAI({
  apiKey: process.env.ATXP_CONNECTION,
  baseURL: 'https://llm.atxp.ai/v1'
});

// Chat with the LLM
const result = await openai.chat.completions.create({
  model: 'gpt-4.1',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is 2+2?' }
  ],
});

// Process the result
console.log(result.choices[0].message.content);
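If the gateway supports the SDK's streaming mode (an assumption not covered above), the same client can print tokens as they arrive. A minimal sketch:
// Request a streamed response instead of waiting for the full completion
const stream = await openai.chat.completions.create({
  model: 'gpt-4.1',
  messages: [{ role: 'user', content: 'Write a haiku about arithmetic.' }],
  stream: true,
});

// Print each token delta as it arrives
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}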