This guide covers how to send xAI usage data to Fenra.

Grok API

xAI's API follows the OpenAI chat completions format, so token counts are returned in the response's usage object:
async function chat(messages) {
  const response = await fetch('https://api.x.ai/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.XAI_API_KEY}`
    },
    body: JSON.stringify({
      model: 'grok-beta',
      messages
    })
  });

  const result = await response.json();

  // Send to Fenra
  await fetch('https://ingest.fenra.io/usage/transactions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.FENRA_API_KEY
    },
    body: JSON.stringify({
      provider: 'xai',
      model: result.model,
      usage: [{
        type: 'tokens',
        metrics: {
          input_tokens: result.usage.prompt_tokens,
          output_tokens: result.usage.completion_tokens,
          total_tokens: result.usage.total_tokens
        }
      }],
      context: {
        billable_customer_id: process.env.BILLABLE_CUSTOMER_ID
      }
    })
  });

  return result;
}
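The Fenra payload construction above can be factored into a pure helper, which keeps the request code short and makes the mapping easy to unit-test. This is a sketch; `buildFenraTransaction` is an illustrative name, not part of any Fenra SDK:

```javascript
// Build the Fenra transaction body from an xAI chat completion result.
// `buildFenraTransaction` is an illustrative helper, not an SDK function.
function buildFenraTransaction(result, billableCustomerId) {
  return {
    provider: 'xai',
    model: result.model,
    usage: [{
      type: 'tokens',
      metrics: {
        input_tokens: result.usage.prompt_tokens,
        output_tokens: result.usage.completion_tokens,
        total_tokens: result.usage.total_tokens
      }
    }],
    context: { billable_customer_id: billableCustomerId }
  };
}
```

The `chat` function above would then pass `JSON.stringify(buildFenraTransaction(result, process.env.BILLABLE_CUSTOMER_ID))` as the request body.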

Reasoning Tokens

For Grok-3 and Grok-4 models with reasoning, include reasoning tokens:
usage: [{
  type: 'tokens',
  metrics: {
    input_tokens: result.usage.prompt_tokens,
    output_tokens: result.usage.completion_tokens,
    total_tokens: result.usage.total_tokens,
    reasoning_tokens: result.usage.completion_tokens_details?.reasoning_tokens || 0
  }
}]

Prompt Caching

xAI automatically caches prompt prefixes. Include cached tokens:
usage: [{
  type: 'tokens',
  metrics: {
    input_tokens: result.usage.prompt_tokens,
    output_tokens: result.usage.completion_tokens,
    total_tokens: result.usage.total_tokens,
    cached_tokens: result.usage.prompt_tokens_details?.cached_tokens || 0
  }
}]

Multimodal Tokens

For vision and audio models, track separate token types:
usage: [{
  type: 'tokens',
  metrics: {
    input_tokens: result.usage.prompt_tokens,
    output_tokens: result.usage.completion_tokens,
    total_tokens: result.usage.total_tokens,
    text_tokens: result.usage.prompt_tokens_details?.text_tokens || 0,
    audio_tokens: result.usage.prompt_tokens_details?.audio_tokens || 0,
    image_tokens: result.usage.prompt_tokens_details?.image_tokens || 0
  }
}]
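The reasoning, caching, and multimodal snippets above differ only in which optional detail fields they report. One way to consolidate them is a single helper that copies detail fields only when the API returned them. This is a sketch; `buildTokenMetrics` is an illustrative name, not part of any SDK:

```javascript
// Merge an xAI usage object into one Fenra token-metrics object,
// including optional reasoning / cached / multimodal details when present.
// `buildTokenMetrics` is an illustrative helper, not an SDK function.
function buildTokenMetrics(usage) {
  const metrics = {
    input_tokens: usage.prompt_tokens,
    output_tokens: usage.completion_tokens,
    total_tokens: usage.total_tokens
  };
  const completion = usage.completion_tokens_details || {};
  const prompt = usage.prompt_tokens_details || {};
  if (completion.reasoning_tokens != null) {
    metrics.reasoning_tokens = completion.reasoning_tokens;
  }
  if (prompt.cached_tokens != null) {
    metrics.cached_tokens = prompt.cached_tokens;
  }
  // Multimodal breakdowns, reported only when the model returns them.
  for (const key of ['text_tokens', 'audio_tokens', 'image_tokens']) {
    if (prompt[key] != null) metrics[key] = prompt[key];
  }
  return metrics;
}
```

With this helper, all three cases above reduce to `usage: [{ type: 'tokens', metrics: buildTokenMetrics(result.usage) }]`.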

Tool Usage

When using tools like web search, track tool invocations:
const result = await fetch('https://api.x.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.XAI_API_KEY}`
  },
  body: JSON.stringify({
    model: 'grok-4.1-fast',
    messages,
    tools: [{ type: 'web_search' }]
  })
}).then(r => r.json());

// Count tool invocations from response
const toolCalls = result.choices[0]?.message?.tool_calls || [];

await fetch('https://ingest.fenra.io/usage/transactions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Api-Key': process.env.FENRA_API_KEY
  },
  body: JSON.stringify({
    provider: 'xai',
    model: result.model,
    usage: [
      {
        type: 'tokens',
        metrics: {
          input_tokens: result.usage.prompt_tokens,
          output_tokens: result.usage.completion_tokens,
          total_tokens: result.usage.total_tokens
        }
      },
      {
        type: 'requests',
        metrics: {
          count: toolCalls.filter(t => t.type === 'web_search').length,
          request_type: 'web_search'
        }
      }
    ],
    context: {
      billable_customer_id: process.env.BILLABLE_CUSTOMER_ID
    }
  })
});
For Live Search, also track the number of sources used:
usage: [
  {
    type: 'tokens',
    metrics: {
      input_tokens: result.usage.prompt_tokens,
      output_tokens: result.usage.completion_tokens,
      total_tokens: result.usage.total_tokens
    }
  },
  {
    type: 'requests',
    metrics: {
      count: 1,
      request_type: 'live_search',
      sources_used: result.usage.num_sources_used || 0
    }
  }
]
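The request-counting logic in both tool examples can likewise be isolated into a small pure function. This is a sketch; `buildToolUsageEntry` is an illustrative name, not part of any SDK:

```javascript
// Build a Fenra 'requests' usage entry counting tool calls of a given type.
// `buildToolUsageEntry` is an illustrative helper, not an SDK function.
function buildToolUsageEntry(toolCalls, requestType) {
  return {
    type: 'requests',
    metrics: {
      count: (toolCalls || []).filter(t => t.type === requestType).length,
      request_type: requestType
    }
  };
}
```

For example, the web search case above becomes `buildToolUsageEntry(result.choices[0]?.message?.tool_calls, 'web_search')`, appended to the `usage` array alongside the token entry.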

Supported Models

Fenra supports all xAI models. Available models include:
Model                       | Description
grok-4.1-fast               | Latest fast model (generic)
grok-4.1-fast-reasoning     | Latest fast model with reasoning capabilities
grok-4.1-fast-non-reasoning | Latest fast model without reasoning
grok-4-fast-reasoning       | Fast model with reasoning capabilities
grok-4-fast-non-reasoning   | Fast model without reasoning
grok-code-fast-1            | Code-optimized fast model
grok-4                      | Flagship reasoning model
grok-4-0709                 | Dated variant of Grok-4
grok-3                      | Legacy reasoning model
grok-3-mini                 | Cost-efficient small model
grok-beta                   | Legacy beta model
grok-2-vision-1212          | Vision model
grok-2-image-1212           | Image generation model

Next Steps