Credits in EKB are consumed in two ways:
  • Platform Action Credits: charged when users perform specific actions (uploading documents, sending messages, invoking tools).
  • LLM Token Credits: charged based on actual Large Language Model (LLM) usage, calculated from:
  • Input tokens (prompt, document chunks, system context)
  • Output tokens (LLM response)
LLM usage is converted into credits based on our pricing model.

Credit Consumption Table

A. Knowledge Base (KB) Ingestion

Uploading a document to a Knowledge Base triggers a multi-stage ingestion pipeline. Each stage has its own cost structure, and some stages incur additional LLM Token Credits depending on your project configuration.

Ingestion Pipeline
| Stage | Description | Cost |
| --- | --- | --- |
| 1. Document Upload | File is received and word count is calculated | 1 credit per 10,000 words |
| 2. LLM Extraction (optional) | An LLM parses and extracts content from the document | Input + output tokens × model rate |
| 3. Platform Chunking | Tokenization, splitting, and metadata assembly | Free (platform) |
| 4. Chunk Enrichment (optional) | An LLM generates a context prefix for each chunk | Input + output tokens × model rate |
Stages 2 and 4 are only billed when enabled in your Project Settings. Document chunking occurs as part of ingestion and may incur additional costs; see Section B for a detailed breakdown.

Example – 100,000-word document (no optional stages enabled)

| Item | Credits |
| --- | --- |
| 100,000 words | 10 credits |
Example – 100,000-word document (all stages enabled)

| Item | Credits |
| --- | --- |
| 100,000 words (upload) | 10 credits |
| LLM Extraction tokens | Varies by model |
| Platform Chunking | Free |
| Chunk Enrichment tokens | ~10 credits (see Section B) |
| Estimated Total | ~20+ credits |
The more optional stages you enable, the higher the per-document ingestion cost. Word-based upload cost is always fixed and predictable.
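The fixed word-based portion of the upload cost can be sketched as a one-line calculation. This is a sketch, not the platform's implementation: the function name is illustrative, and the assumption that partial 10,000-word blocks round up to a whole credit is ours.

```python
import math

def upload_word_credits(total_words: int) -> int:
    # 1 credit per 10,000 words; assumes partial blocks round up (our assumption)
    return math.ceil(total_words / 10_000)

print(upload_word_credits(100_000))  # 10, matching the examples above
```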

B. Document Chunking

Chunking is the step between document extraction and embedding. It takes the cleaned text from an uploaded document and breaks it into smaller pieces (chunks) that are indexed in the vector store and later retrieved for Chat or Agent queries. Chunking costs fall into two categories:
  • Platform Chunking: Deterministic operations (tokenization, splitting, metadata assembly) that run locally on the server and are not billed as LLM usage.
  • Chunk Enrichment (LLM Token Credits): Billed only when Chunk Enrichment is enabled for the project in Knowledge Base Settings. The LLM generates a short context prefix for each chunk; both input and output tokens are charged.

Example: Document Chunking Costs

The following example assumes a moderately sized document with Chunk Enrichment enabled:
  • Document size: 100,000 words (~133,000 input tokens, at 4 chars/token)
  • Chunk settings: chunk_size=64 tokens, chunk_overlap=10 tokens → ~2,300 chunks
  • Chunk Enrichment: Enabled with gpt-4o-mini
  • Output text (joined chunks incl. enrichment prefixes): ~140,000 output tokens
  • Model pricing (example): input $0.15 per 1M tokens, output $0.60 per 1M tokens
| Component | Tokens | Cost |
| --- | --- | --- |
| Input Tokens (document) | 133,000 | $0.01995 |
| Output Tokens (chunks) | 140,000 | $0.084 |
| Total LLM Cost | 273,000 | $0.104 |
Therefore, $0.104 (total LLM cost) ÷ $0.01 (cost per credit) equals 10.4 credits, displayed as ~10 credits.
If the same document were uploaded without Chunk Enrichment enabled, chunking would consume 0 LLM Token Credits.
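Under the numbers above, the enrichment charge can be reproduced with a short calculation. This is a sketch: the function name is illustrative, and the $0.01-per-credit conversion is taken from the worked example.

```python
def enrichment_credits(input_tokens: int, output_tokens: int,
                       input_rate_per_m: float, output_rate_per_m: float,
                       usd_per_credit: float = 0.01) -> tuple[float, float]:
    # USD cost = input + output tokens charged at per-million-token rates
    usd = (input_tokens / 1_000_000 * input_rate_per_m
           + output_tokens / 1_000_000 * output_rate_per_m)
    return usd, usd / usd_per_credit

usd, credits = enrichment_credits(133_000, 140_000, 0.15, 0.60)
print(f"${usd:.3f} -> ~{round(credits)} credits")  # $0.104 -> ~10 credits
```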

C. Chat / Agent Interaction

Fixed platform credits
| Action | Credits |
| --- | --- |
| User sends a chat message | 1–2 (based on configuration) |
| Tool call invoked | 1 per call |
Variable LLM Token Credits

LLM credits are calculated as:

LLM Token Credits = (Input Tokens × Input Rate) + (Output Tokens × Output Rate)

Example pricing (Claude 4.5 sample model):
  • Input: $3 per 1M tokens
  • Output: $15 per 1M tokens
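Plugging arbitrary token counts into the formula above gives the variable cost for a message. A sketch, assuming the sample Claude rates and the same $0.01-per-credit conversion used in Section B; defaults and names are illustrative.

```python
def llm_token_credits(input_tokens: int, output_tokens: int,
                      input_rate_per_m: float = 3.0,    # $ per 1M input tokens (sample rate)
                      output_rate_per_m: float = 15.0,  # $ per 1M output tokens (sample rate)
                      usd_per_credit: float = 0.01) -> float:
    # (Input Tokens x Input Rate) + (Output Tokens x Output Rate), converted to credits
    usd = (input_tokens / 1_000_000 * input_rate_per_m
           + output_tokens / 1_000_000 * output_rate_per_m)
    return usd / usd_per_credit

# e.g. 10,000 input tokens and 1,000 output tokens at the sample rates
print(round(llm_token_credits(10_000, 1_000), 2))  # 4.5
```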

D. Workflow Executions

| Action | Credits |
| --- | --- |
| Workflow execution | 1 per execution |
The cost is 1 credit per execution, regardless of the number of steps involved.

Full Chat Example

Scenario: User asks: “Explain the attached document.”

Platform credits
| Component | Credits |
| --- | --- |
| Question asked | 1 |
| Tool calls (document retrieval) | 2 |
| Subtotal | 3 |

LLM token usage

| Type | Tokens | Cost |
| --- | --- | --- |
| Input | ~53,634 | ~$0.161 |
| Output | ~900 | ~$0.0135 |
| Total | ~54,534 | ~$0.1745 |

Converted to credits: 17 credits

Final total

| Component | Credits |
| --- | --- |
| Fixed platform | 3 |
| LLM usage | 17 |
| Total | 20 |
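The end-to-end total in this example can be reproduced in a few lines. A sketch under two assumptions inferred from the example, not stated platform behavior: credits convert at $0.01 each, and the fractional 17.45 is truncated to 17.

```python
USD_PER_CREDIT = 0.01  # assumed conversion, consistent with Section B

def chat_total_credits(platform_credits: int,
                       input_tokens: int, output_tokens: int,
                       input_rate_per_m: float, output_rate_per_m: float) -> int:
    # Variable LLM cost in USD at per-million-token rates
    llm_usd = (input_tokens / 1_000_000 * input_rate_per_m
               + output_tokens / 1_000_000 * output_rate_per_m)
    llm_credits = int(llm_usd / USD_PER_CREDIT)  # 17.45 -> 17, as in the example
    return platform_credits + llm_credits

# 1 message + 2 tool calls = 3 platform credits; sample Claude 4.5 rates
print(chat_total_credits(3, 53_634, 900, 3.0, 15.0))  # 20
```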

Cost Calculation Formula Summary

Document Upload:
Word Credits = (Total Words ÷ 10,000) + LLM Token Credits (Parsing)

Chat Message:
Total Credits = Fixed Message Credit + Tool Call Credits + LLM Token Credits (Input + Output)
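The two summary formulas can be expressed as small helpers. A sketch only: the function names are illustrative, and rounding partial 10,000-word blocks up to a whole credit is our assumption.

```python
import math

def document_upload_credits(total_words: int, parsing_llm_credits: float = 0.0) -> float:
    # Word Credits = Total Words / 10,000 (rounded up, our assumption),
    # plus LLM Token Credits for parsing when extraction is enabled
    return math.ceil(total_words / 10_000) + parsing_llm_credits

def chat_message_credits(fixed_message_credit: int, tool_call_credits: int,
                         llm_token_credits: float) -> float:
    # Fixed Message Credit + Tool Call Credits + LLM Token Credits (input + output)
    return fixed_message_credit + tool_call_credits + llm_token_credits

print(document_upload_credits(100_000))  # 10.0
print(chat_message_credits(1, 2, 17))    # 20
```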

What Drives LLM Credit Usage?

LLM cost increases when:
  • Large documents are retrieved into context
  • Many KB chunks are injected into the prompt
  • Responses are long or structured
  • Multiple tool calls are triggered
  • Higher-cost models are selected

Important Notes for Customers

✓ Word-based ingestion cost is predictable.
✓ Chat costs vary significantly depending on document size and the number of tokens used.
✓ Model pricing is configurable in Super Admin.
✓ LLM credits are consumption-based and cannot be flat-rated.
✓ Final credit total = Platform Credits + LLM Credits.