The SDK ships with a token economy module designed to estimate cost and expose telemetry for MCP operations.
Estimators
createTokenEstimator() initializes the best available estimator. By default, it loads the o200k_base BPE encoding scheme, which is the industry standard for modern OpenAI models (GPT-4o, o1, o3-mini) and provides a highly accurate baseline for Anthropic and Google models.
createSyncTokenEstimator() provides an immediate heuristic fallback (chars / 4) for contexts where the heavy BPE tokenizer cannot be loaded asynchronously.
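The chars / 4 heuristic can be sketched in a few lines. Note that estimateTokensSync below is an illustrative name, not the SDK's actual export; the real sync estimator may round or normalize differently.

```typescript
// Illustrative sketch of the chars/4 heuristic behind the sync fallback.
// Hypothetical function name; not the SDK's public API.
function estimateTokensSync(text: string): number {
  // Roughly one token per 4 characters; round up so short strings
  // still count as at least one token.
  return Math.ceil(text.length / 4);
}

const prompt = "List the available MCP tools.";
console.log(estimateTokensSync(prompt)); // 29 chars -> 8 tokens
```

The heuristic trades accuracy for zero startup cost: it needs no encoding tables, so it can run synchronously before the BPE tokenizer has loaded.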
TokenTelemetryEngine
TokenTelemetryEngine is a singleton collector for operation-level metrics:
- Input/output token estimates
- Operation type (tools_list, tool_call, resource_read, etc.)
- Duration metadata
- Session-level aggregates
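The collector pattern above can be sketched as follows. This is a minimal illustration of a singleton metrics collector under assumed names (TelemetryCollector, OperationRecord, totalTokens); the real TokenTelemetryEngine's fields and methods may differ.

```typescript
// Hypothetical sketch of a singleton operation-metrics collector.
type OperationType = "tools_list" | "tool_call" | "resource_read";

interface OperationRecord {
  type: OperationType;
  inputTokens: number;  // estimated input tokens
  outputTokens: number; // estimated output tokens
  durationMs: number;   // duration metadata
}

class TelemetryCollector {
  private static instance: TelemetryCollector;
  private records: OperationRecord[] = [];

  // Singleton accessor: every caller shares one collector.
  static get(): TelemetryCollector {
    return (this.instance ??= new TelemetryCollector());
  }

  record(op: OperationRecord): void {
    this.records.push(op);
  }

  // Session-level aggregate: total estimated tokens across all operations.
  totalTokens(): number {
    return this.records.reduce((n, r) => n + r.inputTokens + r.outputTokens, 0);
  }
}

const engine = TelemetryCollector.get();
engine.record({ type: "tool_call", inputTokens: 120, outputTokens: 45, durationMs: 310 });
console.log(TelemetryCollector.get().totalTokens()); // 165 — same shared instance
```

The singleton shape matters here: because every operation reports to the same instance, session-level aggregates fall out of a simple reduce over the shared record list.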
LiopOTelBridge
LiopOTelBridge maps token telemetry into OpenTelemetry gen_ai.* semantics so external observability backends can ingest LIOP token data seamlessly.
The bridge automatically binds to your global MeterProvider and emits the following metrics:
- gen_ai.client.token.usage (Histogram)
- gen_ai.client.operation.duration (Histogram)
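The mapping can be illustrated with a small self-contained sketch. The Histogram stand-in below replaces the real instruments that the bridge obtains from the global MeterProvider, and emitTokenUsage is a hypothetical helper; the attribute split by gen_ai.token.type ("input" / "output") follows the OpenTelemetry gen_ai semantic conventions, but the bridge's exact attribute set is an assumption.

```typescript
// Minimal stand-in for an OpenTelemetry Histogram so the sketch runs
// without dependencies; in the real bridge these come from the global
// MeterProvider via @opentelemetry/api.
interface Histogram {
  record(value: number, attributes: Record<string, string>): void;
}

type Point = { value: number; attributes: Record<string, string> };

function makeHistogram(sink: Point[]): Histogram {
  return { record: (value, attributes) => sink.push({ value, attributes }) };
}

const tokenUsagePoints: Point[] = [];
const tokenUsage = makeHistogram(tokenUsagePoints); // gen_ai.client.token.usage

// Hypothetical mapping: one histogram point per direction, distinguished
// by the gen_ai.token.type attribute.
function emitTokenUsage(inputTokens: number, outputTokens: number, operation: string): void {
  tokenUsage.record(inputTokens, { "gen_ai.token.type": "input", "gen_ai.operation.name": operation });
  tokenUsage.record(outputTokens, { "gen_ai.token.type": "output", "gen_ai.operation.name": operation });
}

emitTokenUsage(120, 45, "tool_call");
console.log(tokenUsagePoints.length); // 2 recorded points
```

Recording both directions on one histogram, split by attribute, is what lets a backend aggregate total usage or filter by direction without the bridge defining extra metric names.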