The Nevermined Payment Libraries include built-in observability capabilities that allow you to monitor, track, and analyze your AI agent’s performance, usage patterns, and costs. This integration provides comprehensive logging and analytics for your AI operations.
Documentation for Manual Operation Logging is coming soon. This section will cover how to wrap custom operations with observability logging for non-OpenAI or Langchain services.
Documentation for usage calculation helpers for video and audio operations is coming soon. This section will cover how to calculate usage metrics for different types of AI operations.
Regular Requests: Process one request at a time. Each request gets its own unique agent request ID.
Batch Requests: Process multiple requests together using the same agent request ID. This is useful when you need to make multiple AI calls (e.g., multiple OpenAI requests) within a single user request, and you want to redeem credits once at the end for all operations combined.
```typescript
// Regular request - process one request at a time
const agentRequest = await payments.requests.startProcessingRequest(
  agentId,
  authHeader,
  requestedUrl,
  httpVerb
);

// Batch request - process multiple requests together
const agentRequest = await payments.requests.startProcessingBatchRequest(
  agentId,
  authHeader,
  requestedUrl,
  httpVerb
);
```
Charges a specific number of credits per request. Useful for predictable, pay-per-use models. The credit amount can be a static value or calculated dynamically (e.g., based on token usage, API calls made, or other metrics).
```typescript
// Credit amount can be static or dynamically computed
const creditAmount = BigInt(10); // Static value
// OR
const creditAmount = calculateCreditAmount(tokensUsed, maxTokens); // Computed value

// Redeem fixed credits from a regular request
const redemptionResult = await payments.requests.redeemCreditsFromRequest(
  agentRequestId,
  requestAccessToken,
  creditAmount
);

// Redeem fixed credits from a batch request
const redemptionResult = await payments.requests.redeemCreditsFromBatchRequest(
  agentRequestId,
  requestAccessToken,
  creditAmount
);

// Extract credits redeemed
const creditsRedeemed = redemptionResult.data?.amountOfCredits || 0;
```
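The `calculateCreditAmount` helper above is not defined by the library; one possible implementation, assuming a rate of 1 credit per 1,000 tokens and a 1-credit minimum (both illustrative choices, not Nevermined defaults), might look like this:

```typescript
// Hypothetical helper: map token usage to a credit amount.
// The 1-credit-per-1,000-tokens rate and the 1-credit floor are assumptions.
function calculateCreditAmount(tokensUsed: number, maxTokens: number): bigint {
  const capped = Math.min(tokensUsed, maxTokens); // never bill beyond the cap
  const credits = Math.ceil(capped / 1000);       // 1 credit per 1,000 tokens
  return BigInt(Math.max(credits, 1));            // charge at least 1 credit
}
```

Whatever policy you choose, returning a `bigint` keeps the value compatible with the `creditAmount` parameter shown above.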
Charges the actual API cost plus a margin percentage. Useful for adding a service fee on top of API costs. The margin percentage can be a static value or calculated dynamically (e.g., based on business logic, user tier, or market conditions). For example, if an API call costs 10 cents and you set a 20% margin (0.2), the total charge will be 10 + (10 × 0.2) = 12 cents in dollar-equivalent credits.
```typescript
// Margin percentage can be static or dynamically computed
const marginPercent = 0.2; // Static 20% margin
// OR
const marginPercent = calculateMargin(userTier, apiCost); // Computed value

// Redeem with margin from a regular request
const redemptionResult = await payments.requests.redeemWithMarginFromRequest(
  agentRequestId,
  requestAccessToken,
  marginPercent
);

// Redeem with margin from a batch request
const redemptionResult = await payments.requests.redeemWithMarginFromBatchRequest(
  agentRequestId,
  requestAccessToken,
  marginPercent
);

// Extract credits redeemed
const creditsRedeemed = redemptionResult.data?.amountOfCredits || 0;
```
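The `calculateMargin` helper above is likewise not part of the library. A tier-based policy is one way to compute it; the tier names, the per-tier rates, and the bulk discount below are all illustrative assumptions:

```typescript
// Hypothetical margin policy: cheaper tiers pay a higher markup.
// Tier names, rates, and the bulk discount are assumptions, not API values.
type UserTier = "free" | "pro" | "enterprise";

function calculateMargin(userTier: UserTier, apiCost: number): number {
  const base: Record<UserTier, number> = {
    free: 0.3,       // 30% markup
    pro: 0.2,        // 20% markup
    enterprise: 0.1, // 10% markup
  };
  // Assumed discount: shave 10% off the margin for calls costing over $1.
  return apiCost > 1 ? base[userTier] * 0.9 : base[userTier];
}
```

The returned fraction plugs directly into the `marginPercent` parameter shown above.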
Complete example using batch request processing. This example shows making multiple AI calls within a single batch request, all sharing the same agent request ID:
```typescript
// Use batch processing - all operations will share the same agent request ID
const agentRequest = await payments.requests.startProcessingBatchRequest(
  process.env.AGENT_ID!,
  req.headers.authorization as string,
  req.url,
  req.method
);

// Check user has sufficient balance
if (!agentRequest.balance.isSubscriber || agentRequest.balance.balance < 1n) {
  return res.status(402).json({ error: "Payment Required" });
}

const requestAccessToken = req.headers.authorization?.replace("Bearer ", "")!;

// Create OpenAI client with observability
const openai = new OpenAI(
  payments.observability.withOpenAI(
    process.env.OPENAI_API_KEY!,
    agentRequest,
    { operation: "batch_analysis" }
  )
);

// Make multiple AI calls in the same batch
// Call 1: Analyze user query
const analysis = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Analyze this investment strategy..." }],
  max_tokens: 100,
});

// Call 2: Generate recommendations
const recommendations = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Provide investment recommendations..." }],
  max_tokens: 100,
});

// Call 3: Create summary
const summary = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Summarize the analysis..." }],
  max_tokens: 50,
});

// Calculate total credits for all operations
const totalCredits = BigInt(30); // Or calculate based on actual token usage

// Redeem credits once for all operations in the batch
await payments.requests.redeemCreditsFromBatchRequest(
  agentRequest.agentRequestId,
  requestAccessToken,
  totalCredits
);

res.json({
  analysis: analysis.choices[0]?.message?.content,
  recommendations: recommendations.choices[0]?.message?.content,
  summary: summary.choices[0]?.message?.content,
});
```
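If you prefer to charge based on actual token usage rather than the fixed `BigInt(30)`, you can sum the `usage.total_tokens` reported on each OpenAI completion. The conversion rate below (1 credit per 1,000 tokens, minimum 1 credit) is an illustrative assumption, not a Nevermined default:

```typescript
// Sketch: derive the batch's credit charge from actual token usage.
// Assumes each OpenAI response exposes usage.total_tokens, and an
// illustrative rate of 1 credit per 1,000 tokens with a 1-credit floor.
interface Usage {
  total_tokens: number;
}

function creditsForBatch(usages: Usage[]): bigint {
  const totalTokens = usages.reduce((sum, u) => sum + u.total_tokens, 0);
  return BigInt(Math.max(Math.ceil(totalTokens / 1000), 1));
}

// With the three responses from the batch example:
// const totalCredits = creditsForBatch(
//   [analysis.usage!, recommendations.usage!, summary.usage!]
// );
```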
Margin-based Pricing
Complete example using margin-based credit redemption:
```typescript
// Regular request processing
const agentRequest = await payments.requests.startProcessingRequest(
  process.env.AGENT_ID!,
  req.headers.authorization as string,
  req.url,
  req.method
);

// ... rest of your agent logic ...

// Redeem with margin percentage (e.g., 20% markup on API costs)
const marginPercent = 0.2;
await payments.requests.redeemWithMarginFromRequest(
  agentRequest.agentRequestId,
  requestAccessToken,
  marginPercent
);
```
Batch Mode + Margin-based Pricing
Complete example combining batch processing with margin-based pricing. All operations share the same agent request ID, and credits are redeemed once at the end based on actual API costs plus margin:
```typescript
// Use batch processing - all operations will share the same agent request ID
const agentRequest = await payments.requests.startProcessingBatchRequest(
  process.env.AGENT_ID!,
  req.headers.authorization as string,
  req.url,
  req.method
);

// Check user has sufficient balance
if (!agentRequest.balance.isSubscriber || agentRequest.balance.balance < 1n) {
  return res.status(402).json({ error: "Payment Required" });
}

const requestAccessToken = req.headers.authorization?.replace("Bearer ", "")!;

// Create OpenAI client with observability
const openai = new OpenAI(
  payments.observability.withOpenAI(
    process.env.OPENAI_API_KEY!,
    agentRequest,
    { operation: "batch_analysis" }
  )
);

// Make multiple AI calls in the same batch
// Call 1: Analyze user query
const analysis = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Analyze this investment strategy..." }],
  max_tokens: 100,
});

// Call 2: Generate recommendations
const recommendations = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Provide investment recommendations..." }],
  max_tokens: 100,
});

// Call 3: Create summary
const summary = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Summarize the analysis..." }],
  max_tokens: 50,
});

// Redeem with margin using the batch method
// The margin is applied to the total API cost of all three calls
const marginPercent = 0.2; // 20% markup on total API costs
await payments.requests.redeemWithMarginFromBatchRequest(
  agentRequest.agentRequestId,
  requestAccessToken,
  marginPercent
);

res.json({
  analysis: analysis.choices[0]?.message?.content,
  recommendations: recommendations.choices[0]?.message?.content,
  summary: summary.choices[0]?.message?.content,
});
```
The frontend provides a comprehensive events log table.

[Screenshot: events log table showing detailed request information, timestamps, costs, and status]

[Screenshot: events log table showing details of a specific request]
Once your agent is running with observability enabled, you can view detailed data analytics in the Nevermined dashboard.

[Screenshot: the Nevermined analytics dashboard]

[Screenshot: the Nevermined analytics dashboard cumulative analysis]

[Screenshot: the Nevermined analytics dashboard summary analysis]
Use the built-in filtering capabilities to analyze specific patterns.

[Screenshot: filtering options for date range, agent type, user, and more]

[Screenshot: filtering options for agent, model, provider, and more]