Connection Pooling Guide
How to efficiently manage connections to Kimberlite clusters across different SDKs.
Overview
Connection pooling improves performance by reusing connections instead of creating new ones for each operation. This is especially important for:
- Web applications with concurrent requests
- Microservices with high throughput
- Long-running batch processors
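A toy model makes the cost difference concrete (illustrative only; `Conn` is a stand-in for a real connection whose constructor represents handshake and auth setup):

```typescript
// Toy model: count how many connection setups each pattern performs.
// No real networking; the constructor stands in for handshake + auth.
let setups = 0;

class Conn {
  constructor() {
    setups++;
  }
  send(_op: string): void {}
}

// Anti-pattern: open a fresh connection for every operation
function perRequest(ops: number): void {
  for (let i = 0; i < ops; i++) new Conn().send('append');
}

// Pooled: one connection, reused for every operation
function pooled(ops: number): void {
  const conn = new Conn();
  for (let i = 0; i < ops; i++) conn.send('append');
}
```

For 100 operations, the per-request pattern performs 100 setups while the pooled pattern performs exactly one; the same ratio applies to real handshake latency.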
Python SDK
The Python SDK handles connection pooling internally through the FFI layer, so no manual pool configuration is needed.
Basic Pattern
```python
# NOTE: method and parameter names below are assumed to mirror the
# TypeScript SDK shown later in this guide; check your SDK version.
from kimberlite import Client

# Create a single client instance for your application
client = Client.connect(addresses=["localhost:5432"], tenant_id=1, auth_token="token")

# Reuse this client across multiple operations
stream_id = client.create_stream("events")
offset = client.append(stream_id, [b"event-1"])
```
With Flask
```python
# Assumed Python API, mirroring the TypeScript SDK shown below.
from flask import Flask, jsonify, request
from kimberlite import Client

app = Flask(__name__)

# Initialize client once at startup
client = None

def get_client():
    global client
    if client is None:
        client = Client.connect(addresses=["localhost:5432"], tenant_id=1)
    return client

@app.post("/events")
def append_events():
    stream_id = int(request.json["stream_id"])
    events = [e.encode() for e in request.json["events"]]
    offset = get_client().append(stream_id, events)
    return jsonify({"offset": offset})
```
With Django
```python
# settings.py (assumed Python API, mirroring the TypeScript SDK shown below)
KIMBERLITE_CLIENT = None

def get_kimberlite_client():
    global KIMBERLITE_CLIENT
    if KIMBERLITE_CLIENT is None:
        from kimberlite import Client
        KIMBERLITE_CLIENT = Client.connect(addresses=["localhost:5432"], tenant_id=1)
    return KIMBERLITE_CLIENT
```

```python
# views.py (replace the settings import path with your project package)
from myproject.settings import get_kimberlite_client

def append_view(request):
    client = get_kimberlite_client()
    # Use client...
```
TypeScript SDK
The TypeScript SDK manages connections at the client level.
Basic Pattern
```typescript
import { Client, DataClass } from '@kimberlite/client';

// Create client once
const client = await Client.connect({
  addresses: ['localhost:5432', 'localhost:5433'],
  tenantId: 1n,
  authToken: 'token'
});

// Reuse across operations
async function handleRequest(data: Buffer) {
  const streamId = await client.createStream('events', DataClass.NonPHI);
  await client.append(streamId, [data]);
}
```
With Express.js
```typescript
import express from 'express';
import { Client } from '@kimberlite/client';

const app = express();
let client: Client;

// Initialize on startup
async function init() {
  client = await Client.connect({
    addresses: ['localhost:5432'],
    tenantId: 1n
  });
}

app.post('/events', async (req, res) => {
  try {
    const streamId = BigInt(req.body.stream_id);
    const events = req.body.events.map((e: string) => Buffer.from(e));
    const offset = await client.append(streamId, events);
    res.json({ offset: offset.toString() });
  } catch (error) {
    res.status(500).json({ error: (error as Error).message });
  }
});

// Graceful shutdown
process.on('SIGTERM', async () => {
  await client.disconnect();
  process.exit(0);
});

init().then(() => {
  app.listen(3000); // Express.js app port (NOT the Kimberlite server, which is on :5432)
});
```
With NestJS
```typescript
import { Injectable, OnModuleInit, OnModuleDestroy } from '@nestjs/common';
import { Client } from '@kimberlite/client';

@Injectable()
export class KimberliteService implements OnModuleInit, OnModuleDestroy {
  private client: Client;

  async onModuleInit() {
    this.client = await Client.connect({
      addresses: ['localhost:5432'],
      tenantId: 1n
    });
  }

  async onModuleDestroy() {
    await this.client.disconnect();
  }

  async append(streamId: bigint, events: Buffer[]) {
    return await this.client.append(streamId, events);
  }
}
```
Rust SDK
The Rust SDK uses synchronous connections; the client is `Send + Sync`, so it can be shared safely across threads.
Basic Pattern
```rust
// Assumed Rust API; type and method names are inferred from the other SDKs.
use kimberlite::{Kimberlite, TenantId};
use std::sync::Arc;
use std::thread;

// Create once and share via Arc
let db = Arc::new(Kimberlite::new("localhost:5432").expect("connect"));
let tenant = db.tenant(TenantId::new(1));

// Clone the Arc for each thread
let tenant_clone = tenant.clone();
thread::spawn(move || {
    // use tenant_clone...
});
```
With Actix Web
```rust
// Assumed API sketch; handler wiring and names are illustrative.
use actix_web::{web, App, HttpResponse, HttpServer};
use kimberlite::Kimberlite;
use std::sync::Arc;

// Share the client with handlers via web::Data
async fn append_events(db: web::Data<Arc<Kimberlite>>) -> HttpResponse {
    // Use db...
    HttpResponse::Ok().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let db = Arc::new(Kimberlite::new("localhost:5432").expect("connect"));
    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(db.clone()))
            .route("/events", web::post().to(append_events))
    })
    .bind(("0.0.0.0", 3000))?
    .run()
    .await
}
```
Best Practices
1. Single Client per Application
Create one client instance and reuse it:
```python
# ✅ GOOD: create once at startup, reuse everywhere
client = Client.connect(addresses=["localhost:5432"], tenant_id=1)

# ❌ BAD: a new client per request pays connection setup every time
def handle_request():
    client = Client.connect(addresses=["localhost:5432"], tenant_id=1)
```
2. Graceful Shutdown
Always disconnect on application shutdown:
```typescript
process.on('SIGTERM', async () => {
  await client.disconnect();
  process.exit(0);
});
```
3. Health Checks
Implement periodic health checks:
```python
# Assumes the SDK exposes a lightweight ping operation; adapt as needed.
def check_connection(client):
    try:
        client.ping()  # Ping operation
    except ConnectionError:
        # Reconnect logic here
        ...
```
4. Error Recovery
Implement retry logic with exponential backoff:
```typescript
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  let lastError;
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      await new Promise(resolve => setTimeout(resolve, Math.pow(2, i) * 100));
    }
  }
  throw lastError;
}

const offset = await withRetry(() => client.append(streamId, events));
```
Multi-Cluster Setup
For high availability, connect to multiple cluster addresses:
```python
# Node addresses are illustrative; assumed Python API mirroring the TypeScript SDK.
client = Client.connect(
    addresses=["node1:5432", "node2:5432", "node3:5432"],
    tenant_id=1,
)
```
The client will automatically:
- Discover the cluster leader
- Failover to a new leader if needed
- Retry on transient failures
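Internally, this amounts to probing the configured addresses until a live node answers. A simplified sketch of that loop (the real SDK does this for you; `tryConnect` is a hypothetical stand-in for a network probe):

```typescript
// Simplified discovery loop: probe each address in order and return the
// first that responds. The real client also re-runs this on failover.
// `tryConnect` is a stand-in, not part of the SDK.
type TryConnect = (address: string) => Promise<boolean>;

async function findLiveNode(
  addresses: string[],
  tryConnect: TryConnect
): Promise<string> {
  for (const addr of addresses) {
    if (await tryConnect(addr)) {
      return addr;
    }
  }
  throw new Error(`no reachable node among: ${addresses.join(', ')}`);
}
```

Because discovery is handled inside the client, application code keeps calling the same client object regardless of which node currently leads.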
Performance Tips
- Batch Operations: Group multiple appends into single calls
- Concurrent Reads: Multiple read operations can run in parallel
- Connection Limits: Don’t create more clients than needed
- Keep-Alive: FFI layer maintains persistent connections
- Resource Cleanup: Always disconnect when done
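The batching tip can be sketched as a small coalescer: callers enqueue individual events, and one call carries the whole batch. `appendFn` here is a hypothetical stand-in for `client.append`; this is an illustration, not SDK API:

```typescript
// A small append coalescer: callers enqueue single events; the batcher
// flushes everything queued in one call per timer tick.
// `appendFn` is a stand-in for client.append.
type AppendFn = (events: Uint8Array[]) => Promise<number>;

class AppendBatcher {
  private pending: Uint8Array[] = [];
  private waiters: Array<(offset: number) => void> = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(private appendFn: AppendFn, private delayMs = 10) {}

  enqueue(event: Uint8Array): Promise<number> {
    this.pending.push(event);
    const result = new Promise<number>(resolve => this.waiters.push(resolve));
    if (this.timer === null) {
      this.timer = setTimeout(() => void this.flush(), this.delayMs);
    }
    return result;
  }

  private async flush(): Promise<void> {
    const events = this.pending;
    const waiters = this.waiters;
    this.pending = [];
    this.waiters = [];
    this.timer = null;
    const offset = await this.appendFn(events); // one call for the whole batch
    for (const resolve of waiters) resolve(offset);
  }
}
```

Error propagation is omitted for brevity; a production version would reject the queued promises if the batched call fails.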
Monitoring
Track connection metrics:
```python
import time

# A thin wrapper that counts successes/failures and records latency.
# The wrapper shape is illustrative; extend it to the operations you use.
class MonitoredClient:
    def __init__(self, client):
        self.client = client
        self.success_count = 0
        self.error_count = 0

    def append(self, stream_id, events):
        start = time.monotonic()
        try:
            result = self.client.append(stream_id, events)
            self.success_count += 1
            return result
        except Exception:
            self.error_count += 1
            raise
        finally:
            self.last_latency = time.monotonic() - start
```