Thin, dependency‑free SDKs that wrap LLM and MCP clients to deliver edge caching, metrics, and policy controls.
SDKs for OpenAI and for MCP clients and servers that provide automatic caching, request/response instrumentation, scoped privacy, and programmatic purge APIs, all with TypeScript-first ergonomics.
Drop-in wrapper that enables caching and telemetry instantly. Works with your existing OpenAI client configuration.
import OpenAI from "openai";
import { Alchymos } from "@alchymos/openai";

// The OpenAI constructor takes an options object, not a bare key.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const alchymos = new Alchymos(process.env.ALCHYMOS_API_KEY);
alchymos.withLlm(openai);

Instrument client-side tool calls for caching and metrics. Integrates seamlessly with the Model Context Protocol.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { Alchymos } from "@alchymos/mcp";

// The MCP SDK Client constructor requires implementation info.
const client = new Client({ name: "my-app", version: "1.0.0" });
const alchymos = new Alchymos(process.env.ALCHYMOS_API_KEY);
alchymos.withClient(client);

Server middleware for edge caching and observability. Add caching capabilities to your tools with one line.
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { Alchymos } from "@alchymos/mcp";

// The MCP SDK Server constructor requires implementation info.
const server = new Server({ name: "my-server", version: "1.0.0" });
const alchymos = new Alchymos(process.env.ALCHYMOS_API_KEY);
alchymos.withServer(server);

Our SDK is designed for modern TypeScript environments. It ships with full type definitions, zero runtime dependencies, and flexible configuration options for granular control.
Keep your bundle size small and fast.
Full type safety and autocompletion.
Cache scoped to specific orgs or users.
Replay logs locally without hitting the API.
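The scoping and replay features above would be driven by client configuration. A minimal sketch of what that might look like follows; the option names (`scope`, `replay`) and their shapes are hypothetical illustrations, not documented Alchymos API:

```typescript
import { Alchymos } from "@alchymos/openai";

// Hypothetical configuration sketch: "scope" and "replay" are
// illustrative option names, not confirmed Alchymos keys.
const alchymos = new Alchymos(process.env.ALCHYMOS_API_KEY, {
  // Isolate cache entries per organization and user.
  scope: { orgId: "org_123", userId: "user_456" },
  // Serve recorded responses from a local log instead of the API.
  replay: { source: "./logs/session.jsonl" },
});
```

Scoping keys into the cache namespace keeps one tenant's responses from ever being served to another, which is the privacy property the bullet above describes.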
import OpenAI from "openai";
import { Alchymos } from "@alchymos/openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const alchymos = new Alchymos(process.env.ALCHYMOS_API_KEY);
alchymos.withLlm(openai);

// Use the OpenAI client as usual; calls are now cached and instrumented.
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});