SDK wrappers (LangChain, LlamaIndex)
One-line wrap for framework users.
If your agent runs through LangChain or LlamaIndex, the SDK wrap is one line around your existing chain or query engine. No base_url change, no configuration file, no proxy in your request path.
What the wrap does
The SDK wrapper hooks into each framework’s native callback system. LangChain exposes callbacks at the chain level; LlamaIndex exposes them at the query engine level. Rendfly registers a callback that captures every conversation — the full message sequence, tool calls, and model responses — and forwards it to the Rendfly eval pipeline asynchronously after the response is returned to your application.
Your model calls go directly from your application to your provider. Rendfly never sits in that path. The callback fires after the fact, so the overhead is a small async write rather than an extra network hop in the critical path.
Illustrative usage
Once the SDK ships, here’s what wrapping an existing LangChain or LlamaIndex setup will look like. These imports and function signatures are illustrative — they represent the planned API, not working code today.
LangChain Python:
from rendfly.langchain import wrap
# your existing chain, unchanged
chain = ConversationChain(llm=llm, memory=memory)
# one-line wrap — conversations are forwarded to your rendfly project
monitored_chain = wrap(chain, project_key="pk_live_...")
LlamaIndex Python:
from rendfly.llamaindex import wrap
# your existing query engine, unchanged
query_engine = index.as_query_engine()
# one-line wrap
monitored_engine = wrap(query_engine, project_key="pk_live_...")
After wrapping, call monitored_chain.run(...) or monitored_engine.query(...) exactly as before. The wrap is transparent at the call site.
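One way a call-site-transparent wrapper can work (a sketch, not the actual rendfly implementation) is attribute delegation: the wrapper forwards every attribute lookup to the wrapped object, so `.run(...)` and `.query(...)` behave exactly as before. `MonitoredWrapper` and `FakeChain` below are hypothetical names for illustration.

```python
class MonitoredWrapper:
    """Hypothetical sketch: delegate everything to the wrapped object so
    the call site (.run, .query, ...) is unchanged."""

    def __init__(self, inner, project_key: str):
        self._inner = inner
        self._project_key = project_key

    def __getattr__(self, name):
        # Only called for attributes not found on the wrapper itself,
        # so every method of the wrapped object passes straight through.
        return getattr(self._inner, name)

class FakeChain:
    def run(self, prompt: str) -> str:
        return f"echo: {prompt}"

wrapped = MonitoredWrapper(FakeChain(), project_key="pk_live_...")
print(wrapped.run("hi"))  # → echo: hi
```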
Trade-offs vs proxy mode
Neither approach is universally better — it depends on your setup:
SDK wrappers:
- Lower latency overhead (async callback, no extra network hop)
- Tighter framework integration — works even if your framework manages provider routing internally
- Requires installing the rendfly package and a small code change
- Language-specific: Python wrappers first, others based on demand
Proxy mode:
- No SDK install, no code change beyond base_url
- Language-agnostic: works from any HTTP client in any language
- Adds a small network hop (≤30ms p99 target)
- Works with raw API calls, not just framework-level agents
If you’re running a LangChain agent for a Slack help bot or a LlamaIndex agent over a knowledge base, the SDK wrap is likely the cleaner fit. If your agent is a TypeScript service calling OpenAI directly, proxy mode is simpler.
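To make the contrast concrete, here is a sketch of what the proxy-mode change amounts to: the same HTTP request with only the base URL swapped. This is illustrative, assuming an OpenAI-style chat completions endpoint; the proxy hostname below is made up, not a documented Rendfly endpoint.

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completions request against base_url."""
    body = json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "hi"}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Direct vs proxied: the method, headers, and body are identical;
# only the host differs. The proxy host is illustrative.
direct = build_chat_request("https://api.openai.com/v1", "sk-...")
proxied = build_chat_request("https://proxy.example.com/v1", "sk-...")
print(direct.full_url)
print(proxied.full_url)
```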
What about other frameworks?
If you’re using Vercel AI SDK, Mastra, AutoGen, CrewAI, or anything else — proxy mode covers you without any SDK dependency. SDK wrappers will expand beyond LangChain and LlamaIndex based on what people are actually using. If you want a specific framework prioritized, email hi@rendfly.com with what you’re running.
Related
- Proxy mode — the language-agnostic alternative that requires no SDK install.