Trace Everything
Capture full execution traces across LLM calls, tool invocations, and custom logic. See exactly what your agent did.
Your AI agent made a bad decision. A customer got a wrong answer. A tool call failed silently. Now what?
Opswald records every step your agent takes — every LLM call, every tool invocation, every decision branch — so you can trace exactly what happened and why.
Replay Failures
Step through agent runs span by span. Find the exact moment things went wrong and understand why.
Decision Graphs
Visualize how your agent made decisions with interactive decision trees in the dashboard.
Zero Code Setup
Use the Opswald proxy to instrument OpenAI and Anthropic calls without changing a single line of code.
Point your LLM client at the Opswald proxy instead of the provider directly. Add your Opswald API key as a header. That’s it.
import openai

client = openai.OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://proxy.opswald.com/openai",   # swap the URL
    default_headers={"X-Opswald-Key": "ops_your_key"},  # add your Opswald key
)

# This call is now automatically traced — same code, full visibility
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is the return policy?"}],
)

Open the dashboard and see your trace appear in real time.