Agent Runtime

CLI flags, built-in agent handlers, and custom handler authoring for the runtimeuse server.

The agent runtime is the process that runs inside the sandbox. It exposes a WebSocket server, receives invocations from the Python client, and delegates work to an agent handler.

CLI

npx -y runtimeuse@latest                            # OpenAI handler on port 8080
npx -y runtimeuse@latest --agent claude             # Claude handler
npx -y runtimeuse@latest --port 3000                # custom port
npx -y runtimeuse@latest --handler ./my-handler.js  # custom handler entrypoint

Built-in Handlers

OpenAI Handler

Requires OPENAI_API_KEY to be set in the environment. The handler runs the agent with shell access and web search enabled.

export OPENAI_API_KEY=your_openai_api_key
npx -y runtimeuse@latest

Claude Handler

Requires the @anthropic-ai/claude-code CLI and ANTHROPIC_API_KEY. Always set IS_SANDBOX=1 and CLAUDE_SKIP_ROOT_CHECK=1 in the sandbox environment.

npm install -g @anthropic-ai/claude-code
export ANTHROPIC_API_KEY=your_anthropic_api_key
export IS_SANDBOX=1
export CLAUDE_SKIP_ROOT_CHECK=1
npx -y runtimeuse@latest --agent claude

Programmatic Startup

If you want to embed RuntimeUse directly in your own Node process, start it programmatically:

import { RuntimeUseServer, openaiHandler } from "runtimeuse";

const server = new RuntimeUseServer({
  handler: openaiHandler,
  port: 8080,
});

await server.startListening();

Custom Handlers

When the built-in handlers are not enough, you can pass your own handler to RuntimeUseServer:

import { RuntimeUseServer } from "runtimeuse";
import type {
  AgentHandler,
  AgentInvocation,
  AgentResult,
  MessageSender,
} from "runtimeuse";

const handler: AgentHandler = {
  async run(
    invocation: AgentInvocation,
    sender: MessageSender,
  ): Promise<AgentResult> {
    sender.sendAssistantMessage(["Running agent..."]);

    const output = await myAgent(
      invocation.systemPrompt,
      invocation.userPrompt,
    );

    return {
      type: "structured_output",
      structuredOutput: output,
      metadata: { duration_ms: 1500 },
    };
  },
};

const server = new RuntimeUseServer({ handler, port: 8080 });
await server.startListening();

Handler Contracts

Your handler receives an AgentInvocation with:

  • systemPrompt (string): System prompt for the agent.
  • userPrompt (string): User prompt sent from the Python client.
  • model (string): Model name passed by the client.
  • outputFormat ({ type: "json_schema"; schema: ... } | undefined): Present when the client requests structured output. Pass it to your agent to enforce the schema.
  • signal (AbortSignal): Fires when the client sends a cancel message. Pass it to any async operations that support cancellation.
  • logger (Logger): Use invocation.logger.log(msg) to emit log lines visible in sandbox logs.
  • env (Record<string, string> | undefined): Environment variables from the client's agent_env. Merge with process.env when spawning subprocesses.
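For example, the contract says to merge the client's env over process.env when spawning subprocesses. The sketch below illustrates one way to do that merge; mergeEnv is a hypothetical helper (not part of the runtimeuse API), and the merge order shown (client values win) is an assumption.

// Hypothetical helper: overlay the client's agent_env on the inherited
// environment, dropping undefined entries so the result is safe to pass
// to a subprocess. Client-supplied values take precedence.
function mergeEnv(
  base: Record<string, string | undefined>,
  overlay?: Record<string, string>,
): Record<string, string> {
  const merged: Record<string, string> = {};
  for (const [key, value] of Object.entries(base)) {
    if (value !== undefined) merged[key] = value;
  }
  return { ...merged, ...overlay };
}

// Usage: inside a handler you might call
// mergeEnv(process.env, invocation.env) before spawning.
const env = mergeEnv(
  { PATH: "/usr/bin", OPENAI_API_KEY: undefined },
  { MY_FLAG: "1", PATH: "/opt/bin" },
);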

Use MessageSender to stream intermediate output before returning the final result:

  • sendAssistantMessage(textBlocks: string[]): emit text blocks the Python client receives via on_assistant_message.
  • sendErrorMessage(error: string, metadata?: Record<string, unknown>): signal a non-fatal error before aborting.
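The call pattern can be sketched with a minimal in-memory stand-in for MessageSender. The real sender is provided by RuntimeUseServer; this mock only records calls so the flow is visible, and the message contents are invented for illustration.

// Mock sender with the same two-method shape as MessageSender,
// recording each call instead of sending it over the WebSocket.
type Sent = { kind: "assistant" | "error"; payload: unknown };

const sent: Sent[] = [];
const sender = {
  sendAssistantMessage(textBlocks: string[]) {
    sent.push({ kind: "assistant", payload: textBlocks });
  },
  sendErrorMessage(error: string, metadata?: Record<string, unknown>) {
    sent.push({ kind: "error", payload: { error, metadata } });
  },
};

// Typical flow: stream progress as the agent works, then surface a
// non-fatal error without ending the run.
sender.sendAssistantMessage(["Cloning repository..."]);
sender.sendAssistantMessage(["Running tests..."]);
sender.sendErrorMessage("flaky test retried", { attempt: 1 });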

Return an AgentResult from your handler:

// Text result
return { type: "text", text: "...", metadata: { duration_ms: 100 } };

// Structured output result
return { type: "structured_output", structuredOutput: { file_count: 42 }, metadata: {} };

metadata is optional and is passed through to result.metadata on the Python side.
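The two result shapes form a discriminated union on the type field, which is how a consumer can tell them apart. The sketch below uses a simplified local mirror of the union rather than the real exported type; summarize is a hypothetical helper for illustration.

// Simplified local mirror of the two AgentResult shapes shown above.
type AgentResult =
  | { type: "text"; text: string; metadata?: Record<string, unknown> }
  | {
      type: "structured_output";
      structuredOutput: unknown;
      metadata?: Record<string, unknown>;
    };

// Narrowing on the `type` discriminant selects the right payload field.
function summarize(result: AgentResult): string {
  if (result.type === "text") {
    return `text: ${result.text}`;
  }
  return `structured: ${JSON.stringify(result.structuredOutput)}`;
}

const textSummary = summarize({ type: "text", text: "hi" });
const structuredSummary = summarize({
  type: "structured_output",
  structuredOutput: { file_count: 42 },
});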
