AITools API Reference

AI tools for the Bot class using LangChain and OpenRouter: run_ai_with_tools (main LLM with tools), run_ai_simple (cheap LLM, single-turn), and run_ai_simple_with_fallback (cheap-first with sanity check and optional fallback to main LLM). The module also provides _default_sanity_check for validating cheap-LLM output.

tradingbot.utils.aitools

AI tools for the Bot class using LangChain and OpenRouter.

LangSmith tracing (EU): If LANGSMITH_API_KEY is set, tracing is enabled, and the EU endpoint is used when LANGSMITH_ENDPOINT is not set. Set LANGSMITH_TRACING=false to disable tracing.

Two LLMs are supported:

- Main LLM (OPENROUTER_MAIN_MODEL, default deepseek/deepseek-v3.2): used by run_ai_with_tools for complex, multi-turn flows with tool use (market data, portfolio, trades).
- Cheap LLM (OPENROUTER_CHEAP_MODEL, default openai/gpt-oss-120b): used by run_ai_simple for simple single-turn text tasks that do not need tools, e.g. summarization, extraction, classification, rewriting, or formatting. Prefer run_ai_simple for these tasks to save cost.
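The module's model-selection helpers (_get_main_model, _get_cheap_model) are not listed on this page; assuming they simply read the environment variables above with the documented defaults, they could be sketched as:

```python
import os

# Hypothetical sketch of the model-selection helpers referenced in the
# source below; the actual implementations may differ.
def get_main_model() -> str:
    # Main LLM for complex, multi-turn tool use.
    return os.environ.get("OPENROUTER_MAIN_MODEL", "deepseek/deepseek-v3.2")

def get_cheap_model() -> str:
    # Cheap LLM for single-turn, no-tools tasks.
    return os.environ.get("OPENROUTER_CHEAP_MODEL", "openai/gpt-oss-120b")
```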

Cheap-first with fallback: Use run_ai_simple_with_fallback() (or Bot.run_ai_simple_with_fallback) to try the cheap LLM first, verify the output with a sanity check, and retry with the main LLM if the result fails validation. This keeps cost low while guarding against unusable cheap-model output.

Extensibility: Subclasses can override Bot.get_ai_tools() to add custom tools; run_ai() merges them automatically. run_ai_with_tools() accepts extra_tools= and optional tool_names= to whitelist which base tools to include.
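The merge-and-override rule for extra_tools can be illustrated with plain stand-ins (SimpleNamespace substitutes here for real LangChain tools; only the .name attribute matters for the merge):

```python
from types import SimpleNamespace

# Stand-ins for LangChain tools. The tool names mirror the base tools
# mentioned in this reference; "my_custom_tool" is illustrative.
base_tools = [
    SimpleNamespace(name="get_market_data", origin="base"),
    SimpleNamespace(name="get_portfolio_status", origin="base"),
]
extra_tools = [
    SimpleNamespace(name="get_market_data", origin="extra"),  # same name: overrides base
    SimpleNamespace(name="my_custom_tool", origin="extra"),   # new name: added
]

# Extra tools are appended after base tools, so in the name -> tool map
# an extra tool with a base tool's name wins.
tools_by_name = {t.name: t for t in base_tools + extra_tools}
```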

run_ai_simple(system_prompt: str, user_message: str, model: Optional[str] = None) -> str

Run the AI for a single-turn, no-tools task (summarization, extraction, classification, rewriting). Uses the cheap LLM (OPENROUTER_CHEAP_MODEL, default openai/gpt-oss-120b). Pass model= to override. Use run_ai_with_tools when you need tool access (market data, portfolio, trades).

Source code in tradingbot/utils/aitools.py
def run_ai_simple(
    system_prompt: str,
    user_message: str,
    model: Optional[str] = None,
) -> str:
    """
    Run the AI for a single-turn, no-tools task (summarization, extraction, classification,
    rewriting). Uses the cheap LLM (OPENROUTER_CHEAP_MODEL, default openai/gpt-oss-120b).
    Pass model= to override. Use run_ai_with_tools when you need tool access (market data,
    portfolio, trades).
    """
    api_key = os.environ.get("OPENROUTER_API_KEY")
    if not api_key:
        raise ValueError(
            "OPENROUTER_API_KEY environment variable is not set. "
            "Set it to your OpenRouter API key."
        )
    model = model or _get_cheap_model()
    logger.debug("run_ai_simple: model=%s prompt_len=%s user_len=%s", model, len(system_prompt), len(user_message))
    llm = _create_llm(model, api_key)
    messages = [
        SystemMessage(content=system_prompt),
        HumanMessage(content=user_message),
    ]
    response = llm.invoke(messages)
    out = response.content if isinstance(response.content, str) else str(response.content)
    logger.debug("run_ai_simple: response_len=%s", len(out))
    return out

run_ai_simple_with_fallback(system_prompt: str, user_message: str, sanity_check: Optional[Callable[[str], bool]] = None, fallback_to_main: bool = True) -> str

Run a simple (no-tools) task with cheap LLM first; verify output for sanity; if validation fails, retry with main LLM. Use this to save cost when the task does not require tools.

sanity_check: Callable that takes the response string and returns True if sane. If None, uses _default_sanity_check (non-empty, no refusal/error prefix).

fallback_to_main: If True and the sanity check fails, run again with the main model (OPENROUTER_MAIN_MODEL) and return that result.

Returns the first sane response, or the main-model response after fallback.
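_default_sanity_check is described above as rejecting empty output and refusal/error prefixes; a minimal sketch consistent with that description (the exact prefix list is an assumption, not the module's actual list) might look like:

```python
def default_sanity_check(response: str) -> bool:
    # Reject empty/whitespace-only output and common refusal or error
    # prefixes. The prefixes below are illustrative only.
    text = response.strip()
    if not text:
        return False
    bad_prefixes = ("i can't", "i cannot", "i'm sorry", "error:", "tool error:")
    return not text.lower().startswith(bad_prefixes)
```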

Source code in tradingbot/utils/aitools.py
def run_ai_simple_with_fallback(
    system_prompt: str,
    user_message: str,
    sanity_check: Optional[Callable[[str], bool]] = None,
    fallback_to_main: bool = True,
) -> str:
    """
    Run a simple (no-tools) task with cheap LLM first; verify output for sanity;
    if validation fails, retry with main LLM. Use this to save cost when the task
    does not require tools.

    sanity_check: Callable that takes the response string and returns True if sane.
        If None, uses _default_sanity_check (non-empty, no refusal/error prefix).
    fallback_to_main: If True and sanity_check fails, run again with main model
        (OPENROUTER_MAIN_MODEL) and return that result.

    Returns the first sane response, or the main-model response after fallback.
    """
    check = sanity_check if sanity_check is not None else _default_sanity_check
    response = run_ai_simple(system_prompt, user_message, model=_get_cheap_model())
    sane = check(response)
    logger.debug("run_ai_simple_with_fallback: cheap response_len=%s sane=%s", len(response), sane)
    if sane:
        return response
    if fallback_to_main:
        logger.info("run_ai_simple_with_fallback: sanity check failed, retrying with main model")
        return run_ai_simple(system_prompt, user_message, model=_get_main_model())
    return response
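A common pattern is a task-specific sanity check, for example accepting the cheap model's answer only if it is a valid JSON object. The check below is self-contained; the commented call and its prompts are illustrative placeholders:

```python
import json

def is_valid_json_object(response: str) -> bool:
    # Accept only a parseable top-level JSON object.
    try:
        return isinstance(json.loads(response), dict)
    except json.JSONDecodeError:
        return False

# Hypothetical usage (requires OPENROUTER_API_KEY):
# result = run_ai_simple_with_fallback(
#     system_prompt="Return a JSON object with keys 'symbol' and 'action'.",
#     user_message="Buy 10 shares of ACME at market.",
#     sanity_check=is_valid_json_object,
# )
```

If the cheap model returns malformed JSON, the check fails and the call is retried with the main model.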

run_ai_with_tools(system_prompt: str, user_message: str, bot: 'Bot', model: Optional[str] = None, max_tool_rounds: int = 5, extra_tools: Optional[List] = None, tool_names: Optional[List[str]] = None) -> str

Run the AI with the given system prompt and user message, using tools bound to the bot. Uses the main LLM (OPENROUTER_MAIN_MODEL, default deepseek/deepseek-v3.2) for complex, multi-turn tool use. Pass model= to override.

extra_tools: Optional list of LangChain tools to add (e.g. from bot.get_ai_tools()). Tools with the same name as a base tool override the base tool.

tool_names: Optional whitelist of base tool names to include (e.g. ["get_market_data", "get_portfolio_status"]). If None, all base tools are included.

Returns the final model response as a string.

Source code in tradingbot/utils/aitools.py
def run_ai_with_tools(
    system_prompt: str,
    user_message: str,
    bot: "Bot",
    model: Optional[str] = None,
    max_tool_rounds: int = 5,
    extra_tools: Optional[List] = None,
    tool_names: Optional[List[str]] = None,
) -> str:
    """
    Run the AI with the given system prompt and user message, using tools bound to the bot.
    Uses the main LLM (OPENROUTER_MAIN_MODEL, default deepseek/deepseek-v3.2) for
    complex, multi-turn tool use. Pass model= to override.

    extra_tools: Optional list of LangChain tools to add (e.g. from bot.get_ai_tools()).
        Tools with the same name as a base tool override the base tool.
    tool_names: Optional whitelist of base tool names to include (e.g. ["get_market_data", "get_portfolio_status"]).
        If None, all base tools are included.

    Returns the final model response as a string.
    """
    api_key = os.environ.get("OPENROUTER_API_KEY")
    if not api_key:
        raise ValueError(
            "OPENROUTER_API_KEY environment variable is not set. "
            "Set it to your OpenRouter API key."
        )
    model = model or _get_main_model()
    llm = _create_llm(model, api_key)
    base_tools = _build_tools(bot)
    if tool_names is not None:
        base_tools = [t for t in base_tools if t.name in tool_names]
    tools = base_tools + (extra_tools or [])
    tools_by_name = {t.name: t for t in tools}
    llm_with_tools = llm.bind_tools(tools)

    messages = [
        SystemMessage(content=system_prompt),
        HumanMessage(content=user_message),
    ]
    response = None
    logger.info("run_ai_with_tools: model=%s max_tool_rounds=%s tool_count=%s", model, max_tool_rounds, len(tools_by_name))
    for round_num in range(max_tool_rounds):
        response = llm_with_tools.invoke(messages)
        tool_calls = getattr(response, "tool_calls", None) or []
        if not tool_calls:
            logger.debug("run_ai_with_tools: round %s no tool_calls, done", round_num + 1)
            break
        logger.info("run_ai_with_tools: round %s tool_calls=%s", round_num + 1, [t.get("name") for t in tool_calls])
        messages.append(response)
        for tool_call in tool_calls:
            name = tool_call.get("name")
            args = tool_call.get("args") or {}
            tid = tool_call.get("id", "")
            logger.debug("run_ai_with_tools: tool name=%s args=%s", name, args)
            if name not in tools_by_name:
                logger.warning("run_ai_with_tools: unknown tool name=%s", name)
                messages.append(
                    ToolMessage(content=f"Unknown tool: {name}", tool_call_id=tid)
                )
                continue
            try:
                result = tools_by_name[name].invoke(args)
                content = result if isinstance(result, str) else str(result)
                logger.info("run_ai_with_tools: tool name=%s result_len=%s preview=%s", name, len(content), (content[:80] + "..." if len(content) > 80 else content))
            except Exception as e:
                logger.exception("run_ai_with_tools: tool name=%s error", name)
                content = f"Tool error: {e!s}"
            messages.append(ToolMessage(content=content, tool_call_id=tid))
    if response is None:
        logger.warning("run_ai_with_tools: no response after %s rounds", max_tool_rounds)
        return ""
    out = response.content if isinstance(response.content, str) else str(response.content)
    logger.debug("run_ai_with_tools: final response len=%s", len(out))
    return out