tooluniverse.llm_clients module

class tooluniverse.llm_clients.BaseLLMClient

Bases: object

test_api()
infer(messages, temperature, max_tokens, return_json, custom_format=None, max_retries=5, retry_delay=5)
infer_stream(messages, temperature, max_tokens, return_json, custom_format=None, max_retries=5, retry_delay=5)

Default streaming implementation falls back to regular inference.
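
A minimal sketch of how a subclass can satisfy this interface. The EchoClient class and its echo behaviour are purely illustrative, not part of the library, and assume BaseLLMClient imposes no requirements beyond the methods shown above:

    import logging
    from tooluniverse.llm_clients import BaseLLMClient

    class EchoClient(BaseLLMClient):
        """Toy subclass used only to illustrate the interface (hypothetical)."""

        def __init__(self, logger):
            self.logger = logger

        def test_api(self):
            # Nothing external to probe in this toy client.
            return True

        def infer(self, messages, temperature, max_tokens, return_json,
                  custom_format=None, max_retries=5, retry_delay=5):
            # Echo the last user message; a real client would call a model API here.
            return messages[-1]["content"]

    client = EchoClient(logging.getLogger(__name__))
    print(client.infer([{"role": "user", "content": "ping"}],
                       temperature=0.0, max_tokens=16, return_json=False))
    # infer_stream() is inherited and, per the docstring above, falls back to infer().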

class tooluniverse.llm_clients.AzureOpenAIClient

Bases: BaseLLMClient

DEFAULT_MODEL_LIMITS: Dict[str, Dict[str, int]] = {
    'embedding-ada': {'context_window': 8192, 'max_output': 8192},
    'gpt-4.1': {'context_window': 1047576, 'max_output': 32768},
    'gpt-4.1-mini': {'context_window': 1047576, 'max_output': 32768},
    'gpt-4.1-nano': {'context_window': 1047576, 'max_output': 32768},
    'gpt-4o': {'context_window': 128000, 'max_output': 16384},
    'gpt-4o-0806': {'context_window': 128000, 'max_output': 16384},
    'gpt-4o-1120': {'context_window': 128000, 'max_output': 16384},
    'gpt-4o-mini-0718': {'context_window': 128000, 'max_output': 16384},
    'o3-mini': {'context_window': 200000, 'max_output': 100000},
    'o3-mini-0131': {'context_window': 200000, 'max_output': 100000},
    'o4-mini': {'context_window': 200000, 'max_output': 100000},
    'o4-mini-0416': {'context_window': 200000, 'max_output': 100000},
    'text-embedding-3-large': {'context_window': 8192, 'max_output': 8192},
    'text-embedding-3-small': {'context_window': 8192, 'max_output': 8192},
}
__init__(model_id, api_version, logger)
test_api()
infer(messages, temperature, max_tokens, return_json, custom_format=None, max_retries=5, retry_delay=5)
infer_stream(messages, temperature, max_tokens, return_json, custom_format=None, max_retries=5, retry_delay=5)

Default streaming implementation falls back to regular inference.
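
A minimal usage sketch, assuming Azure OpenAI credentials (endpoint and API key) are supplied via the environment and that logger accepts a standard logging.Logger. The API version string is illustrative; 'gpt-4o' is taken from DEFAULT_MODEL_LIMITS above:

    import logging
    from tooluniverse.llm_clients import AzureOpenAIClient

    logger = logging.getLogger(__name__)
    client = AzureOpenAIClient(model_id="gpt-4o",         # key in DEFAULT_MODEL_LIMITS
                               api_version="2024-06-01",  # illustrative version string
                               logger=logger)

    if client.test_api():  # verify connectivity before issuing real requests
        reply = client.infer(
            messages=[{"role": "user", "content": "Summarize aspirin's mechanism of action."}],
            temperature=0.2,
            max_tokens=256,
            return_json=False,
        )
        print(reply)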

class tooluniverse.llm_clients.GeminiClient

Bases: BaseLLMClient

__init__(model_name, logger)
test_api()
infer(messages, temperature, max_tokens, return_json, custom_format=None, max_retries=5, retry_delay=5)
infer_stream(messages, temperature, max_tokens, return_json, custom_format=None, max_retries=5, retry_delay=5)

Default streaming implementation falls back to regular inference.
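
A minimal usage sketch, assuming Gemini credentials come from the environment; the model name is illustrative and logger is assumed to accept a standard logging.Logger:

    import logging
    from tooluniverse.llm_clients import GeminiClient

    logger = logging.getLogger(__name__)
    client = GeminiClient(model_name="gemini-2.5-flash", logger=logger)  # name is illustrative

    if client.test_api():
        result = client.infer(
            messages=[{"role": "user", "content": "Return JSON with keys 'drug' and 'class' for metformin."}],
            temperature=0.0,
            max_tokens=200,
            return_json=True,  # ask the client for a JSON-formatted response
        )
        print(result)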

class tooluniverse.llm_clients.OpenRouterClient

Bases: BaseLLMClient

OpenRouter client that uses the OpenAI SDK with a custom base URL. Supports models from OpenAI, Anthropic, Google, Qwen, and many other providers.

DEFAULT_MODEL_LIMITS: Dict[str, Dict[str, int]] = {
    'anthropic/claude-sonnet-4.5': {'context_window': 1000000, 'max_output': 16384},
    'google/gemini-2.5-flash': {'context_window': 1000000, 'max_output': 65536},
    'google/gemini-2.5-pro': {'context_window': 1000000, 'max_output': 65536},
    'openai/gpt-5': {'context_window': 400000, 'max_output': 128000},
    'openai/gpt-5-codex': {'context_window': 400000, 'max_output': 128000},
}
__init__(model_id, logger)
test_api()

Test API connectivity with minimal token usage.

infer(messages, temperature, max_tokens, return_json, custom_format=None, max_retries=5, retry_delay=5)

Execute inference using OpenRouter.
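
A minimal usage sketch, assuming an OpenRouter API key is available in the environment; the model id is taken from DEFAULT_MODEL_LIMITS above and logger is assumed to accept a standard logging.Logger:

    import logging
    from tooluniverse.llm_clients import OpenRouterClient

    logger = logging.getLogger(__name__)
    client = OpenRouterClient(model_id="openai/gpt-5", logger=logger)

    if client.test_api():  # cheap connectivity check, per the docstring above
        print(client.infer(
            messages=[{"role": "user", "content": "Give a one-sentence summary of CRISPR."}],
            temperature=0.3,
            max_tokens=128,
            return_json=False,
        ))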

class tooluniverse.llm_clients.VLLMClient

Bases: BaseLLMClient

__init__(model_name, server_url, logger)
test_api()
infer(messages, temperature, max_tokens, return_json, custom_format=None, max_retries=5, retry_delay=5)
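
A minimal usage sketch; both the model name and server_url are illustrative and should point at your own vLLM deployment, and logger is assumed to accept a standard logging.Logger:

    import logging
    from tooluniverse.llm_clients import VLLMClient

    logger = logging.getLogger(__name__)
    client = VLLMClient(
        model_name="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model name
        server_url="http://localhost:8000/v1",          # illustrative endpoint
        logger=logger,
    )

    if client.test_api():
        print(client.infer(
            messages=[{"role": "user", "content": "Hello from vLLM."}],
            temperature=0.5,
            max_tokens=64,
            return_json=False,
        ))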