tooluniverse.smcp module

Scientific Model Context Protocol (SMCP) - Enhanced MCP Server with ToolUniverse Integration

SMCP is a sophisticated MCP (Model Context Protocol) server that bridges the gap between AI agents and scientific tools. It seamlessly integrates ToolUniverse’s extensive collection of 350+ scientific tools with the MCP protocol, enabling AI systems to access scientific databases, perform complex analyses, and execute scientific workflows.

The SMCP module provides a complete solution for exposing scientific computational resources through the standardized MCP protocol, making it easy for AI agents to discover, understand, and execute scientific tools in a unified manner.

Usage Patterns:

Quick Start:

from tooluniverse.smcp import SMCP

# High-performance server with custom configuration
server = SMCP(
    name="Production Scientific API",
    tool_categories=["uniprot", "ChEMBL", "opentarget", "hpa"],
    max_workers=20,
    search_enabled=True
)
server.run_simple(
    transport="http",
    host="0.0.0.0",
    port=7000
)

Client Integration:

# Using an MCP client to discover and use tools
import json

# Discover protein analysis tools
response = await client.call_tool("find_tools", {
    "query": "protein structure analysis",
    "limit": 5
})

# Use a discovered tool
result = await client.call_tool("UniProt_get_entry_by_accession", {
    "arguments": json.dumps({"accession": "P05067"})
})

Architecture:

┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   MCP Client    │◄──►│       SMCP       │◄──►│  ToolUniverse   │
│   (AI Agent)    │    │      Server      │    │  (350+ Tools)   │
└─────────────────┘    └──────────────────┘    └─────────────────┘
                                │
                                ▼
                       ┌──────────────────┐
                       │    Scientific    │
                       │   Databases &    │
                       │     Services     │
                       └──────────────────┘

The SMCP server acts as an intelligent middleware layer that:

  1. Receives MCP requests from AI agents/clients
  2. Translates requests to ToolUniverse tool calls
  3. Executes tools against scientific databases/services
  4. Returns formatted results via the MCP protocol
  5. Provides intelligent tool discovery and recommendation

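As a concrete illustration, here is a minimal sketch of the message shapes involved in steps 1–5. The tools/call request and response follow the MCP JSON-RPC 2.0 convention; the tool name, arguments, and result text are illustrative, not captured output.

# 1-2. JSON-RPC 2.0 request an MCP client sends for a ToolUniverse tool
mcp_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "UniProt_get_entry_by_accession",
        "arguments": {"accession": "P05067"},
    },
}

# 3. SMCP translates the params into a ToolUniverse-style function call
#    (see ToolUniverse.run_one_function later on this page)
function_call = {
    "name": mcp_request["params"]["name"],
    "arguments": mcp_request["params"]["arguments"],
}

# 4-5. The tool result is wrapped back into a JSON-RPC response for the client
mcp_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "...UniProt entry data..."}]},
}
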
Integration Points:

MCP Protocol Layer:
  • Standard MCP methods (tools/list, tools/call, etc.)

  • Custom scientific methods (tools/find, tools/search)

  • Transport-agnostic communication (stdio, HTTP, SSE)

  • Proper error codes and JSON-RPC 2.0 compliance

ToolUniverse Integration:
  • Dynamic tool loading and configuration

  • Schema transformation and validation

  • Execution wrapper with error handling

  • Category-based tool organization

AI Agent Interface:
  • Natural language tool discovery

  • Contextual tool recommendations

  • Structured parameter schemas

  • Comprehensive tool documentation

class tooluniverse.smcp.ThreadPoolExecutor(max_workers=None, thread_name_prefix='', initializer=None, initargs=())[source][source]

Bases: Executor

__init__(max_workers=None, thread_name_prefix='', initializer=None, initargs=())[source][source]

Initializes a new ThreadPoolExecutor instance.

Parameters:
  • max_workers – The maximum number of threads that can be used to execute the given calls.

  • thread_name_prefix – An optional name prefix to give our threads.

  • initializer – A callable used to initialize worker threads.

  • initargs – A tuple of arguments to pass to the initializer.

submit(fn, /, *args, **kwargs)[source][source]

Submits a callable to be executed with the given arguments.

Schedules the callable to be executed as fn(*args, **kwargs) and returns a Future instance representing the execution of the callable.

Returns:

A Future representing the given call.

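For example, a minimal sketch of typical submit usage (standard concurrent.futures behavior; the square function is illustrative):

from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as executor:
    future = executor.submit(square, 3)  # scheduled as square(3)
    print(future.result())               # blocks until the call finishes; prints 9
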
shutdown(wait=True, *, cancel_futures=False)[source][source]

Clean-up the resources associated with the Executor.

It is safe to call this method several times. After it has been called, no other methods can be called on this executor.

Parameters:
  • wait – If True then shutdown will not return until all running futures have finished executing and the resources used by the executor have been reclaimed.

  • cancel_futures – If True then shutdown will cancel all pending futures. Futures that are completed or running will not be cancelled.

class tooluniverse.smcp.FastMCP(name: str | None = None, instructions: str | None = None, *, version: str | None = None, auth: AuthProvider | None | NotSetT = Ellipsis, middleware: list[Middleware] | None = None, lifespan: Callable[[FastMCP[LifespanResultT]], AbstractAsyncContextManager[LifespanResultT]] | None = None, dependencies: list[str] | None = None, resource_prefix_format: Literal['protocol', 'path'] | None = None, mask_error_details: bool | None = None, tools: list[Tool | Callable[..., Any]] | None = None, tool_transformations: dict[str, ToolTransformConfig] | None = None, tool_serializer: Callable[[Any], str] | None = None, include_tags: set[str] | None = None, exclude_tags: set[str] | None = None, include_fastmcp_meta: bool | None = None, on_duplicate_tools: DuplicateBehavior | None = None, on_duplicate_resources: DuplicateBehavior | None = None, on_duplicate_prompts: DuplicateBehavior | None = None, log_level: str | None = None, debug: bool | None = None, host: str | None = None, port: int | None = None, sse_path: str | None = None, message_path: str | None = None, streamable_http_path: str | None = None, json_response: bool | None = None, stateless_http: bool | None = None, sampling_handler: ServerSamplingHandler[LifespanResultT] | None = None, sampling_handler_behavior: Literal['always', 'fallback'] | None = None)[source][source]

Bases: Generic[LifespanResultT]

__init__(name: str | None = None, instructions: str | None = None, *, version: str | None = None, auth: AuthProvider | None | NotSetT = Ellipsis, middleware: list[Middleware] | None = None, lifespan: Callable[[FastMCP[LifespanResultT]], AbstractAsyncContextManager[LifespanResultT]] | None = None, dependencies: list[str] | None = None, resource_prefix_format: Literal['protocol', 'path'] | None = None, mask_error_details: bool | None = None, tools: list[Tool | Callable[..., Any]] | None = None, tool_transformations: dict[str, ToolTransformConfig] | None = None, tool_serializer: Callable[[Any], str] | None = None, include_tags: set[str] | None = None, exclude_tags: set[str] | None = None, include_fastmcp_meta: bool | None = None, on_duplicate_tools: DuplicateBehavior | None = None, on_duplicate_resources: DuplicateBehavior | None = None, on_duplicate_prompts: DuplicateBehavior | None = None, log_level: str | None = None, debug: bool | None = None, host: str | None = None, port: int | None = None, sse_path: str | None = None, message_path: str | None = None, streamable_http_path: str | None = None, json_response: bool | None = None, stateless_http: bool | None = None, sampling_handler: ServerSamplingHandler[LifespanResultT] | None = None, sampling_handler_behavior: Literal['always', 'fallback'] | None = None)[source][source]
__repr__() str[source][source]

Return repr(self).

property settings: Settings[source]
property name: str[source]
property instructions: str | None[source]
property version: str | None[source]
async run_async(transport: Literal['stdio', 'http', 'sse', 'streamable-http'] | None = None, show_banner: bool = True, **transport_kwargs: Any) None[source][source]

Run the FastMCP server asynchronously.

Parameters:

transport – Transport protocol to use ("stdio", "http", "sse", or "streamable-http")

run(transport: Literal['stdio', 'http', 'sse', 'streamable-http'] | None = None, show_banner: bool = True, **transport_kwargs: Any) None[source][source]

Run the FastMCP server. Note this is a synchronous function.

Parameters:

transport – Transport protocol to use ("stdio", "http", "sse", or "streamable-http")

add_middleware(middleware: Middleware) None[source][source]
async get_tools() dict[str, Tool][source][source]

Get all registered tools, indexed by registered key.

async get_tool(key: str) Tool[source][source]
async get_resources() dict[str, Resource][source][source]

Get all registered resources, indexed by registered key.

async get_resource(key: str) Resource[source][source]
async get_resource_templates() dict[str, ResourceTemplate][source][source]

Get all registered resource templates, indexed by registered key.

async get_resource_template(key: str) ResourceTemplate[source][source]

Get a registered resource template by key.

async get_prompts() dict[str, Prompt][source][source]

List all available prompts.

async get_prompt(key: str) Prompt[source][source]
custom_route(path: str, methods: list[str], name: str | None = None, include_in_schema: bool = True) Callable[[Callable[[Request], Awaitable[Response]]], Callable[[Request], Awaitable[Response]]][source][source]

Decorator to register a custom HTTP route on the FastMCP server.

Allows adding arbitrary HTTP endpoints outside the standard MCP protocol, which can be useful for OAuth callbacks, health checks, or admin APIs. The handler function must be an async function that accepts a Starlette Request and returns a Response.

Parameters:
  • path – URL path for the route (e.g., “/auth/callback”)

  • methods – List of HTTP methods to support (e.g., [“GET”, “POST”])

  • name – Optional name for the route (to reference this route with Starlette’s reverse URL lookup feature)

  • include_in_schema – Whether to include in OpenAPI schema, defaults to True

Example

Register a custom HTTP route for a health check endpoint:

@server.custom_route("/health", methods=["GET"])
async def health_check(request: Request) -> Response:
    return JSONResponse({"status": "ok"})

add_tool(tool: Tool) Tool[source][source]

Add a tool to the server.

The tool function can optionally request a Context object by adding a parameter with the Context type annotation. See the @tool decorator for examples.

Parameters:

tool – The Tool instance to register

Returns:

The tool instance that was added to the server.

remove_tool(name: str) None[source][source]

Remove a tool from the server.

Parameters:

name – The name of the tool to remove

Raises:

NotFoundError – If the tool is not found

add_tool_transformation(tool_name: str, transformation: ToolTransformConfig) None[source][source]

Add a tool transformation.

remove_tool_transformation(tool_name: str) None[source][source]

Remove a tool transformation.

tool(name_or_fn: Callable[[...], Any], *, name: str | None = None, title: str | None = None, description: str | None = None, tags: set[str] | None = None, output_schema: dict[str, Any] | None | ellipsis = NotSet, annotations: ToolAnnotations | dict[str, Any] | None = None, exclude_args: list[str] | None = None, meta: dict[str, Any] | None = None, enabled: bool | None = None) FunctionTool[source][source]
tool(name_or_fn: str | None = None, *, name: str | None = None, title: str | None = None, description: str | None = None, tags: set[str] | None = None, output_schema: dict[str, Any] | None | ellipsis = NotSet, annotations: ToolAnnotations | dict[str, Any] | None = None, exclude_args: list[str] | None = None, meta: dict[str, Any] | None = None, enabled: bool | None = None) Callable[[Callable[[...], Any]], FunctionTool]

Decorator to register a tool.

Tools can optionally request a Context object by adding a parameter with the Context type annotation. The context provides access to MCP capabilities like logging, progress reporting, and resource access.

This decorator supports multiple calling patterns:

  • @server.tool (without parentheses)

  • @server.tool() (with empty parentheses)

  • @server.tool("custom_name") (with name as first argument)

  • @server.tool(name="custom_name") (with name as keyword argument)

  • server.tool(function, name="custom_name") (direct function call)

Parameters:
  • name_or_fn – Either a function (when used as @tool), a string name, or None

  • name – Optional name for the tool (keyword-only, alternative to name_or_fn)

  • description – Optional description of what the tool does

  • tags – Optional set of tags for categorizing the tool

  • output_schema – Optional JSON schema for the tool’s output

  • annotations – Optional annotations about the tool’s behavior

  • exclude_args – Optional list of argument names to exclude from the tool schema

  • meta – Optional meta information about the tool

  • enabled – Optional boolean to enable or disable the tool

Examples

Register a tool, optionally with a custom name:

@server.tool
def my_tool(x: int) -> str:
    return str(x)

# Register a tool with a custom name
@server.tool("custom_name")
def my_tool(x: int) -> str:
    return str(x)

@server.tool(name="custom_name")
def my_tool(x: int) -> str:
    return str(x)

# Direct function call
server.tool(my_function, name="custom_name")

add_resource(resource: Resource) Resource[source][source]

Add a resource to the server.

Parameters:

resource – A Resource instance to add

Returns:

The resource instance that was added to the server.

add_template(template: ResourceTemplate) ResourceTemplate[source][source]

Add a resource template to the server.

Parameters:

template – A ResourceTemplate instance to add

Returns:

The template instance that was added to the server.

add_resource_fn(fn: Callable[[...], Any], uri: str, name: str | None = None, description: str | None = None, mime_type: str | None = None, tags: set[str] | None = None) None[source][source]

Add a resource or template to the server from a function.

If the URI contains parameters (e.g. “resource://{param}”) or the function has parameters, it will be registered as a template resource.

Parameters:
  • fn – The function to register as a resource

  • uri – The URI for the resource

  • name – Optional name for the resource

  • description – Optional description of the resource

  • mime_type – Optional MIME type for the resource

  • tags – Optional set of tags for categorizing the resource

resource(uri: str, *, name: str | None = None, title: str | None = None, description: str | None = None, mime_type: str | None = None, tags: set[str] | None = None, enabled: bool | None = None, annotations: Annotations | dict[str, Any] | None = None, meta: dict[str, Any] | None = None) Callable[[Callable[[...], Any]], Resource | ResourceTemplate][source][source]

Decorator to register a function as a resource.

The function will be called when the resource is read to generate its content. The function can return:

  • str for text content

  • bytes for binary content

  • other types, which will be converted to JSON

Resources can optionally request a Context object by adding a parameter with the Context type annotation. The context provides access to MCP capabilities like logging, progress reporting, and session information.

If the URI contains parameters (e.g. “resource://{param}”) or the function has parameters, it will be registered as a template resource.

Parameters:
  • uri – URI for the resource (e.g. “resource://my-resource” or “resource://{param}”)

  • name – Optional name for the resource

  • description – Optional description of the resource

  • mime_type – Optional MIME type for the resource

  • tags – Optional set of tags for categorizing the resource

  • enabled – Optional boolean to enable or disable the resource

  • annotations – Optional annotations about the resource’s behavior

  • meta – Optional meta information about the resource

Examples

Register a resource with a custom name:

@server.resource("resource://my-resource")
def get_data() -> str:
    return "Hello, world!"

@server.resource("resource://my-resource")
async def get_data() -> str:
    data = await fetch_data()
    return f"Hello, world! {data}"

@server.resource("resource://{city}/weather")
def get_weather(city: str) -> str:
    return f"Weather for {city}"

@server.resource("resource://{city}/weather")
async def get_weather_with_context(city: str, ctx: Context) -> str:
    await ctx.info(f"Fetching weather for {city}")
    return f"Weather for {city}"

@server.resource("resource://{city}/weather")
async def get_weather(city: str) -> str:
    data = await fetch_weather(city)
    return f"Weather for {city}: {data}"

add_prompt(prompt: Prompt) Prompt[source][source]

Add a prompt to the server.

Parameters:

prompt – A Prompt instance to add

Returns:

The prompt instance that was added to the server.

prompt(name_or_fn: Callable[[...], Any], *, name: str | None = None, title: str | None = None, description: str | None = None, tags: set[str] | None = None, enabled: bool | None = None, meta: dict[str, Any] | None = None) FunctionPrompt[source][source]
prompt(name_or_fn: str | None = None, *, name: str | None = None, title: str | None = None, description: str | None = None, tags: set[str] | None = None, enabled: bool | None = None, meta: dict[str, Any] | None = None) Callable[[Callable[[...], Any]], FunctionPrompt]

Decorator to register a prompt.

Prompts can optionally request a Context object by adding a parameter with the Context type annotation. The context provides access to MCP capabilities like logging, progress reporting, and session information.

This decorator supports multiple calling patterns:

  • @server.prompt (without parentheses)

  • @server.prompt() (with empty parentheses)

  • @server.prompt("custom_name") (with name as first argument)

  • @server.prompt(name="custom_name") (with name as keyword argument)

  • server.prompt(function, name="custom_name") (direct function call)

Parameters:
  • name_or_fn – Either a function (when used as @prompt), a string name, or None

  • name – Optional name for the prompt (keyword-only, alternative to name_or_fn)

  • description – Optional description of what the prompt does

  • tags – Optional set of tags for categorizing the prompt

  • enabled – Optional boolean to enable or disable the prompt

  • meta – Optional meta information about the prompt

Examples:

@server.prompt
def analyze_table(table_name: str) -> list[Message]:
    schema = read_table_schema(table_name)
    return [
        {
            "role": "user",
            "content": f"Analyze this schema:\n{schema}"
        }
    ]

@server.prompt()
async def analyze_with_context(table_name: str, ctx: Context) -> list[Message]:
    await ctx.info(f"Analyzing table {table_name}")
    schema = read_table_schema(table_name)
    return [
        {
            "role": "user",
            "content": f"Analyze this schema:\n{schema}"
        }
    ]

@server.prompt("custom_name")
async def analyze_file(path: str) -> list[Message]:
    content = await read_file(path)
    return [
        {
            "role": "user",
            "content": {
                "type": "resource",
                "resource": {
                    "uri": f"file://{path}",
                    "text": content
                }
            }
        }
    ]

@server.prompt(name="custom_name")
def another_prompt(data: str) -> list[Message]:
    return [{"role": "user", "content": data}]

# Direct function call
server.prompt(my_function, name="custom_name")

async run_stdio_async(show_banner: bool = True) None[source][source]

Run the server using stdio transport.

async run_http_async(show_banner: bool = True, transport: Literal['http', 'streamable-http', 'sse'] = 'http', host: str | None = None, port: int | None = None, log_level: str | None = None, path: str | None = None, uvicorn_config: dict[str, Any] | None = None, middleware: list[Middleware] | None = None, stateless_http: bool | None = None) None[source][source]

Run the server using HTTP transport.

Parameters:
  • transport – Transport protocol to use: "http" (default, equivalent to "streamable-http") or "sse"

  • host – Host address to bind to (defaults to settings.host)

  • port – Port to bind to (defaults to settings.port)

  • log_level – Log level for the server (defaults to settings.log_level)

  • path – Path for the endpoint (defaults to settings.streamable_http_path or settings.sse_path)

  • uvicorn_config – Additional configuration for the Uvicorn server

  • middleware – A list of middleware to apply to the app

  • stateless_http – Whether to use stateless HTTP (defaults to settings.stateless_http)

async run_sse_async(host: str | None = None, port: int | None = None, log_level: str | None = None, path: str | None = None, uvicorn_config: dict[str, Any] | None = None) None[source][source]

Run the server using SSE transport.

sse_app(path: str | None = None, message_path: str | None = None, middleware: list[Middleware] | None = None) StarletteWithLifespan[source][source]

Create a Starlette app for the SSE server.

Parameters:
  • path – The path to the SSE endpoint

  • message_path – The path to the message endpoint

  • middleware – A list of middleware to apply to the app

streamable_http_app(path: str | None = None, middleware: list[Middleware] | None = None) StarletteWithLifespan[source][source]

Create a Starlette app for the StreamableHTTP server.

Parameters:
  • path – The path to the StreamableHTTP endpoint

  • middleware – A list of middleware to apply to the app

http_app(path: str | None = None, middleware: list[Middleware] | None = None, json_response: bool | None = None, stateless_http: bool | None = None, transport: Literal['http', 'streamable-http', 'sse'] = 'http') StarletteWithLifespan[source][source]

Create a Starlette app using the specified HTTP transport.

Parameters:
  • path – The path for the HTTP endpoint

  • middleware – A list of middleware to apply to the app

  • transport – Transport protocol to use: "http" (default, equivalent to "streamable-http") or "sse"

Returns:

A Starlette application configured with the specified transport

async run_streamable_http_async(host: str | None = None, port: int | None = None, log_level: str | None = None, path: str | None = None, uvicorn_config: dict[str, Any] | None = None) None[source][source]
mount(server: FastMCP[LifespanResultT], prefix: str | None = None, as_proxy: bool | None = None, *, tool_separator: str | None = None, resource_separator: str | None = None, prompt_separator: str | None = None) None[source][source]

Mount another FastMCP server on this server with an optional prefix.

Unlike importing (with import_server), mounting establishes a dynamic connection between servers. When a client interacts with a mounted server’s objects through the parent server, requests are forwarded to the mounted server in real-time. This means changes to the mounted server are immediately reflected when accessed through the parent.

When a server is mounted with a prefix:

  • Tools from the mounted server are accessible with prefixed names. Example: If the server has a tool named "get_weather", it will be available as "prefix_get_weather".

  • Resources are accessible with prefixed URIs. Example: If the server has a resource with URI "weather://forecast", it will be available as "weather://prefix/forecast".

  • Templates are accessible with prefixed URI templates. Example: If the server has a template with URI "weather://location/{id}", it will be available as "weather://prefix/location/{id}".

  • Prompts are accessible with prefixed names. Example: If the server has a prompt named "weather_prompt", it will be available as "prefix_weather_prompt".

When a server is mounted without a prefix (prefix=None), its tools, resources, templates, and prompts are accessible with their original names. Multiple servers can be mounted without prefixes, and they will be tried in order until a match is found.

There are two modes for mounting servers (a short usage sketch follows the parameter list below):

  1. Direct mounting (default when the mounted server has no custom lifespan): the parent server directly accesses the mounted server's objects in-memory for better performance. In this mode, no client lifecycle events occur on the mounted server, including lifespan execution.

  2. Proxy mounting (default when the mounted server has a custom lifespan): the parent server treats the mounted server as a separate entity and communicates with it via a Client transport. This preserves all client-facing behaviors, including lifespan execution, but with slightly higher overhead.

Parameters:
  • server – The FastMCP server to mount.

  • prefix – Optional prefix to use for the mounted server’s objects. If None, the server’s objects are accessible with their original names.

  • as_proxy – Whether to treat the mounted server as a proxy. If None (default), automatically determined based on whether the server has a custom lifespan (True if it has a custom lifespan, False otherwise).

  • tool_separator – Deprecated. Separator character for tool names.

  • resource_separator – Deprecated. Separator character for resource URIs.

  • prompt_separator – Deprecated. Separator character for prompt names.

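A minimal sketch of mounting with a prefix, following the naming rules above (the weather subserver and its tool are illustrative):

parent = FastMCP(name="Parent Server")
weather = FastMCP(name="Weather Server")

@weather.tool
def get_weather(city: str) -> str:
    return f"Weather for {city}"

# After mounting, the tool is reachable through the parent as "weather_get_weather"
parent.mount(weather, prefix="weather")
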
async import_server(server: FastMCP[LifespanResultT], prefix: str | None = None, tool_separator: str | None = None, resource_separator: str | None = None, prompt_separator: str | None = None) None[source][source]

Import the MCP objects from another FastMCP server into this one, optionally with a given prefix.

Note that when a server is imported, its objects are immediately registered to the importing server. This is a one-time operation and future changes to the imported server will not be reflected in the importing server. Server-level configurations and lifespans are not imported.

When a server is imported with a prefix:

  • Tools are imported with prefixed names. Example: If the server has a tool named "get_weather", it will be available as "prefix_get_weather".

  • Resources are imported with prefixed URIs using the new format. Example: If the server has a resource with URI "weather://forecast", it will be available as "weather://prefix/forecast".

  • Templates are imported with prefixed URI templates using the new format. Example: If the server has a template with URI "weather://location/{id}", it will be available as "weather://prefix/location/{id}".

  • Prompts are imported with prefixed names. Example: If the server has a prompt named "weather_prompt", it will be available as "prefix_weather_prompt".

When a server is imported without a prefix (prefix=None), its tools, resources, templates, and prompts are imported with their original names.

Parameters:
  • server – The FastMCP server to import

  • prefix – Optional prefix to use for the imported server’s objects. If None, objects are imported with their original names.

  • tool_separator – Deprecated. Separator for tool names.

  • resource_separator – Deprecated and ignored. Prefix is now applied using the protocol://prefix/path format

  • prompt_separator – Deprecated. Separator for prompt names.

classmethod from_openapi(openapi_spec: dict[str, Any], client: httpx.AsyncClient, route_maps: list[RouteMap] | list[RouteMapNew] | None = None, route_map_fn: OpenAPIRouteMapFn | OpenAPIRouteMapFnNew | None = None, mcp_component_fn: OpenAPIComponentFn | OpenAPIComponentFnNew | None = None, mcp_names: dict[str, str] | None = None, tags: set[str] | None = None, **settings: Any) FastMCPOpenAPI | FastMCPOpenAPINew[source][source]

Create a FastMCP server from an OpenAPI specification.

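A hedged sketch of building a server from an OpenAPI document; the spec dict, base URL, and operation shown are illustrative assumptions, not part of ToolUniverse:

import httpx

openapi_spec = {
    "openapi": "3.1.0",
    "info": {"title": "Example API", "version": "1.0"},
    "paths": {
        "/items/{item_id}": {
            "get": {
                "operationId": "get_item",
                "parameters": [
                    {"name": "item_id", "in": "path", "required": True,
                     "schema": {"type": "integer"}}
                ],
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}

# Each OpenAPI operation becomes an MCP component served by the returned app
client = httpx.AsyncClient(base_url="https://api.example.com")
server = FastMCP.from_openapi(openapi_spec=openapi_spec, client=client)
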
classmethod from_fastapi(app: Any, name: str | None = None, route_maps: list[RouteMap] | list[RouteMapNew] | None = None, route_map_fn: OpenAPIRouteMapFn | OpenAPIRouteMapFnNew | None = None, mcp_component_fn: OpenAPIComponentFn | OpenAPIComponentFnNew | None = None, mcp_names: dict[str, str] | None = None, httpx_client_kwargs: dict[str, Any] | None = None, tags: set[str] | None = None, **settings: Any) FastMCPOpenAPI | FastMCPOpenAPINew[source][source]

Create a FastMCP server from a FastAPI application.

classmethod as_proxy(backend: Client[ClientTransportT] | ClientTransport | FastMCP[Any] | AnyUrl | Path | MCPConfig | dict[str, Any] | str, **settings: Any) FastMCPProxy[source][source]

Create a FastMCP proxy server for the given backend.

The backend argument can be either an existing fastmcp.client.Client instance or any value accepted as the transport argument of fastmcp.client.Client. This mirrors the convenience of the fastmcp.client.Client constructor.

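For example, a minimal sketch of proxying a remote MCP endpoint (the URL is illustrative; any value accepted by fastmcp.client.Client works as the backend):

proxy = FastMCP.as_proxy("http://localhost:7000/mcp")
proxy.run(transport="stdio")  # expose the remote server over stdio locally
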
classmethod from_client(client: Client[ClientTransportT], **settings: Any) FastMCPProxy[source][source]

Create a FastMCP proxy server from a FastMCP client.

classmethod generate_name(name: str | None = None) str[source][source]
class tooluniverse.smcp.ToolUniverse(tool_files=default_tool_files, keep_default_tools=True, log_level: str | None = None, hooks_enabled: bool = False, hook_config: dict | None = None, hook_type: str | None = None)[source][source]

Bases: object

A comprehensive tool management system for loading, organizing, and executing various scientific and data tools.

The ToolUniverse class provides a centralized interface for managing different types of tools including GraphQL tools, RESTful APIs, MCP clients, and specialized scientific tools. It handles tool loading, filtering, caching, and execution.

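A minimal usage sketch, assuming the tool name and arguments shown are available in the loaded categories (load_tools and run are documented below):

from tooluniverse.smcp import ToolUniverse

tu = ToolUniverse()
tu.load_tools()

# Execute a single tool call (see run() below for the accepted input formats)
result = tu.run({
    "name": "UniProt_get_entry_by_accession",
    "arguments": {"accession": "P05067"},
})
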
all_tools[source]

List of all loaded tool configurations

Type:

list

all_tool_dict[source]

Dictionary mapping tool names to their configurations

Type:

dict

tool_category_dicts[source]

Dictionary organizing tools by category

Type:

dict

tool_files[source]

Dictionary mapping category names to their JSON file paths

Type:

dict

callable_functions[source]

Cache of instantiated tool objects

Type:

dict

__init__(tool_files=default_tool_files, keep_default_tools=True, log_level: str | None = None, hooks_enabled: bool = False, hook_config: dict | None = None, hook_type: str | None = None)[source][source]

Initialize the ToolUniverse with tool file configurations.

Parameters:
  • tool_files (dict, optional) – Dictionary mapping category names to JSON file paths. Defaults to default_tool_files.

  • keep_default_tools (bool, optional) – Whether to keep default tools when custom tool_files are provided. Defaults to True.

  • log_level (str, optional) – Log level for this instance. Can be ‘DEBUG’, ‘INFO’, ‘WARNING’, ‘ERROR’, ‘CRITICAL’. If None, uses global setting.

  • hooks_enabled (bool, optional) – Whether to enable output hooks. Defaults to False.

  • hook_config (dict, optional) – Configuration for hooks. If None, uses default config.

  • hook_type (str or list, optional) – Simple hook type selection. Can be ‘SummarizationHook’, ‘FileSaveHook’, or a list of both. Defaults to ‘SummarizationHook’. If both hook_config and hook_type are provided, hook_config takes precedence.

register_custom_tool(tool_class, tool_name=None, tool_config=None)[source][source]

Register a custom tool class at runtime.

Parameters:
  • tool_class – The tool class to register

  • tool_name (str, optional) – Name to register under. Uses class name if None.

  • tool_config (dict, optional) – Tool configuration dictionary to add to all_tools

Returns:

The name the tool was registered under

Return type:

str

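A hypothetical sketch of runtime registration; the EchoTool class, its run(arguments) method, and the config keys shown are illustrative assumptions rather than a documented interface — check the tool base class in your installed version for the exact contract:

class EchoTool:
    # Hypothetical minimal tool: instantiated from its config, executed via run()
    def __init__(self, tool_config=None):
        self.tool_config = tool_config or {}

    def run(self, arguments):
        return {"echo": arguments}

tu.register_custom_tool(
    EchoTool,
    tool_name="EchoTool",
    tool_config={
        "name": "echo_text",
        "type": "EchoTool",
        "description": "Echo back the provided arguments",
    },
)
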
force_full_discovery()[source][source]

Force full tool discovery, importing all tool modules immediately.

This can be useful when you need to ensure all tools are available immediately, bypassing lazy loading.

Returns:

Updated tool registry with all discovered tools

Return type:

dict

get_lazy_loading_status()[source][source]

Get information about lazy loading status and available tools.

Returns:

Dictionary with lazy loading status and tool counts

Return type:

dict

get_tool_types()[source][source]

Get the types of tools available in the tool files.

Returns:

A list of tool type names (category keys).

Return type:

list

generate_env_template(all_missing_keys, output_file: str = '.env.template')[source][source]

Generate a template .env file with all required API keys

load_tools(tool_type=None, exclude_tools=None, exclude_categories=None, include_tools=None, tool_config_files=None, tools_file=None, include_tool_types=None, exclude_tool_types=None)[source][source]

Loads tool definitions from JSON files into the instance’s tool registry.

If tool_type is None, loads all available tool categories from self.tool_files. Otherwise, loads only the specified tool categories.

After loading, deduplicates tools by their ‘name’ field and updates the internal tool list. Also refreshes the tool name and description mapping.

Parameters:
  • tool_type (list, optional) – List of tool category names to load. If None, loads all categories.

  • exclude_tools (list, optional) – List of specific tool names to exclude from loading.

  • exclude_categories (list, optional) – List of tool categories to exclude from loading.

  • include_tools (list or str, optional) – List of specific tool names to include, or path to a text file containing tool names (one per line). If provided, only these tools will be loaded regardless of categories.

  • tool_config_files (dict, optional) – Additional tool configuration files to load. Format: {“category_name”: “/path/to/config.json”}

  • tools_file (str, optional) – Path to a text file containing tool names to include (one per line). Alternative to include_tools when providing a file path.

  • include_tool_types (list, optional) – List of tool types to include (e.g., [“OpenTarget”, “ChEMBLTool”]). If provided, only tools with these types will be loaded.

  • exclude_tool_types (list, optional) – List of tool types to exclude (e.g., [“ToolFinderEmbedding”]). Tools with these types will be excluded.

Side Effects:
  • Updates self.all_tools with loaded and deduplicated tools.

  • Updates self.tool_category_dicts with loaded tools per category.

  • Calls self.refresh_tool_name_desc() to update tool name/description mapping.

  • Prints the number of tools before and after loading.

Examples

# Load specific tools by name
tu.load_tools(include_tools=["UniProt_get_entry_by_accession", "ChEMBL_get_molecule_by_chembl_id"])

# Load tools from a file
tu.load_tools(tools_file="/path/to/tool_names.txt")

# Include only specific tool types
tu.load_tools(include_tool_types=["OpenTarget", "ChEMBLTool"])

# Exclude specific tool types
tu.load_tools(exclude_tool_types=["ToolFinderEmbedding", "Unknown"])

# Load additional config files
tu.load_tools(tool_config_files={"custom_tools": "/path/to/custom_tools.json"})

# Combine multiple options
tu.load_tools(
    tool_type=["uniprot", "ChEMBL"],
    exclude_tools=["problematic_tool"],
    exclude_tool_types=["Unknown"],
    tool_config_files={"custom": "/path/to/custom.json"},
)

select_tools(include_names=None, exclude_names=None, include_categories=None, exclude_categories=None)[source][source]

Select tools based on tool names and/or categories (tool_files keys).

Parameters:
  • include_names (list, optional) – List of tool names to include. If None, include all.

  • exclude_names (list, optional) – List of tool names to exclude.

  • include_categories (list, optional) – List of categories (tool_files keys) to include. If None, include all.

  • exclude_categories (list, optional) – List of categories (tool_files keys) to exclude.

Returns:

List of selected tool configurations.

Return type:

list

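For example, a minimal sketch (the category keys and excluded tool name are illustrative):

selected = tu.select_tools(
    include_categories=["uniprot", "ChEMBL"],
    exclude_names=["problematic_tool"],
)
print(len(selected), "tool configurations selected")
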
filter_tool_lists(tool_name_list, tool_desc_list, include_names=None, exclude_names=None, include_categories=None, exclude_categories=None)[source][source]

Directly filter tool name and description lists based on names and/or categories.

This method takes existing tool name and description lists and filters them according to the specified criteria using the select_tools method for category-based filtering.

Parameters:
  • tool_name_list (list) – List of tool names to filter.

  • tool_desc_list (list) – List of tool descriptions to filter (must correspond to tool_name_list).

  • include_names (list, optional) – List of tool names to include.

  • exclude_names (list, optional) – List of tool names to exclude.

  • include_categories (list, optional) – List of categories to include.

  • exclude_categories (list, optional) – List of categories to exclude.

Returns:

A tuple containing (filtered_tool_name_list, filtered_tool_desc_list).

Return type:

tuple

return_all_loaded_tools()[source][source]

Return a deep copy of all loaded tools.

Returns:

A deep copy of the all_tools list to prevent external modification.

Return type:

list

list_built_in_tools(mode='config', scan_all=False)[source][source]

List all built-in tool categories and their statistics with different modes.

This method provides a comprehensive overview of all available tools in the ToolUniverse, organized by categories. It reads directly from the default tool files to gather statistics, so it works even before calling load_tools().

Parameters:
  • mode (str, optional) – Organization mode for tools. Defaults to 'config'.
    - 'config': Organize by config file categories (original behavior)
    - 'type': Organize by tool types (implementation classes)
    - 'list_name': Return a list of all tool names
    - 'list_spec': Return a list of all tool specifications

  • scan_all (bool, optional) – Whether to scan all JSON files in data directory recursively. If True, scans all JSON files in data/ and its subdirectories. If False (default), uses predefined tool file mappings.

Returns:

  • For ‘config’ and ‘type’ modes: A dictionary containing tool statistics

  • For ‘list_name’ mode: A list of all tool names

  • For ‘list_spec’ mode: A list of all tool specifications

Return type:

dict or list

Example

>>> tool_universe = ToolUniverse()
>>> # Group by config file categories (predefined files only)
>>> stats = tool_universe.list_built_in_tools(mode='config')
>>> # Scan all JSON files in data directory recursively
>>> stats = tool_universe.list_built_in_tools(mode='config', scan_all=True)
>>> # Get all tool names from all JSON files
>>> tool_names = tool_universe.list_built_in_tools(mode='list_name', scan_all=True)

Note

  • This method reads directly from tool files and works without calling load_tools()

  • Tools are deduplicated across categories, so the same tool won’t be counted multiple times

  • The summary is automatically printed to console when this method is called (except for list_name and list_spec modes)

  • When scan_all=True, all JSON files in data/ and subdirectories are scanned

refresh_tool_name_desc(enable_full_desc=False, include_names=None, exclude_names=None, include_categories=None, exclude_categories=None)[source][source]

Refresh the tool name and description mappings with optional filtering.

This method rebuilds the internal tool dictionary and generates filtered lists of tool names and descriptions based on the provided filter criteria.

Parameters:
  • enable_full_desc (bool, optional) – If True, includes full tool JSON as description. If False, uses “name: description” format. Defaults to False.

  • include_names (list, optional) – List of tool names to include.

  • exclude_names (list, optional) – List of tool names to exclude.

  • include_categories (list, optional) – List of categories to include.

  • exclude_categories (list, optional) – List of categories to exclude.

Returns:

A tuple containing (tool_name_list, tool_desc_list) after filtering.

Return type:

tuple

prepare_one_tool_prompt(tool)[source][source]

Prepare a single tool configuration for prompt usage by filtering to essential keys.

Parameters:

tool (dict) – Tool configuration dictionary.

Returns:

Tool configuration with only essential keys for prompting.

Return type:

dict

prepare_tool_prompts(tool_list)[source][source]

Prepare a list of tool configurations for prompt usage.

Parameters:

tool_list (list) – List of tool configuration dictionaries.

Returns:

List of tool configurations with only essential keys for prompting.

Return type:

list

remove_keys(tool_list, invalid_keys)[source][source]

Remove specified keys from a list of tool configurations.

Parameters:
  • tool_list (list) – List of tool configuration dictionaries.

  • invalid_keys (list) – List of keys to remove from each tool configuration.

Returns:

Deep copy of tool list with specified keys removed.

Return type:

list

prepare_tool_examples(tool_list)[source][source]

Prepare tool configurations for example usage by keeping extended set of keys.

This method is similar to prepare_tool_prompts but includes additional keys useful for examples and documentation.

Parameters:

tool_list (list) – List of tool configuration dictionaries.

Returns:

Deep copy of tool list with only example-relevant keys.

Return type:

list

get_tool_specification_by_names(tool_names, format='default')[source][source]

Retrieve tool specifications by their names using tool_specification method.

Parameters:
  • tool_names (list) – List of tool names to retrieve.

  • format (str, optional) – Output format. Options: ‘default’, ‘openai’. If ‘openai’, returns OpenAI function calling format. Defaults to ‘default’.

Returns:

List of tool specifications for the specified names.

Tools not found will be reported but not included in the result.

Return type:

list

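For example, a minimal sketch of fetching OpenAI function-calling specs for selected tools (the tool names are illustrative):

specs = tu.get_tool_specification_by_names(
    ["UniProt_get_entry_by_accession", "ChEMBL_get_molecule_by_chembl_id"],
    format="openai",
)
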
get_tool_by_name(tool_names, format='default')[source][source]

Retrieve tool configurations by their names.

Parameters:
  • tool_names (list) – List of tool names to retrieve.

  • format (str, optional) – Output format. Options: ‘default’, ‘openai’. If ‘openai’, returns OpenAI function calling format. Defaults to ‘default’.

Returns:

List of tool configurations for the specified names.

Tools not found will be reported but not included in the result.

Return type:

list

get_one_tool_by_one_name(tool_name, return_prompt=True)[source][source]

Retrieve a single tool specification by name, optionally prepared for prompting.

This is a convenience wrapper around tool_specification.

Parameters:
  • tool_name (str) – Name of the tool to retrieve.

  • return_prompt (bool, optional) – If True, returns tool prepared for prompting. If False, returns full tool configuration. Defaults to True.

Returns:

Tool configuration if found, None otherwise.

Return type:

dict or None

tool_specification(tool_name, return_prompt=False, format='default')[source][source]

Retrieve a single tool configuration by name.

Parameters:
  • tool_name (str) – Name of the tool to retrieve.

  • return_prompt (bool, optional) – If True, returns tool prepared for prompting. If False, returns full tool configuration. Defaults to False.

  • format (str, optional) – Output format. Options: ‘default’, ‘openai’. If ‘openai’, returns OpenAI function calling format. Defaults to ‘default’.

Returns:

Tool configuration if found, None otherwise.

Return type:

dict or None

get_tool_description(tool_name)[source][source]

Get the description of a tool by its name.

This is a convenience method that calls get_one_tool_by_one_name.

Parameters:

tool_name (str) – Name of the tool.

Returns:

Tool configuration if found, None otherwise.

Return type:

dict or None

get_tool_type_by_name(tool_name)[source][source]

Get the type of a tool by its name.

Parameters:

tool_name (str) – Name of the tool.

Returns:

The type of the tool.

Return type:

str

Raises:

KeyError – If the tool name is not found in loaded tools.

tool_to_str(tool_list)[source][source]

Convert a list of tool configurations to a formatted string.

Parameters:

tool_list (list) – List of tool configuration dictionaries.

Returns:

JSON-formatted string representation of the tools, with each tool

separated by double newlines.

Return type:

str

extract_function_call_json(lst, return_message=False, verbose=True, format='llama')[source][source]

Extract function call JSON from input data.

This method delegates to the utility function extract_function_call_json.

Parameters:
  • lst – Input data containing function call information.

  • return_message (bool, optional) – Whether to return message along with JSON. Defaults to False.

  • verbose (bool, optional) – Whether to enable verbose output. Defaults to True.

  • format (str, optional) – Format type for extraction. Defaults to ‘llama’.

Returns:

Function call JSON, optionally with message if return_message is True.

Return type:

dict or tuple

call_id_gen()[source][source]

Generate a random call ID for function calls.

Returns:

A random 9-character string composed of letters and digits.

Return type:

str

run(fcall_str, return_message=False, verbose=True, format='llama')[source][source]

Execute function calls from input string or data.

This method parses function call data, validates it, and executes the corresponding tools. It supports both single function calls and multiple function calls in a list.

Parameters:
  • fcall_str – Input string or data containing function call information.

  • return_message (bool, optional) – Whether to return formatted messages. Defaults to False.

  • verbose (bool, optional) – Whether to enable verbose output. Defaults to True.

  • format (str, optional) – Format type for parsing. Defaults to ‘llama’.

Returns:

  • For multiple function calls: List of formatted messages with tool responses

  • For single function call: Direct result from the tool

  • None: If the input is not a valid function call

Return type:

list or str or None
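
Example:

A brief sketch using dictionary-style input, assuming a loaded ToolUniverse instance tu; the tool name and arguments are illustrative:

# Single function call expressed as a dict
call = {
    "name": "UniProt_get_entry_by_accession",
    "arguments": {"accession": "P05067"}
}
result = tu.run(call)

# Multiple calls in a list return formatted messages with tool responses
results = tu.run([call, call], return_message=True)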

run_one_function(function_call_json)[source][source]

Execute a single function call.

This method validates the function call, initializes the tool if necessary, and executes it with the provided arguments. If hooks are enabled, it also applies output hooks to process the result.

Parameters:

function_call_json (dict) – Dictionary containing function name and arguments.

Returns:

Result from the tool execution, or error message if validation fails.

Return type:

str or dict
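
Example:

A minimal sketch, assuming the named tool is loaded; the name and arguments are illustrative:

result = tu.run_one_function({
    "name": "UniProt_get_entry_by_accession",
    "arguments": {"accession": "P05067"}
})
print(result)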

toggle_hooks(enabled: bool)[source][source]

Enable or disable output hooks globally.

This method allows runtime control of the hook system. When enabled, it initializes the HookManager if not already present. When disabled, it deactivates the HookManager.

Parameters:

enabled (bool) – True to enable hooks, False to disable
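
Example:

A small sketch of runtime hook control, assuming a ToolUniverse instance tu:

tu.toggle_hooks(True)   # enable hooks; initializes the HookManager if needed
# ... tool outputs are now post-processed by the active hooks ...
tu.toggle_hooks(False)  # deactivate the HookManager again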

init_tool(tool=None, tool_name=None, add_to_cache=True)[source][source]

Initialize a tool instance from configuration or name.

This method creates a new tool instance using the tool type mappings and optionally caches it for future use. It handles special cases like the OpentargetToolDrugNameMatch which requires additional dependencies.

Parameters:
  • tool (dict, optional) – Tool configuration dictionary. Either this or tool_name must be provided.

  • tool_name (str, optional) – Name of the tool type to initialize. Either this or tool must be provided.

  • add_to_cache (bool, optional) – Whether to cache the initialized tool. Defaults to True.

Returns:

Initialized tool instance.

Return type:

object

Raises:

KeyError – If the tool type is not found in tool_type_mappings.
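
Example:

A sketch assuming tu already has the referenced configuration loaded; the tool name and tool type are illustrative:

# Initialize from a full configuration dict and cache the instance
config = tu.tool_specification("UniProt_get_entry_by_accession", return_prompt=False)
tool = tu.init_tool(tool=config)

# Initialize by tool type name without caching
finder = tu.init_tool(tool_name="ToolFinderKeyword", add_to_cache=False)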

check_function_call(fcall_str, function_config=None, format='llama')[source][source]

Validate a function call against tool configuration.

This method checks if a function call is valid by verifying the function name exists and the arguments match the expected parameters.

Parameters:
  • fcall_str – Function call string or data to validate.

  • function_config (dict, optional) – Specific function configuration to validate against. If None, uses the loaded tool configuration.

  • format (str, optional) – Format type for parsing. Defaults to ‘llama’.

Returns:

A tuple of (is_valid, message) where:
  • is_valid (bool): True if the function call is valid, False otherwise

  • message (str): Error message if invalid, empty if valid

Return type:

tuple
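
Example:

A minimal validation sketch, assuming tu has the named tool loaded; the call shown is illustrative:

call = {
    "name": "UniProt_get_entry_by_accession",
    "arguments": {"accession": "P05067"}
}
is_valid, message = tu.check_function_call(call)
if not is_valid:
    print(f"Invalid call: {message}")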

export_tool_names(output_file, category_filter=None)[source][source]

Export tool names to a text file (one per line).

Parameters:
  • output_file (str) – Path to the output file

  • category_filter (list, optional) – List of categories to filter by
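
Example:

A short sketch; the file paths and category names are illustrative:

# Write every loaded tool name to a file, one per line
tu.export_tool_names("all_tools.txt")

# Restrict the export to selected categories
tu.export_tool_names("chembl_tools.txt", category_filter=["ChEMBL"])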

discover_mcp_tools(server_urls: List[str] | None = None, **kwargs) Dict[str, Any][source]

Discover available tools from MCP servers without loading them.

This method connects to MCP servers to discover what tools are available without actually registering them in ToolUniverse. Useful for exploration and selective tool loading.

Parameters:

server_urls : list of str, optional

List of MCP server URLs to discover from

**kwargs

Additional options:

  • timeout (int): Connection timeout (default: 30)

  • include_schemas (bool): Include tool parameter schemas (default: True)

Returns:

dict

Discovery results with tools organized by server

Examples:

tu = ToolUniverse()

# Discover what's available
discovery = tu.discover_mcp_tools([
    "http://localhost:8001",
    "http://ml-server:8002"
])

# Show available tools
for server, info in discovery["servers"].items():
    print(f"\n{server}:")
    for tool in info.get("tools", []):
        print(f"  - {tool['name']}: {tool['description']}")

get_available_tools(category_filter=None, name_only=True)[source][source]

Get available tools, optionally filtered by category.

Parameters:
  • category_filter (list, optional) – List of categories to filter by

  • name_only (bool) – If True, return only tool names; if False, return full configs

Returns:

List of tool names or tool configurations

Return type:

list
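
Example:

A brief sketch; the category names are illustrative:

# Names of every loaded tool
names = tu.get_available_tools()

# Full configurations for selected categories
configs = tu.get_available_tools(category_filter=["uniprot", "ChEMBL"], name_only=False)
print(f"{len(configs)} tools available")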

list_mcp_connections() Dict[str, Any][source]

List all active MCP connections and loaded tools.

Returns:

dict

Information about MCP connections, auto-loaders, and loaded tools

Examples:

tu = ToolUniverse()
tu.load_mcp_tools(["http://localhost:8001"])

connections = tu.list_mcp_connections()
print(f"Active MCP connections: {len(connections['connections'])}")

load_mcp_tools(server_urls: List[str] | None = None, **kwargs)[source]

Load MCP tools from remote servers into this ToolUniverse instance.

This method automatically discovers tools from MCP servers and registers them as ToolUniverse tools, enabling seamless usage of remote capabilities.

Parameters:

server_urls : list of str, optional

List of MCP server URLs to load tools from. If None, attempts to discover servers from the local MCP tool registry.

**kwargs

Additional configuration options:

  • tool_prefix (str): Prefix for loaded tool names (default: “mcp_”)

  • timeout (int): Connection timeout in seconds (default: 30)

  • auto_register (bool): Whether to auto-register discovered tools (default: True)

  • selected_tools (list): Specific tools to load from each server

  • categories (list): Tool categories to filter by

Returns:

dict

Summary of loaded tools with counts and any errors encountered.

Examples:

Load from specific servers:

tu = ToolUniverse()

# Load tools from multiple MCP servers
result = tu.load_mcp_tools([
    "http://localhost:8001",   # Local analysis server
    "http://ml-server:8002",   # Remote ML server
    "ws://realtime:9000"       # WebSocket server
])

print(f"Loaded {result['total_tools']} tools from {result['servers_connected']} servers")

Load with custom configuration:

tu.load_mcp_tools(
    server_urls=["http://localhost:8001"],
    tool_prefix="analysis_",
    timeout=60,
    selected_tools=["protein_analysis", "drug_interaction"]
)

Auto-discovery from local registry:

# If you have registered MCP tools locally, auto-discover their servers
tu.load_mcp_tools()  # Uses servers from mcp_tool_registry

find_tools_by_pattern(pattern, search_in='name', case_sensitive=False)[source][source]

Find tools matching a pattern in their name or description.

Parameters:
  • pattern (str) – Pattern to search for

  • search_in (str) – Where to search - ‘name’, ‘description’, or ‘both’

  • case_sensitive (bool) – Whether search should be case sensitive

Returns:

List of matching tool configurations

Return type:

list
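
Example:

A minimal sketch; the pattern is illustrative:

# Case-insensitive match against names and descriptions
matches = tu.find_tools_by_pattern("protein", search_in="both")
for cfg in matches:
    print(cfg["name"])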

load_tools_from_names_list(tool_names, clear_existing=True)[source][source]

Load only specific tools by their names.

Parameters:
  • tool_names (list) – List of tool names to load

  • clear_existing (bool) – Whether to clear existing tools first

Returns:

Number of tools successfully loaded

Return type:

int
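
Example:

A minimal sketch; the tool names are illustrative:

count = tu.load_tools_from_names_list(
    ["UniProt_get_entry_by_accession", "ChEMBL_search_similar_molecules"],
    clear_existing=True
)
print(f"Loaded {count} tools")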

tooluniverse.smcp.get_logger(name: str | None = None) Logger[source][source]

Get a logger instance

Parameters:

name (str, optional) – Logger name (usually __name__)

Returns:

Logger instance

Return type:

logging.Logger
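
Example:

logger = get_logger(__name__)
logger.info("SMCP module logger initialized")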

class tooluniverse.smcp.SMCP(name: str | None = None, tooluniverse_config: ToolUniverse | Dict[str, Any] | None = None, tool_categories: List[str] | None = None, exclude_tools: List[str] | None = None, exclude_categories: List[str] | None = None, include_tools: List[str] | None = None, tools_file: str | None = None, tool_config_files: Dict[str, str] | None = None, include_tool_types: List[str] | None = None, exclude_tool_types: List[str] | None = None, auto_expose_tools: bool = True, search_enabled: bool = True, max_workers: int = 5, hooks_enabled: bool = False, hook_config: Dict[str, Any] | None = None, hook_type: str | None = None, **kwargs)[source][source]

Bases: FastMCP

Scientific Model Context Protocol (SMCP) Server

SMCP is an enhanced MCP (Model Context Protocol) server that seamlessly integrates ToolUniverse’s extensive collection of scientific tools with the FastMCP framework. It provides a unified, AI-accessible interface for scientific computing, data analysis, and research workflows.

The SMCP server extends standard MCP capabilities with scientific domain expertise, intelligent tool discovery, and optimized configurations for research applications. It automatically handles the complex task of exposing hundreds of specialized tools through a consistent, well-documented interface.

Key Features:

🔬 Scientific Tool Integration: Native access to 350+ specialized tools covering scientific databases, literature search, clinical data, genomics, proteomics, chemical informatics, and AI-powered analysis capabilities.

🧠 AI-Powered Tool Discovery: Multi-tiered intelligent search system using:
  • ToolFinderLLM: Cost-optimized LLM-based semantic understanding with pre-filtering

  • Tool_RAG: Embedding-based similarity search

  • Keyword Search: Simple text matching as reliable fallback

📡 Full MCP Protocol Support: Complete implementation of MCP specification with:
  • Standard methods (tools/list, tools/call, resources/, prompts/)

  • Custom scientific methods (tools/find, tools/search)

  • Multi-transport support (stdio, HTTP, SSE)

  • JSON-RPC 2.0 compliance with proper error handling

High-Performance Architecture: Production-ready features including:
  • Configurable thread pools for concurrent tool execution

  • Intelligent tool loading and caching

  • Resource management and graceful degradation

  • Comprehensive error handling and recovery

🔧 Developer-Friendly: Simplified configuration and deployment with:
  • Sensible defaults for scientific computing

  • Flexible customization options

  • Comprehensive documentation and examples

  • Built-in diagnostic and monitoring tools

Custom MCP Methods:

tools/find:

AI-powered tool discovery using natural language queries. Supports semantic search, category filtering, and flexible response formats.

tools/search:

Alternative endpoint for tool discovery with identical functionality to tools/find, provided for compatibility and convenience.

Parameters:

name : str, optional

Human-readable server name used in logs and identification. Default: “SMCP Server”. Examples: “Scientific Research API”, “Drug Discovery Server”.

tooluniverse_config : ToolUniverse or dict, optional

Either a pre-configured ToolUniverse instance or configuration dict. If None, creates a new ToolUniverse with default settings. Allows reuse of existing tool configurations and customizations.

tool_categories : list of str, optional

Specific ToolUniverse categories to load. If None and auto_expose_tools=True, loads all available tools. Common combinations:

  • Scientific: [“ChEMBL”, “uniprot”, “opentarget”, “pubchem”, “hpa”]

  • Literature: [“EuropePMC”, “semantic_scholar”, “pubtator”, “agents”]

  • Clinical: [“fda_drug_label”, “clinical_trials”, “adverse_events”]

exclude_tools : list of str, optional

Specific tool names to exclude from loading. These tools will not be exposed via the MCP interface even if they are in the loaded categories. Useful for removing specific problematic or unwanted tools.

exclude_categories : list of str, optional

Tool categories to exclude from loading. These entire categories will be skipped during tool loading. Can be combined with tool_categories to first select categories and then exclude specific ones.

include_tools : list of str, optional

Specific tool names to include. If provided, only these tools will be loaded regardless of categories. Overrides category-based selection.

tools_file : str, optional

Path to a text file containing tool names to include (one per line). Alternative to include_tools parameter. Comments (lines starting with #) and empty lines are ignored.

tool_config_files : dict of str, optional

Additional tool configuration files to load. Format: {“category_name”: “/path/to/config.json”}. These files will be loaded in addition to the default tool files.

include_tool_types : list of str, optional

Specific tool types to include. If provided, only tools of these types will be loaded. Available types include: ‘OpenTarget’, ‘ToolFinderEmbedding’, ‘ToolFinderKeyword’, ‘ToolFinderLLM’, etc.

exclude_tool_types : list of str, optional

Tool types to exclude from loading. These tool types will be skipped during tool loading. Useful for excluding entire categories of tools (e.g., all ToolFinder types or all OpenTarget tools).

auto_expose_tools : bool, default True

Whether to automatically expose ToolUniverse tools as MCP tools. When True, all loaded tools become available via the MCP interface with automatic schema conversion and execution wrapping.

search_enabled : bool, default True

Enable AI-powered tool search functionality via tools/find method. Includes ToolFinderLLM (cost-optimized LLM-based), Tool_RAG (embedding-based), and simple keyword search capabilities with intelligent fallback.

max_workers : int, default 5

Maximum number of concurrent worker threads for tool execution. Higher values allow more parallel tool calls but use more resources. Recommended: 5-20 depending on server capacity and expected load.

hooks_enabled : bool, default False

Whether to enable output processing hooks for intelligent post-processing of tool outputs. When True, hooks can automatically summarize long outputs, save results to files, or apply other transformations.

hook_config : dict, optional

Custom hook configuration dictionary. If provided, overrides default hook settings. Should contain ‘hooks’ list with hook definitions. Example: {“hooks”: [{“name”: “summarization_hook”, “type”: “SummarizationHook”, …}]}

hook_type : str, optional

Simple hook type selection. Can be ‘SummarizationHook’, ‘FileSaveHook’, or a list of both. Provides an easy way to enable hooks without full configuration. Takes precedence over hooks_enabled when specified.

**kwargs

Additional arguments passed to the underlying FastMCP server instance. Supports all FastMCP configuration options for advanced customization.

Raises:

ImportError

If FastMCP is not installed. FastMCP is a required dependency for SMCP. Install with: pip install fastmcp

Notes:

  • SMCP automatically handles ToolUniverse tool loading and MCP conversion

  • Tool search uses ToolFinderLLM (optimized for cost) when available, gracefully falls back to simpler methods

  • All tools support JSON argument passing for maximum flexibility

  • Server supports graceful shutdown and comprehensive resource cleanup

  • Thread pool execution ensures non-blocking operation for concurrent requests

  • Built-in error handling provides informative debugging information

__init__(name: str | None = None, tooluniverse_config: ToolUniverse | Dict[str, Any] | None = None, tool_categories: List[str] | None = None, exclude_tools: List[str] | None = None, exclude_categories: List[str] | None = None, include_tools: List[str] | None = None, tools_file: str | None = None, tool_config_files: Dict[str, str] | None = None, include_tool_types: List[str] | None = None, exclude_tool_types: List[str] | None = None, auto_expose_tools: bool = True, search_enabled: bool = True, max_workers: int = 5, hooks_enabled: bool = False, hook_config: Dict[str, Any] | None = None, hook_type: str | None = None, **kwargs)[source][source]
add_custom_tool(name: str, function: Callable, description: str | None = None, **kwargs)[source][source]

Add a custom Python function as an MCP tool to the SMCP server.

This method provides a convenient way to extend SMCP functionality with custom tools beyond those provided by ToolUniverse. Custom tools are automatically integrated into the MCP interface and can be discovered and used by clients alongside existing tools.

Parameters:

name : str

Unique name for the tool in the MCP interface. Should be descriptive and follow naming conventions (lowercase with underscores preferred). Examples: “analyze_protein_sequence”, “custom_data_processor”

function : Callable

Python function to execute when the tool is called. The function:

  • Can be synchronous or asynchronous

  • Should have proper type annotations for parameters

  • Should include a comprehensive docstring

  • Will be automatically wrapped for MCP compatibility

description : str, optional

Human-readable description of the tool’s functionality. If provided, this will be set as the function’s __doc__ attribute. If None, the function’s existing docstring will be used.

**kwargs

Additional FastMCP tool configuration options:

  • parameter_schema: Custom JSON schema for parameters

  • return_schema: Schema for return values

  • examples: Usage examples for the tool

  • tags: Categorization tags

Returns:

Callable

The decorated function registered with FastMCP framework.

Usage Examples:

Simple synchronous function:

def analyze_text(text: str, max_length: int = 100) -> str:
    '''Analyze text and return summary.'''
    return text[:max_length] + "..." if len(text) > max_length else text

server.add_custom_tool(
    name="text_analyzer",
    function=analyze_text,
    description="Analyze and summarize text content"
)

Asynchronous function with complex parameters:

async def process_data(
    data: List[Dict[str, Any]],
    processing_type: str = "standard"
) -> Dict[str, Any]:
    '''Process scientific data with specified method.'''
    # Custom processing logic here
    return {"processed_items": len(data), "type": processing_type}

server.add_custom_tool(
    name="data_processor",
    function=process_data
)

Function with custom schema:

def calculate_score(values: List[float]) -> float:
    '''Calculate composite score from values.'''
    return sum(values) / len(values) if values else 0.0

server.add_custom_tool(
    name="score_calculator",
    function=calculate_score,
    parameter_schema={
        "type": "object",
        "properties": {
            "values": {
                "type": "array",
                "items": {"type": "number"},
                "description": "List of numeric values to process"
            }
        },
        "required": ["values"]
    }
)

Integration with ToolUniverse:

Custom tools work seamlessly alongside ToolUniverse tools:

  • Appear in tool discovery searches

  • Follow the same calling conventions

  • Are included in server diagnostics and listings

  • Support all MCP client interaction patterns

Best Practices:

  • Use descriptive, unique tool names

  • Include comprehensive docstrings

  • Add proper type annotations for parameters

  • Handle errors gracefully within the function

  • Consider async functions for I/O-bound operations

  • Test tools thoroughly before deployment

Notes:

  • Custom tools are registered immediately upon addition

  • Tools can be added before or after server startup

  • Function signature determines parameter schema automatically

  • Custom tools support all FastMCP features and conventions

async close()[source][source]

Perform comprehensive cleanup and resource management during server shutdown.

This method ensures graceful shutdown of the SMCP server by properly cleaning up all resources, stopping background tasks, and releasing system resources. It’s designed to be safe to call multiple times and handles errors gracefully.

Cleanup Operations:

Thread Pool Shutdown:

  • Gracefully stops the ThreadPoolExecutor used for tool execution

  • Waits for currently running tasks to complete

  • Prevents new tasks from being submitted

  • Times out after a reasonable wait period to prevent hanging

Resource Cleanup:

  • Releases any open file handles or network connections

  • Clears internal caches and temporary data

  • Stops background monitoring tasks

  • Frees memory allocated for tool configurations

Error Handling:

  • Continues cleanup even if individual operations fail

  • Logs cleanup errors for debugging without raising exceptions

  • Ensures critical resources are always released

Usage Patterns:

Automatic Cleanup (Recommended):

server = SMCP("My Server")
try:
    server.run_simple()
    # Cleanup happens automatically on exit
except KeyboardInterrupt:
    pass  # run_simple() handles cleanup

Manual Cleanup:

server = SMCP("My Server")
try:
    # Custom server logic here
    pass
finally:
    await server.close()  # Explicit cleanup

Context Manager Pattern:

async with SMCP("My Server") as server:
    # Server operations
    pass

# Cleanup happens automatically

Performance Considerations:

  • Cleanup operations are typically fast (< 1 second)

  • Thread pool shutdown may take longer if tasks are running

  • Network connections are closed immediately

  • Memory cleanup depends on garbage collection

Error Recovery:

  • Individual cleanup failures don’t stop the overall process

  • Critical errors are logged but don’t raise exceptions

  • Cleanup is idempotent - safe to call multiple times

  • System resources are guaranteed to be released

Notes:

  • This method is called automatically by run_simple() on shutdown

  • Can be called manually for custom server lifecycle management

  • Async method to properly handle async resource cleanup

  • Safe to call even if server hasn’t been fully initialized

run_simple(transport: Literal['stdio', 'http', 'sse'] = 'http', host: str = '0.0.0.0', port: int = 7000, **kwargs)[source][source]

Start the SMCP server with simplified configuration and automatic setup.

This method provides a convenient way to launch the SMCP server with sensible defaults for different deployment scenarios. It handles transport configuration, logging setup, and graceful shutdown automatically.

Parameters:

transport : {“stdio”, “http”, “sse”}, default “http”

Communication transport protocol:

  • “stdio”: Standard input/output communication
    - Best for: Command-line tools, subprocess integration
    - Pros: Low overhead, simple integration
    - Cons: Single client, no network access

  • “http”: HTTP-based communication (streamable-http)
    - Best for: Web applications, REST API integration
    - Pros: Wide compatibility, stateless, scalable
    - Cons: Higher overhead than stdio

  • “sse”: Server-Sent Events over HTTP
    - Best for: Real-time applications, streaming responses
    - Pros: Real-time communication, web-compatible
    - Cons: Browser limitations, more complex

host : str, default “0.0.0.0”

Server bind address for HTTP/SSE transports:

  • “0.0.0.0”: Listen on all network interfaces (default)

  • “127.0.0.1”: localhost only (more secure)

  • Specific IP: Bind to a particular interface

port : int, default 7000

Server port for HTTP/SSE transports. Choosing a port:

  • 7000-7999: Recommended range for SMCP servers

  • Above 1024: No root privileges required

  • Check availability: Ensure the port isn’t already in use

**kwargs

Additional arguments passed to FastMCP’s run() method:

  • debug (bool): Enable debug logging

  • access_log (bool): Log client requests

  • workers (int): Number of worker processes (HTTP only)

Server Startup Process:

  1. Initialization Summary: Displays server configuration and capabilities

  2. Transport Setup: Configures selected communication method

  3. Service Start: Begins listening for client connections

  4. Graceful Shutdown: Handles interrupts and cleanup

Deployment Scenarios:

Development & Testing:

server = SMCP(name="Dev Server")
server.run_simple(transport="stdio")  # For CLI testing

Local Web Service:

server = SMCP(name="Local API")
server.run_simple(transport="http", host="127.0.0.1", port=8000)

Production Service:

server = SMCP(
    name="Production SMCP",
    tool_categories=["ChEMBL", "uniprot", "opentarget"],
    max_workers=20
)
server.run_simple(
    transport="http",
    host="0.0.0.0",
    port=7000,
    workers=4
)

Real-time Applications:

server = SMCP(name="Streaming API")
server.run_simple(transport="sse", port=7001)

Error Handling:

  • KeyboardInterrupt: Graceful shutdown on Ctrl+C

  • Port in Use: Clear error message with suggestions

  • Transport Errors: Detailed debugging information

  • Cleanup: Automatic resource cleanup on exit

Logging Output:

Provides informative startup messages:

🚀 Starting SMCP server ‘My Server’…
📊 Loaded 356 tools from ToolUniverse
🔍 Search enabled: True
🌐 Server running on http://0.0.0.0:7000

Security Considerations:

  • Use host=”127.0.0.1” for local-only access

  • Configure firewall rules for production deployment

  • Consider HTTPS termination with reverse proxy

  • Validate all client inputs through MCP protocol

Performance Notes:

  • HTTP transport supports multiple concurrent clients

  • stdio transport is single-client but lower latency

  • SSE transport enables real-time server-to-client streaming over HTTP

  • Thread pool size affects concurrent tool execution capacity

tooluniverse.smcp.create_smcp_server(name: str = 'SMCP Server', tool_categories: List[str] | None = None, search_enabled: bool = True, **kwargs) SMCP[source][source]

Create a configured SMCP server with common defaults and best practices.

This convenience function simplifies SMCP server creation by providing sensible defaults for common use cases while still allowing full customization through additional parameters.

Parameters:

name : str, default “SMCP Server”

Human-readable server name used in logs and server identification. Choose descriptive names like:

  • “Scientific Research API”

  • “Drug Discovery Server”

  • “Proteomics Analysis Service”

tool_categories : list of str, optional

Specific ToolUniverse categories to load. If None, loads all available tools (350+ tools). Common category combinations:

Scientific Research: [“ChEMBL”, “uniprot”, “opentarget”, “pubchem”, “hpa”]

Drug Discovery: [“ChEMBL”, “fda_drug_label”, “clinical_trials”, “pubchem”]

Literature Analysis: [“EuropePMC”, “semantic_scholar”, “pubtator”, “agents”]

Minimal Setup: [“tool_finder_llm”, “special_tools”]

search_enabled : bool, default True

Enable AI-powered tool discovery via tools/find method. Recommended to keep enabled unless you have specific performance requirements or want to minimize dependencies.

**kwargs

Additional SMCP configuration options:

  • tooluniverse_config: Pre-configured ToolUniverse instance

  • auto_expose_tools (bool, default True): Auto-expose ToolUniverse tools

  • max_workers (int, default 5): Thread pool size for tool execution

  • Any FastMCP server options (debug, logging, etc.)

Returns:

SMCP

Fully configured SMCP server instance ready to run.

Usage Examples:

Quick Start (all tools):

server = create_smcp_server("Research Server")
server.run_simple()

Focused Server (specific domains):

server = create_smcp_server(
    name="Drug Discovery API",
    tool_categories=["ChEMBL", "fda_drug_label", "clinical_trials"],
    max_workers=10
)
server.run_simple(port=8000)

Custom Configuration:

server = create_smcp_server(
    name="High-Performance Server",
    search_enabled=True,
    max_workers=20,
    debug=True
)
server.run_simple(transport="http", host="0.0.0.0", port=7000)

Pre-configured ToolUniverse:

tu = ToolUniverse()
tu.load_tools(tool_type=["uniprot", "ChEMBL"])
server = create_smcp_server(
    name="Protein-Drug Server",
    tooluniverse_config=tu,
    search_enabled=True
)

Benefits of Using This Function:

  • Simplified Setup: Reduces boilerplate code for common configurations

  • Best Practices: Applies recommended settings automatically

  • Consistent Naming: Encourages good server naming conventions

  • Future-Proof: Will include new recommended defaults in future versions

  • Documentation: Provides clear examples and guidance

Equivalent Manual Configuration:

This function is equivalent to:

server = SMCP(
    name=name,
    tool_categories=tool_categories,
    search_enabled=search_enabled,
    auto_expose_tools=True,
    max_workers=5,
    **kwargs
)

When to Use Manual Configuration:

  • Need precise control over all initialization parameters

  • Using custom ToolUniverse configurations

  • Implementing custom MCP methods or tools

  • Advanced deployment scenarios with specific requirements