
ADK Plugin: LLM

FlotorchADKLLM is an ADK-compatible LLM wrapper that routes model inference through FloTorch’s Gateway. It handles tool calls and response parsing, and provides asynchronous generation.


from flotorch.adk.llm import FlotorchADKLLM

API_KEY = "<your_api_key>"
BASE_URL = "https://gateway.flotorch.cloud"
MODEL_ID = "<your_flotorch_model_id>"

FlotorchADKLLM(
    model_id: str,
    api_key: str,
    base_url: str,
)

Creates an ADK-compatible LLM that wraps FloTorch’s Gateway LLM.

generate_content_async(…) -> AsyncGenerator[LlmResponse, None]

async def generate_content_async(
    self,
    llm_request: LlmRequest,
) -> AsyncGenerator[LlmResponse, None]

Generates content asynchronously with support for:

  • Tool calls and function responses
  • JSON schema responses via response_schema in request config

Automatically handles OpenAI-format tool calls and converts them to ADK types.Part objects.
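To illustrate what that conversion involves: in the OpenAI format, a tool call carries the function name plus its arguments as a JSON-encoded string, which must be decoded before it can be wrapped in an ADK part. A minimal stdlib sketch (the payload below is illustrative, not an actual Gateway response):

```python
import json

# An OpenAI-format tool call as it appears inside a chat-completion
# response (illustrative payload).
tool_call = {
    "id": "call_123",
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": '{"city": "Berlin", "unit": "celsius"}',
    },
}

def parse_tool_call(call: dict) -> tuple[str, dict]:
    """Extract the function name and decode the JSON-encoded arguments."""
    fn = call["function"]
    return fn["name"], json.loads(fn["arguments"])

name, args = parse_tool_call(tool_call)
print(name, args)  # get_weather {'city': 'Berlin', 'unit': 'celsius'}
```

The decoded name/arguments pair is what the wrapper places into an ADK function-call part so the agent framework can dispatch the tool.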

Supports structured responses by converting Pydantic models to JSON schema format.
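To show the shape of that conversion, here is a stdlib-only approximation of how a response model reduces to a JSON schema. The WeatherReport model and the type mapping are hypothetical, for illustration only; in practice the wrapper derives the schema from your Pydantic model:

```python
from dataclasses import dataclass, fields

# Hypothetical response model, standing in for a Pydantic model.
@dataclass
class WeatherReport:
    city: str
    temperature_c: float
    sunny: bool

# Minimal mapping from Python annotations to JSON schema type names.
_TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def to_json_schema(model) -> dict:
    """Approximate a Pydantic-style model-to-JSON-schema conversion."""
    props = {f.name: {"type": _TYPE_MAP[f.type]} for f in fields(model)}
    return {
        "type": "object",
        "properties": props,
        "required": [f.name for f in fields(model)],
    }

schema = to_json_schema(WeatherReport)
```

Passing a schema like this as response_schema in the request config constrains the model to emit a matching JSON object.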


from flotorch.adk.llm import FlotorchADKLLM
from google.adk.models.llm_request import LlmRequest
from google.genai import types

# Create the LLM
llm = FlotorchADKLLM(
    model_id=MODEL_ID,
    api_key=API_KEY,
    base_url=BASE_URL,
)

# Create the request
request = LlmRequest(
    contents=[
        types.Content(
            role="user",
            parts=[types.Part(text="What's the weather like?")],
        )
    ]
)

# Generate a response
async for response in llm.generate_content_async(request):
    print(response.content.parts[0].text)

  • Uses FloTorch Gateway’s /api/openai/v1/chat/completions endpoint
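For reference, the raw request the wrapper issues against that endpoint can be sketched with the stdlib as follows. The payload fields are the standard OpenAI chat-completions shape; the Bearer Authorization header is an assumption based on typical OpenAI-compatible gateways. The request is built but not sent:

```python
import json
import urllib.request

BASE_URL = "https://gateway.flotorch.cloud"
API_KEY = "<your_api_key>"

# Standard OpenAI chat-completions payload shape.
payload = {
    "model": "<your_flotorch_model_id>",
    "messages": [{"role": "user", "content": "What's the weather like?"}],
}

# Bearer auth is an assumption for OpenAI-compatible gateways.
req = urllib.request.Request(
    url=f"{BASE_URL}/api/openai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
```

In normal use you never build this request yourself; FlotorchADKLLM constructs and sends it for every generate_content_async call.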