Model APIs

Model APIs provide your agents with direct access to a wide variety of LLMs (Large Language Models) and multimodal models through a single unified API key.

With Model APIs, developers no longer need to register with multiple providers or manage separate keys—ChainOpera handles the integrations for you.


What Are Model APIs?

Model APIs allow your agents to:

  • Generate text, summarize content, or answer questions.

  • Perform reasoning and coding tasks with instruction-following LLMs.

  • Use multimodal capabilities, including vision and structured outputs.

  • Access multiple providers and models from a unified interface.

Example Supported Models:

  • OpenAI – GPT‑4o, GPT‑4o‑mini

  • Anthropic – Claude 3, Claude 3.5 Sonnet

  • Groq / DeepSeek – High‑performance inference models like DeepSeek‑R1 or Llama derivatives

  • Qwen (Alibaba / TensorOpera) – Qwen 3 series, Qwen2.5‑32B / 72B‑Instruct

  • xAI – Grok‑2 Vision multimodal model


Accessing Model APIs

  1. Navigate to “Model APIs” from the sidebar.

  2. Browse the available models with details including:

    • Model Name & Version

    • Provider (OpenAI, Anthropic, Groq, etc.)

    • Capabilities (Text, Code, Reasoning, Vision, Multilingual)

  3. Identify the model best suited for your use case.

Example UI View:

  • gpt-4o-mini – Cost-efficient small multimodal LLM by OpenAI

  • claude-3.5-sonnet – High reasoning capability with 200K context

  • deepseek-v3-0324 – Optimized for coding and reasoning tasks


Using Model APIs in Agents

You can integrate model APIs in all creation modes:

  • Prompt-based Agents

    • Select a model in the prompt editor to generate responses.

  • Workflow Agents

    • Add a Model Node to call the model at any step in the workflow.

  • Third-Party API Agents

    • Use model APIs as part of backend orchestration or combine with external calls.

  • Agent Framework

    • Directly call ChainOpera’s unified model endpoint from your code using the provided API key.
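For the Agent Framework path, a minimal helper might look like the following sketch. The endpoint URL and the `model`/`input` payload fields follow the Python example later in this guide; treat them as assumptions and confirm both against the current API reference.

```python
import requests

# Assumed unified endpoint; verify against the current API reference.
CHAINOPERA_URL = "https://api.chainopera.ai/v1/model"

def build_payload(model: str, prompt: str) -> dict:
    """Payload shape assumed from this guide's example call."""
    return {"model": model, "input": prompt}

def call_model(model: str, prompt: str, api_key: str) -> dict:
    """POST one prompt to the unified endpoint and return the parsed JSON."""
    response = requests.post(
        CHAINOPERA_URL,
        json=build_payload(model, prompt),
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    response.raise_for_status()  # surface HTTP errors instead of parsing an error body
    return response.json()
```

Because the same helper works for any supported model, an agent can switch between, say, `gpt-4o-mini` and `claude-3.5-sonnet` by changing a single argument.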


API Key Management

  1. Click “Get API Key” at the top-right of the platform.

  2. Use the unified key to access all supported models and endpoints.

  3. Keep your key secure and do not hardcode it into public repositories.
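One simple way to keep the key out of source control is to read it from an environment variable at startup. The variable name `CHAINOPERA_API_KEY` below is only a convention for this sketch, not something the platform requires.

```python
import os

def load_api_key(env_var: str = "CHAINOPERA_API_KEY") -> str:
    """Fetch the API key from the environment so it is never hardcoded."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before running the agent.")
    return key
```

Set the variable in your shell or deployment secrets manager, then call `load_api_key()` wherever the key is needed.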

Example API Call (Python):

import requests

url = "https://api.chainopera.ai/v1/model"
headers = {"Authorization": "Bearer YOUR_API_KEY"}
payload = {
    "model": "gpt-4o-mini",
    "input": "Explain the difference between supervised and unsupervised learning."
}

response = requests.post(url, json=payload, headers=headers, timeout=30)
response.raise_for_status()  # fail fast on HTTP errors
print(response.json())

Best Practices

  • Choose the right model based on complexity and cost:

    • Mini models for lightweight tasks

    • Full LLMs for reasoning, analysis, or creative generation

    • Multimodal models if you need vision or structured outputs

  • Use workflows to combine multiple model calls for step-by-step reasoning.

  • Monitor usage to optimize performance and cost.

  • Keep your API key private to prevent unauthorized access.
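The workflow tip above can be sketched in plain Python: a cost-efficient model produces an intermediate summary, and a stronger model reasons over it. The model names and the `call` function are placeholders for whatever client you use against the unified endpoint.

```python
from typing import Callable

def summarize_then_answer(document: str, question: str,
                          call: Callable[[str, str], str]) -> str:
    """Two-step pattern: cheap model condenses, stronger model reasons."""
    # Step 1: a mini model produces a compact summary of the input.
    summary = call("gpt-4o-mini", f"Summarize the following:\n{document}")
    # Step 2: a larger model answers the question using only the summary.
    return call(
        "claude-3.5-sonnet",
        f"Using this summary:\n{summary}\n\nAnswer the question: {question}",
    )
```

The same pattern extends to longer chains; each step keeps the next model's input small, which helps both latency and cost.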
