Agent Servers

Agent Servers allow developers to deploy and run custom AI agents with dedicated compute resources.

They are designed for advanced use cases, including:

  • Long-running or persistent agents

  • Framework-based custom agents (e.g., AgentOpera, AutoGen)

  • Agents requiring Dockerized environments and external dependencies


What Are Agent Servers?

An Agent Server is a containerized runtime that hosts your agent:

  • Runs continuously, independent of the creation interface

  • Supports Docker images for fully customized runtime environments

  • Suitable for complex logic, multi-step orchestration, or real-time services

Current Resource Limits:

  • Free tier: 2 CPU cores & 4 GB memory per server

  • Higher resource allocations (more CPU & memory) will be available for premium users in the future

Currently Supported Frameworks:

  • AgentOpera

  • AutoGen

    (Future: LangChain, CrewAI)


Creating an Agent Server

  1. Navigate to “Agent Servers” in the sidebar.

  2. Click “Create Agent Server”.

  3. Fill in the fields as shown on the creation page:

Basic Information

  • Name (required) – Clear and descriptive name for the server.

  • Description (optional) – Explains the purpose or functionality.

Docker Image Setup

  • Image (required) – Your custom Docker image for the agent.

    • Example: username/my-agent:latest

  • Server Port (default: 8080) – Port exposed by your container.

  • Container Command & Args (optional)

    • Startup command and arguments if different from the Dockerfile default.

  • Private Image Registry Auth (optional)

    • Enter a username and password for private Docker Hub repositories.

Agent Framework

  • Select the framework your agent uses (e.g., AgentOpera or AutoGen).
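
As a concrete illustration, the sketch below shows the kind of AutoGen-based agent logic your container might run. The model name, API key, and agent names are placeholders, and the classic AutoGen two-agent pattern is just one of several ways to structure the agent.

```python
# Minimal AutoGen sketch of the agent logic an Agent Server might host.
# The model name and API key are placeholders; replace them with your own.
import autogen

llm_config = {
    "config_list": [
        {"model": "gpt-4o", "api_key": "YOUR_API_KEY"},  # placeholder credentials
    ]
}

# The assistant agent generates responses; the user proxy relays requests to it.
assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",      # run unattended inside the container
    code_execution_config=False,   # no local code execution in this sketch
)

# One request/response round trip; in a deployed server this would typically be
# triggered from the Main Service Path handler (e.g., POST /predict).
user_proxy.initiate_chat(assistant, message="Summarize what this agent can do.")
```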

Service Paths

  • Readiness Probe Path (optional) – Checks if your agent is ready.

    • Example: GET /ready

  • Liveness Probe Path (optional) – Checks if the service is still running.

    • Example: GET /live

  • Main Service Path (required) – Endpoint that processes requests.

    • Example: POST /predict
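
Tying these fields together, the following is a minimal sketch of an agent service that listens on the default Server Port (8080) and serves the example paths above. FastAPI and uvicorn are assumptions here; any HTTP framework that exposes the same paths works equally well.

```python
# Minimal agent service matching the fields above: it binds to port 8080 and
# exposes GET /ready, GET /live, and POST /predict.
from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn

app = FastAPI()

class PredictRequest(BaseModel):
    message: str  # hypothetical request schema; define your own fields

@app.get("/ready")
def ready():
    # Readiness probe: return 200 once models and clients are initialized.
    return {"status": "ready"}

@app.get("/live")
def live():
    # Liveness probe: return 200 while the process is healthy.
    return {"status": "alive"}

@app.post("/predict")
def predict(req: PredictRequest):
    # Main Service Path: invoke your agent/framework logic here (e.g., AutoGen).
    return {"reply": f"echo: {req.message}"}

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the platform can reach the container on the Server Port.
    uvicorn.run(app, host="0.0.0.0", port=8080)
```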


Deploying Your Agent Server

  1. After completing the form, click “Create & Deploy”.

  2. The platform will:

    • Launch your container with the specified CPU & memory limits

    • Expose the service on the configured port

    • Start real-time logging for monitoring and debugging
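
Container platforms typically surface whatever your process writes to stdout/stderr as the live logs, so logging key steps to standard output is usually all that is required for them to appear in the dashboard. A minimal sketch using Python's standard logging module:

```python
# Log to stdout so the platform's real-time log view can pick the messages up.
import logging
import sys

logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s - %(message)s",
)
log = logging.getLogger("my-agent")

log.info("agent server starting up")
log.info("framework and model clients initialized")
log.info("listening for requests on port 8080")
```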


Monitoring and Maintenance

Once deployed, the Agent Servers dashboard allows you to:

  • View live logs to debug initialization and requests

  • Check resource usage (CPU / Memory)

  • Stop / restart the server when necessary


Publishing Agents with Agent Servers

To make your server-backed agent available to users:

  1. Go to Agent Management and create or select a draft agent.

  2. Link the Agent Server to the agent.

  3. Complete all required agent fields:

    • Icon (avatar)

    • Name

    • Category

    • Short & full descriptions

    • Intent

  4. Submit for review.

  5. Once approved, your agent will be live on ChainOpera AI Terminal and listed in the public marketplace.


Best Practices

  • Test locally first to verify that the container runs correctly (see the smoke-test sketch after this list).

  • Provide readiness and liveness probes for stability and uptime monitoring.

  • Keep images lightweight to reduce startup time and resource usage.

  • Log key steps in your code to simplify debugging and troubleshooting.

  • Monitor usage against the free-tier limits (2 CPU cores / 4 GB memory) to ensure stable performance.
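
A quick local smoke test might look like the sketch below, assuming the image is running locally with its port mapped (for example, docker run -p 8080:8080 username/my-agent:latest). The paths mirror the Service Paths examples above, and the requests library is an assumption.

```python
# Local smoke test against a container mapped to localhost:8080.
import requests

BASE = "http://localhost:8080"

# The readiness probe should return HTTP 200 once the agent has finished starting.
ready = requests.get(f"{BASE}/ready", timeout=5)
print("readiness:", ready.status_code)

# The main service path should accept a sample request and return a payload.
resp = requests.post(f"{BASE}/predict", json={"message": "hello"}, timeout=30)
print("predict:", resp.status_code, resp.json())
```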
