Overview
The Agents API provides direct access to individual AI agents in your organization. Unlike the Chat Completions API which routes requests through an orchestrator, this API allows you to invoke specific agents directly for targeted tasks.
Direct Agent Invocation: Use this API when you know exactly which agent should handle your request. For intelligent routing across multiple agents, use the Chat Completions API instead.
Authentication
All endpoints require API key authentication with specific scopes:
| Endpoint | Required Scope |
|---|---|
| `GET /agents` | `agents:list` |
| `POST /agents/{id}/completions` | `agents:invoke` |

Default Scopes: The `agents:list` and `agents:invoke` scopes are enabled by default when creating API keys. No additional configuration is needed.
Include your API key in the `X-API-Key` header:

```
X-API-Key: jns_live_YOUR_API_KEY_HERE
```
See Authentication for details.
Agent Types
Agents in the system are categorized by their execution pattern:
| Type | Description |
|---|---|
| `LLM_AGENT` | Standard language model agent that processes requests directly |
| `SEQUENTIAL_AGENT` | Executes sub-agents in sequence, passing output between them |
| `PARALLEL_AGENT` | Executes sub-agents simultaneously and combines results |
| `LOOP_AGENT` | Iteratively executes sub-agents based on conditions |
LLM Agents Only: Only LLM_AGENT type agents can be invoked directly via this API. Sequential, Parallel, and Loop agents are container agents that orchestrate other agents.
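Since only `LLM_AGENT` agents accept direct invocation, a client can filter the listing before calling the completions endpoint. A minimal sketch (the `invokable_agents` helper is illustrative, not part of the API):

```python
def invokable_agents(agents):
    """Return only agents that can be invoked directly via this API:
    active agents of type LLM_AGENT."""
    return [
        a for a in agents
        if a["agent_type"] == "LLM_AGENT" and a["is_active"]
    ]

# Example with the shape returned by GET /agents
agents = [
    {"agent_id": "brand-analyzer", "agent_type": "LLM_AGENT", "is_active": True},
    {"agent_id": "campaign-flow", "agent_type": "SEQUENTIAL_AGENT", "is_active": True},
]
print([a["agent_id"] for a in invokable_agents(agents)])  # → ['brand-analyzer']
```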
List Agents
Retrieve a list of available agents in your organization.
Endpoint
```
GET https://api.junis.ai/api/external/agents
```
Query Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `agent_type` | string | No | Filter by type: `LLM_AGENT`, `SEQUENTIAL_AGENT`, `PARALLEL_AGENT`, `LOOP_AGENT` |
| `is_active` | boolean | No | Filter by active status (`true` or `false`) |
Request Example
```bash
curl -X GET "https://api.junis.ai/api/external/agents?agent_type=LLM_AGENT&is_active=true" \
  -H "X-API-Key: jns_live_YOUR_API_KEY_HERE"
```

```python
import requests

response = requests.get(
    "https://api.junis.ai/api/external/agents",
    headers={"X-API-Key": "jns_live_YOUR_API_KEY_HERE"},
    params={"agent_type": "LLM_AGENT", "is_active": True},
)
print(response.json())
```
Response
Status Code: 200 OK
```json
{
  "agents": [
    {
      "agent_id": "brand-analyzer",
      "name": "Brand Analyzer",
      "agent_type": "LLM_AGENT",
      "description": "Analyzes brand data and generates insights",
      "is_active": true,
      "model": "anthropic/claude-sonnet-4-5-20250929",
      "has_tools": true,
      "has_mcp": false
    },
    {
      "agent_id": "content-writer",
      "name": "Content Writer",
      "agent_type": "LLM_AGENT",
      "description": "Creates marketing content based on brand guidelines",
      "is_active": true,
      "model": "anthropic/claude-sonnet-4-5-20250929",
      "has_tools": false,
      "has_mcp": true
    }
  ],
  "total": 2,
  "organization_id": "org-abc123"
}
```
Response Fields
| Field | Type | Description |
|---|---|---|
| `agents` | array | List of agent objects |
| `total` | integer | Total number of agents matching the filter |
| `organization_id` | string | Organization ID |
Agent Object Fields
| Field | Type | Description |
|---|---|---|
| `agent_id` | string | Unique agent identifier |
| `name` | string | Display name of the agent |
| `agent_type` | string | Type of agent (`LLM_AGENT`, `SEQUENTIAL_AGENT`, etc.) |
| `description` | string | Description of the agent's purpose |
| `is_active` | boolean | Whether the agent is currently active |
| `model` | string | LLM model used by the agent (`null` for non-LLM agents) |
| `has_tools` | boolean | Whether the agent has tools configured |
| `has_mcp` | boolean | Whether the agent has MCP integrations |
Invoke Agent (Non-Streaming)
Send a message to a specific agent and receive the complete response.
Endpoint
```
POST https://api.junis.ai/api/external/agents/{agent_id}/completions
```
Path Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `agent_id` | string | Yes | The agent ID to invoke |
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| `message` | string | Yes | The message to send to the agent |
| `session_id` | string | No | Existing session ID to continue a conversation |
| `stream` | boolean | No | Set to `false` for non-streaming (default: `false`) |
| `temperature` | number | No | Model temperature (0.0-2.0) |
| `max_tokens` | integer | No | Maximum tokens in the response |
Request Example
```bash
curl -X POST "https://api.junis.ai/api/external/agents/brand-analyzer/completions" \
  -H "X-API-Key: jns_live_YOUR_API_KEY_HERE" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Analyze the brand positioning of Nike",
    "stream": false
  }'
```

```python
import requests

response = requests.post(
    "https://api.junis.ai/api/external/agents/brand-analyzer/completions",
    headers={
        "X-API-Key": "jns_live_YOUR_API_KEY_HERE",
        "Content-Type": "application/json",
    },
    json={
        "message": "Analyze the brand positioning of Nike",
        "stream": False,
    },
)
print(response.json())
```
Response
Status Code: 200 OK
```json
{
  "id": "agentcmpl-550e8400",
  "agent_id": "brand-analyzer",
  "agent_name": "Brand Analyzer",
  "session_id": "sess_abc123def456",
  "response": "Nike's brand positioning centers on empowerment and athletic excellence...",
  "created": 1733480400,
  "model": "anthropic/claude-sonnet-4-5-20250929"
}
```
Response Fields
| Field | Type | Description |
|---|---|---|
| `id` | string | Unique response identifier (format: `agentcmpl-{uuid}`) |
| `agent_id` | string | ID of the agent that processed the request |
| `agent_name` | string | Display name of the agent |
| `session_id` | string | Session ID (can be used for follow-up messages) |
| `response` | string | The agent's complete response |
| `created` | integer | Unix timestamp of response creation |
| `model` | string | LLM model used for the response |
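The optional `temperature` and `max_tokens` fields from the request body table can be combined with `session_id` in a single call. A hedged sketch (the parameter values and the `invoke` helper are illustrative):

```python
API_KEY = "jns_live_YOUR_API_KEY_HERE"
URL = "https://api.junis.ai/api/external/agents/brand-analyzer/completions"

payload = {
    "message": "Summarize the analysis in three bullet points",
    "session_id": "sess_abc123def456",  # continue an earlier conversation
    "temperature": 0.2,                 # lower values give more deterministic output
    "max_tokens": 500,                  # cap the response length
    "stream": False,
}

def invoke(url=URL, api_key=API_KEY, body=payload):
    """POST the completion request and return the parsed JSON body."""
    import requests  # imported here so the payload above can be inspected without the dependency
    r = requests.post(
        url,
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        json=body,
        timeout=120,
    )
    r.raise_for_status()
    return r.json()

# result = invoke()
# print(result["response"])
```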
Invoke Agent (Streaming)
Send a message and receive the response as a real-time stream using Server-Sent Events (SSE).
Endpoint
```
POST https://api.junis.ai/api/external/agents/{agent_id}/completions
```
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| `message` | string | Yes | The message to send to the agent |
| `session_id` | string | No | Existing session ID to continue a conversation |
| `stream` | boolean | Yes | Set to `true` for streaming |
| `temperature` | number | No | Model temperature (0.0-2.0) |
| `max_tokens` | integer | No | Maximum tokens in the response |
Request Example
```bash
curl -X POST "https://api.junis.ai/api/external/agents/brand-analyzer/completions" \
  -H "X-API-Key: jns_live_YOUR_API_KEY_HERE" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{
    "message": "Analyze the brand positioning of Nike",
    "stream": true
  }'
```

```python
import requests

response = requests.post(
    "https://api.junis.ai/api/external/agents/brand-analyzer/completions",
    headers={
        "X-API-Key": "jns_live_YOUR_API_KEY_HERE",
        "Content-Type": "application/json",
        "Accept": "text/event-stream",
    },
    json={
        "message": "Analyze the brand positioning of Nike",
        "stream": True,
    },
    stream=True,
)
for line in response.iter_lines():
    if line:
        print(line.decode("utf-8"))
```
```javascript
const response = await fetch(
  'https://api.junis.ai/api/external/agents/brand-analyzer/completions',
  {
    method: 'POST',
    headers: {
      'X-API-Key': 'jns_live_YOUR_API_KEY_HERE',
      'Content-Type': 'application/json',
      'Accept': 'text/event-stream'
    },
    body: JSON.stringify({
      message: 'Analyze the brand positioning of Nike',
      stream: true
    })
  }
);

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const chunk = decoder.decode(value);
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice(6);
    if (payload === '[DONE]') continue; // end-of-stream marker
    const data = JSON.parse(payload);
    const content = data.choices?.[0]?.delta?.content;
    if (content) process.stdout.write(content);
  }
}
```
The streaming response uses an OpenAI-compatible format with a `choices` array:

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique chunk identifier |
| `object` | string | Always `"agent.completion.chunk"` |
| `created` | integer | Unix timestamp |
| `agent_id` | string | Agent ID |
| `agent_name` | string | Agent display name |
| `model` | string | LLM model used |
| `choices` | array | Array containing delta content |
Streaming Response Example
```
data: {"id": "agentcmpl-abc123", "object": "agent.completion.chunk", "created": 1733480400, "agent_id": "brand-analyzer", "agent_name": "Brand Analyzer", "session_id": "sess_abc123", "model": "anthropic/claude-sonnet-4-5-20250929", "choices": [{"index": 0, "delta": {"role": "assistant"}, "finish_reason": null}]}

data: {"id": "agentcmpl-abc123", "object": "agent.completion.chunk", "created": 1733480400, "agent_id": "brand-analyzer", "agent_name": "Brand Analyzer", "model": "anthropic/claude-sonnet-4-5-20250929", "choices": [{"index": 0, "delta": {"content": "Nike's "}, "finish_reason": null}]}

data: {"id": "agentcmpl-abc123", "object": "agent.completion.chunk", "created": 1733480400, "agent_id": "brand-analyzer", "agent_name": "Brand Analyzer", "model": "anthropic/claude-sonnet-4-5-20250929", "choices": [{"index": 0, "delta": {"content": "brand positioning "}, "finish_reason": null}]}

data: {"id": "agentcmpl-abc123", "object": "agent.completion.chunk", "created": 1733480400, "agent_id": "brand-analyzer", "agent_name": "Brand Analyzer", "model": "anthropic/claude-sonnet-4-5-20250929", "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]}

data: [DONE]
```
Chunk Object Fields
| Field | Type | Description |
|---|---|---|
| `choices[].index` | integer | Always `0` |
| `choices[].delta.role` | string | `"assistant"` (first chunk only) |
| `choices[].delta.content` | string | Incremental text content |
| `choices[].finish_reason` | string | `null` during streaming, `"stop"` on completion |
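The chunks can be reassembled into the full response by concatenating each `delta.content`. A minimal accumulator for the documented chunk shape (pure parsing, no network; the `accumulate_sse` helper is illustrative):

```python
import json

def accumulate_sse(lines):
    """Reassemble the full response text from SSE 'data:' lines
    in the agent.completion.chunk format shown above."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":  # stream terminator
            break
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if "content" in delta:
                text.append(delta["content"])
    return "".join(text)

# Feeding it a stream shaped like the example above:
sample = [
    'data: {"choices": [{"index": 0, "delta": {"role": "assistant"}, "finish_reason": null}]}',
    'data: {"choices": [{"index": 0, "delta": {"content": "Nike\'s "}, "finish_reason": null}]}',
    'data: {"choices": [{"index": 0, "delta": {"content": "brand positioning "}, "finish_reason": null}]}',
    'data: {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]}',
    'data: [DONE]',
]
print(accumulate_sse(sample))  # → "Nike's brand positioning "
```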
Session Continuity
You can maintain conversation context by reusing the session_id from previous responses:
```python
import requests

API_KEY = "jns_live_YOUR_API_KEY_HERE"
BASE_URL = "https://api.junis.ai/api/external/agents"

# First message
response1 = requests.post(
    f"{BASE_URL}/brand-analyzer/completions",
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    json={"message": "Tell me about Nike's brand"},
)
session_id = response1.json()["session_id"]

# Follow-up message using the same session
response2 = requests.post(
    f"{BASE_URL}/brand-analyzer/completions",
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    json={
        "message": "How does it compare to Adidas?",
        "session_id": session_id,  # continue the conversation
    },
)
print(response2.json()["response"])
```
Error Responses
Common Errors
400 Bad Request - Invalid Agent Type Filter
```json
{
  "detail": "Invalid agent_type 'unknown'. Must be one of: ['LLM_AGENT', 'SEQUENTIAL_AGENT', 'PARALLEL_AGENT', 'LOOP_AGENT']"
}
```
400 Bad Request - Inactive Agent
```json
{
  "error": {
    "message": "Agent 'brand-analyzer' is not active",
    "type": "invalid_request",
    "code": "agent_inactive"
  }
}
```
402 Payment Required - Insufficient Credits
```json
{
  "error": {
    "message": "크레딧이 부족합니다. 현재 잔액: $0.50",
    "message_en": "Insufficient credits. Balance: $0.50",
    "type": "insufficient_credits",
    "code": "insufficient_credits"
  },
  "credit_balance": 0.5,
  "effective_balance": 0.5
}
```
403 Forbidden - No Subscription
```json
{
  "error": {
    "message": "Basic or Pro subscription required. Please visit /subscription to start a subscription.",
    "message_ko": "Basic 또는 Pro 구독이 필요합니다. 구독을 시작하려면 /subscription 페이지를 방문하세요.",
    "type": "subscription_required",
    "code": "no_subscription"
  }
}
```
403 Forbidden - Insufficient Scopes
```json
{
  "error": "insufficient_scopes",
  "message": "API key does not have required permissions",
  "required_scopes": ["agents:invoke"],
  "missing_scopes": ["agents:invoke"],
  "available_scopes": ["orchestrator:invoke", "sessions:read"]
}
```
404 Not Found - Agent Not Found
```json
{
  "error": {
    "message": "Agent 'invalid-agent-id' not found in organization",
    "type": "not_found",
    "code": "agent_not_found"
  }
}
```
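Note that the error bodies above come in three shapes: most carry a nested `error` object with `message`/`type`/`code`, the insufficient-scopes response is flat (`error` is a string), and validation failures use a top-level `detail`. A hedged sketch of a helper that extracts a displayable message from any of them (the `error_message` name is illustrative):

```python
def error_message(body):
    """Best-effort extraction of a human-readable message from the
    error shapes documented above."""
    err = body.get("error")
    if isinstance(err, dict):  # nested shape, e.g. agent_inactive, insufficient_credits
        return err.get("message_en") or err.get("message", "Unknown error")
    if isinstance(err, str):   # flat shape used by insufficient_scopes
        return body.get("message", err)
    return body.get("detail", "Unknown error")  # validation errors use top-level "detail"

print(error_message({"error": {"message": "Agent 'brand-analyzer' is not active"}}))
# → Agent 'brand-analyzer' is not active
```

Preferring `message_en` over `message` keeps log output in English when the API returns a localized primary message, as in the 402 example.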
Best Practices
Agent Discovery
Use the List Agents endpoint to discover available agents and their capabilities:
```python
import requests

# Get all active LLM agents
response = requests.get(
    "https://api.junis.ai/api/external/agents",
    headers={"X-API-Key": api_key},
    params={"agent_type": "LLM_AGENT", "is_active": True},
)

for agent in response.json()["agents"]:
    print(f"{agent['name']}: {agent['description']}")
    print(f"  - Model: {agent['model']}")
    print(f"  - Has Tools: {agent['has_tools']}")
    print(f"  - Has MCP: {agent['has_mcp']}")
```
Handling Streaming Responses
For streaming responses, implement proper SSE parsing against the `choices`-based chunk format documented above:
```python
import json
import requests

response = requests.post(
    f"https://api.junis.ai/api/external/agents/{agent_id}/completions",
    headers={"X-API-Key": api_key, "Content-Type": "application/json"},
    json={"message": prompt, "stream": True},
    stream=True,
)

full_response = ""
for line in response.iter_lines():
    if not line:
        continue
    line = line.decode("utf-8")
    if not line.startswith("data: "):
        continue
    payload = line[6:]
    if payload == "[DONE]":  # end-of-stream marker
        break
    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        continue
    choice = data["choices"][0]
    content = choice["delta"].get("content")
    if content:
        full_response += content
    if choice["finish_reason"] == "stop":
        break
```
Error Handling
Implement comprehensive error handling for production use:
```python
import requests

class PaymentError(Exception):
    """Raised when the account lacks credits (HTTP 402)."""

def invoke_agent(agent_id, message, api_key):
    try:
        response = requests.post(
            f"https://api.junis.ai/api/external/agents/{agent_id}/completions",
            headers={"X-API-Key": api_key, "Content-Type": "application/json"},
            json={"message": message},
            timeout=120,
        )
    except requests.Timeout:
        raise TimeoutError("Request timed out after 120 seconds")

    if response.status_code == 200:
        return response.json()
    elif response.status_code == 402:
        raise PaymentError("Insufficient credits")
    elif response.status_code == 403:
        raise PermissionError("Subscription or API key permissions insufficient")
    elif response.status_code == 404:
        raise ValueError(f"Agent not found: {agent_id}")
    else:
        raise Exception(f"API error: {response.json().get('detail')}")
```
Session Management
Efficiently manage sessions for multi-turn conversations:
```python
import requests

class AgentSession:
    def __init__(self, agent_id, api_key):
        self.agent_id = agent_id
        self.api_key = api_key
        self.session_id = None

    def send_message(self, message):
        payload = {"message": message}
        if self.session_id:
            payload["session_id"] = self.session_id
        response = requests.post(
            f"https://api.junis.ai/api/external/agents/{self.agent_id}/completions",
            headers={"X-API-Key": self.api_key, "Content-Type": "application/json"},
            json=payload,
        )
        result = response.json()
        self.session_id = result.get("session_id")
        return result["response"]

# Usage
session = AgentSession("brand-analyzer", api_key)
print(session.send_message("Tell me about Nike"))
print(session.send_message("Compare it to Adidas"))  # continues the conversation
```