# Z.AI Integration via LowCodeAPI
## Overview
Z.AI is a comprehensive AI platform providing advanced models for code completion, chat, image generation, video generation, audio transcription, and agent capabilities through RESTful APIs. The platform offers:
- **LLM Models** - GLM-4.7, GLM-4.6, GLM-4.5 series for chat and code completion
- **Image Generation** - Create images from text prompts using CogView models
- **Video Generation** - Asynchronous video generation from text
- **Audio Transcription** - Transcribe audio files to text
- **Agent Capabilities** - Build AI agents with tool use and conversation history
- **Utility Tools** - Tokenizer, web search, and web reader
## Base Endpoint
```
https://api.lowcodeapi.com/zai/
```
## Authentication
LowCodeAPI handles authentication automatically. You only need to:
1. **Sign up** at [Z.AI](https://z.ai)
2. **Get your API Key** from [API Key Management](https://z.ai/manage-apikey/apikey-list)
3. **Connect your account** in LowCodeAPI dashboard
4. **Use your `api_token`** in all requests
The `api_token` is your LowCodeAPI authentication token. LowCodeAPI will automatically:
- Fetch your Z.AI API credentials
- Apply them to each request
- Handle Bearer token authentication
**Auth Type**: TOKEN (Bearer Token)
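With that in place, every full URL is simply the base endpoint, the provider path, and the `api_token` query parameter. A minimal sketch of composing such a URL (the `build_url` helper is a hypothetical name, not part of any SDK):

```python
from urllib.parse import urlencode

BASE_URL = "https://api.lowcodeapi.com/zai"

def build_url(path: str, api_token: str, **params: str) -> str:
    """Compose a full LowCodeAPI URL: base endpoint + provider path + query string."""
    query = urlencode({"api_token": api_token, **params})
    return f"{BASE_URL}{path}?{query}"

# For example, the chat completions endpoint:
url = build_url("/api/paas/v4/chat/completions", "YOUR_API_TOKEN")
```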
## API Categories
- **Chat** - 2 endpoints for chat completions and code completion
- **Image** - 1 endpoint for image generation
- **Video** - 2 endpoints for video generation and status retrieval
- **Audio** - 1 endpoint for audio transcription
- **Agent** - 4 endpoints for agent chat, async results, conversation history, and file upload
- **Tool** - 3 endpoints for tokenizer, web search, and web reader
## Common Endpoints
### Category: Chat
#### Create Chat Completion
**Method**: `POST` | **LowCodeAPI Path**: `/api/paas/v4/chat/completions`
**Full URL**:
```
https://api.lowcodeapi.com/zai/api/paas/v4/chat/completions?api_token={api_token}
```
**Description**: Create a chat completion in which the model generates an AI reply to the given conversation messages. Supports multimodal inputs (text, images, audio, video, files), configurable parameters (such as temperature, max tokens, and tool use), and both streaming and non-streaming output modes.
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `model` | string | Yes | The model code to be called. GLM-4.7 is the latest flagship model series: foundational models designed specifically for agent applications. Examples: glm-4.7, glm-4.6, glm-4.5, glm-4.5-air, glm-4.5-x, glm-4.5-airx, glm-4.5-flash |
| `messages` | array | Yes | The current conversation message list as the model's prompt input, provided in JSON array format. Possible message types include system messages, user messages, assistant messages, and tool messages |
| `temperature` | number | No | Sampling temperature; controls the randomness of the output. Must be a number within the range [0.0, 1.0]. Default: 1 |
| `top_p` | number | No | Another method of temperature sampling, value range is [0.01, 1.0]. Default: 0.95 |
| `max_tokens` | number | No | The maximum number of tokens for model output. The GLM-4.7 and GLM-4.6 series support a 128K maximum output; the GLM-4.5 series supports a 96K maximum output. Default: 1024 |
| `stream` | boolean | No | Set to false (or omit) for synchronous calls: the model returns all content at once after generation completes. If set to true, the model returns generated content in chunks via a standard Event Stream. Default: false |
| `tools` | array | No | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported |
| `tool_choice` | string | No | Controls how the model selects a tool. Default: auto |
| `response_format` | object | No | Specifies the response format of the model. Defaults to text. Supports two formats: { "type": "text" } (plain text mode, returns natural language text) and { "type": "json_object" } (JSON mode, returns valid JSON data) |
| `thinking` | object | No | Only supported by the GLM-4.5 series and higher models. Controls whether the model enables chain-of-thought reasoning |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/zai/api/paas/v4/chat/completions?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "glm-4.7",
"messages": [
{"role": "user", "content": "Hello, how are you?"}
],
"temperature": 0.7,
"max_tokens": 1024
}'
```
**Example Response**:
```json
{
"data": {
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1699000000,
"model": "glm-4.7",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "I'm doing well, thank you for asking!"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 10,
"completion_tokens": 20,
"total_tokens": 30
}
}
}
```
**Official Documentation**: [Create Chat Completion](https://docs.z.ai/api-reference/llm/chat-completion)
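The `tools` and `tool_choice` fields above enable function calling. Below is a sketch of a request payload with a single function tool. The nested function schema follows the common OpenAI-style shape, which is an assumption here, and `get_weather` is a hypothetical function; verify the exact schema against the official documentation linked above.

```python
import json

payload = {
    "model": "glm-4.7",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                # Hypothetical function, for illustration only
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}

# Serialize as the POST body (Content-Type: application/json)
body = json.dumps(payload)
```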
---
#### Code Completion
**Method**: `POST` | **LowCodeAPI Path**: `/api/coding/paas/v4/chat/completions`
**Full URL**:
```
https://api.lowcodeapi.com/zai/api/coding/paas/v4/chat/completions?api_token={api_token}
```
**Description**: Code Completion generates code snippets from a given prompt, giving developers a quick way to produce code through the same chat-style interface.
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `model` | string | Yes | The model code to be called. GLM-4.7 is the latest flagship model series. Examples: glm-4.7, glm-4.6, glm-4.5 |
| `messages` | array | Yes | The current conversation message list as the model's prompt input |
| `temperature` | number | No | Sampling temperature; controls the randomness of the output. Must be a number within the range [0.0, 1.0] |
| `max_tokens` | number | No | The maximum number of tokens for model output |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/zai/api/coding/paas/v4/chat/completions?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "glm-4.7",
"messages": [
{"role": "user", "content": "Write a Python function to calculate fibonacci numbers"}
]
}'
```
**Official Documentation**: [Code Completion](https://docs.z.ai/api-reference/llm/chat-completion)
---
### Category: Image
#### Generate Image
**Method**: `POST` | **LowCodeAPI Path**: `/api/paas/v4/generate/image`
**Full URL**:
```
https://api.lowcodeapi.com/zai/api/paas/v4/generate/image?api_token={api_token}
```
**Description**: Generate images based on text prompts.
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `model` | string | Yes | The model code to be called for image generation. Example: cogview-4-250304 |
| `prompt` | string | Yes | Text prompt describing the image to generate |
| `size` | string | No | The size of the generated image. Default: 1024x1024. Other supported sizes: 768x1344, 864x1152, 1344x768, 1152x864, 1440x720, 720x1440 |
| `n` | number | No | Number of images to generate. Default: 1 |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/zai/api/paas/v4/generate/image?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "cogview-4-250304",
"prompt": "A beautiful landscape with a river and mountains",
"size": "1024x1024",
"n": 1
}'
```
**Example Response**:
```json
{
"data": {
"created": 1699000000,
"data": [
{
"url": "https://example.com/generated-image.png"
}
]
}
}
```
**Official Documentation**: [Generate Image](https://docs.z.ai/api-reference/image/generate-image)
---
### Category: Video
#### Generate Video
**Method**: `POST` | **LowCodeAPI Path**: `/api/paas/v4/videos/generations`
**Full URL**:
```
https://api.lowcodeapi.com/zai/api/paas/v4/videos/generations?api_token={api_token}
```
**Description**: Generate video asynchronously based on text prompts.
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `model` | string | Yes | The model code to be called for video generation |
| `prompt` | string | Yes | Text prompt describing the video to generate |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/zai/api/paas/v4/videos/generations?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "video-model",
"prompt": "A serene ocean sunset with gentle waves"
}'
```
**Example Response**:
```json
{
"data": {
"id": "video-gen-123",
"status": "processing",
"message": "Video generation started"
}
}
```
**Official Documentation**: [Generate Video](https://docs.z.ai/api-reference/video/generate-video)
---
#### Retrieve Video Result Using ID
**Method**: `GET` | **LowCodeAPI Path**: `/api/paas/v4/async-result/id`
**Full URL**:
```
https://api.lowcodeapi.com/zai/api/paas/v4/async-result/id?id={id}&api_token={api_token}
```
**Description**: Retrieve the status and result of an asynchronous video generation task.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID returned from the video generation request |
**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/zai/api/paas/v4/async-result/id?id=video-gen-123&api_token=YOUR_API_TOKEN"
```
**Example Response**:
```json
{
"data": {
"id": "video-gen-123",
"status": "completed",
"url": "https://example.com/generated-video.mp4"
}
}
```
**Official Documentation**: [Get Video Status](https://docs.z.ai/api-reference/video/get-video-status)
---
### Category: Audio
#### Audio Transcriptions
**Method**: `POST` | **LowCodeAPI Path**: `/api/paas/v4/audio/transcriptions`
**Full URL**:
```
https://api.lowcodeapi.com/zai/api/paas/v4/audio/transcriptions?api_token={api_token}
```
**Description**: Transcribes audio into the input language.
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `file` | file | Yes | The audio file to transcribe |
| `model` | string | Yes | The model code to be called for audio transcription |
| `language` | string | No | The language of the input audio |
| `prompt` | string | No | Optional text to guide the model's style or continue a previous audio segment |
| `response_format` | string | No | The format of the transcript output. Options: json, text, srt, verbose_json, vtt |
| `temperature` | number | No | The sampling temperature, between 0 and 1 |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/zai/api/paas/v4/audio/transcriptions?api_token=YOUR_API_TOKEN" \
-F "[email protected]" \
-F "model=whisper-model"
```
**Example Response**:
```json
{
"data": {
"text": "This is the transcribed text from the audio file.",
"language": "en"
}
}
```
**Official Documentation**: [Audio Transcriptions](https://docs.z.ai/api-reference/audio/audio-transcriptions)
---
### Category: Agent
#### Agent Chat
**Method**: `POST` | **LowCodeAPI Path**: `/v1/agents`
**Full URL**:
```
https://api.lowcodeapi.com/zai/v1/agents?api_token={api_token}
```
**Description**: Chat with an AI agent.
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `model` | string | Yes | The model code to be called for agent chat |
| `messages` | array | Yes | The conversation messages |
| `agent_id` | string | No | The agent ID to use for the chat |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/zai/v1/agents?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "glm-4.7",
"messages": [
{"role": "user", "content": "Help me plan a trip"}
],
"agent_id": "agent-123"
}'
```
**Official Documentation**: [Agent Chat](https://docs.z.ai/api-reference/agents/agent)
---
#### Retrieve Agent Result
**Method**: `POST` | **LowCodeAPI Path**: `/api/v1/agents/async-result`
**Full URL**:
```
https://api.lowcodeapi.com/zai/api/v1/agents/async-result?api_token={api_token}
```
**Description**: Retrieve the result of an asynchronous agent task.
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `task_id` | string | Yes | The task ID returned from an asynchronous agent request |
**Official Documentation**: [Get Async Result](https://docs.z.ai/api-reference/agents/get-async-result)
---
#### Conversation History
**Method**: `POST` | **LowCodeAPI Path**: `/api/v1/agents/conversation`
**Full URL**:
```
https://api.lowcodeapi.com/zai/api/v1/agents/conversation?api_token={api_token}
```
**Description**: Retrieve conversation history for an agent.
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `conversation_id` | string | Yes | The conversation ID to retrieve history for |
| `limit` | number | No | Maximum number of messages to retrieve. Default: 50 |
**Official Documentation**: [Agent Conversation](https://docs.z.ai/api-reference/agents/agent-conversation)
---
#### File Upload
**Method**: `POST` | **LowCodeAPI Path**: `/api/paas/v4/files`
**Full URL**:
```
https://api.lowcodeapi.com/zai/api/paas/v4/files?api_token={api_token}
```
**Description**: Upload auxiliary files (such as glossaries, terminology lists) to support the agent service.
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `file` | file | Yes | The file to upload, limited to 100 MB. Allowed formats: pdf, doc, xlsx, ppt, txt, jpg, png |
| `purpose` | string | No | The purpose of the file upload. Example: agent |
**Official Documentation**: [File Upload](https://docs.z.ai/api-reference/agents/file-upload)
---
### Category: Tool
#### Tokenizer
**Method**: `POST` | **LowCodeAPI Path**: `/paas/v4/tokenizer`
**Full URL**:
```
https://api.lowcodeapi.com/zai/paas/v4/tokenizer?api_token={api_token}
```
**Description**: Tokenize text input using the specified model. This endpoint converts text strings into tokens according to the model's tokenization scheme, which is useful for understanding token counts, managing input length, and analyzing text structure.
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `model` | string | Yes | The model code to be used for tokenization. Examples: glm-4.7, glm-4.6, glm-4.5 |
| `input` | string | Yes | The text string to be tokenized |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/zai/paas/v4/tokenizer?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "glm-4.7",
"input": "Hello, world!"
}'
```
**Example Response**:
```json
{
"data": {
"tokens": ["Hello", ",", "world", "!"],
"token_count": 4
}
}
```
**Official Documentation**: [Tokenizer](https://docs.z.ai/api-reference/tools/tokenizer)
---
#### Web Search
**Method**: `POST` | **LowCodeAPI Path**: `/paas/v4/web_search`
**Full URL**:
```
https://api.lowcodeapi.com/zai/paas/v4/web_search?api_token={api_token}
```
**Description**: Perform web search and return relevant results. This endpoint allows you to search the web for information, articles, and resources based on a query string.
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `query` | string | Yes | The search query string. Examples: latest AI developments, Python programming tutorials |
| `max_results` | number | No | Maximum number of search results to return. Default: 10 |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/zai/paas/v4/web_search?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"query": "latest AI developments",
"max_results": 10
}'
```
**Official Documentation**: [Web Search](https://docs.z.ai/api-reference/tools/web-search)
---
#### Web Reader
**Method**: `POST` | **LowCodeAPI Path**: `/paas/v4/reader`
**Full URL**:
```
https://api.lowcodeapi.com/zai/paas/v4/reader?api_token={api_token}
```
**Description**: Read and extract content from a webpage. This endpoint fetches a webpage from the provided URL and extracts its text content, making it available for analysis, summarization, or use in other AI operations.
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `url` | string | Yes | The URL of the webpage to read and extract content from. Example: https://example.com/article |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/zai/paas/v4/reader?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"url": "https://example.com/article"
}'
```
**Official Documentation**: [Web Reader](https://docs.z.ai/api-reference/tools/web-reader)
---
## Usage Examples
### Example 1: Chat Completion with Streaming
Creating an interactive chat session with streaming responses:
```bash
# Step 1: Create a chat completion with streaming enabled
# The stream=true parameter enables Server-Sent Events (SSE) streaming
curl -X POST "https://api.lowcodeapi.com/zai/api/paas/v4/chat/completions?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "glm-4.7",
"messages": [
{"role": "system", "content": "You are a helpful assistant"},
{"role": "user", "content": "Explain quantum computing"}
],
"stream": true,
"temperature": 0.7
}'
# Step 2: Continue the conversation with previous context
# Include the assistant's response from Step 1 in the messages array
curl -X POST "https://api.lowcodeapi.com/zai/api/paas/v4/chat/completions?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "glm-4.7",
"messages": [
{"role": "system", "content": "You are a helpful assistant"},
{"role": "user", "content": "Explain quantum computing"},
{"role": "assistant", "content": "Previous response here..."},
{"role": "user", "content": "Can you simplify that explanation?"}
]
}'
```
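When `stream` is set to true (as in Step 1 above), the response arrives as Server-Sent Events rather than a single JSON body. A minimal sketch of accumulating the streamed text, assuming OpenAI-style `data: {...}` chunk lines carrying `choices[0].delta.content` fragments and a `[DONE]` sentinel (an assumption; verify the exact chunk shape against the official docs):

```python
import json

def accumulate_sse(lines):
    """Collect assistant text from an SSE stream of chat-completion chunks."""
    text = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":  # assumed end-of-stream sentinel
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        text.append(delta.get("content", ""))
    return "".join(text)

# Simulated stream for illustration:
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
```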
### Example 2: Image Generation Workflow
Generate images and use them in your application:
```bash
# Step 1: Generate an image from text prompt
# Returns a URL to the generated image
curl -X POST "https://api.lowcodeapi.com/zai/api/paas/v4/generate/image?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "cogview-4-250304",
"prompt": "A futuristic city at night with neon lights",
"size": "1024x1024",
"n": 1
}'
# Step 2: Generate multiple variations
# Change the prompt slightly to get different results
curl -X POST "https://api.lowcodeapi.com/zai/api/paas/v4/generate/image?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "cogview-4-250304",
"prompt": "A futuristic city at night with neon lights, cyberpunk style",
"size": "1024x1024"
}'
```
### Example 3: Video Generation with Status Polling
Generate video asynchronously and poll for completion:
```bash
# Step 1: Start video generation (returns a task ID)
curl -X POST "https://api.lowcodeapi.com/zai/api/paas/v4/videos/generations?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "video-model",
"prompt": "A timelapse of a flower blooming"
}'
# Response: {"data": {"id": "video-gen-abc123", "status": "processing"}}
# Step 2: Poll for video generation status
# Use the ID returned from Step 1
curl -X GET "https://api.lowcodeapi.com/zai/api/paas/v4/async-result/id?id=video-gen-abc123&api_token=YOUR_API_TOKEN"
# Step 3: Continue polling until status is "completed"
# Once complete, the response includes the video URL
```
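The polling loop in Steps 2-3 can be sketched in Python. Both `poll_video_result` and `fake_fetch` are hypothetical names; a real fetcher would GET the async-result endpoint shown above and unwrap the `data` envelope:

```python
import time

def poll_video_result(fetch, task_id, interval=2.0, max_attempts=30):
    """Poll until the async video task completes or fails.

    `fetch` is any callable returning the unwrapped result dict for a task ID.
    """
    for _ in range(max_attempts):
        result = fetch(task_id)
        status = result.get("status")
        if status == "completed":
            return result  # includes the video "url"
        if status == "failed":
            raise RuntimeError(f"video task {task_id} failed: {result}")
        time.sleep(interval)
    raise TimeoutError(f"video task {task_id} still processing")

# Simulated fetcher for illustration (a real one would call the endpoint above):
_statuses = iter(["processing", "processing", "completed"])

def fake_fetch(task_id):
    status = next(_statuses)
    out = {"id": task_id, "status": status}
    if status == "completed":
        out["url"] = "https://example.com/generated-video.mp4"
    return out

result = poll_video_result(fake_fetch, "video-gen-123", interval=0)
```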
## Complete Endpoint Reference
For a complete list of all 13 endpoints and their parameters, refer to:
- **OpenAPI Definition**: `https://backend.lowcodeapi.com/zai/definition`
- **Official Provider Documentation**: https://docs.z.ai/api-reference/introduction
## Rate Limits & Best Practices
- Different models have different maximum output token limits (GLM-4.7/4.6: 128K; GLM-4.5: 96K)
- Use streaming for long responses to improve user experience
- Video generation is asynchronous; poll the status endpoint for results
- Audio transcription supports multiple output formats (JSON, text, SRT, VTT)
## Error Handling
Standard HTTP status codes apply. Errors are returned in the response body with details. All responses are wrapped in a `data` key:
```json
{
"data": {
// Actual response from provider
}
}
```
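A small helper for unwrapping that envelope (a sketch; `unwrap` is a hypothetical name, not part of any SDK):

```python
def unwrap(response_body: dict) -> dict:
    """Return the provider payload from a LowCodeAPI `data` envelope.

    Raises if the expected `data` key is absent (e.g. on an error body).
    """
    if "data" not in response_body:
        raise ValueError(f"unexpected response shape: {response_body}")
    return response_body["data"]
```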