# Baseten Integration via LowCodeAPI
**Last Updated**: February 11, 2026
## Overview
Baseten is a platform for deploying, serving, and scaling machine learning models behind APIs, with a focus on developer experience. It provides inference endpoints, model and deployment management, training job orchestration, and deployment automation.
**Key Features:**
- Inference endpoints for real-time ML predictions
- Async inference for long-running jobs
- Model and deployment management
- Environment-based deployments (development, production, custom)
- Training job orchestration and monitoring
- Autoscaling configuration
- Secret management for model credentials
## Base Endpoint
```
https://api.lowcodeapi.com/baseten
```
**Important**: Always include the provider name in the URL path after `api.lowcodeapi.com/`
## Authentication
Baseten uses API key authentication.
**Setup Instructions:**
1. Sign up at [https://www.baseten.co](https://www.baseten.co)
2. Navigate to [API Keys Settings](https://app.baseten.co/settings/api_keys)
3. Generate a new API key
4. Use the API key with LowCodeAPI
**Authentication Method:**
- Header: `Authorization`
- Format: `Api-Key {your_api_key}` or `Bearer {your_api_key}`, depending on the endpoint
**Note**: When using LowCodeAPI, you only need the single `api_token` from LowCodeAPI. The system automatically maps your credentials to Baseten.
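Because the gateway handles provider credentials, a request URL can be assembled from just the path and your LowCodeAPI token. A minimal sketch (the helper name is illustrative, not part of any SDK):

```python
from urllib.parse import urlencode

BASE = "https://api.lowcodeapi.com/baseten"

def baseten_url(path: str, api_token: str, **query) -> str:
    """Build a LowCodeAPI request URL for Baseten.

    Only the LowCodeAPI api_token is required; the gateway maps it to
    your Baseten credentials, so no Authorization header is set here.
    """
    params = {**query, "api_token": api_token}
    return f"{BASE}{path}?{urlencode(params)}"
```

For example, `baseten_url("/v1/api_keys", "tok", prefix="abc")` yields the delete-by-prefix URL shown later in the endpoint reference.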
## URL Format (Important)
LowCodeAPI supports **two URL formats** for endpoints with path parameters. AI agents must try the **New Format first**, and only fall back to the **Old Format** if it doesn't work.
### New Format (Priority) - Dynamic Path Parameters
- **Path parameters stay in the URL path** - Do NOT convert to query parameters
- **Provider name is always in the URL path** after `api.lowcodeapi.com/`
- Pattern: `https://api.lowcodeapi.com/baseten/{path_with_params}?api_token={api_token}`
### Old Format (Fallback) - Sanitized Path + Query Parameters
- **Path parameters become query parameters**
- **Provider name is always in the URL path**
- Pattern: `https://api.lowcodeapi.com/baseten/{sanitized_path}?{path_params}&api_token={api_token}`
### Decision Flow for AI Agents
1. **Always use New Format first** - Keep path parameters in the URL path
2. If you get a 404 or error, try Old Format with sanitized path
3. Log which format worked for future requests to Baseten
### URL Transformation Examples
| New Format (Priority) | Old Format (Fallback) |
|----------------------|----------------------|
| `/environments/{env_name}/predict` | `/environments/envname/predict?env_name={env_name}` |
| `/deployment/{deployment_id}/predict` | `/deployment/deploymentid/predict?deployment_id={deployment_id}` |
| `/v1/models/{model_id}` | `/v1/models/modelid?model_id={model_id}` |
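The transformation above is mechanical: each `{param}` placeholder is replaced by its name with underscores removed, and the value moves to the query string. A sketch of the fallback conversion (function name is illustrative):

```python
import re
from urllib.parse import urlencode

def to_old_format(template: str, **params) -> str:
    """Convert a New Format path template plus values into the Old Format.

    Each {param} placeholder becomes its own name with underscores
    stripped, and the actual value moves to the query string.
    """
    query = {}

    def sanitize(match):
        name = match.group(1)
        query[name] = params[name]          # value moves to the query string
        return name.replace("_", "")        # placeholder -> sanitized literal

    path = re.sub(r"\{(\w+)\}", sanitize, template)
    return f"{path}?{urlencode(query)}" if query else path
```

Paths without placeholders pass through unchanged, which matches the rows in the endpoint reference where both formats are identical.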
## API Categories
- **Inference** - Real-time and async model predictions
- **Models** - Model management and listing
- **Chains** - Chain orchestration
- **Deployments** - Deployment lifecycle management
- **Environments** - Environment configuration
- **Secrets** - Secure credential storage
- **API Keys** - API key management
- **Training** - Training job orchestration
## Common Endpoints
### Call Environment Predict
**POST** `/environments/{env_name}/predict`
Call the deployment associated with the specified environment.
| Format | URL |
|--------|-----|
| **New Format** | `https://api.lowcodeapi.com/baseten/environments/{env_name}/predict?api_token={api_token}` |
| **Old Format** | `https://api.lowcodeapi.com/baseten/environments/envname/predict?env_name={env_name}&api_token={api_token}` |
**Path Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `env_name` | string | Yes | The name of the model's environment you want to call |
**Request Body:**
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `input` | object | Yes | JSON-serializable model input |
**Example Request (New Format):**
```bash
curl -X POST "https://api.lowcodeapi.com/baseten/environments/production/predict?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {
"text": "Hello, how are you?"
}
}'
```
**Example Response:**
```json
{
"data": {
"predictions": ["I'm doing well!"],
"model_id": "abc123"
}
}
```
**Official Documentation:** [https://docs.baseten.co/reference/inference-api/predict-endpoints/environments-predict](https://docs.baseten.co/reference/inference-api/predict-endpoints/environments-predict)
---
### Call Development Deployment Predict
**POST** `/development/predict`
Call the development deployment.
| Format | URL |
|--------|-----|
| **New Format** | `https://api.lowcodeapi.com/baseten/development/predict?api_token={api_token}` |
| **Old Format** | `https://api.lowcodeapi.com/baseten/development/predict?api_token={api_token}` |
**Request Body:**
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `input` | object | Yes | JSON-serializable model input |
**Example Request:**
```bash
curl -X POST "https://api.lowcodeapi.com/baseten/development/predict?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {
"prompt": "Generate code"
}
}'
```
**Official Documentation:** [https://docs.baseten.co/reference/inference-api/predict-endpoints/development-predict](https://docs.baseten.co/reference/inference-api/predict-endpoints/development-predict)
---
### Get All Models
**GET** `/v1/models`
Retrieve all models in your Baseten account.
| Format | URL |
|--------|-----|
| **New Format** | `https://api.lowcodeapi.com/baseten/v1/models?api_token={api_token}` |
| **Old Format** | `https://api.lowcodeapi.com/baseten/v1/models?api_token={api_token}` |
**Example Request:**
```bash
curl -X GET "https://api.lowcodeapi.com/baseten/v1/models?api_token=YOUR_API_TOKEN"
```
**Example Response:**
```json
{
"data": [
{
"id": "model_abc123",
"name": "my-model",
"created_at": "2025-01-01T00:00:00Z"
}
]
}
```
**Official Documentation:** [https://docs.baseten.co/reference/management-api/models/gets-all-models](https://docs.baseten.co/reference/management-api/models/gets-all-models)
---
### Activate Model Environment
**POST** `/v1/models/{model_id}/environments/{env_name}/activate`
Activate a deployment associated with an environment.
| Format | URL |
|--------|-----|
| **New Format** | `https://api.lowcodeapi.com/baseten/v1/models/{model_id}/environments/{env_name}/activate?api_token={api_token}` |
| **Old Format** | `https://api.lowcodeapi.com/baseten/v1/models/modelid/environments/envname/activate?model_id={model_id}&env_name={env_name}&api_token={api_token}` |
**Path Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_id` | string | Yes | The model's alphanumeric ID |
| `env_name` | string | Yes | The name of the environment |
**Example Request:**
```bash
curl -X POST "https://api.lowcodeapi.com/baseten/v1/models/model_abc123/environments/production/activate?api_token=YOUR_API_TOKEN"
```
**Official Documentation:** [https://docs.baseten.co/reference/management-api/deployments/activate/activates-a-deployment-associated-with-an-environment](https://docs.baseten.co/reference/management-api/deployments/activate/activates-a-deployment-associated-with-an-environment)
---
### Get Training Job Metrics
**GET** `/v1/training_projects/{training_project_id}/jobs/{training_job_id}/metrics`
Get training job metrics for monitoring.
| Format | URL |
|--------|-----|
| **New Format** | `https://api.lowcodeapi.com/baseten/v1/training_projects/{training_project_id}/jobs/{training_job_id}/metrics?api_token={api_token}` |
| **Old Format** | `https://api.lowcodeapi.com/baseten/v1/training_projects/trainingprojectid/jobs/trainingjobid/metrics?training_project_id={training_project_id}&training_job_id={training_job_id}&api_token={api_token}` |
**Path Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `training_project_id` | string | Yes | The training project's alphanumeric ID |
| `training_job_id` | string | Yes | The training job's alphanumeric ID |
**Example Request:**
```bash
curl -X GET "https://api.lowcodeapi.com/baseten/v1/training_projects/proj_123/jobs/job_456/metrics?api_token=YOUR_API_TOKEN"
```
**Official Documentation:** [https://docs.baseten.co/reference/training-api/get-training-job-metrics](https://docs.baseten.co/reference/training-api/get-training-job-metrics)
## Response Format
All responses from LowCodeAPI are wrapped in a `data` key:
```json
{
  "data": {
    // Actual response from provider API
  }
}
```
The `data` key contains the raw response from the provider's API.
## Complete Endpoint Reference
### Inference Endpoints
| Method | Action | New Format | Old Format |
|--------|----------|------------|------------|
| POST | Call environment predict | `/environments/{env_name}/predict` | `/environments/envname/predict` |
| POST | Call development predict | `/development/predict` | `/development/predict` |
| POST | Call deployment predict | `/deployment/{deployment_id}/predict` | `/deployment/deploymentid/predict` |
| POST | Call environment async predict | `/environments/{env_name}/async_predict` | `/environments/envname/async_predict` |
| POST | Call development async predict | `/development/async_predict` | `/development/async_predict` |
| POST | Call deployment async predict | `/deployment/{deployment_id}/async_predict` | `/deployment/deploymentid/async_predict` |
| GET | Get async request status | `/async_request/{request_id}` | `/async_request/requestid` |
| DELETE | Cancel async request | `/async_request/{request_id}` | `/async_request/requestid` |
| GET | Get environment async queue status | `/environments/{env_name}/async_queue_status` | `/environments/envname/async_queue_status` |
| GET | Get development async queue status | `/development/async_queue_status` | `/development/async_queue_status` |
| GET | Get deployment async queue status | `/deployment/{deployment_id}/async_queue_status` | `/deployment/deploymentid/async_queue_status` |
| POST | Wake production environment | `/production/wake` | `/production/wake` |
| POST | Wake development deployment | `/development/wake` | `/development/wake` |
| POST | Wake deployment | `/deployment/{deployment_id}/wake` | `/deployment/deploymentid/wake` |
### Model Management
| Method | Action | New Format | Old Format |
|--------|----------|------------|------------|
| GET | Get all models | `/v1/models` | `/v1/models` |
| GET | Get model by ID | `/v1/models/{model_id}` | `/v1/models/modelid` |
| DELETE | Delete model | `/v1/models/{model_id}` | `/v1/models/modelid` |
### Chains
| Method | Action | New Format | Old Format |
|--------|----------|------------|------------|
| GET | Get all chains | `/v1/chains` | `/v1/chains` |
| GET | Get chain by ID | `/v1/chains/{chain_id}` | `/v1/chains/chainid` |
| DELETE | Delete chain | `/v1/chains/{chain_id}` | `/v1/chains/chainid` |
### Deployments
| Method | Action | New Format | Old Format |
|--------|----------|------------|------------|
| POST | Activate model environment | `/v1/models/{model_id}/environments/{env_name}/activate` | `/v1/models/modelid/environments/envname/activate` |
| POST | Activate development deployment | `/v1/models/{model_id}/deployments/development/activate` | `/v1/models/modelid/deployments/development/activate` |
| POST | Activate deployment | `/v1/models/{model_id}/deployments/{deployment_id}/activate` | `/v1/models/modelid/deployments/deploymentid/activate` |
| POST | Deactivate model environment | `/v1/models/{model_id}/environments/{env_name}/deactivate` | `/v1/models/modelid/environments/envname/deactivate` |
| POST | Deactivate development deployment | `/v1/models/{model_id}/deployments/development/deactivate` | `/v1/models/modelid/deployments/development/deactivate` |
| POST | Deactivate deployment | `/v1/models/{model_id}/deployments/{deployment_id}/deactivate` | `/v1/models/modelid/deployments/deploymentid/deactivate` |
| POST | Promote to model environment | `/v1/models/{model_id}/environments/{env_name}/promote` | `/v1/models/modelid/environments/envname/promote` |
| POST | Cancel promotion to environment | `/v1/models/{model_id}/environments/{env_name}/cancel_promotion` | `/v1/models/modelid/environments/envname/cancel_promotion` |
| POST | Promote development deployment | `/v1/models/{model_id}/deployments/development/promote` | `/v1/models/modelid/deployments/development/promote` |
| POST | Promote deployment | `/v1/models/{model_id}/deployments/{deployment_id}/promote` | `/v1/models/modelid/deployments/deploymentid/promote` |
| PATCH | Update development autoscaling settings | `/v1/models/{model_id}/deployments/development/autoscaling_settings` | `/v1/models/modelid/deployments/development/autoscaling_settings` |
| PATCH | Update deployment autoscaling settings | `/v1/models/{model_id}/deployments/{deployment_id}/autoscaling_settings` | `/v1/models/modelid/deployments/deploymentid/autoscaling_settings` |
| GET | Get all model deployments | `/v1/models/{model_id}/deployments` | `/v1/models/modelid/deployments` |
| GET | Get production model deployment | `/v1/models/{model_id}/deployments/production` | `/v1/models/modelid/deployments/production` |
| GET | Get development model deployment | `/v1/models/{model_id}/deployments/development` | `/v1/models/modelid/deployments/development` |
| GET | Get model deployment by ID | `/v1/models/{model_id}/deployments/{deployment_id}` | `/v1/models/modelid/deployments/deploymentid` |
| DELETE | Delete model deployment | `/v1/models/{model_id}/deployments/{deployment_id}` | `/v1/models/modelid/deployments/deploymentid` |
### Environments
| Method | Action | New Format | Old Format |
|--------|----------|------------|------------|
| POST | Create environment | `/v1/models/{model_id}/environments` | `/v1/models/modelid/environments` |
| GET | Get all environments | `/v1/models/{model_id}/environments` | `/v1/models/modelid/environments` |
| GET | Get environment details | `/v1/models/{model_id}/environments/{env_name}` | `/v1/models/modelid/environments/envname` |
| PATCH | Update model environment | `/v1/models/{model_id}/environments/{env_name}` | `/v1/models/modelid/environments/envname` |
### Secrets
| Method | Action | New Format | Old Format |
|--------|----------|------------|------------|
| GET | Get all secrets | `/v1/secrets` | `/v1/secrets` |
| POST | Upsert secret | `/v1/secrets` | `/v1/secrets` |
### API Keys
| Method | Action | New Format | Old Format |
|--------|----------|------------|------------|
| GET | Get all API keys | `/v1/api_keys` | `/v1/api_keys` |
| POST | Create API key | `/v1/api_keys` | `/v1/api_keys` |
| DELETE | Delete API key | `/v1/api_keys?prefix={prefix}` | `/v1/api_keys?prefix={prefix}` |
### Training
| Method | Action | New Format | Old Format |
|--------|----------|------------|------------|
| GET | Get all training jobs | `/v1/training_projects/{training_project_id}/jobs` | `/v1/training_projects/trainingprojectid/jobs` |
| GET | Get training job by ID | `/v1/training_projects/{training_project_id}/jobs/{training_job_id}` | `/v1/training_projects/trainingprojectid/jobs/trainingjobid` |
| POST | Search training jobs | `/v1/training_jobs/search` | `/v1/training_jobs/search` |
| POST | Stop training job | `/v1/training_projects/{training_project_id}/jobs/{training_job_id}/stop` | `/v1/training_projects/trainingprojectid/jobs/trainingjobid/stop` |
| POST | Recreate training job | `/v1/training_projects/{training_project_id}/jobs/{training_job_id}/recreate` | `/v1/training_projects/trainingprojectid/jobs/trainingjobid/recreate` |
| GET | Get training job checkpoints | `/v1/training_projects/{training_project_id}/jobs/{training_job_id}/checkpoints` | `/v1/training_projects/trainingprojectid/jobs/trainingjobid/checkpoints` |
| GET | Get training job checkpoint files | `/v1/training_projects/{training_project_id}/jobs/{training_job_id}/checkpoint_files` | `/v1/training_projects/trainingprojectid/jobs/trainingjobid/checkpoint_files` |
| GET | Get training job logs | `/v1/training_projects/{training_project_id}/jobs/{training_job_id}/logs` | `/v1/training_projects/trainingprojectid/jobs/trainingjobid/logs` |
| GET | Get training job metrics | `/v1/training_projects/{training_project_id}/jobs/{training_job_id}/metrics` | `/v1/training_projects/trainingprojectid/jobs/trainingjobid/metrics` |
| GET | Download training job artifacts | `/v1/training_projects/{training_project_id}/jobs/{training_job_id}/download` | `/v1/training_projects/trainingprojectid/jobs/trainingjobid/download` |
## API Definition Endpoints
You can retrieve the complete API definition for Baseten:
| Format | URL | Description |
|--------|-----|-------------|
| **New Format** | `https://backend.lowcodeapi.com/baseten/openapi` | OpenAPI spec with dynamic path parameters |
| **Old Format** | `https://backend.lowcodeapi.com/baseten/definition` | API definition with sanitized paths |
**Example:**
```bash
# Get new format OpenAPI spec
curl -X GET "https://backend.lowcodeapi.com/baseten/openapi"
# Get old format API definition
curl -X GET "https://backend.lowcodeapi.com/baseten/definition"
```
## Usage Examples
### Example 1: Model Prediction Workflow
```bash
# Step 1: Get all models to find your model ID
curl -X GET "https://api.lowcodeapi.com/baseten/v1/models?api_token=YOUR_API_TOKEN"
# Response contains model_id, e.g., "model_abc123"
# Step 2: Call prediction on the production environment (the environment name, not the model_id, selects the deployment)
curl -X POST "https://api.lowcodeapi.com/baseten/environments/production/predict?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {
"text": "What is the capital of France?"
}
}'
# "production" is a predefined environment name
# The input format depends on your specific model
# Step 3: Wake up the production environment if it's cold (optional)
curl -X POST "https://api.lowcodeapi.com/baseten/production/wake?api_token=YOUR_API_TOKEN"
```
### Example 2: Async Inference Workflow
```bash
# Step 1: Submit async prediction job
curl -X POST "https://api.lowcodeapi.com/baseten/development/async_predict?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {
"prompt": "Generate a long article"
}
}'
# Response contains request_id, e.g., "req_456"
# Step 2: Check async request status using request_id from Step 1
curl -X GET "https://api.lowcodeapi.com/baseten/async_request/req_456?api_token=YOUR_API_TOKEN"
# Step 3: Check async queue status
curl -X GET "https://api.lowcodeapi.com/baseten/development/async_queue_status?api_token=YOUR_API_TOKEN"
# Step 4: Cancel the async request if needed (using request_id from Step 1)
curl -X DELETE "https://api.lowcodeapi.com/baseten/async_request/req_456?api_token=YOUR_API_TOKEN"
```
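The submit-then-poll loop in Example 2 can be sketched in Python with an injected status fetcher. The `status` field and its values below are assumptions about the response shape, not confirmed Baseten field names:

```python
import time

def wait_for_async_result(get_status, request_id, poll_interval=2.0, timeout=60.0):
    """Poll an async request until it reaches a terminal state.

    get_status: callable taking request_id and returning the parsed
    response; in practice it would GET /baseten/async_request/{request_id}.
    The status values checked below are assumed, not confirmed.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = get_status(request_id)
        status = result.get("data", {}).get("status")
        if status in ("SUCCEEDED", "FAILED", "CANCELED"):
            return result
        time.sleep(poll_interval)
    raise TimeoutError(f"async request {request_id} did not finish in {timeout}s")
```

Injecting `get_status` keeps the polling logic testable without network access; in production it would wrap the curl call from Step 2.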
### Example 3: Model Deployment Management
```bash
# Step 1: Get all models
curl -X GET "https://api.lowcodeapi.com/baseten/v1/models?api_token=YOUR_API_TOKEN"
# Response contains model_id, e.g., "model_abc123"
# Step 2: Get all environments for the model using model_id from Step 1
curl -X GET "https://api.lowcodeapi.com/baseten/v1/models/model_abc123/environments?api_token=YOUR_API_TOKEN"
# Response lists available environments
# Step 3: Activate production environment for the model
curl -X POST "https://api.lowcodeapi.com/baseten/v1/models/model_abc123/environments/production/activate?api_token=YOUR_API_TOKEN"
# Step 4: Get production deployment details
curl -X GET "https://api.lowcodeapi.com/baseten/v1/models/model_abc123/deployments/production?api_token=YOUR_API_TOKEN"
# Step 5: Update autoscaling settings for the deployment
curl -X PATCH "https://api.lowcodeapi.com/baseten/v1/models/model_abc123/deployments/production/autoscaling_settings?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"autoscaling_settings": {
"min_instances": 1,
"max_instances": 10
}
}'
```
### Example 4: Training Job Monitoring
```bash
# Step 1: List all training jobs in a project (no specific job ID needed for listing)
curl -X GET "https://api.lowcodeapi.com/baseten/v1/training_projects/proj_123/jobs?api_token=YOUR_API_TOKEN"
# Response contains training_job_id for each job, e.g., "job_456"
# Step 2: Get training job details using training_project_id and training_job_id
curl -X GET "https://api.lowcodeapi.com/baseten/v1/training_projects/proj_123/jobs/job_456?api_token=YOUR_API_TOKEN"
# Step 3: Get training job metrics using the IDs from Steps 1 and 2
curl -X GET "https://api.lowcodeapi.com/baseten/v1/training_projects/proj_123/jobs/job_456/metrics?api_token=YOUR_API_TOKEN"
# Step 4: Get training job logs
curl -X GET "https://api.lowcodeapi.com/baseten/v1/training_projects/proj_123/jobs/job_456/logs?api_token=YOUR_API_TOKEN"
# Step 5: Get training job checkpoints
curl -X GET "https://api.lowcodeapi.com/baseten/v1/training_projects/proj_123/jobs/job_456/checkpoints?api_token=YOUR_API_TOKEN"
# Step 6: Stop training job if needed
curl -X POST "https://api.lowcodeapi.com/baseten/v1/training_projects/proj_123/jobs/job_456/stop?api_token=YOUR_API_TOKEN"
```
## Error Handling
Common HTTP status codes:
| Status Code | Description |
|-------------|-------------|
| 200 | Success |
| 400 | Bad Request - Invalid parameters |
| 401 | Unauthorized - Invalid or missing API token |
| 404 | Not Found - Resource or endpoint doesn't exist |
| 422 | Unprocessable Entity - Validation error |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Internal Server Error - Baseten server error |
All responses are wrapped in a `data` key:
```json
{
"data": {
// Actual response from Baseten
},
"error": "Error message (if applicable)"
}
```
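A client can centralize this envelope handling in one place. A minimal sketch, assuming the error semantics implied by the shape above (a truthy `error` field means the request failed):

```python
def unwrap(payload: dict):
    """Extract the provider response from a LowCodeAPI envelope.

    Raises if the envelope carries a truthy "error" field (shape
    assumed from the example above); otherwise returns the "data" value.
    """
    if payload.get("error"):
        raise RuntimeError(f"LowCodeAPI error: {payload['error']}")
    return payload.get("data")
```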
## Best Practices
1. **Use Async Inference** for long-running predictions to avoid timeouts
2. **Wake Up Deployments** before heavy loads to reduce cold starts
3. **Monitor Training Jobs** regularly using the metrics endpoint
4. **Use Autoscaling** to handle variable workloads efficiently
5. **Keep API Keys Secure** - rotate them periodically using the API Keys endpoints
6. **Environment Isolation** - Use separate environments for development and production
## Links
- **Official Documentation:** [https://docs.baseten.co/reference](https://docs.baseten.co/reference)
- **API Keys:** [https://app.baseten.co/settings/api_keys](https://app.baseten.co/settings/api_keys)
- **Website:** [https://www.baseten.co](https://www.baseten.co)