# Banana.dev Integration via LowCodeAPI
**Last Updated**: February 11, 2026
## Overview
Banana.dev is a serverless GPU inference platform that provides a CI/CD build pipeline and a simple Python framework (Potassium) to serve your models with automatic scaling. It enables developers to deploy and run AI models without managing infrastructure.
**Main Features**:
- Serverless GPU inference for AI models
- Automatic scaling based on demand
- Simple model deployment via GitHub or Docker
- Real-time model runs with status checking
- Usage analytics and account management
- Support for various AI frameworks and models
## Base Endpoint
```
https://api.lowcodeapi.com/bananadev
```
**Important**: Always include the provider name `bananadev` in the URL path after `api.lowcodeapi.com/`.
## Authentication
Banana.dev uses API key authentication: requests are authorized with your Banana.dev API key sent as a Bearer token.
### Setting Up Credentials
1. Go to the [Banana.dev Account API Keys](https://www.banana.dev/account/api-keys)
2. Generate a new API key or use an existing one
3. Copy the API key
4. Use the API key in the `Authorization` header with the `Bearer` prefix
**Authentication Header**: The LowCodeAPI wrapper automatically handles the Bearer token authentication for you. You only need to provide your API key when setting up the integration.
## URL Format
LowCodeAPI supports two URL formats for Banana.dev. Always try the **New Format** first, and fall back to the **Old Format** only if it fails.
### New Format (Priority)
Path parameters remain in the URL path. This is the preferred format.
**Pattern**:
```
https://api.lowcodeapi.com/bananadev/{path_with_params}?api_token={api_token}
```
### Old Format (Fallback)
Path parameters are converted to query parameters. Use this only if the New Format fails.
**Pattern**:
```
https://api.lowcodeapi.com/bananadev/{sanitized_path}?{path_params}&api_token={api_token}
```
### Decision Flow for AI Agents
1. **Always try New Format first** - Keep path parameters in the URL path
2. If you receive a 404 or another error, **try the Old Format** with the sanitized path
3. Log which format worked for future requests to Banana.dev
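The decision flow above can be sketched as a pair of pure URL builders. The function names are hypothetical, and the Old Format sanitization rule (dropping braces and underscores from a placeholder, e.g. `{model_key}` becomes `modelkey`) is inferred from the Get Model Details endpoint in this document, not from an official specification.

```python
from urllib.parse import urlencode

BASE = "https://api.lowcodeapi.com/bananadev"

def build_new_format_url(path_template, path_params, api_token):
    """New Format: substitute path parameters directly into the URL path."""
    path = path_template
    for name, value in path_params.items():
        path = path.replace("{%s}" % name, str(value))
    return f"{BASE}{path}?{urlencode({'api_token': api_token})}"

def build_old_format_url(path_template, path_params, api_token):
    """Old Format: sanitize each placeholder in the path (assumed rule:
    strip braces and underscores) and pass the values as query params."""
    path = path_template
    for name in path_params:
        path = path.replace("{%s}" % name, name.replace("_", ""))
    query = dict(path_params)
    query["api_token"] = api_token
    return f"{BASE}{path}?{urlencode(query)}"
```

An agent would call `build_new_format_url` first and retry with `build_old_format_url` on a 404.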
## API Categories
- **Runs**: Endpoints for starting, checking, and managing inference runs on Banana.dev serverless GPU infrastructure
- **Models**: Endpoints for listing and retrieving information about available AI models in your Banana.dev account
- **Account**: Endpoints for managing account information, usage statistics, and billing details
- **Deployment**: Endpoints for deploying and managing model deployments on Banana.dev infrastructure
## Common Endpoints
### Start a Run
**Method**: POST
**New Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/start?api_token=YOUR_API_TOKEN
```
**Old Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/start?api_token=YOUR_API_TOKEN
```
**Path Parameters**: None
**Query Parameters**: None
**Request Body**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelKey` | string | Yes | The model key identifier for the model you want to run |
| `modelInputs` | object | Yes | Input parameters for the model inference |
| `startOnly` | boolean | No | If true, only start the run without waiting for completion (default: false) |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/bananadev/v1/start?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"modelKey": "your-model-key",
"modelInputs": {
"prompt": "Explain quantum computing in simple terms",
"max_tokens": 500
},
"startOnly": false
}'
```
**Example Response**:
```json
{
"data": {
"id": "run_abc123xyz456",
"message": "Run created",
"created": 1707600000000,
"api": {
"callID": "run_abc123xyz456",
"finished": true,
"modelOutputs": [
{
"message": "Quantum computing is a type of computing..."
}
]
}
}
}
```
**Official Documentation**: [Start API Reference](https://docs.banana.dev/api/start)
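The request body above can be validated client-side before sending. `build_start_payload` is a hypothetical helper (not part of any Banana.dev or LowCodeAPI SDK) that enforces the required fields from the table:

```python
def build_start_payload(model_key, model_inputs, start_only=False):
    """Assemble and validate a /v1/start request body.
    modelKey and modelInputs are required; startOnly defaults to false."""
    if not model_key or not isinstance(model_key, str):
        raise ValueError("modelKey must be a non-empty string")
    if not isinstance(model_inputs, dict):
        raise ValueError("modelInputs must be an object (dict)")
    return {
        "modelKey": model_key,
        "modelInputs": model_inputs,
        "startOnly": bool(start_only),
    }
```

The resulting dict can be serialized with `json.dumps` and sent as the `-d` body of the curl call above.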
---
### Check Run Status
**Method**: POST
**New Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/check?api_token=YOUR_API_TOKEN
```
**Old Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/check?api_token=YOUR_API_TOKEN
```
**Path Parameters**: None
**Query Parameters**: None
**Request Body**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The ID of the run to check (returned from the start endpoint) |
| `modelKey` | string | Yes | The model key identifier for the run |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/bananadev/v1/check?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"id": "run_abc123xyz456",
"modelKey": "your-model-key"
}'
```
**Example Response**:
```json
{
"data": {
"id": "run_abc123xyz456",
"message": "Run completed",
"created": 1707600000000,
"api": {
"callID": "run_abc123xyz456",
"finished": true,
"modelOutputs": [
{
"message": "Quantum computing is a type of computing..."
}
]
}
}
}
```
**Official Documentation**: [Check API Reference](https://docs.banana.dev/api/check)
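Only two fields in the check response usually matter to a caller: the `finished` flag and `modelOutputs`. A minimal sketch of a reader, written against the example response shape above (it also tolerates a response without the LowCodeAPI `data` wrapper):

```python
def inspect_check_response(resp):
    """Return (finished, model_outputs) from a /v1/check response."""
    body = resp.get("data", resp)        # unwrap LowCodeAPI envelope if present
    api = body.get("api", {})
    return bool(api.get("finished")), api.get("modelOutputs") or []
```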
---
### Start and Wait for Run Completion
**Method**: POST
**New Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/run?api_token=YOUR_API_TOKEN
```
**Old Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/run?api_token=YOUR_API_TOKEN
```
**Path Parameters**: None
**Query Parameters**: None
**Request Body**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelKey` | string | Yes | The model key identifier for the model you want to run |
| `modelInputs` | object | Yes | Input parameters for the model inference |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/bananadev/v1/run?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"modelKey": "your-model-key",
"modelInputs": {
"prompt": "Write a haiku about coding",
"max_tokens": 100
}
}'
```
**Example Response**:
```json
{
"data": {
"id": "run_def789ghi012",
"message": "Run completed",
"created": 1707600000000,
"api": {
"callID": "run_def789ghi012",
"finished": true,
"modelOutputs": [
{
"message": "Code flows like water,\nBugs hide in the digital night,\nDebug brings the light."
}
]
}
}
}
```
**Official Documentation**: [Run API Reference](https://docs.banana.dev/api/run)
---
### List All Models
**Method**: GET
**New Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/models?api_token=YOUR_API_TOKEN
```
**Old Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/models?api_token=YOUR_API_TOKEN
```
**Path Parameters**: None
**Query Parameters**: None
**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/bananadev/v1/models?api_token=YOUR_API_TOKEN"
```
**Example Response**:
```json
{
"data": {
"models": [
{
"modelKey": "stable-diffusion-xl",
"name": "Stable Diffusion XL",
"createdAt": 1707500000000,
"updatedAt": 1707600000000
},
{
"modelKey": "llama-2-7b",
"name": "Llama 2 7B",
"createdAt": 1707400000000,
"updatedAt": 1707550000000
}
]
}
}
```
**Official Documentation**: [Models API Reference](https://docs.banana.dev/api/models)
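To select a model programmatically, the list response above can be reduced to its `modelKey` values. A minimal sketch, assuming the LowCodeAPI `data` wrapper shown in the example:

```python
def model_keys(list_response):
    """Extract the modelKey of every model in a /v1/models response."""
    models = list_response.get("data", {}).get("models", [])
    return [m["modelKey"] for m in models]
```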
---
### Get Model Details
**Method**: GET
**New Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/models/{model_key}?api_token=YOUR_API_TOKEN
```
**Old Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/models/modelkey?model_key={model_key}&api_token=YOUR_API_TOKEN
```
**Path Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model_key` | string | Yes | The model key identifier |
**Query Parameters**: None
**Example Request** (New Format):
```bash
curl -X GET "https://api.lowcodeapi.com/bananadev/v1/models/stable-diffusion-xl?api_token=YOUR_API_TOKEN"
```
**Example Response**:
```json
{
"data": {
"modelKey": "stable-diffusion-xl",
"name": "Stable Diffusion XL",
"description": "High-quality image generation model",
"createdAt": 1707500000000,
"updatedAt": 1707600000000,
"config": {
"gpuType": "A10G",
"maxConcurrency": 5
}
}
}
```
**Official Documentation**: [Models API Reference](https://docs.banana.dev/api/models)
---
### Get Account Information
**Method**: GET
**New Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/account?api_token=YOUR_API_TOKEN
```
**Old Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/account?api_token=YOUR_API_TOKEN
```
**Path Parameters**: None
**Query Parameters**: None
**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/bananadev/v1/account?api_token=YOUR_API_TOKEN"
```
**Example Response**:
```json
{
"data": {
"accountId": "acc_1234567890",
"email": "[email protected]",
"plan": "pro",
"createdAt": 1700000000000
}
}
```
**Official Documentation**: [Account API Reference](https://docs.banana.dev/api/account)
---
### Get Usage Statistics
**Method**: GET
**New Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/usage?api_token=YOUR_API_TOKEN
```
**Old Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/usage?api_token=YOUR_API_TOKEN
```
**Path Parameters**: None
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `startDate` | string | No | Start date for usage statistics (ISO 8601 format) |
| `endDate` | string | No | End date for usage statistics (ISO 8601 format) |
**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/bananadev/v1/usage?startDate=2026-02-01T00:00:00Z&endDate=2026-02-11T23:59:59Z&api_token=YOUR_API_TOKEN"
```
**Example Response**:
```json
{
"data": {
"totalCalls": 1250,
"totalComputeTime": 450000,
"totalCost": 45.50,
"byModel": [
{
"modelKey": "stable-diffusion-xl",
"calls": 800,
"computeTime": 300000,
"cost": 30.00
},
{
"modelKey": "llama-2-7b",
"calls": 450,
"computeTime": 150000,
"cost": 15.50
}
]
}
}
```
**Official Documentation**: [Usage API Reference](https://docs.banana.dev/api/usage)
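The per-model breakdown should sum to the account-level totals. A small reconciliation sketch (field names are taken from the example response above; the units of `totalComputeTime` are not specified in this document):

```python
def reconcile_usage(usage):
    """Check that the byModel breakdown sums to the account totals.
    Returns a dict of discrepancies (empty when everything matches)."""
    data = usage.get("data", usage)
    diffs = {}
    for total_field, per_model_field in (("totalCalls", "calls"),
                                         ("totalComputeTime", "computeTime"),
                                         ("totalCost", "cost")):
        total = data.get(total_field, 0)
        summed = sum(m.get(per_model_field, 0) for m in data.get("byModel", []))
        if abs(total - summed) > 1e-9:
            diffs[total_field] = {"reported": total, "summed": summed}
    return diffs
```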
---
### Deploy a Model
**Method**: POST
**New Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/deploy?api_token=YOUR_API_TOKEN
```
**Old Format URL**:
```
https://api.lowcodeapi.com/bananadev/v1/deploy?api_token=YOUR_API_TOKEN
```
**Path Parameters**: None
**Query Parameters**: None
**Request Body**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `modelKey` | string | Yes | The model key identifier for the model to deploy |
| `source` | string | Yes | Source of the model (e.g., GitHub repository URL, Docker image) |
| `environment` | object | No | Environment variables and configuration for the deployment |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/bananadev/v1/deploy?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"modelKey": "my-custom-model",
"source": "https://github.com/username/model-repo",
"environment": {
"PYTHON_VERSION": "3.10",
"GPU_TYPE": "A10G",
"MAX_CONCURRENCY": 3
}
}'
```
**Example Response**:
```json
{
"data": {
"modelKey": "my-custom-model",
"deploymentId": "deploy_xyz123",
"status": "deploying",
"createdAt": 1707600000000
}
}
```
**Official Documentation**: [Deploy API Reference](https://docs.banana.dev/api/deploy)
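A deploy body can be assembled with a light sanity check on `source`. The validation below is a loose heuristic (GitHub URL or Docker-style image reference), not an official Banana.dev rule, and the helper name is hypothetical:

```python
import re

def build_deploy_payload(model_key, source, environment=None):
    """Assemble a /v1/deploy request body with a heuristic source check."""
    if not re.match(r"^(https://github\.com/\S+|[\w.\-/]+(:[\w.\-]+)?)$", source):
        raise ValueError("source should be a GitHub URL or Docker image reference")
    payload = {"modelKey": model_key, "source": source}
    if environment:
        payload["environment"] = dict(environment)
    return payload
```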
---
## Complete Endpoint Reference
| Method | Category | New Format Path | Old Format Path | Description |
|--------|----------|-----------------|-----------------|-------------|
| POST | Runs | `/v1/start` | `/v1/start` | Start a new inference run |
| POST | Runs | `/v1/check` | `/v1/check` | Check the status of a running inference job |
| POST | Runs | `/v1/run` | `/v1/run` | Start a run and wait for it to complete |
| GET | Models | `/v1/models` | `/v1/models` | List all models in your account |
| GET | Models | `/v1/models/{model_key}` | `/v1/models/modelkey?model_key={model_key}` | Get detailed information about a specific model |
| GET | Account | `/v1/account` | `/v1/account` | Get account information |
| GET | Account | `/v1/usage` | `/v1/usage` | Get usage statistics |
| POST | Deployment | `/v1/deploy` | `/v1/deploy` | Deploy a new model or update an existing model deployment |
## Response Format
All responses from LowCodeAPI are wrapped in a `data` key:
```json
{
  "data": {
    // Actual response from provider API
  }
}
```
The `data` key contains the raw response from the provider's API.
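Since every endpoint shares the `data` envelope described in the Response Format section, a one-line unwrap helper (hypothetical, not part of any SDK) keeps downstream code working with the provider's raw response:

```python
def unwrap(response):
    """Strip the LowCodeAPI 'data' envelope; pass through unwrapped bodies."""
    if isinstance(response, dict) and "data" in response:
        return response["data"]
    return response
```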
## API Definition Endpoints
To discover all available endpoints for Banana.dev:
**New Format (OpenAPI Spec)**:
```
https://backend.lowcodeapi.com/bananadev/openapi
```
**Old Format (API Definition)**:
```
https://backend.lowcodeapi.com/bananadev/definition
```
## Usage Examples
### Example 1: Quick Inference with Auto-Completion
Run a model and wait for completion in a single request.
```bash
curl -X POST "https://api.lowcodeapi.com/bananadev/v1/run?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"modelKey": "stable-diffusion-xl",
"modelInputs": {
"prompt": "A futuristic city at sunset, digital art style",
"width": 1024,
"height": 1024,
"steps": 30
}
}'
```
**Note**: The `/v1/run` endpoint automatically waits for the inference to complete and returns the results. This is the simplest way to get model outputs for quick tasks.
---
### Example 2: Async Inference with Status Polling
Start a long-running inference and check its status periodically.
```bash
# Step 1: Start the run (startOnly=true to return immediately)
curl -X POST "https://api.lowcodeapi.com/bananadev/v1/start?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"modelKey": "llama-2-7b",
"modelInputs": {
"prompt": "Write a comprehensive guide to machine learning",
"max_tokens": 2000
},
"startOnly": true
}'
# Response includes: {"id": "run_abc123xyz456", ...}
# Step 2: Check status periodically (use the id from Step 1)
curl -X POST "https://api.lowcodeapi.com/bananadev/v1/check?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"id": "run_abc123xyz456",
"modelKey": "llama-2-7b"
}'
# Step 3: Once "finished": true, use the results from the response
```
**Note**: In Step 2, the `id` comes from the response in Step 1. You can poll the `/v1/check` endpoint until the run is complete. The `modelKey` must match the one used in Step 1.
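The start-then-poll flow above can be wrapped in a small loop. This is a sketch: `check_fn` is injected (for example, a function that performs the `/v1/check` POST shown in Step 2) so the loop itself stays transport-agnostic, and the interval and timeout defaults are arbitrary:

```python
import time

def poll_until_finished(check_fn, run_id, model_key,
                        interval=2.0, timeout=300.0):
    """Poll /v1/check until the run reports finished, or raise on timeout.
    check_fn takes the request body dict and returns the parsed JSON reply."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = check_fn({"id": run_id, "modelKey": model_key})
        api = resp.get("data", {}).get("api", {})
        if api.get("finished"):
            return api.get("modelOutputs", [])
        time.sleep(interval)
    raise TimeoutError(f"run {run_id} did not finish within {timeout}s")
```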
---
### Example 3: List and Use Available Models
Discover available models and run inference on a specific one.
```bash
# Step 1: List all available models
curl -X GET "https://api.lowcodeapi.com/bananadev/v1/models?api_token=YOUR_API_TOKEN"
# Response includes: models with modelKey, name, etc.
# Step 2: Get details for a specific model (use modelKey from Step 1)
curl -X GET "https://api.lowcodeapi.com/bananadev/v1/models/stable-diffusion-xl?api_token=YOUR_API_TOKEN"
# Step 3: Run inference on the selected model
curl -X POST "https://api.lowcodeapi.com/bananadev/v1/run?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"modelKey": "stable-diffusion-xl",
"modelInputs": {
"prompt": "A serene mountain landscape with a crystal clear lake",
"negative_prompt": "blurry, low quality",
"width": 1024,
"height": 1024
}
}'
```
**Note**: In Step 2, the `modelKey` (e.g., `stable-diffusion-xl`) is the identifier from Step 1. Use the same `modelKey` in Step 3 to run inference on that specific model.
---
### Example 4: Monitor Usage and Costs
Track your API usage and costs over a specific time period.
```bash
# Get usage for the current month (February 2026)
curl -X GET "https://api.lowcodeapi.com/bananadev/v1/usage?startDate=2026-02-01T00:00:00Z&endDate=2026-02-11T23:59:59Z&api_token=YOUR_API_TOKEN"
# Response includes totalCalls, totalComputeTime, totalCost, and breakdown by model
```
**Note**: The `startDate` and `endDate` are optional. If not provided, the API returns usage for a default period (usually the last 30 days). Dates must be in ISO 8601 format.
---
### Example 5: Deploy a Custom Model
Deploy your own model from a GitHub repository.
```bash
curl -X POST "https://api.lowcodeapi.com/bananadev/v1/deploy?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"modelKey": "my-llm-finetune",
"source": "https://github.com/username/my-llm-repo",
"environment": {
"PYTHON_VERSION": "3.10",
"GPU_TYPE": "A10G",
"MAX_CONCURRENCY": 5,
"API_KEY": "your-custom-api-key"
}
}'
# Response includes deploymentId and status
```
**Note**: The `source` can be a GitHub repository URL, Docker image, or other supported sources. Environment variables in the `environment` object are injected into your model deployment. The `modelKey` you assign will be used to run inference later.
---
## Error Handling
Banana.dev returns standard HTTP status codes:
| Status Code | Meaning |
|-------------|---------|
| 200 | Success - Request processed successfully |
| 400 | Bad Request - Invalid parameters or malformed request |
| 401 | Unauthorized - Invalid or missing API key |
| 404 | Not Found - Model or run not found |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Internal Server Error - Banana.dev service error |
| 503 | Service Unavailable - Temporary service outage |
**Common Error Messages**:
- **Invalid API Key**: The provided API key is invalid or expired
- **Model Not Found**: The specified modelKey does not exist
- **Run Not Found**: The specified run ID does not exist
- **Rate Limit Exceeded**: You have exceeded your API rate limit
- **Deployment Failed**: Model deployment encountered an error
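Of the status codes above, 429, 500, and 503 are typically transient and worth retrying with exponential backoff. A sketch of that policy (the retryable-code set and backoff schedule are assumptions, not documented Banana.dev behavior; `do_request` is an injected callable returning `(status_code, body)`):

```python
import time

RETRYABLE = {429, 500, 503}

def call_with_retries(do_request, max_attempts=4, base_delay=1.0):
    """Retry a request on transient errors with exponential backoff."""
    for attempt in range(max_attempts):
        status, body = do_request()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))   # 1s, 2s, 4s, ...
    return status, body
```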
## Notes
- Banana.dev provides automatic scaling - you don't need to manage servers
- The `/v1/run` endpoint is the simplest for quick one-off inferences
- Use `/v1/start` and `/v1/check` for long-running jobs that need async handling
- Model keys are unique identifiers you assign to your models during deployment
- Usage statistics help monitor costs and optimize model selection
- Custom models can be deployed from GitHub, Docker images, or other sources
**Official Banana.dev Documentation**: [Banana.dev Docs](https://docs.banana.dev)