# OpenAI Integration via LowCodeAPI

## Overview

OpenAI provides GPT models and related AI services, including chat completions, embeddings, image generation, audio transcription and translation, and fine-tuning.

## Base Endpoint

```
https://api.lowcodeapi.com/openai/
```

## Authentication

LowCodeAPI handles authentication automatically. You only need to:

1. **Sign up** at https://platform.openai.com/
2. **Connect your account** in LowCodeAPI dashboard
3. **Use your `api_token`** in all requests

The `api_token` is your LowCodeAPI authentication token. For each request, LowCodeAPI will automatically:
- Fetch your stored OpenAI API key
- Apply it to the request as a Bearer token in the `Authorization` header

**Auth Type**: `API Key` (Bearer Token)
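
Every request, regardless of the HTTP client, follows the same URL pattern: the LowCodeAPI base, the OpenAI path, and the `api_token` query parameter. A minimal Python sketch of building those proxied URLs (the helper name is ours, not part of any SDK):

```python
from urllib.parse import urlencode

BASE = "https://api.lowcodeapi.com/openai"

def build_url(path: str, api_token: str, **params) -> str:
    """Return the full proxied URL with api_token (and any extra query params) appended."""
    query = urlencode({"api_token": api_token, **params})
    return f"{BASE}{path}?{query}"

print(build_url("/v1/models", "YOUR_API_TOKEN"))
# https://api.lowcodeapi.com/openai/v1/models?api_token=YOUR_API_TOKEN
```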

## API Categories

- Image Generation AI
- Chat
- Audio
- Completions
- Embeddings
- Fine-Tuning
- Files
- Models
- Moderations
- Images
- Batch
- Assistants
- Threads
- Messages

## Common Endpoints

### Category: Chat

#### Create Chat Completion

**Method**: `POST` | **LowCodeAPI Path**: `/v1/chat/completions`

**Full URL**:
```
https://api.lowcodeapi.com/openai/v1/chat/completions?api_token={api_token}
```

**Description**: Creates a model response for the given chat conversation using GPT models.

**Query Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |

**Request Body Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | Model ID (gpt-5, gpt-5-mini, gpt-5-nano, gpt-5-pro, gpt-4.1, etc.) |
| `messages` | array | Yes | Array of message objects with role and content |
| `temperature` | number | No | Sampling temperature (0-2, default: 1) |
| `top_p` | number | No | Nucleus sampling parameter |
| `n` | number | No | Number of completions to generate |
| `stream` | boolean | No | Enable streaming response |
| `max_tokens` | number | No | Maximum tokens to generate |
| `presence_penalty` | number | No | Penalize new topics (-2.0 to 2.0) |
| `frequency_penalty` | number | No | Penalize repetition (-2.0 to 2.0) |
| `logit_bias` | object | No | Map of token IDs to bias values that modify their likelihood |
| `user` | string | No | Unique identifier for end-user |
| `response_format` | object | No | Format specification (e.g., JSON mode) |
| `seed` | integer | No | Seed for deterministic sampling |
| `tools` | array | No | List of tools the model may call |
| `tool_choice` | string/object | No | Controls which (if any) tool is called |
| `parallel_tool_calls` | boolean | No | Enable parallel function calling |
| `stream_options` | object | No | Options for streaming response |

**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/openai/v1/chat/completions?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-mini",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant specializing in technical writing."
      },
      {
        "role": "user",
        "content": "Explain how OAuth 2.0 works in simple terms."
      }
    ],
    "max_tokens": 500,
    "temperature": 0.7
  }'
```

**Official Documentation**: https://platform.openai.com/docs/api-reference/chat/create
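
The response follows OpenAI's chat completion schema: the assistant's text lives under `choices[0].message.content`. A minimal Python sketch of extracting it; the sample payload below is illustrative and abridged, not a live response:

```python
import json

# Illustrative, abridged chat completion response.
sample = json.loads("""
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "OAuth 2.0 lets an app act on your behalf..."},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 31, "completion_tokens": 12, "total_tokens": 43}
}
""")

def first_reply(resp: dict) -> str:
    # The assistant's text is nested under the first choice's message.
    return resp["choices"][0]["message"]["content"]

print(first_reply(sample))
```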

---

### Category: Audio

#### Create Transcription

**Method**: `POST` | **LowCodeAPI Path**: `/v1/audio/transcriptions`

**Full URL**:
```
https://api.lowcodeapi.com/openai/v1/audio/transcriptions?api_token={api_token}
```

**Description**: Transcribes audio into the input language using the Whisper model.

**Query Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |

**Request Body Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | file | Yes | The audio file to transcribe (mp3, mp4, mpeg, mpga, m4a, wav, webm) |
| `model` | string | Yes | Model ID (whisper-1) |
| `language` | string | No | Input audio language in ISO-639-1 format |
| `prompt` | string | No | Text to guide the model style |
| `response_format` | string | No | Output format: json, text, srt, verbose_json, vtt |
| `temperature` | number | No | Sampling temperature (0-1) |
| `timestamp_granularities[]` | array | No | Timestamp granularities for the transcription |

**Example Request**:
```bash
# Transcribe audio file
curl -X POST "https://api.lowcodeapi.com/openai/v1/audio/transcriptions?api_token=YOUR_API_TOKEN" \
  -F "[email protected]" \
  -F "model=whisper-1" \
  -F "language=en" \
  -F "response_format=json"
```

**Official Documentation**: https://platform.openai.com/docs/api-reference/audio/createTranscription
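
Before uploading, it can be worth checking the file against the formats the endpoint accepts. A small Python helper using the format list from the table above (the function name is ours):

```python
from pathlib import Path

# Formats listed in the request body table above.
SUPPORTED = {"mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"}

def is_supported_audio(filename: str) -> bool:
    """Check a file's extension against the formats the transcription endpoint accepts."""
    return Path(filename).suffix.lstrip(".").lower() in SUPPORTED

print(is_supported_audio("audio.mp3"))  # True
print(is_supported_audio("notes.txt"))  # False
```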

---

#### Create Translation

**Method**: `POST` | **LowCodeAPI Path**: `/v1/audio/translations`

**Full URL**:
```
https://api.lowcodeapi.com/openai/v1/audio/translations?api_token={api_token}
```

**Description**: Translates audio into English using the Whisper model.

**Query Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |

**Request Body Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | file | Yes | The audio file to translate |
| `model` | string | Yes | Model ID (whisper-1) |
| `prompt` | string | No | Text to guide the model style |
| `response_format` | string | No | Output format: json, text, srt, verbose_json, vtt |
| `temperature` | number | No | Sampling temperature (0-1) |

**Example Request**:
```bash
# Translate audio to English
curl -X POST "https://api.lowcodeapi.com/openai/v1/audio/translations?api_token=YOUR_API_TOKEN" \
  -F "file=@spanish_audio.mp3" \
  -F "model=whisper-1" \
  -F "response_format=text"
```

**Official Documentation**: https://platform.openai.com/docs/api-reference/audio/createTranslation

---

### Category: Embeddings

#### Create Embeddings

**Method**: `POST` | **LowCodeAPI Path**: `/v1/embeddings`

**Full URL**:
```
https://api.lowcodeapi.com/openai/v1/embeddings?api_token={api_token}
```

**Description**: Create embeddings for text inputs.

**Query Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |

**Request Body Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | Embedding model ID |
| `input` | string/array | Yes | Text to embed (string or array of strings) |
| `encoding_format` | string | No | Format: float or base64 |
| `dimensions` | number | No | Number of dimensions for the output |
| `user` | string | No | Unique identifier for end-user |

**Example Request**:
```bash
# Generate embeddings for text
curl -X POST "https://api.lowcodeapi.com/openai/v1/embeddings?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "Your text string goes here",
    "encoding_format": "float"
  }'
```

**Official Documentation**: https://platform.openai.com/docs/api-reference/embeddings/create
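
A common use of the returned vectors is similarity scoring. A minimal cosine-similarity sketch in Python; the vectors below are toy stand-ins for real embedding output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy vectors standing in for real embedding output.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
v3 = [-0.3, 0.1, -0.2]
print(cosine_similarity(v1, v2))  # ~1.0 (identical direction)
print(cosine_similarity(v1, v3) < cosine_similarity(v1, v2))  # True
```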

---

### Category: Images

#### Create Image

**Method**: `POST` | **LowCodeAPI Path**: `/v1/images/generations`

**Full URL**:
```
https://api.lowcodeapi.com/openai/v1/images/generations?api_token={api_token}
```

**Description**: Creates an image given a prompt using DALL-E models.

**Query Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |

**Request Body Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | Image generation model (dall-e-3, dall-e-2) |
| `prompt` | string | Yes | Text description of desired image |
| `n` | number | No | Number of images to generate (1-10 for dall-e-2; dall-e-3 supports only 1; default: 1) |
| `quality` | string | No | Image quality: standard or hd (dall-e-3 only) |
| `response_format` | string | No | Format: url or b64_json |
| `size` | string | No | Image size: 256x256, 512x512, or 1024x1024 for dall-e-2; 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 |
| `style` | string | No | Image style: vivid or natural (dall-e-3 only) |
| `user` | string | No | Unique identifier for end-user |

**Example Request**:
```bash
# Generate an image from text prompt
curl -X POST "https://api.lowcodeapi.com/openai/v1/images/generations?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "dall-e-3",
    "prompt": "A serene mountain landscape at sunset with a lake reflection",
    "n": 1,
    "size": "1024x1024",
    "quality": "hd",
    "style": "vivid"
  }'
```

**Official Documentation**: https://platform.openai.com/docs/api-reference/images/create
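
The `n` and `size` constraints differ between models, so validating a request locally can save a round trip. A Python sketch encoding the constraints from the table above (the helper name and error strings are ours):

```python
# Constraints as described in the parameter table above:
# dall-e-3 accepts only n=1 and the three larger sizes;
# dall-e-2 accepts n=1-10 and the square sizes.
VALID_SIZES = {
    "dall-e-2": {"256x256", "512x512", "1024x1024"},
    "dall-e-3": {"1024x1024", "1792x1024", "1024x1792"},
}

def validate_image_request(model: str, n: int, size: str) -> list:
    """Return a list of validation errors (empty if the request looks valid)."""
    errors = []
    if size not in VALID_SIZES.get(model, set()):
        errors.append(f"size {size} not valid for {model}")
    if model == "dall-e-3" and n != 1:
        errors.append("dall-e-3 supports only n=1")
    if not 1 <= n <= 10:
        errors.append("n must be between 1 and 10")
    return errors

print(validate_image_request("dall-e-3", 1, "1024x1024"))  # []
print(validate_image_request("dall-e-3", 3, "512x512"))
```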

---

### Category: Fine-Tuning

#### Create Fine Tuning Job

**Method**: `POST` | **LowCodeAPI Path**: `/v1/fine_tuning/jobs`

**Full URL**:
```
https://api.lowcodeapi.com/openai/v1/fine_tuning/jobs?api_token={api_token}
```

**Description**: Creates a fine-tuning job to customize a model.

**Query Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |

**Request Body Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | Base model to fine-tune |
| `training_file` | string | Yes | ID of uploaded training file |
| `validation_file` | string | No | ID of uploaded validation file |
| `hyperparameters` | object | No | Hyperparameter settings |
| `suffix` | string | No | String added to fine-tuned model name |
| `integrations` | array | No | Array of integrations to enable |

**Example Request**:
```bash
# Create a fine-tuning job
curl -X POST "https://api.lowcodeapi.com/openai/v1/fine_tuning/jobs?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-mini",
    "training_file": "file-abc123",
    "suffix": "custom-model",
    "hyperparameters": {
      "n_epochs": 3,
      "learning_rate_multiplier": 0.5
    }
  }'
```

**Official Documentation**: https://platform.openai.com/docs/api-reference/fine-tuning/create

---

#### List Fine Tuning Jobs

**Method**: `GET` | **LowCodeAPI Path**: `/v1/fine_tuning/jobs`

**Full URL**:
```
https://api.lowcodeapi.com/openai/v1/fine_tuning/jobs?after={after}&limit={limit}&api_token={api_token}
```

**Description**: List fine-tuning jobs with pagination.

**Query Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
| `after` | string | No | Identifier for last job from previous page |
| `limit` | number | No | Number of jobs to retrieve (default: 20) |

**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/openai/v1/fine_tuning/jobs?limit=50&api_token=YOUR_API_TOKEN"
```

**Official Documentation**: https://platform.openai.com/docs/api-reference/fine-tuning/list
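
The `after` cursor pages through results: pass the last job ID of one page to fetch the next. A Python sketch of walking all pages; `fetch_page` stands in for the HTTP GET above and is simulated here with canned data:

```python
def iter_jobs(fetch_page, limit=20):
    """Walk the after/limit cursor until the API reports no more pages.

    fetch_page(after, limit) stands in for the HTTP GET above and must
    return a dict shaped like OpenAI's list response ({"data": [...], "has_more": bool}).
    """
    after = None
    while True:
        page = fetch_page(after, limit)
        for job in page["data"]:
            yield job
        if not page.get("has_more"):
            break
        after = page["data"][-1]["id"]  # cursor for the next page

# Simulated two-page response for demonstration.
pages = {
    None: {"data": [{"id": "ftjob-1"}, {"id": "ftjob-2"}], "has_more": True},
    "ftjob-2": {"data": [{"id": "ftjob-3"}], "has_more": False},
}
jobs = list(iter_jobs(lambda after, limit: pages[after]))
print([j["id"] for j in jobs])  # ['ftjob-1', 'ftjob-2', 'ftjob-3']
```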

---

### Category: Models

#### List Models

**Method**: `GET` | **LowCodeAPI Path**: `/v1/models`

**Full URL**:
```
https://api.lowcodeapi.com/openai/v1/models?api_token={api_token}
```

**Description**: Lists the currently available models.

**Query Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |

**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/openai/v1/models?api_token=YOUR_API_TOKEN"
```

**Official Documentation**: https://platform.openai.com/docs/api-reference/models/list
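
The response is a list object whose `data` array holds one entry per model. A Python sketch of filtering model IDs from such a response; the sample payload is illustrative and abridged:

```python
import json

# Illustrative, abridged /v1/models response.
sample = json.loads("""
{"object": "list", "data": [
  {"id": "gpt-5-mini", "object": "model", "owned_by": "openai"},
  {"id": "whisper-1", "object": "model", "owned_by": "openai"},
  {"id": "text-embedding-3-small", "object": "model", "owned_by": "openai"}
]}
""")

def model_ids(resp: dict, prefix: str = "") -> list:
    """Extract model IDs from a list response, optionally filtered by prefix."""
    return [m["id"] for m in resp["data"] if m["id"].startswith(prefix)]

print(model_ids(sample, prefix="gpt-"))  # ['gpt-5-mini']
```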

---

#### Retrieve Model

**Method**: `GET` | **LowCodeAPI Path**: `/v1/models/model`

**Full URL**:
```
https://api.lowcodeapi.com/openai/v1/models/model?model={model}&api_token={api_token}
```

**Description**: Retrieves a model instance, providing basic information about the model.

**Query Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | The ID of the model to retrieve |
| `api_token` | string | Yes | Your LowCodeAPI authentication token |

**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/openai/v1/models/model?model=gpt-5-mini&api_token=YOUR_API_TOKEN"
```

**Official Documentation**: https://platform.openai.com/docs/api-reference/models/retrieve

---

### Category: Moderations

#### Create Moderation

**Method**: `POST` | **LowCodeAPI Path**: `/v1/moderations`

**Full URL**:
```
https://api.lowcodeapi.com/openai/v1/moderations?api_token={api_token}
```

**Description**: Classifies whether text violates OpenAI's content policy.

**Query Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |

**Request Body Parameters**:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `input` | string/array | Yes | Text to moderate |
| `model` | string | No | Moderation model to use |

**Example Request**:
```bash
# Check if content violates policy
curl -X POST "https://api.lowcodeapi.com/openai/v1/moderations?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "I want to hurt someone."
  }'
```

**Official Documentation**: https://platform.openai.com/docs/api-reference/moderations/create
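
The response reports a per-category verdict alongside confidence scores. A Python sketch of extracting the flagged category names; the sample payload is illustrative and abridged:

```python
import json

# Illustrative, abridged moderation response.
sample = json.loads("""
{
  "id": "modr-abc123",
  "results": [
    {"flagged": true,
     "categories": {"violence": true, "hate": false, "self-harm": false},
     "category_scores": {"violence": 0.91, "hate": 0.02, "self-harm": 0.01}}
  ]
}
""")

def flagged_categories(resp: dict) -> list:
    """Names of the categories the moderation model flagged as violations."""
    result = resp["results"][0]
    return [name for name, hit in result["categories"].items() if hit]

print(flagged_categories(sample))  # ['violence']
```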

---

## Usage Examples

### Example 1: Chat Completion with Streaming

Generate responses with real-time streaming.

```bash
# Enable streaming for immediate response
curl -X POST "https://api.lowcodeapi.com/openai/v1/chat/completions?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-mini",
    "messages": [
      {"role": "system", "content": "You are a creative writing assistant."},
      {"role": "user", "content": "Write a short story about time travel."}
    ],
    "stream": true,
    "max_tokens": 500,
    "temperature": 0.8
  }'
```
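
With `stream: true` the response arrives as Server-Sent Events: each `data:` line carries a JSON chunk whose `delta` holds a fragment of the reply, and the stream ends with `data: [DONE]`. A minimal Python sketch of reassembling the text from such lines (the sample fragment is illustrative):

```python
import json

def collect_stream_text(lines) -> str:
    """Concatenate the content deltas from SSE 'data:' lines into the full reply."""
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # ignore blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # sentinel marking the end of the stream
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

# Illustrative stream fragment.
stream = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Once upon "}}]}',
    'data: {"choices": [{"delta": {"content": "a time..."}}]}',
    "data: [DONE]",
]
print(collect_stream_text(stream))  # Once upon a time...
```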

### Example 2: Generate and Edit Images

Create images using DALL-E models.

```bash
# Generate image from text
curl -X POST "https://api.lowcodeapi.com/openai/v1/images/generations?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "dall-e-3",
    "prompt": "A futuristic city with flying cars at sunset, cyberpunk style",
    "n": 1,
    "size": "1024x1024",
    "quality": "hd"
  }'

# Generate multiple variations (dall-e-2; dall-e-3 supports only n=1)
curl -X POST "https://api.lowcodeapi.com/openai/v1/images/generations?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "dall-e-2",
    "prompt": "A cute robot gardener tending to plants",
    "n": 3,
    "size": "1024x1024"
  }'
```

### Example 3: Audio Processing

Transcribe and translate audio files.

```bash
# Transcribe audio to text
curl -X POST "https://api.lowcodeapi.com/openai/v1/audio/transcriptions?api_token=YOUR_API_TOKEN" \
  -F "[email protected]" \
  -F "model=whisper-1" \
  -F "language=en" \
  -F "response_format=verbose_json"

# Translate foreign audio to English
curl -X POST "https://api.lowcodeapi.com/openai/v1/audio/translations?api_token=YOUR_API_TOKEN" \
  -F "file=@french_speech.mp3" \
  -F "model=whisper-1" \
  -F "response_format=text"
```

### Example 4: Fine-tune a Custom Model

Complete workflow for model customization.

```bash
# Step 1: Upload training file (via Files API)
# Step 2: Create fine-tuning job
curl -X POST "https://api.lowcodeapi.com/openai/v1/fine_tuning/jobs?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-mini",
    "training_file": "file-abc123",
    "suffix": "my-custom-model"
  }'

# Step 3: Check job status
curl -X GET "https://api.lowcodeapi.com/openai/v1/fine_tuning/jobs?api_token=YOUR_API_TOKEN"

# Step 4: Use the fine-tuned model
# Once job completes, use the returned fine_tuned_model ID
curl -X POST "https://api.lowcodeapi.com/openai/v1/chat/completions?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ft:gpt-5-mini:org-id:suffix:abc123",
    "messages": [{"role": "user", "content": "Test custom model"}]
  }'
```

### Example 5: Content Moderation

Check content for policy violations.

```bash
# Moderate user-generated content
curl -X POST "https://api.lowcodeapi.com/openai/v1/moderations?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "This is a sample comment to check for policy violations."
  }'
```

## Complete Endpoint Reference

For a complete list of all endpoints and their parameters, refer to:
- **OpenAPI Definition**: `https://backend.lowcodeapi.com/openai/definition`
- **Official Provider Documentation**: https://platform.openai.com/docs/api-reference/introduction

## Rate Limits & Best Practices

- Use streaming for long completions to improve perceived performance
- Fine-tuning jobs can take time, so check their status periodically
- Choose appropriate model based on task complexity and cost
- Use embeddings for semantic search and similarity matching
- Moderate user content before processing in production systems

## Error Handling

Standard HTTP status codes apply. Common errors include rate limiting (`429`), invalid or missing tokens (`401`), and model availability issues (`404`).
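
For transient failures such as rate limiting, retrying with exponential backoff is a common pattern. A Python sketch (the helper and its error convention are ours; a real client would inspect the HTTP status code before deciding to retry):

```python
import time

def with_retries(call, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call` on retryable failures with exponential backoff.

    `call` should raise RuntimeError on a retryable error (e.g. 429/5xx);
    the `sleep` hook is injectable so the backoff is testable without waiting.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Demonstration: a call that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

delays = []
result = with_retries(flaky, sleep=delays.append)
print(result)  # ok
print(delays)  # [1.0, 2.0]
```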