# Groq Integration via LowCodeAPI

**Last Updated**: January 27, 2025

## Overview
Groq provides ultra-fast AI inference with hardware-accelerated compute for large language models and AI applications.

**Categories:**
- AI Cloud (`ai-cloud`)

## Base Endpoint
https://api.lowcodeapi.com/groq

**Important**: Always include the provider name in the URL path after `api.lowcodeapi.com/`

## Authentication
**Type:** TOKEN

**Official Documentation:** https://console.groq.com/docs

## URL Format (Important)

LowCodeAPI supports two URL formats. **Always try the New Format first**, then fall back to Old Format if needed.

### New Format (Priority)
- Path parameters stay in the URL path
- Do NOT include path parameters as query parameters
- Example: `https://api.lowcodeapi.com/{provider}/resource/{id}?api_token=XXX`

### Old Format (Fallback)
- Path parameters become query parameters
- Example: `https://api.lowcodeapi.com/{provider}/resource/id?id={id}&api_token=XXX`

### Decision Flow for AI Agents
1. Always use **New Format** first - keep path parameters in the URL path
2. If you get a 404 or error, try **Old Format** with sanitized path
3. Log which format worked for future requests to this provider
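The decision flow above can be sketched as a pair of URL builders (a minimal sketch: URL construction only, no network calls; the `/resource/{id}` path and parameter names are illustrative):

```python
from urllib.parse import urlencode

BASE = "https://api.lowcodeapi.com/groq"

def new_format_url(path_template: str, path_params: dict, api_token: str) -> str:
    # New Format: substitute path parameters directly into the URL path
    path = path_template.format(**path_params)
    return f"{BASE}{path}?{urlencode({'api_token': api_token})}"

def old_format_url(path_template: str, path_params: dict, api_token: str) -> str:
    # Old Format: keep the parameter name in the path and pass values as query args
    path = path_template
    for name in path_params:
        path = path.replace("{%s}" % name, name)
    query = {**path_params, "api_token": api_token}
    return f"{BASE}{path}?{urlencode(query)}"

print(new_format_url("/resource/{id}", {"id": "42"}, "XXX"))
# https://api.lowcodeapi.com/groq/resource/42?api_token=XXX
print(old_format_url("/resource/{id}", {"id": "42"}, "XXX"))
# https://api.lowcodeapi.com/groq/resource/id?id=42&api_token=XXX
```

An agent would issue the request with the New Format URL first and rebuild with `old_format_url` only on a 404.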

## Common Endpoints

### Create chat completion

**Method:** POST
**LowCodeAPI Path:** /v1/chat/completions

**New Format URL:**
https://api.lowcodeapi.com/groq/v1/chat/completions?api_token=YOUR_API_TOKEN

**Old Format URL** (identical here, since this endpoint has no path parameters):
https://api.lowcodeapi.com/groq/v1/chat/completions?api_token=YOUR_API_TOKEN

**Request Body:**

| Field | Type | Description |
|-------|------|-------------|
| frequency_penalty | number | Penalizes tokens by how often they have appeared so far (-2.0 to 2.0) |
| logit_bias | object | Map of token IDs to bias values that adjust their likelihood |
| logprobs | boolean | Whether to return log probabilities of the output tokens |
| max_tokens | number | Maximum number of tokens to generate |
| messages | array | Conversation messages, each with a `role` and `content` |
| model | string | ID of the model to use |
| n | number | Number of completions to generate |
| presence_penalty | number | Penalizes tokens that have appeared at all (-2.0 to 2.0) |
| response_format | object | Output format options, e.g. `{"type": "json_object"}` |
| seed | number | Seed for best-effort deterministic sampling |
| stop | array | Sequences at which the model stops generating |
| stream | boolean | If true, stream partial deltas as server-sent events |
| stream_options | object | Options for streaming responses |
| temperature | number | Sampling temperature; higher values produce more random output |
| tool_choice | string | Controls which (if any) tool the model calls |
| tools | array | Tools (functions) the model may call |
| top_logprobs | number | Number of most likely tokens to return per position (requires `logprobs`) |
| top_p | number | Nucleus sampling: only tokens within the top `top_p` probability mass are considered |
| user | string | End-user identifier for abuse monitoring |

**Example Request (New Format):**

```bash
curl -X POST 'https://api.lowcodeapi.com/groq/v1/chat/completions?api_token=YOUR_API_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

**Official Documentation:** https://console.groq.com/docs/chat-completions

### List models

**Method:** GET
**LowCodeAPI Path:** /v1/models

**New Format URL:**
https://api.lowcodeapi.com/groq/v1/models?api_token=YOUR_API_TOKEN

**Old Format URL** (identical here, since this endpoint has no path parameters):
https://api.lowcodeapi.com/groq/v1/models?api_token=YOUR_API_TOKEN

**Example Request (New Format):**

```bash
curl -X GET 'https://api.lowcodeapi.com/groq/v1/models?api_token=YOUR_API_TOKEN'
```

**Official Documentation:** https://console.groq.com/docs/models

### Create batch request

**Method:** POST
**LowCodeAPI Path:** /v1/batch

**New Format URL:**
https://api.lowcodeapi.com/groq/v1/batch?api_token=YOUR_API_TOKEN

**Old Format URL** (identical here, since this endpoint has no path parameters):
https://api.lowcodeapi.com/groq/v1/batch?api_token=YOUR_API_TOKEN

**Request Body:**

| Field | Type | Description |
|-------|------|-------------|
| completion_window | string | Time window in which the batch must complete, e.g. `"24h"` |
| endpoint | string | API endpoint to run each batched request against, e.g. `/v1/chat/completions` |
| input_file_id | string | ID of an uploaded JSONL file containing the batched requests |
| metadata | object | Optional custom key-value metadata |

**Example Request (New Format):**

```bash
curl -X POST 'https://api.lowcodeapi.com/groq/v1/batch?api_token=YOUR_API_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{
    "input_file_id": "YOUR_FILE_ID",
    "endpoint": "/v1/chat/completions",
    "completion_window": "24h"
  }'
```

**Official Documentation:** https://console.groq.com/docs/batch
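The batch input file referenced by `input_file_id` is a JSONL file with one request per line. A minimal sketch of building such lines (the `custom_id`/`method`/`url`/`body` line schema is an assumption based on common OpenAI-compatible batch APIs; verify it against the official Groq batch docs):

```python
import json

# Build one batch-request line. The line schema here (custom_id, method,
# url, body) is an assumption modeled on OpenAI-compatible batch APIs.
def batch_line(custom_id: str, model: str, user_message: str) -> str:
    return json.dumps({
        "custom_id": custom_id,
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        },
    })

# Join several lines into the JSONL payload to upload
jsonl = "\n".join(
    batch_line(f"req-{i}", "your-model-name", f"Question {i}") for i in range(3)
)
print(len(jsonl.splitlines()))  # 3
```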


## Usage Examples

### Example 1: Basic Chat Completion

Creating a simple text completion or chat message:

```bash
# Create a chat completion - no path parameters needed
curl -X POST "https://api.lowcodeapi.com/groq/v1/chat/completions?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model-name",
    "messages": [
      {"role": "user", "content": "Hello, how can you help me?"}
    ]
  }'

# Response includes generated content
```

### Example 2: Chat Completion with Sampling Parameters

Tuning generation with the documented sampling fields (`max_tokens`, `temperature`, `top_p`):

```bash
# Generate content with explicit sampling settings
curl -X POST "https://api.lowcodeapi.com/groq/v1/chat/completions?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model-name",
    "messages": [
      {"role": "user", "content": "Write a short poem about technology"}
    ],
    "max_tokens": 100,
    "temperature": 0.7,
    "top_p": 0.9
  }'
```

### Example 3: List Available Models

```bash
# Get list of available models
curl -X GET "https://api.lowcodeapi.com/groq/v1/models?api_token=YOUR_API_TOKEN"
```

## Error Handling

LowCodeAPI returns standard HTTP status codes:

| Status Code | Description |
|-------------|-------------|
| 200 | Success - Request completed successfully |
| 400 | Bad Request - Invalid parameters or request body |
| 401 | Unauthorized - Invalid or missing API token |
| 403 | Forbidden - Insufficient permissions |
| 404 | Not Found - Endpoint or resource doesn't exist |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Server Error - Provider API error |

All error responses include error details:

```json
{
  "data": {
    "error": {
      "message": "Error description",
      "code": "ERROR_CODE"
    }
  }
}
```
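A minimal sketch of reading that wrapped error shape from an already-parsed response body (the `UNAUTHORIZED` code value below is illustrative, not a documented code):

```python
def extract_error(body: dict):
    """Return (message, code) if the wrapped response carries an error, else None."""
    error = body.get("data", {}).get("error")
    if error is None:
        return None
    return error.get("message"), error.get("code")

# Illustrative error payload; the code string is an assumption
sample = {"data": {"error": {"message": "Invalid or missing API token",
                             "code": "UNAUTHORIZED"}}}
print(extract_error(sample))
# ('Invalid or missing API token', 'UNAUTHORIZED')
```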

## Complete Endpoint Reference

| Endpoint | Method | Category |
|----------|--------|----------|
| Create chat completion | POST | Chat |
| List models | GET | Models |
| Create batch request | POST | Batch |

## API Definition Endpoints

You can fetch the complete API specification for this provider:

**New Format (OpenAPI spec):**
```bash
curl 'https://backend.lowcodeapi.com/groq/openapi'
```

**Old Format (API definition):**
```bash
curl 'https://backend.lowcodeapi.com/groq/definition'
```

## Response Format

All responses are wrapped in a `data` key:

```json
{
  "data": {
    // Actual response from provider (object or array)
  }
}
```
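Because of this envelope, a client unwraps the `data` key before using the provider result (a minimal sketch; the payload below is illustrative):

```python
import json

# Illustrative wrapped response as it would arrive over the wire
raw = '{"data": {"object": "list", "models": ["model-a", "model-b"]}}'

# Every LowCodeAPI response nests the provider result under "data"
payload = json.loads(raw)["data"]
print(payload["models"])  # ['model-a', 'model-b']
```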