# Gemini by Google Integration via LowCodeAPI
## Overview
Google's Gemini API provides access to state-of-the-art generative AI models for building multimodal applications. It supports text generation, image understanding, code generation, embeddings, and more through models such as Gemini 2.5 Pro and Gemini 2.5 Flash.
## Base Endpoint
```
https://api.lowcodeapi.com/gemini/
```
## Authentication
LowCodeAPI handles authentication automatically. You only need to:
1. **Sign up** at [Google Cloud Console](https://console.cloud.google.com/apis/credentials)
2. **Create an API Key** for the Gemini API
3. **Connect your account** in LowCodeAPI dashboard
4. **Use your `api_token`** in all requests
The `api_token` is your LowCodeAPI authentication token. On every request, LowCodeAPI will automatically:
- Look up the Google API key connected to your account
- Attach it to the request it forwards to the Gemini API
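Since every endpoint takes the same `api_token` query parameter, a small client-side helper keeps URLs consistent. This is an illustrative sketch (the `build_url` helper and the placeholder token are not part of LowCodeAPI itself):

```python
from urllib.parse import urlencode

BASE_URL = "https://api.lowcodeapi.com/gemini"

def build_url(path: str, **params) -> str:
    """Build a LowCodeAPI Gemini URL with the api_token appended.

    Replace YOUR_API_TOKEN with your real LowCodeAPI token.
    """
    params.setdefault("api_token", "YOUR_API_TOKEN")
    return f"{BASE_URL}{path}?{urlencode(params)}"

# Example: the List Models endpoint
print(build_url("/v1beta/models"))
# Example: a model-scoped endpoint with an extra query parameter
print(build_url("/v1beta/models/model-generatecontent",
                model="models/gemini-2.5-flash"))
```

Note that `urlencode` percent-encodes the slash in model names (`models%2Fgemini-2.5-flash`), which is the safe form for a query-string value.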
**Auth Type**: API Key
## API Categories
- **Frontier AI Labs** - Advanced AI research and model APIs
## Common Endpoints
### Category: Models
#### List Models
**Method**: `GET` | **LowCodeAPI Path**: `/v1beta/models`
**Full URL**:
```
https://api.lowcodeapi.com/gemini/v1beta/models?api_token={api_token}
```
**Description**: Lists the Gemini models available through the API
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/gemini/v1beta/models?api_token=YOUR_API_TOKEN"
```
**Example Response**:
```json
{
"data": {
"models": [
{
"name": "models/gemini-2.5-flash",
"displayName": "Gemini 2.5 Flash",
"description": "Fast and versatile model"
}
]
}
}
```
**Official Documentation**: [https://ai.google.dev/api/models#v1beta.models.list](https://ai.google.dev/api/models#v1beta.models.list)
---
#### Get Model
**Method**: `GET` | **LowCodeAPI Path**: `/v1beta/models/name`
**Full URL**:
```
https://api.lowcodeapi.com/gemini/v1beta/models/name?name={name}&api_token={api_token}
```
**Description**: Gets information about a specific Model, such as its version number, token limits, parameters, and other metadata
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `name` | string | Yes | The name of the model (e.g., models/gemini-2.5-flash) |
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/gemini/v1beta/models/name?name=models/gemini-2.5-flash&api_token=YOUR_API_TOKEN"
```
**Official Documentation**: [https://ai.google.dev/api/models#v1beta.models.get](https://ai.google.dev/api/models#v1beta.models.get)
---
### Category: Content Generation
#### Generate Content
**Method**: `POST` | **LowCodeAPI Path**: `/v1beta/models/model-generatecontent`
**Full URL**:
```
https://api.lowcodeapi.com/gemini/v1beta/models/model-generatecontent?model={model}&api_token={api_token}
```
**Description**: Generates a model response for an input GenerateContentRequest and returns the complete response in a single payload
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | The model name (e.g., models/gemini-2.5-flash) |
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `contents` | array | Yes | The contents of the current conversation with the model |
| `generationConfig` | object | No | Configuration options (temperature, topP, maxOutputTokens, etc.) |
| `systemInstruction` | object | No | Developer-set system instruction for the model |
| `tools` | array | No | A list of Tools the model may use |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/models/model-generatecontent?model=models/gemini-2.5-flash&api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"contents": [
{
"role": "user",
"parts": [{"text": "Explain quantum computing in simple terms"}]
}
],
"generationConfig": {
"temperature": 0.7,
"maxOutputTokens": 1024
}
}'
```
**Official Documentation**: [https://ai.google.dev/api/generate-content#v1beta.models.generateContent](https://ai.google.dev/api/generate-content#v1beta.models.generateContent)
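The same request can be issued from Python using only the standard library. This is a hedged sketch: the `make_prompt` helper is illustrative, and the network call is left commented out so nothing runs without a real token:

```python
import json
from urllib import request

URL = ("https://api.lowcodeapi.com/gemini/v1beta/models/model-generatecontent"
       "?model=models/gemini-2.5-flash&api_token=YOUR_API_TOKEN")

def make_prompt(text: str, temperature: float = 0.7, max_tokens: int = 1024) -> dict:
    """Assemble a single-turn GenerateContentRequest body."""
    return {
        "contents": [{"role": "user", "parts": [{"text": text}]}],
        "generationConfig": {
            "temperature": temperature,
            "maxOutputTokens": max_tokens,
        },
    }

body = json.dumps(make_prompt("Explain quantum computing in simple terms")).encode()
req = request.Request(URL, data=body, headers={"Content-Type": "application/json"})
# Uncomment once YOUR_API_TOKEN is set:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```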
---
#### Stream Generate Content
**Method**: `POST` | **LowCodeAPI Path**: `/v1beta/models/model-streamgeneratecontent`
**Full URL**:
```
https://api.lowcodeapi.com/gemini/v1beta/models/model-streamgeneratecontent?model={model}&api_token={api_token}
```
**Description**: Generates a streamed response from the model using Server-Sent Events (SSE)
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | The model name (e.g., models/gemini-2.5-flash) |
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `contents` | array | Yes | The contents of the current conversation with the model |
| `generationConfig` | object | No | Configuration options for generation |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/models/model-streamgeneratecontent?model=models/gemini-2.5-flash&api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"contents": [
{
"role": "user",
"parts": [{"text": "Write a short poem about AI"}]
}
]
}'
```
**Official Documentation**: [https://ai.google.dev/api/generate-content#v1beta.models.streamGenerateContent](https://ai.google.dev/api/generate-content#v1beta.models.streamGenerateContent)
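Streamed responses arrive as Server-Sent Events, one `data: {...}` line per chunk. The parser below is a minimal sketch under that assumption; the sample payloads follow the GenerateContentResponse shape, but how the server splits text across chunks varies:

```python
import json

def parse_sse(lines):
    """Yield decoded JSON payloads from 'data: ...' SSE lines,
    skipping blank keep-alive lines."""
    for line in lines:
        line = line.strip()
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])

# Simulated stream of two partial responses:
sample = [
    'data: {"candidates": [{"content": {"parts": [{"text": "Roses are"}]}}]}',
    "",
    'data: {"candidates": [{"content": {"parts": [{"text": " red"}]}}]}',
]
text = "".join(
    chunk["candidates"][0]["content"]["parts"][0]["text"]
    for chunk in parse_sse(sample)
)
print(text)  # -> Roses are red
```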
---
### Category: Embeddings
#### Embed Content
**Method**: `POST` | **LowCodeAPI Path**: `/v1beta/models/model-embedcontent`
**Full URL**:
```
https://api.lowcodeapi.com/gemini/v1beta/models/model-embedcontent?model={model}&api_token={api_token}
```
**Description**: Generates a text embedding vector from the input Content using the specified Gemini Embedding model
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | The embedding model name (e.g., models/text-embedding-004) |
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `content` | object | Yes | The content to embed |
| `taskType` | string | No | Optional task type (RETRIEVAL_QUERY, RETRIEVAL_DOCUMENT, etc.) |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/models/model-embedcontent?model=models/text-embedding-004&api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"content": {
"parts": [{"text": "Hello, world!"}]
},
"taskType": "RETRIEVAL_DOCUMENT"
}'
```
**Official Documentation**: [https://ai.google.dev/api/embeddings#v1beta.models.embedContent](https://ai.google.dev/api/embeddings#v1beta.models.embedContent)
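The upstream response nests the vector under `embedding.values`; the List Models example above suggests LowCodeAPI wraps responses in a `data` envelope, so this sketch unwraps it defensively (the helper name and sample numbers are illustrative):

```python
def extract_embedding(response: dict) -> list:
    """Pull the embedding vector out of a decoded embedContent response,
    unwrapping LowCodeAPI's data envelope if present."""
    body = response.get("data", response)
    return body["embedding"]["values"]

sample = {"data": {"embedding": {"values": [0.013, -0.008, 0.021]}}}
print(extract_embedding(sample))  # -> [0.013, -0.008, 0.021]
```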
---
#### Batch Embed Contents
**Method**: `POST` | **LowCodeAPI Path**: `/v1beta/models/model-batchembedcontents`
**Full URL**:
```
https://api.lowcodeapi.com/gemini/v1beta/models/model-batchembedcontents?model={model}&api_token={api_token}
```
**Description**: Generates multiple embedding vectors from a batch of input Content in a single request
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | The embedding model name |
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `requests` | array | Yes | Embed requests (max 2048) |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/models/model-batchembedcontents?model=models/text-embedding-004&api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"requests": [
{"content": {"parts": [{"text": "First text"}]}},
{"content": {"parts": [{"text": "Second text"}]}}
]
}'
```
**Official Documentation**: [https://ai.google.dev/api/embeddings#v1beta.models.batchEmbedContents](https://ai.google.dev/api/embeddings#v1beta.models.batchEmbedContents)
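Because each call accepts at most 2048 requests, larger corpora need client-side chunking. A sketch of that splitting (the helper is illustrative, not part of either API):

```python
def batch_bodies(texts, batch_size=2048):
    """Yield batchEmbedContents request bodies, each within the
    2048-requests-per-call limit."""
    for i in range(0, len(texts), batch_size):
        yield {
            "requests": [
                {"content": {"parts": [{"text": t}]}}
                for t in texts[i:i + batch_size]
            ]
        }

batches = list(batch_bodies([f"doc {n}" for n in range(5000)]))
print(len(batches))  # -> 3 (2048 + 2048 + 904)
```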
---
### Category: Tokens
#### Count Tokens
**Method**: `POST` | **LowCodeAPI Path**: `/v1beta/models/model-counttokens`
**Full URL**:
```
https://api.lowcodeapi.com/gemini/v1beta/models/model-counttokens?model={model}&api_token={api_token}
```
**Description**: Runs a model's tokenizer on input Content and returns the token count
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | The model name (e.g., models/gemini-2.5-flash) |
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `contents` | array | No | The contents to count tokens for |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/models/model-counttokens?model=models/gemini-2.5-flash&api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"contents": [
{"parts": [{"text": "Count my tokens"}]}
]
}'
```
**Official Documentation**: [https://ai.google.dev/api/tokens#v1beta.models.countTokens](https://ai.google.dev/api/tokens#v1beta.models.countTokens)
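Pairing countTokens with the limits reported by the Get Model endpoint gives a simple pre-flight check. This sketch assumes the upstream `totalTokens` field and a `data` envelope as in the List Models example; the limit shown is a placeholder, so read the real one from model metadata:

```python
def total_tokens(response: dict) -> int:
    """Read totalTokens from a decoded countTokens response,
    unwrapping LowCodeAPI's data envelope if present."""
    body = response.get("data", response)
    return body["totalTokens"]

def prompt_fits(prompt_tokens: int, input_token_limit: int) -> bool:
    """Compare the counted prompt against the model's inputTokenLimit
    (reported by the Get Model endpoint)."""
    return prompt_tokens <= input_token_limit

sample = {"data": {"totalTokens": 900}}
print(prompt_fits(total_tokens(sample), 1_048_576))  # -> True
```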
---
### Category: Files
#### Upload File
**Method**: `POST` | **LowCodeAPI Path**: `/upload/v1beta/files`
**Full URL**:
```
https://api.lowcodeapi.com/gemini/upload/v1beta/files?api_token={api_token}
```
**Description**: Creates a File by uploading data from the user's device for use in multimodal generation
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `file` | file | Yes | The file to upload |
| `file.displayName` | string | No | Display name of the file |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/gemini/upload/v1beta/files?api_token=YOUR_API_TOKEN" \
-H "Content-Type: multipart/form-data" \
-F "[email protected]" \
-F "file.displayName=My Image"
```
**Official Documentation**: [https://ai.google.dev/api/files#v1beta.media.upload](https://ai.google.dev/api/files#v1beta.media.upload)
---
#### List Files
**Method**: `GET` | **LowCodeAPI Path**: `/v1beta/files`
**Full URL**:
```
https://api.lowcodeapi.com/gemini/v1beta/files?api_token={api_token}
```
**Description**: Lists the metadata for Files owned by the requesting project
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `pageSize` | number | No | Maximum number of Files to return per page |
| `pageToken` | string | No | Page token from a previous ListFiles call |
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/gemini/v1beta/files?pageSize=10&api_token=YOUR_API_TOKEN"
```
**Official Documentation**: [https://ai.google.dev/api/files#v1beta.files.list](https://ai.google.dev/api/files#v1beta.files.list)
---
### Category: Caching
#### Create Cached Content
**Method**: `POST` | **LowCodeAPI Path**: `/v1beta/cachedContents`
**Full URL**:
```
https://api.lowcodeapi.com/gemini/v1beta/cachedContents?api_token={api_token}
```
**Description**: Creates a CachedContent resource so that large context can be reused across multiple API calls
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | The model to use (format: models/{model}) |
| `contents` | array | Yes | The content to cache |
| `ttl` | string | No | Expiration time in seconds (max: 604800) |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/cachedContents?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "models/gemini-2.5-flash",
"contents": [
{"parts": [{"text": "Large context to cache..."}]}
],
"ttl": "3600"
}'
```
**Official Documentation**: [https://ai.google.dev/api/caching#v1beta.cachedContents.create](https://ai.google.dev/api/caching#v1beta.cachedContents.create)
---
### Category: Interactions
#### Create Interaction
**Method**: `POST` | **LowCodeAPI Path**: `/v1beta/interactions`
**Full URL**:
```
https://api.lowcodeapi.com/gemini/v1beta/interactions?api_token={api_token}
```
**Description**: Creates a new interaction (conversational session) with the model
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `input` | string | Yes | The inputs for the interaction |
| `model` | string | No | The model to use (e.g., gemini-2.5-flash) |
| `system_instruction` | string | No | System instruction for the interaction |
| `stream` | boolean | No | Whether the interaction will be streamed |
| `store` | boolean | No | Whether to store response for later retrieval |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/interactions?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "gemini-2.5-flash",
"input": "Hello! Tell me a joke.",
"stream": false,
"store": true
}'
```
**Official Documentation**: [https://ai.google.dev/api/interactions-api#creating-an-interaction](https://ai.google.dev/api/interactions-api#creating-an-interaction)
---
#### Get Interaction
**Method**: `GET` | **LowCodeAPI Path**: `/v1beta/interactions/id`
**Full URL**:
```
https://api.lowcodeapi.com/gemini/v1beta/interactions/id?id={id}&api_token={api_token}
```
**Description**: Retrieves the full details of a single interaction based on its ID
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | string | Yes | The unique identifier of the interaction |
| `stream` | boolean | No | If true, streams the response incrementally |
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/gemini/v1beta/interactions/id?id=INTERACTION_ID&api_token=YOUR_API_TOKEN"
```
**Official Documentation**: [https://ai.google.dev/api/interactions-api#retrieving-an-interaction](https://ai.google.dev/api/interactions-api#retrieving-an-interaction)
---
## Usage Examples
### Example 1: Text Generation with Configuration
This example demonstrates generating text with custom configuration:
```bash
# Step 1: List available models
# No ID required - returns all available Gemini models
curl -X GET "https://api.lowcodeapi.com/gemini/v1beta/models?api_token=YOUR_API_TOKEN"
# Step 2: Generate content with a specific model
# No ID required - creates a new generation
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/models/model-generatecontent?model=models/gemini-2.5-flash&api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"contents": [
{
"role": "user",
"parts": [{"text": "Write a product description for a smartwatch"}]
}
],
"generationConfig": {
"temperature": 0.8,
"topP": 0.95,
"maxOutputTokens": 500
}
}'
# Step 3: Count tokens in your prompt before generation
# No ID required - counts tokens in the provided content
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/models/model-counttokens?model=models/gemini-2.5-flash&api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"contents": [
{"parts": [{"text": "Write a product description for a smartwatch"}]}
]
}'
```
### Example 2: Multimodal with File Upload
This example shows how to work with images and files:
```bash
# Step 1: Upload an image file
# No ID required - uploads a new file and returns file metadata
curl -X POST "https://api.lowcodeapi.com/gemini/upload/v1beta/files?api_token=YOUR_API_TOKEN" \
-H "Content-Type: multipart/form-data" \
-F "file=@/path/to/image.jpg"
# Step 2: List uploaded files
# No ID required - returns paginated list of files
curl -X GET "https://api.lowcodeapi.com/gemini/v1beta/files?pageSize=20&api_token=YOUR_API_TOKEN"
# Step 3: Generate content using the uploaded file
# Replace FILE_NAME with the name returned from Step 1 (e.g., files/abc123)
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/models/model-generatecontent?model=models/gemini-2.5-flash&api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"contents": [
{
"role": "user",
"parts": [
{"file_data": {"file_uri": "files/FILE_NAME"}},
{"text": "Describe this image in detail"}
]
}
]
}'
```
### Example 3: Embeddings and Semantic Search
This example demonstrates creating embeddings for semantic search:
```bash
# Step 1: Create embeddings for a document
# No ID required - generates embedding vector
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/models/model-embedcontent?model=models/text-embedding-004&api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"content": {
"parts": [{"text": "Machine learning is a subset of artificial intelligence"}]
},
"taskType": "RETRIEVAL_DOCUMENT"
}'
# Step 2: Batch embed multiple texts
# No ID required - processes multiple texts in one request
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/models/model-batchembedcontents?model=models/text-embedding-004&api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"requests": [
{"content": {"parts": [{"text": "Text 1"}]}},
{"content": {"parts": [{"text": "Text 2"}]}},
{"content": {"parts": [{"text": "Text 3"}]}}
]
}'
# Step 3: Create a query embedding for search
# No ID required - generates query embedding
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/models/model-embedcontent?model=models/text-embedding-004&api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"content": {
"parts": [{"text": "What is AI?"}]
},
"taskType": "RETRIEVAL_QUERY"
}'
```
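Once document and query vectors are in hand, ranking is plain vector math. A sketch with toy 2-dimensional vectors standing in for real `embedding.values` (all names and numbers here are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank(query_vec, doc_vecs):
    """Rank document vectors by similarity to the query,
    returning (index, score) pairs, best match first."""
    scored = [(i, cosine(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scored, key=lambda p: p[1], reverse=True)

docs = [[0.9, 0.1], [0.1, 0.9], [0.7, 0.3]]
best_index, best_score = rank([0.8, 0.2], docs)[0]
print(best_index)  # -> 0
```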
### Example 4: Context Caching for Long Conversations
This example shows how to use cached content for efficiency:
```bash
# Step 1: Create cached content with large context
# No ID required - creates a new cached content resource
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/cachedContents?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "models/gemini-2.5-flash",
"contents": [
{"role": "user", "parts": [{"text": "Here is a large document..."}]},
{"role": "model", "parts": [{"text": "I understand this document..."}]}
],
"ttl": "7200"
}'
# Step 2: Use cached content in generation
# Replace CACHED_CONTENT_NAME with the name from Step 1 (e.g., cachedContents/abc123)
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/models/model-generatecontent?model=models/gemini-2.5-flash&api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"contents": [
{
"role": "user",
"parts": [{"text": "Based on the document, what are the key points?"}]
}
],
"cachedContent": "cachedContents/CACHED_CONTENT_NAME"
}'
```
### Example 5: Streaming and Conversational Interactions
This example demonstrates the interactions API for conversational workflows:
```bash
# Step 1: Create an interaction session
# No ID required - creates a new interaction and returns interaction ID
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/interactions?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "gemini-2.5-flash",
"input": "Hello! I need help with Python programming.",
"store": true
}'
# Step 2: Retrieve the interaction to see the full response
# Replace INTERACTION_ID with the ID returned from Step 1
curl -X GET "https://api.lowcodeapi.com/gemini/v1beta/interactions/id?id=INTERACTION_ID&api_token=YOUR_API_TOKEN"
# Step 3: Continue the conversation using the previous interaction ID
# Use the same INTERACTION_ID from Step 1
curl -X POST "https://api.lowcodeapi.com/gemini/v1beta/interactions?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "gemini-2.5-flash",
"input": "Can you show me an example?",
"previous_interaction_id": "INTERACTION_ID",
"store": true
}'
```
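Step 3 chains turns through `previous_interaction_id`; a small builder makes multi-turn loops less error-prone (a hedged sketch, the `follow_up` helper is not part of either API):

```python
def follow_up(previous_id: str, text: str, model: str = "gemini-2.5-flash") -> dict:
    """Build the request body for a turn that continues an
    earlier stored interaction."""
    return {
        "model": model,
        "input": text,
        "previous_interaction_id": previous_id,
        "store": True,
    }

body = follow_up("INTERACTION_ID", "Can you show me an example?")
print(body["previous_interaction_id"])  # -> INTERACTION_ID
```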
## Complete Endpoint Reference
For a complete list of all endpoints and their parameters, refer to:
- **OpenAPI Definition**: `https://backend.lowcodeapi.com/gemini/definition`
- **Official Provider Documentation**: [https://ai.google.dev/gemini-api/docs](https://ai.google.dev/gemini-api/docs)
## Rate Limits & Best Practices
- **Rate limits**: Based on your Google Cloud project quotas
- **Token limits**: Vary by model (check model metadata for limits)
- **Best practice**: Use cached content for repeated context to reduce costs
- **Best practice**: Use streaming for faster response times on long generations
- **Best practice**: Count tokens before generation to estimate costs
- **Best practice**: Batch embeddings for multiple texts to reduce API calls
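A common way to honor rate limits is exponential backoff on 429 responses. The sketch below assumes the caller wraps its HTTP call in a zero-argument function and raises `RateLimited` on a 429 (both names are illustrative):

```python
import time

class RateLimited(Exception):
    """Raised by the caller when the API returns HTTP 429."""

def with_retries(call, max_retries=5, base=1.0, cap=30.0):
    """Retry a zero-argument callable with exponential backoff
    (1s, 2s, 4s, ... capped at `cap`) on RateLimited errors."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimited:
            time.sleep(min(base * 2 ** attempt, cap))
    return call()  # final attempt; lets RateLimited propagate
```

Usage: `with_retries(lambda: fetch_models())`, where `fetch_models` is whatever function performs the actual HTTP request.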
## Error Handling
Standard HTTP status codes apply:
- **200** - Success
- **400** - Bad request (invalid parameters)
- **401** - Authentication failed
- **429** - Rate limit exceeded
- **500** - Server error