# NVIDIA Integration via LowCodeAPI
## Overview
NVIDIA NIM (NVIDIA Inference Microservices) provides access to top open-source AI models through a unified, OpenAI-compatible API. Build AI applications with 100+ models from Meta, Mistral AI, Google, and more, plus specialized models for computer vision, multimodal understanding, healthcare, climate simulation, and route optimization.
## Base Endpoint
```
https://api.lowcodeapi.com/nvidia/
```
## Authentication
LowCodeAPI handles authentication automatically. You only need to:
1. **Sign up** at https://build.nvidia.com
2. **Connect your account** in LowCodeAPI dashboard
3. **Use your `api_token`** in all requests
The `api_token` is your LowCodeAPI authentication token. For each request, LowCodeAPI automatically:
- Fetches your stored NVIDIA API key
- Applies it to the request with the proper Bearer token header
**Auth Type**: `API Key` (Bearer Token)
## API Categories
- Chat - LLM chat completions with 100+ models
- Models - List available NVIDIA NIM models
- Embeddings - Text embeddings for retrieval and semantic search
- Reranking - Passage reranking for RAG applications
- Visual Models - Computer vision (detection, OCR, feature extraction)
- Multimodal - Vision-language models
- Healthcare - Bioinformatics and medical imaging
- Climate Simulation - Weather forecasting and downscaling
- Route Optimization - Logistics routing optimization
## Common Endpoints
### Category: Chat
#### Create Chat Completion
**Method**: `POST` | **LowCodeAPI Path**: `/v1/chat/completions`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/chat/completions?api_token={api_token}
```
**Description**: Creates a model response for the given chat conversation. NVIDIA NIM provides access to 100+ top open-source models through a unified, OpenAI-compatible API.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `messages` | array | Yes | Array of message objects with role (system, user, assistant) and content |
| `model` | string | Yes | Model ID (meta/llama-3.1-70b-instruct, mistralai/mistral-large, nvidia/llama-3.1-nemotron-70b-instruct, google/gemma-2-27b-it) |
| `temperature` | number | No | Sampling temperature (0-2, higher = more random) |
| `max_tokens` | number | No | Maximum tokens to generate |
| `top_p` | number | No | Nucleus sampling (alternative to temperature) |
| `frequency_penalty` | number | No | Penalize repetition (-2.0 to 2.0) |
| `presence_penalty` | number | No | Penalize new topics (-2.0 to 2.0) |
| `stream` | boolean | No | Enable streaming response |
| `stop` | array | No | Up to 4 sequences where generation stops |
| `n` | number | No | Number of chat completion choices |
| `logprobs` | boolean | No | Return log probabilities |
| `top_logprobs` | number | No | Number of top log probabilities (0-20) |
| `logit_bias` | object | No | Modify likelihood of specified tokens |
| `seed` | number | No | Seed for deterministic sampling |
| `user` | string | No | Unique identifier for end-user |
| `response_format` | object | No | Format specification (e.g., `{"type": "json_object"}`) |
| `tools` | array | No | List of tools the model may call |
| `tool_choice` | string/object | No | Controls which tool is called |
| `stream_options` | object | No | Options for streaming response |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/chat/completions?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "meta/llama-3.1-70b-instruct",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant specializing in technical documentation."
},
{
"role": "user",
"content": "Explain how NVIDIA NIM works."
}
],
"max_tokens": 500,
"temperature": 0.7
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/llm-apis
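The same request can be issued from Python. A minimal, stdlib-only sketch using the parameters documented above; `build_chat_payload` is a hypothetical helper name, and the network call is left commented out so the snippet runs offline:

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://api.lowcodeapi.com/nvidia/v1/chat/completions"

def build_chat_payload(model, messages, temperature=0.7, max_tokens=None, stream=False):
    """Assemble an OpenAI-compatible chat completion body, omitting unset options."""
    payload = {"model": model, "messages": messages,
               "temperature": temperature, "stream": stream}
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    return payload

def chat(api_token, payload):
    """POST the payload; LowCodeAPI expects api_token as a query parameter."""
    url = BASE_URL + "?" + urllib.parse.urlencode({"api_token": api_token})
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)

payload = build_chat_payload(
    "meta/llama-3.1-70b-instruct",
    [{"role": "user", "content": "Explain how NVIDIA NIM works."}],
    max_tokens=500,
)
# reply = chat("YOUR_API_TOKEN", payload)["choices"][0]["message"]["content"]
```

Keeping payload construction separate from transport makes it easy to reuse the builder for any of the 100+ chat models.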
---
### Category: Models
#### List Models
**Method**: `GET` | **LowCodeAPI Path**: `/v1/models`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/models?api_token={api_token}
```
**Description**: Lists the currently available NVIDIA NIM models with basic information including owner and availability.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/nvidia/v1/models?api_token=YOUR_API_TOKEN"
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/llm-apis
---
### Category: Embeddings
#### Create Embedding
**Method**: `POST` | **LowCodeAPI Path**: `/v1/embeddings`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/embeddings?api_token={api_token}
```
**Description**: Creates an embedding vector representing the input text. NVIDIA NIM supports various high-performance embedding models for retrieval, classification, clustering, and semantic similarity tasks.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `input` | string/array | Yes | Text to embed (string or array of tokens) |
| `model` | string | Yes | Embedding model ID (nvidia/nv-embed-v1, nvidia/nv-embedqa-e5-v5, baai/bge-m3) |
| `encoding_format` | string | No | Format: float or base64 |
| `dimensions` | number | No | Number of dimensions for output |
| `input_type` | string | No | Input type: query or document |
| `truncate` | string | No | Handle long inputs: NONE or END |
| `user` | string | No | Unique identifier for end-user |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/embeddings?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "nvidia/nv-embed-v1",
"input": "Your text string goes here",
"encoding_format": "float",
"input_type": "query"
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/retrieval-apis
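The vectors the endpoint returns can be compared with cosine similarity for semantic search. A minimal, dependency-free sketch; the 4-dimensional sample vectors are illustrative only, as real embedding models return far longer vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Illustrative vectors, not real model output.
query_vec = [0.1, 0.3, -0.2, 0.9]
doc_vecs = {
    "doc_a": [0.1, 0.28, -0.19, 0.88],
    "doc_b": [-0.9, 0.1, 0.4, -0.1],
}
# Pick the document whose embedding is closest to the query embedding.
best = max(doc_vecs, key=lambda k: cosine_similarity(query_vec, doc_vecs[k]))
```

Note the `input_type` distinction above: embed your search queries with `query` and your corpus with `document` so both sides use the representation the model expects.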
---
### Category: Reranking
#### Create Ranking
**Method**: `POST` | **LowCodeAPI Path**: `/v1/ranking`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/ranking?api_token={api_token}
```
**Description**: Reranks passages by their relevance to a query. NVIDIA NIM reranking models improve retrieval accuracy in RAG applications by scoring and reordering search results.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | Reranking model ID (nvidia/llama-3.2-nemoretriever-500m-rerank-v2, nvidia/llama-3.2-nv-rerankqa-1b-v2) |
| `query` | object | Yes | Query object with text field |
| `passages` | array | Yes | Array of passage objects with text field |
| `truncate` | string | No | Handle long inputs: NONE or END |
| `user` | string | No | Unique identifier for end-user |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/ranking?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "nvidia/llama-3.2-nemoretriever-500m-rerank-v2",
"query": {
"text": "which way should i go?"
},
"passages": [
{
"text": "two roads diverged in a yellow wood, and sorry i could not travel both"
},
{
"text": "the path less traveled makes all the difference"
}
]
}'
```
**Official Documentation**: https://docs.nvidia.com/nim/nemo-retriever/text-reranking/latest/reference.html
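The ranking endpoint returns relevance scores rather than reordered text, so the client reorders its own passages. A sketch assuming a response of the form `{"rankings": [{"index": ..., "logit": ...}]}` (the shape used by NeMo Retriever text reranking; verify against the linked reference):

```python
def reorder_passages(passages, response):
    """Sort the original passages by relevance score, best first.
    Each `rankings` entry carries the passage's original index and a logit score."""
    ranked = sorted(response["rankings"], key=lambda r: r["logit"], reverse=True)
    return [passages[r["index"]] for r in ranked]

passages = [
    {"text": "two roads diverged in a yellow wood, and sorry i could not travel both"},
    {"text": "the path less traveled makes all the difference"},
]
# Illustrative response; real scores come from the API.
response = {"rankings": [{"index": 0, "logit": -1.2}, {"index": 1, "logit": 3.4}]}
top = reorder_passages(passages, response)[0]
```

Higher logits mean higher relevance, so in a RAG pipeline you would keep only the first few reordered passages as context.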
---
### Category: Visual Models
#### Run DINOv2 Inference
**Method**: `POST` | **LowCodeAPI Path**: `/v1/cv/nvidia/nv-dinov2`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/cv/nvidia/nv-dinov2?api_token={api_token}
```
**Description**: Runs DINOv2 inference to extract feature embeddings from images. NVIDIA DINOv2 is a vision transformer model trained using self-supervised learning.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | string | Yes | Base64 encoded image or file identifier |
| `embedding_type` | string | No | Type: cluster or projection |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/cv/nvidia/nv-dinov2?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"image": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUA...",
"embedding_type": "projection"
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-nv-dinov2
---
#### Run Grounding DINO Inference
**Method**: `POST` | **LowCodeAPI Path**: `/v1/cv/nvidia/nv-grounding-dino`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/cv/nvidia/nv-grounding-dino?api_token={api_token}
```
**Description**: Runs Grounding DINO inference for open-vocabulary object detection using text prompts.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | string | Yes | Base64 encoded image or file identifier |
| `prompt` | string | Yes | Text prompt describing objects to detect |
| `box_threshold` | number | No | Confidence threshold (default: 0.35) |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/cv/nvidia/nv-grounding-dino?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"image": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAA...",
"prompt": "person . car . dog .",
"box_threshold": 0.35
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-nv-grounding-dino
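Client-side it is often useful to re-filter detections at a stricter threshold than the `box_threshold` sent with the request. A sketch assuming each detection carries a label, a score, and a bounding box; the field names here are illustrative, so check the response schema in the linked reference:

```python
def filter_detections(detections, min_score=0.5):
    """Keep only detections at or above min_score, ordered best first."""
    kept = [d for d in detections if d["score"] >= min_score]
    return sorted(kept, key=lambda d: d["score"], reverse=True)

# Illustrative detections, not real model output.
detections = [
    {"label": "person", "score": 0.91, "box": [12, 30, 180, 400]},
    {"label": "dog", "score": 0.42, "box": [200, 310, 260, 380]},
]
strong = filter_detections(detections, min_score=0.5)
```

Requesting with a low `box_threshold` and filtering locally lets you tune precision without re-running inference.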
---
#### Run OCDRNet Inference
**Method**: `POST` | **LowCodeAPI Path**: `/v1/cv/nvidia/ocdrnet`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/cv/nvidia/ocdrnet?api_token={api_token}
```
**Description**: Runs OCDRNet inference to detect and recognize text in document images.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | string | Yes | Base64 encoded document image |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/cv/nvidia/ocdrnet?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"image": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAU..."
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-ocdrnet
---
#### Run Retail Object Detection
**Method**: `POST` | **LowCodeAPI Path**: `/v1/cv/nvidia/retail-object-detection`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/cv/nvidia/retail-object-detection?api_token={api_token}
```
**Description**: Runs retail object detection inference optimized for detecting retail products and items on shelves.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | string | Yes | Base64 encoded image |
| `num_detections` | number | No | Maximum number of objects to detect |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/cv/nvidia/retail-object-detection?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"image": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAA...",
"num_detections": 10
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-retail-object-detection
---
#### Run Visual ChangeNet Inference
**Method**: `POST` | **LowCodeAPI Path**: `/v1/cv/nvidia/visual-changenet`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/cv/nvidia/visual-changenet?api_token={api_token}
```
**Description**: Runs Visual ChangeNet inference to detect and visualize changes between two images for satellite imagery and construction monitoring.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image_a` | string | Yes | First image (Base64 or file ID) |
| `image_b` | string | Yes | Second image (Base64 or file ID) |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/cv/nvidia/visual-changenet?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"image_a": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAU...",
"image_b": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAU..."
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-visual-changenet
---
#### Get Inference Status
**Method**: `GET` | **LowCodeAPI Path**: `/v1/v2/nvcf/pexec/status/requestid`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/v2/nvcf/pexec/status/requestid?requestId={requestId}&api_token={api_token}
```
**Description**: Gets the result of an earlier function invocation that returned status 202. Use it to poll for asynchronous inference results from visual models and other NVIDIA NIM endpoints.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `requestId` | string | Yes | Request identifier from async response |
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Example Request**:
```bash
curl -X GET "https://api.lowcodeapi.com/nvidia/v1/v2/nvcf/pexec/status/requestid?requestId=nvcf-req-abc123-def456-ghi789&api_token=YOUR_API_TOKEN"
```
**Official Documentation**: https://docs.api.nvidia.com/cloud-functions/reference/statuspolling
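A typical client polls this endpoint until the status changes from 202. A sketch of the loop; `fetch_status` is injected so any HTTP client can supply it, which also lets the flow be exercised here with a stub instead of a live request:

```python
import time

def poll_result(request_id, fetch_status, max_attempts=30, delay=1.0):
    """Poll until fetch_status returns a non-202 code, waiting `delay` seconds
    between attempts. fetch_status(request_id) -> (http_status, body_dict)."""
    for attempt in range(max_attempts):
        status, body = fetch_status(request_id)
        if status != 202:          # 200 = done; other codes surface to the caller
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(delay)
    raise TimeoutError(f"request {request_id} still pending after {max_attempts} polls")

# Stubbed fetcher: pending twice, then complete.
responses = iter([(202, {}), (202, {}), (200, {"result": "ok"})])
status, body = poll_result("nvcf-req-abc123", lambda _rid: next(responses), delay=0.0)
```

In production, prefer a modest exponential backoff over a fixed delay to avoid hammering the endpoint on long-running jobs.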
---
### Category: Multimodal
#### Run NeVA 22B Inference
**Method**: `POST` | **LowCodeAPI Path**: `/v1/vlm/nvidia/neva-22b`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/vlm/nvidia/neva-22b?api_token={api_token}
```
**Description**: Runs NeVA 22B inference for vision-language tasks. NeVA (NeMo Vision and Language Assistant) can understand and generate content from images and text.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `messages` | array | Yes | Array of message objects with role and content (can include images) |
| `model` | string | Yes | Model ID (nvidia/neva-22b) |
| `max_tokens` | number | No | Maximum tokens to generate |
| `temperature` | number | No | Sampling temperature |
| `top_p` | number | No | Nucleus sampling |
| `stream` | boolean | No | Enable streaming |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/vlm/nvidia/neva-22b?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "nvidia/neva-22b",
"messages": [
{
"role": "user",
"content": "What is in this image?"
}
],
"max_tokens": 200,
"temperature": 0.7
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-neva-22b-infer
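NeVA accepts images inline in the message content; one common pattern is to append an HTML-style `<img>` tag carrying a Base64 data URI, as in NVIDIA's NeVA samples. A sketch of building that content string (treat the exact embedding convention as something to verify against the linked reference):

```python
import base64

def image_message(prompt, image_bytes, mime="image/png"):
    """Build a user message whose content embeds the image as a data URI.
    The `<img ...>` convention mirrors NVIDIA's NeVA samples; confirm it
    against the official docs before relying on it."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {"role": "user",
            "content": f'{prompt} <img src="data:{mime};base64,{b64}" />'}

# Truncated bytes purely for illustration; pass real PNG/JPEG bytes in practice.
msg = image_message("What is in this image?", b"\x89PNG\r\n")
```

The resulting message slots directly into the `messages` array of the request body shown above.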
---
#### Run VILA Inference
**Method**: `POST` | **LowCodeAPI Path**: `/v1/vlm/nvidia/vila`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/vlm/nvidia/vila?api_token={api_token}
```
**Description**: Runs VILA inference for vision-language tasks. VILA is NVIDIA's visual language model, optimized for understanding images and text.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `messages` | array | Yes | Array of message objects (can include images) |
| `model` | string | Yes | Model ID (nvidia/vila) |
| `max_tokens` | number | No | Maximum tokens to generate |
| `temperature` | number | No | Sampling temperature |
| `top_p` | number | No | Nucleus sampling |
| `stream` | boolean | No | Enable streaming |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/vlm/nvidia/vila?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "nvidia/vila",
"messages": [
{
"role": "user",
"content": "Describe this image in detail"
}
]
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-vila-infer
---
#### Run Llama 3.1 Nemotron Nano VL 8B Inference
**Method**: `POST` | **LowCodeAPI Path**: `/v1/vlm/nvidia/llama-3.1-nemotron-nano-vl-8b-v1`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/vlm/nvidia/llama-3.1-nemotron-nano-vl-8b-v1?api_token={api_token}
```
**Description**: Runs Llama 3.1 Nemotron Nano VL 8B inference for vision-language tasks; a compact multimodal model suited to edge deployment.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `messages` | array | Yes | Array of message objects (can include images) |
| `model` | string | Yes | Model ID (nvidia/llama-3.1-nemotron-nano-vl-8b-v1) |
| `max_tokens` | number | No | Maximum tokens to generate |
| `temperature` | number | No | Sampling temperature |
| `top_p` | number | No | Nucleus sampling |
| `stream` | boolean | No | Enable streaming |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/vlm/nvidia/llama-3.1-nemotron-nano-vl-8b-v1?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "nvidia/llama-3.1-nemotron-nano-vl-8b-v1",
"messages": [
{
"role": "user",
"content": "What can you see in this picture?"
}
]
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/multimodal-apis
---
### Category: Healthcare
#### Run Parabricks Universal Variant Calling
**Method**: `POST` | **LowCodeAPI Path**: `/v1/health/nvidia/deepvariant`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/health/nvidia/deepvariant?api_token={api_token}
```
**Description**: Runs DeepVariant inference to identify variants in short- and long-read sequencing datasets. Supports Illumina, Oxford Nanopore, and Pacific Biosciences.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `input` | object | Yes | Input data with reference genome, BAM file, and BAI file |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/health/nvidia/deepvariant?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {
"reference": "...",
"bam_file": "...",
"bai_file": "..."
}
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-deepvariant
---
#### Run Parabricks fq2bam Sequence Alignment
**Method**: `POST` | **LowCodeAPI Path**: `/v1/health/nvidia/fq2bam`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/health/nvidia/fq2bam?api_token={api_token}
```
**Description**: Generates BAM/CRAM output from paired-end FASTQ files using BWA-MEM and GATK best practices.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `input` | object | Yes | Input data with reference genome and FASTQ files |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/health/nvidia/fq2bam?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {
"reference": "...",
"fastq_files": ["..."]
}
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-fq2bam
---
#### Generate Molecules with GenMol
**Method**: `POST` | **LowCodeAPI Path**: `/v1/health/nvidia/genmol`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/health/nvidia/genmol?api_token={api_token}
```
**Description**: Fragment-based molecule generation using a masked diffusion model trained on SAFE representations.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `input` | object | Yes | Molecular sequences in SAFE format |
| `num_molecules` | number | No | Number of molecules to generate |
| `temperature` | number | No | SoftMax temperature scaling |
| `sample_num` | number | No | Number of samples to generate |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/health/nvidia/genmol?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {"safe_representation": "..."},
"num_molecules": 5,
"temperature": 1.0
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-genmol
---
#### Generate Synthetic CT Images with MAISI
**Method**: `POST` | **LowCodeAPI Path**: `/v1/health/nvidia/maisi`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/health/nvidia/maisi?api_token={api_token}
```
**Description**: Generates high-quality synthetic 3D CT images with optional anatomical annotations for research purposes.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `num_output_samples` | number | Yes | Number of CT images to generate |
| `body_region` | array | Yes | Region: head, chest, abdomen, pelvis |
| `anatomy_list` | array | No | Up to 127 anatomical classes to annotate |
| `output_size` | array | No | x, y, z size (x/y: 128-512, z: 128-768) |
| `spacing` | array | No | Spacing in mm (0.5-5.0) |
| `controllable_anatomy_size` | array | No | (organ_name, size_value) tuples for 10 anatomies |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/health/nvidia/maisi?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"num_output_samples": 1,
"body_region": ["chest", "abdomen"],
"output_size": [256, 256, 256]
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-maisi
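The size and spacing limits in the table above are easy to get wrong, so validating a request body before submitting saves a round trip. A minimal sketch enforcing the documented ranges (x/y 128-512, z 128-768, spacing 0.5-5.0 mm); `validate_maisi_request` is a hypothetical helper, not part of the API:

```python
def validate_maisi_request(body):
    """Raise ValueError if output_size or spacing fall outside documented ranges."""
    size = body.get("output_size")
    if size is not None:
        x, y, z = size
        if not (128 <= x <= 512 and 128 <= y <= 512):
            raise ValueError("output_size x/y must be in 128-512")
        if not 128 <= z <= 768:
            raise ValueError("output_size z must be in 128-768")
    for s in body.get("spacing", []):
        if not 0.5 <= s <= 5.0:
            raise ValueError("spacing values must be in 0.5-5.0 mm")
    return body

ok = validate_maisi_request({
    "num_output_samples": 1,
    "body_region": ["chest", "abdomen"],
    "output_size": [256, 256, 256],
})
```

Validating locally is especially worthwhile for synthetic CT generation, where a single request can take much longer than a chat completion.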
---
#### Generate Molecules with MolMIM
**Method**: `POST` | **LowCodeAPI Path**: `/v1/health/nvidia/molmim`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/health/nvidia/molmim?api_token={api_token}
```
**Description**: Generates new molecules in SMILES format by sampling from latent space with optional optimization.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `input` | string | Yes | Seed molecule in SMILES format |
| `num_samples` | number | No | Number of molecules to generate |
| `property_name` | string | No | Scoring function (QED or LogP) |
| `optimize` | boolean | No | Enable CMA-ES optimization |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/health/nvidia/molmim?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": "CC(C)Cc1ccc(cc1)C(C)C",
"num_samples": 10,
"property_name": "QED",
"optimize": true
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-molmim
---
#### Run VISTA-3D Medical Imaging Segmentation
**Method**: `POST` | **LowCodeAPI Path**: `/v1/health/nvidia/vista3d`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/health/nvidia/vista3d?api_token={api_token}
```
**Description**: Performs 3D medical imaging segmentation with a multi-head architecture for accurate analysis across anatomies and modalities.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `image` | string | Yes | CT image in NIfTI format (Base64 or file ID) |
| `class_info` | array | No | Class/point information for targeted segmentation |
| `point_prompt` | array | No | Click-based point prompts for interactive segmentation |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/health/nvidia/vista3d?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"image": "data:application/octet-stream;base64,...",
"class_info": [{"class_index": 1, "points": [[x, y, z]]}]
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-vista3d
---
### Category: Climate Simulation
#### Run CorrDiff Weather Downscaling
**Method**: `POST` | **LowCodeAPI Path**: `/v1/climate/nvidia/corrdiff`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/climate/nvidia/corrdiff?api_token={api_token}
```
**Description**: Runs CorrDiff inference for weather downscaling, refining 25-km resolution forecast data to 3-km resolution with a patch-based corrector diffusion model.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `input` | object | Yes | 25-km resolution forecast data with 38 variables |
| `lead_time` | string | Yes | Forecast lead time |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/climate/nvidia/corrdiff?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {"forecast_data": "..."},
"lead_time": "6h"
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-corrdiff
---
#### Run FourCastNet Global Weather Forecasting
**Method**: `POST` | **LowCodeAPI Path**: `/v1/climate/nvidia/fourcastnet`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/climate/nvidia/fourcastnet?api_token={api_token}
```
**Description**: Runs FourCastNet V2 inference for global atmospheric forecasting using a Spherical Fourier Neural Operator, stepping forecasts forward in 6-hour increments.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `input` | object | Yes | 73 surface and atmospheric variables |
| `datetime` | string | Yes | Forecast initialization datetime |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/climate/nvidia/fourcastnet?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {"weather_data": "..."},
"datetime": "2025-01-15T00:00:00Z"
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-fourcastnet
---
### Category: Route Optimization
#### Submit cuOpt Routing Problem
**Method**: `POST` | **LowCodeAPI Path**: `/v1/route/nvidia/cuopt`
**Full URL**:
```
https://api.lowcodeapi.com/nvidia/v1/route/nvidia/cuopt?api_token={api_token}
```
**Description**: Submits a routing optimization problem to the NVIDIA cuOpt solver, an AI microservice for logistics routing that supports CVRP, CVRPTW, and PDPTW problems.
**Query Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `api_token` | string | Yes | Your LowCodeAPI authentication token |
**Request Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `input` | object | Yes | Routing problem configuration (vehicles, tasks, constraints) |
| `input_file` | string | No | File identifier for large input data |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/route/nvidia/cuopt?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {
"vehicles": [{"id": "v1", "location": [0, 0]}],
"tasks": [{"id": "t1", "location": [10, 10]}],
"problem_type": "CVRP"
}
}'
```
**Official Documentation**: https://docs.api.nvidia.com/nim/reference/nvidia-cuopt
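Building the `input` object programmatically keeps vehicle and task lists consistent. A minimal sketch mirroring the example request above; it assumes only the fields shown there, and `build_cuopt_input` is a hypothetical helper name:

```python
def build_cuopt_input(vehicles, tasks, problem_type="CVRP"):
    """Assemble the routing problem body from (id, location) pairs."""
    return {
        "input": {
            "vehicles": [{"id": vid, "location": list(loc)} for vid, loc in vehicles],
            "tasks": [{"id": tid, "location": list(loc)} for tid, loc in tasks],
            "problem_type": problem_type,
        }
    }

body = build_cuopt_input(vehicles=[("v1", (0, 0))], tasks=[("t1", (10, 10))])
```

For large fleets, switch to `input_file` with a file identifier rather than inlining the whole problem in the request body.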
---
## Usage Examples
### Example 1: Chat Completion with Multiple Models
Generate responses using various NVIDIA-supported models.
```bash
# Use Llama 3.1 70B for general tasks
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/chat/completions?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "meta/llama-3.1-70b-instruct",
"messages": [
{"role": "system", "content": "You are a technical expert."},
{"role": "user", "content": "Explain transformer architecture."}
],
"temperature": 0.5
}'
# Use Mistral Large for complex reasoning
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/chat/completions?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "mistralai/mistral-large",
"messages": [{"role": "user", "content": "Solve: What is 15% of 280?"}],
"temperature": 0.3
}'
```
### Example 2: RAG Pipeline with Embeddings and Reranking
Build a retrieval-augmented generation workflow.
```bash
# Step 1: Generate query embedding
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/embeddings?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "nvidia/nv-embedqa-e5-v5",
"input": "What are the benefits of GPU acceleration?",
"input_type": "query"
}'
# Step 2: Rerank retrieved passages
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/ranking?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "nvidia/llama-3.2-nemoretriever-500m-rerank-v2",
"query": {"text": "What are the benefits of GPU acceleration?"},
"passages": [
{"text": "GPUs provide parallel processing..."},
{"text": "CPU serial processing is slower..."}
]
}'
# Step 3: Generate response with top reranked passage
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/chat/completions?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "meta/llama-3.1-70b-instruct",
"messages": [
{"role": "user", "content": "Based on this context: [TOP_RERANKED_PASSAGE], answer the question."}
]
}'
```
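Steps 2 and 3 above can be composed into a single function. A sketch where the transport `call_api(path, body)` is injected, so the flow can be exercised with a stub instead of live requests; response shapes follow the examples in this document:

```python
def rag_answer(query, passages, call_api,
               chat_model="meta/llama-3.1-70b-instruct",
               rerank_model="nvidia/llama-3.2-nemoretriever-500m-rerank-v2"):
    """Rerank candidate passages, then answer with the best one as context.
    call_api(path, body) -> parsed JSON response."""
    ranking = call_api("/v1/ranking", {
        "model": rerank_model,
        "query": {"text": query},
        "passages": [{"text": p} for p in passages],
    })
    best = passages[max(ranking["rankings"], key=lambda r: r["logit"])["index"]]
    chat = call_api("/v1/chat/completions", {
        "model": chat_model,
        "messages": [{"role": "user",
                      "content": f"Based on this context: {best}, answer: {query}"}],
    })
    return chat["choices"][0]["message"]["content"]

# Stubbed transport for a dry run; a real call_api would POST to LowCodeAPI.
def fake_call(path, body):
    if path == "/v1/ranking":
        return {"rankings": [{"index": 1, "logit": 2.0}, {"index": 0, "logit": 9.1}]}
    return {"choices": [{"message": {"content": "GPUs excel at parallel workloads."}}]}

answer = rag_answer("Why use GPUs?",
                    ["GPUs provide parallel processing...",
                     "CPU serial processing is slower..."], fake_call)
```

Step 1 (embedding-based retrieval) is omitted here: in a full pipeline it would supply the candidate `passages` from a vector store before reranking.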
### Example 3: Computer Vision Workflow
Detect objects and extract features from images.
```bash
# Step 1: Detect objects with Grounding DINO
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/cv/nvidia/nv-grounding-dino?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"image": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAA...",
"prompt": "person . car . dog .",
"box_threshold": 0.3
}'
# Step 2: Extract embeddings for image similarity
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/cv/nvidia/nv-dinov2?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"image": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAA...",
"embedding_type": "projection"
}'
# Step 3: Poll for async results if a 202 status was returned
curl -X GET "https://api.lowcodeapi.com/nvidia/v1/v2/nvcf/pexec/status/requestid?requestId=nvcf-req-abc123&api_token=YOUR_API_TOKEN"
```
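Step 3 above is usually wrapped in a polling loop. A sketch with an injected `fetch_status` callable so the loop is testable offline; in practice it would `GET` the status URL shown above:

```python
# Poll an async (HTTP 202) inference result with linear backoff.
# fetch_status(request_id) -> (status_code, payload) is injected so
# this can be exercised without network access.
import time

def poll_async(fetch_status, request_id, max_attempts=10, delay=1.0):
    """Call fetch_status until a non-202 result, or raise on timeout."""
    for attempt in range(max_attempts):
        status_code, payload = fetch_status(request_id)
        if status_code != 202:              # 200 = done; others = error
            return status_code, payload
        time.sleep(delay * (attempt + 1))   # back off between polls
    raise TimeoutError(f"request {request_id} still pending")
```

Wire `fetch_status` to an HTTP GET on the status polling endpoint, passing the request ID returned by the original 202 response.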
### Example 4: Multimodal Image Understanding
Analyze images with vision-language models.
```bash
# Describe an image with NeVA
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/vlm/nvidia/neva-22b?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "nvidia/neva-22b",
"messages": [
{
"role": "user",
"content": "Describe this image in detail, including colors, objects, and mood."
}
],
"max_tokens": 300
}'
# Lightweight multimodal with Nano VL
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/vlm/nvidia/llama-3.1-nemotron-nano-vl-8b-v1?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "nvidia/llama-3.1-nemotron-nano-vl-8b-v1",
"messages": [
{"role": "user", "content": "What objects are in this image?"}
]
}'
```
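The multimodal payloads above omit the image bytes themselves. A common pattern is to inline the image as a base64 data URI; how the URI is attached (inside the content string versus a dedicated field) varies by model, so check the endpoint reference. This helper only builds the URI:

```python
# Encode raw image bytes as a data: URI for inclusion in a JSON
# payload, matching the "data:image/jpeg;base64,..." form used in the
# computer vision examples above.
import base64

def image_to_data_uri(image_bytes, mime="image/jpeg"):
    """Return a base64 data URI for the given image bytes."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"

# JPEG magic bytes as a stand-in for a real image file
uri = image_to_data_uri(b"\xff\xd8\xff\xe0")
```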
### Example 5: Climate Data Processing
Downscale weather forecasts for localized predictions.
```bash
# Submit CorrDiff downscaling job
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/climate/nvidia/corrdiff?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {
"forecast_data": "25km resolution data with 38 variables"
},
"lead_time": "12h"
}'
# Generate global weather forecast with FourCastNet
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/climate/nvidia/fourcastnet?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {"weather_data": "73 variables for initial state"},
"datetime": "2025-02-01T00:00:00Z"
}'
```
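The climate endpoints take an ISO 8601 UTC timestamp, as in the FourCastNet call above. A small helper keeps that formatting in one place; the field names mirror the example body and are otherwise assumptions:

```python
# Build a FourCastNet-style request body with a correctly formatted
# UTC timestamp (YYYY-MM-DDTHH:MM:SSZ), matching the example above.
from datetime import datetime, timezone

def forecast_payload(weather_data, when):
    """Return a request body with `when` normalized to UTC ISO 8601."""
    if when.tzinfo is None:
        when = when.replace(tzinfo=timezone.utc)  # treat naive as UTC
    stamp = when.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return {"input": {"weather_data": weather_data}, "datetime": stamp}

payload = forecast_payload("73 variables for initial state",
                           datetime(2025, 2, 1, 0, 0))
# payload["datetime"] == "2025-02-01T00:00:00Z"
```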
### Example 6: Route Optimization for Delivery
Solve vehicle routing problems for logistics.
```bash
# Submit last-mile delivery optimization
curl -X POST "https://api.lowcodeapi.com/nvidia/v1/route/nvidia/cuopt?api_token=YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"input": {
"vehicles": [
{"id": "v1", "start_location": [0, 0], "capacity": 100},
{"id": "v2", "start_location": [5, 5], "capacity": 80}
],
"tasks": [
{"id": "t1", "pickup_location": [10, 10], "delivery_location": [20, 20], "demand": 30},
{"id": "t2", "pickup_location": [15, 5], "delivery_location": [25, 15], "demand": 25}
],
"problem_type": "PDPTW"
}
}'
```
## Complete Endpoint Reference
For a complete list of all endpoints and their parameters, refer to:
- **OpenAPI Definition**: `https://backend.lowcodeapi.com/nvidia/definition`
- **Official Provider Documentation**: https://docs.api.nvidia.com/nim/reference/llm-apis
## Rate Limits & Best Practices
- NVIDIA NIM serves 100+ models through a single API endpoint
- Use streaming (`"stream": true`) for long chat completions to improve responsiveness
- Async endpoints return HTTP 202; use the status polling endpoint to retrieve results
- Specialized endpoints (visual, multimodal, healthcare, climate) use the ai.api.nvidia.com domain
- Generic endpoints (chat, embeddings, models) use the integrate.api.nvidia.com domain
- Choose a model based on task complexity and latency requirements
- For RAG applications, combine embeddings with reranking for best results
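For the streaming recommendation above, responses with `"stream": true` typically arrive as OpenAI-style server-sent events (`data: {...}` lines ending with `data: [DONE]`). A parser sketch written against that convention, taking an iterable of lines so it can be tested offline; verify the exact chunk shape against your responses:

```python
# Extract content deltas from OpenAI-style SSE lines, as produced by
# streamed chat completions. The chunk layout (choices[0].delta.content)
# follows the OpenAI streaming convention and is assumed here.
import json

def stream_text(lines):
    """Yield content deltas from 'data:' SSE lines until [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            return
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print("".join(stream_text(sample)))  # prints "Hello"
```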
## Error Handling
Standard HTTP status codes apply. Common errors include:
- 400: Invalid request parameters
- 401: Invalid API credentials
- 429: Rate limit exceeded
- 500: Internal server error
Async inference responses with status 202 include a request ID for polling completion status.
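One way to handle the 429 case from the table above is retrying with exponential backoff. A sketch with an injected `send` callable so the logic is testable offline; in practice `send` would be the actual HTTP call:

```python
# Retry a request on HTTP 429 (rate limit) with exponential backoff.
# send() -> (status_code, body) is injected for offline testing.
import time

def with_retries(send, max_retries=3, base_delay=1.0):
    """Retry send() on 429 up to max_retries times, doubling the delay."""
    for attempt in range(max_retries + 1):
        status, body = send()
        if status != 429:
            return status, body
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))
    return status, body
```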