# TextCortex Integration via LowCodeAPI
## Overview
TextCortex is an AI writing assistant platform for generating and improving written content. The TextCortex API provides:
- **Content Generation** - Generate blog posts, articles, and marketing copy
- **Text Completion** - Complete and expand on existing text
- **Code Generation** - Generate code in various programming languages
- **Text Rewriting** - Paraphrase and improve existing content
- **Summarization** - Create concise summaries of long content
## Base Endpoint
```
https://api.lowcodeapi.com/textcortex/
```
## Authentication
LowCodeAPI handles TextCortex authentication automatically using a Bearer token. You only need to:
1. **Sign up** at [TextCortex](https://textcortex.com) to get your API Key
2. **Connect your account** in the LowCodeAPI dashboard
3. **Use your `api_token`** in all requests
The `api_token` is your LowCodeAPI authentication token. LowCodeAPI will automatically:
- Fetch your TextCortex API key
- Apply it to each request as a Bearer token
**Auth Type**: Bearer Token
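In code, this means every request only needs the `api_token` query parameter. Below is a minimal Python sketch using the `requests` library; the `post_textcortex` helper name and the `YOUR_API_TOKEN` placeholder are illustrative, not part of any LowCodeAPI SDK:

```python
import requests

BASE_URL = "https://api.lowcodeapi.com/textcortex"
API_TOKEN = "YOUR_API_TOKEN"  # your LowCodeAPI token (placeholder)

def post_textcortex(path: str, payload: dict) -> dict:
    """POST to a TextCortex endpoint via LowCodeAPI; the api_token travels as a query parameter."""
    response = requests.post(
        f"{BASE_URL}{path}",
        params={"api_token": API_TOKEN},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```

The endpoint-specific examples below follow the same pattern with plain `requests` calls.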
## API Categories
- **Content Generation AI** - AI-powered writing assistance
## Common Endpoints
### Category: TextCortex API
#### Generate Code
**Method**: `POST` | **LowCodeAPI Path**: `/v1/codes`
**Full URL**:
```
https://api.lowcodeapi.com/textcortex/v1/codes?api_token={api_token}
```
**Description**: Generate code in a given programming language.
**Request Body**:
```json
{
  "text": "Create a function to calculate fibonacci numbers",
  "mode": "python",
  "max_tokens": 2048,
  "temperature": 0.7,
  "model": "icortex-1"
}
```
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `text` | string | Yes | Natural-language instruction describing the code to generate |
| `mode` | string | Yes | Programming language (python, java, javascript, go, php, js_regex) |
| `max_tokens` | integer | No | Maximum tokens to generate (default: 2048) |
| `temperature` | number | No | Sampling temperature (higher = more creative) |
| `model` | string | No | Model to use (default: icortex-1) |
| `n` | integer | No | Number of outputs (default: 1) |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/textcortex/v1/codes?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Create a function to calculate fibonacci numbers",
    "mode": "python",
    "max_tokens": 2048
  }'
```
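The equivalent call from Python, assuming the `requests` library; the `data` wrapper is described under Error Handling below, and the structure inside it is defined by TextCortex:

```python
import requests

payload = {
    "text": "Create a function to calculate fibonacci numbers",
    "mode": "python",
    "max_tokens": 2048,
}

response = requests.post(
    "https://api.lowcodeapi.com/textcortex/v1/codes",
    params={"api_token": "YOUR_API_TOKEN"},
    json=payload,
    timeout=30,
)
response.raise_for_status()

# LowCodeAPI wraps the TextCortex response in a "data" key
print(response.json()["data"])
```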
**Official Documentation**: [Generate Code](https://docs.textcortex.com/api/paths/codes/post)
---
#### Generate Content
**Method**: `POST` | **LowCodeAPI Path**: `/v1/chats`
**Full URL**:
```
https://api.lowcodeapi.com/textcortex/v1/chats?api_token={api_token}
```
**Description**: Generate text content for various use cases.
**Request Body**:
```json
{
  "prompt": "Write a blog post about the benefits of AI in content creation",
  "max_tokens": 1024,
  "temperature": 0.7,
  "n": 1
}
```
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `prompt` | string | Yes | Text prompt for generation |
| `max_tokens` | integer | No | Maximum tokens to generate |
| `temperature` | number | No | Sampling temperature (0-1) |
| `n` | integer | No | Number of outputs to generate |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/textcortex/v1/chats?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Write a blog post about the benefits of AI in content creation",
    "max_tokens": 1024,
    "temperature": 0.7
  }'
```
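The same request in Python (again assuming `requests`); lowering `temperature` makes the copy more conservative, raising it makes it more varied:

```python
import requests

payload = {
    "prompt": "Write a blog post about the benefits of AI in content creation",
    "max_tokens": 1024,
    "temperature": 0.7,
    "n": 1,
}

response = requests.post(
    "https://api.lowcodeapi.com/textcortex/v1/chats",
    params={"api_token": "YOUR_API_TOKEN"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["data"])
```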
**Official Documentation**: [Generate Content](https://docs.textcortex.com/api)
---
#### Summarize Text
**Method**: `POST` | **LowCodeAPI Path**: `/v1/summaries`
**Full URL**:
```
https://api.lowcodeapi.com/textcortex/v1/summaries?api_token={api_token}
```
**Description**: Generate a concise summary of long text.
**Request Body**:
```json
{
  "text": "Long text content to summarize...",
  "max_tokens": 200,
  "temperature": 0.5
}
```
**Request Body Fields**:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `text` | string | Yes | Text to summarize |
| `max_tokens` | integer | No | Maximum tokens for summary |
| `temperature` | number | No | Sampling temperature |
**Example Request**:
```bash
curl -X POST "https://api.lowcodeapi.com/textcortex/v1/summaries?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Long article text to summarize...",
    "max_tokens": 200
  }'
```
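A Python sketch that summarizes the contents of a local file (`article.txt` is a placeholder path; the `requests` library is assumed):

```python
import requests

with open("article.txt", encoding="utf-8") as f:
    article = f.read()

response = requests.post(
    "https://api.lowcodeapi.com/textcortex/v1/summaries",
    params={"api_token": "YOUR_API_TOKEN"},
    json={"text": article, "max_tokens": 200, "temperature": 0.5},
    timeout=60,
)
response.raise_for_status()
print(response.json()["data"])
```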
**Official Documentation**: [Summarize](https://docs.textcortex.com/api)
---
## Usage Examples
### Example 1: Generate Code
Create code in various languages:
```bash
# Generate Python code
# No ID needed - generates new code
curl -X POST "https://api.lowcodeapi.com/textcortex/v1/codes?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Create a function to validate email addresses",
    "mode": "python",
    "max_tokens": 2048
  }'

# Generate JavaScript code
# No ID needed - generates new code
curl -X POST "https://api.lowcodeapi.com/textcortex/v1/codes?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Create an async function to fetch user data",
    "mode": "javascript",
    "max_tokens": 2048
  }'
```
### Example 2: Generate Marketing Content
Create marketing copy:
```bash
# Generate blog post
# No ID needed - generates new content
curl -X POST "https://api.lowcodeapi.com/textcortex/v1/chats?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Write a compelling product description for a smart watch",
    "max_tokens": 1024,
    "temperature": 0.7
  }'

# Generate social media post
# No ID needed - generates new content
curl -X POST "https://api.lowcodeapi.com/textcortex/v1/chats?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Write an engaging Twitter post about a new product launch",
    "max_tokens": 280,
    "temperature": 0.8
  }'
```
### Example 3: Summarize Content
Create summaries of long text:
```bash
# Summarize article
# No ID needed - generates summary
curl -X POST "https://api.lowcodeapi.com/textcortex/v1/summaries?api_token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Long article content...",
    "max_tokens": 200,
    "temperature": 0.5
  }'
```
## Complete Endpoint Reference
For a complete list of all 16 endpoints and their parameters, refer to:
- **OpenAPI Definition**: https://backend.lowcodeapi.com/textcortex/definition
- **Official TextCortex Documentation**: https://docs.textcortex.com/api
## Rate Limits & Best Practices
- **Rate Limit**: Limits depend on your TextCortex plan
- **Best Practices**:
  - Use appropriate temperature for your use case (lower for factual, higher for creative)
  - Set reasonable `max_tokens` to manage costs
  - Provide clear and specific prompts for better results
  - Use specific programming modes for code generation
  - Implement retry logic for failed requests (see the sketch after this list)
  - Cache generated content when possible
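A minimal retry sketch in Python with exponential backoff on rate limiting (429) and server errors (5xx); the function name and backoff schedule are illustrative:

```python
import time
import requests

def post_with_retries(url: str, payload: dict, api_token: str, max_retries: int = 3) -> dict:
    """POST with exponential backoff on 429 and 5xx responses."""
    for attempt in range(max_retries + 1):
        response = requests.post(url, params={"api_token": api_token}, json=payload, timeout=60)
        if response.status_code == 429 or response.status_code >= 500:
            if attempt == max_retries:
                response.raise_for_status()  # give up after the last attempt
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... between attempts
            continue
        response.raise_for_status()  # surface other 4xx errors immediately
        return response.json()
```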
## Error Handling
All responses are wrapped in a `data` key:
```json
{
  "data": {
    // Actual response from TextCortex
  }
}
```
Common errors:
- **400**: Invalid request parameters
- **401**: Invalid API key
- **429**: Rate limit exceeded
- **500**: Generation failed
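A small Python sketch showing how to unwrap the `data` key and report the error codes above (the message mapping and function name are illustrative):

```python
import requests

ERROR_HINTS = {
    400: "Invalid request parameters",
    401: "Invalid API key",
    429: "Rate limit exceeded",
    500: "Generation failed",
}

def call_textcortex(url: str, payload: dict, api_token: str) -> dict:
    """Return the unwrapped TextCortex response, or raise with a readable hint."""
    response = requests.post(url, params={"api_token": api_token}, json=payload, timeout=60)
    if not response.ok:
        hint = ERROR_HINTS.get(response.status_code, "Unexpected error")
        raise RuntimeError(f"TextCortex request failed ({response.status_code}): {hint}")
    return response.json()["data"]  # LowCodeAPI wraps the payload in a "data" key
```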