Usage Guide
Installation
bun add @imagineoai/javascript-sdk
# or
npm install @imagineoai/javascript-sdk
Basic Usage
⚠️ Migration Note: The SDK now uses grouped method categories:
- Image methods: client.images.upload, client.images.generateRun, client.images.describe
- Prompt methods: client.prompts.enhance, client.prompts.list, etc.
Old flat methods like uploadImage, generateImageRun, describe, and enhancePrompt have been removed.
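For example, calls that previously went through flat methods now go through their method groups (a minimal before/after sketch, assuming a constructed client as shown in the sections below):
// Before (removed flat methods)
// const result = await client.uploadImage({ file });
// const run = await client.generateImageRun({ prompt: "A beautiful landscape" });
// After (grouped methods)
const result = await client.images.upload({ file });
const run = await client.images.generateRun({ prompt: "A beautiful landscape" });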
Browser
import { ImagineoAIClient, upload } from "@imagineoai/javascript";
const client = new ImagineoAIClient("https://api.imagineoai.com", { apiKey: "sk-..." });
// Upload an image
const file = new File([/* ... */], "image.png");
const result = await client.images.upload({ file });
// Generate an image with Flux Dev model (default)
const fluxResult = await client.images.generateRun({
prompt: 'A beautiful landscape',
model_type: 'flux-dev' // Optional, this is the default
});
// Generate an image with OpenAI model
const openAIResult = await client.images.generateRun({
prompt: 'A stunning portrait',
model_type: 'openai'
});
// Generate an image with Google Imagen4 model
const googleResult = await client.images.generateRun({
prompt: 'A futuristic city at sunset',
model_type: 'google-imagen4',
aspect_ratio: '16:9' // Required for Google Imagen4
});
// Generate with WAN Image 2.1
const wan21Result = await client.images.generateRun({
prompt: 'A magical forest',
model_type: 'wan-image-2.1',
strength_model: 1.5
});
// Generate with WAN Image 2.2 (dual LoRA)
const wan22Result = await client.images.generateRun({
prompt: 'An epic battle scene',
model_type: 'wan-image-2.2',
lora_low_name: 'style-lora',
lora_low_strength: 0.7,
lora_high_name: 'detail-lora',
lora_high_strength: 1.2
});
// Generate with Qwen Image Edit
const qwenResult = await client.images.generate({
prompt: 'Transform into a cyberpunk scene with neon lights',
model_type: 'qwen-image-edit',
negative_prompt: 'daylight, nature, traditional',
lora_1: 'cyberpunk-style-v2',
strength_model: 1.5,
width: 1024,
height: 1024
});
// Generate consistent character images
const characterResult = await client.images.character.json({
prompt: 'A brave warrior in a mystical forest',
character_reference_image: 'https://example.com/character.jpg',
style_type: 'Realistic',
aspect_ratio: '16:9'
});
// ---
Edit: Mask Image from URLs
const pngBlob = await client.edit.maskImage(
"https://example.com/original.png",
"https://example.com/mask.png"
);
// Use pngBlob in the browser, or stream in Node.js
// Fetch layers for a specific run
const layersResponse = await client.edit.getLayers("run-id");
console.log(layersResponse.data.layers);
Node.js
import { ImagineoAIClient, upload } from "@imagineoai/javascript/server";
import fs from "fs";
const client = new ImagineoAIClient("https://api.imagineoai.com", { apiKey: "sk-..." });
const buffer = fs.readFileSync("./image.png");
const result = await client.images.upload({ file: buffer });
Uploading Files
- Browser: Only File or Blob objects are accepted.
- Node.js: Accepts Buffer or Readable streams. Streams are buffered internally (sketched below).
- The description field is optional.
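As a sketch of the Node.js stream path (mirroring the server client setup shown above):
import { ImagineoAIClient } from "@imagineoai/javascript/server";
import fs from "fs";

const client = new ImagineoAIClient("https://api.imagineoai.com", { apiKey: "sk-..." });

// A readable stream is accepted and buffered internally before upload
const stream = fs.createReadStream("./image.png");
const result = await client.images.upload({
  file: stream,
  description: "Uploaded from a readable stream" // optional
});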
Error Handling
- All SDK methods throw on error (network, validation, or API error).
- Catch errors with try/catch:
try {
await client.images.upload({ file });
} catch (e) {
// handle error
}
Debugging
- Pass { debug: true } to the client constructor to log API requests, as in the example below.
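const debugClient = new ImagineoAIClient("https://api.imagineoai.com", {
  apiKey: "sk-...",
  debug: true // logs each API request
});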
Environment Differences
- Browser: Uses native fetch and FormData.
- Node.js: Uses formdata-node and undici for fetch and form uploads.
Advanced Usage
- Use your own fetch implementation by polyfilling globalThis.fetch in Node.js (sketched below).
- For custom endpoints, use the upload function directly.
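A minimal sketch of the fetch polyfill in Node.js, using undici's fetch (the same library the server build relies on):
import { fetch } from "undici";

// Replace the global fetch that the SDK picks up with your own implementation
globalThis.fetch = fetch;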
Usage
This guide shows you how to use the ImagineoAI SDK for various image generation and editing tasks.
Basic Setup
import { ImagineoAIClient } from '@imagineoai/javascript';
const client = new ImagineoAIClient('https://api.imagineoai.com', {
apiKey: 'your-api-key'
});
Image Generation
Basic Generation with Flux Dev (Default)
const result = await client.images.generate({
prompt: 'A beautiful sunset over mountains',
model_type: 'flux-dev', // Optional, this is the default
width: 1024,
height: 1024
});
console.log(result.data.image_url);
Generation with OpenAI
const result = await client.images.generate({
prompt: 'A serene landscape',
model_type: 'openai',
reference_image_url: 'https://example.com/reference.jpg' // Optional
});
console.log(result.data.image_url);
Generation with Google Imagen4
const result = await client.images.generate({
prompt: 'A futuristic cityscape',
model_type: 'google-imagen4',
aspect_ratio: '16:9'
});
console.log(result.data.image_url);
Generation with Flux Kontext Max
const result = await client.images.generate({
prompt: 'A majestic dragon soaring through storm clouds at golden hour',
model_type: 'flux-kontext-max',
aspect_ratio: '16:9',
seed: 42 // For reproducible results
});
console.log(result.data.image_url);
Fine-tuned Model Generation with WAN Workflow
The WAN workflow is specifically designed for using fine-tuned models and LoRAs. It provides optimized parameters and batch processing for higher quality results.
Basic Fine-tuned Model Generation
const result = await client.images.generate({
prompt: 'A portrait in the style of my fine-tuned model',
model_id: 'your-model-uuid', // Required: Turso DB model ID
workflow_type: 'wan', // Enable WAN workflow
width: 1920,
height: 1088
});
console.log(result.data.image_url);
Advanced WAN Generation with Custom Parameters
const result = await client.images.generate({
prompt: 'A detailed scene using my custom trained style',
model_id: 'your-model-uuid', // Your fine-tuned model ID
workflow_type: 'wan', // Enable WAN workflow
width: 1920,
height: 1088,
strength_model: 1.5, // Model strength (0-2, default: 1)
model_weight: 0.8, // Model weight (0-1, default: 0.7)
guidance: 8.0, // Guidance scale (0-100, default: 7.5)
webhook_url: 'https://your-webhook.com/endpoint' // Optional webhook
});
console.log('Run ID:', result.data.run_id);
Comparing Default vs WAN Workflow
// Default workflow - standard generation
const defaultResult = await client.images.generate({
prompt: 'A landscape painting',
model_id: 'your-model-uuid',
workflow_type: 'default', // or omit (default)
width: 1024,
height: 1024
});
// WAN workflow - optimized for fine-tuned models
const wanResult = await client.images.generate({
prompt: 'A landscape painting',
model_id: 'your-model-uuid',
workflow_type: 'wan', // Uses specialized deployment
width: 1920,
height: 1088,
strength_model: 1.2 // Fine-tune the model influence
});
WAN Workflow Features
- Batch Size: Automatically sets batch size to 2 for better quality
- Model Resolution: Automatically fetches model from database and verifies permissions
- LoRA Support: Uses your model's ComfyDeploy ID as the LoRA name
- Strength Control: Fine-tune model influence with strength_model (0-2)
Advanced Fine-tuning with WAN Image 2.2 Workflow
The WAN Image 2.2 workflow provides dual LoRA support, allowing you to combine two LoRA models with independent strength controls for more sophisticated generation:
// WAN Image 2.2 - Dual LoRA generation (both LoRAs required)
const dualLoraResult = await client.images.generate({
prompt: 'A professional fashion photograph with natural lighting',
model_id: 'your-model-uuid',
workflow_type: 'wan-image-2.2', // Enable dual LoRA workflow
width: 1200,
height: 1500,
lora_low_name: 'wan2.2/wan2.2-lora-instagirl-2.0/Instagirlv2.0_lownoise.safetensors', // Required
lora_low_strength: 0.5, // Low LoRA strength (0-2, default: 0.5)
lora_high_name: 'wan2.2/wan2.2-lora-instagirl-2.0/Instagirlv2.0_hinoise.safetensors', // Required
lora_high_strength: 1.0, // High LoRA strength (0-2, default: 1.0)
batch_size: 2 // Generate multiple variations
});
WAN Image 2.2 Validation
The SDK enforces that both LoRA names must be provided when using wan-image-2.2:
// This will throw an error - missing required LoRA parameters
try {
const result = await client.images.generate({
prompt: 'Portrait photography',
model_id: 'your-model-uuid',
workflow_type: 'wan-image-2.2'
// Missing lora_low_name and lora_high_name
});
} catch (error) {
console.error(error.message);
// "WAN Image 2.2 workflow requires both lora_low_name and lora_high_name parameters"
}
Workflow Comparison
| Feature | Default | WAN | WAN Image 2.2 |
|---|---|---|---|
| Fine-tuned Models | ✓ | ✓ | ✓ |
| Single LoRA | ✓ | ✓ | ✗ |
| Dual LoRA | ✗ | ✗ | ✓ (Required) |
| Strength Control | ✗ | ✓ (single) | ✓ (per LoRA) |
| Batch Processing | ✓ | ✓ (optimized) | ✓ |
| Best For | General use | Fine-tuned models | Complex style mixing |
Image Editing with Flux Kontext Max
The Flux Kontext Max model provides advanced image editing capabilities based on a reference image from a previous generation.
Complete Workflow: Generate and Edit
// Step 1: Generate an initial image
const generation = await client.images.generate({
prompt: 'A peaceful lake surrounded by mountains',
model_type: 'flux-kontext-max',
aspect_ratio: '16:9'
});
// Step 2: Edit the generated image
if (generation?.data?.run_id) {
const edit = await client.images.edit.fluxKontext.json({
original_run_id: generation.data.run_id,
prompt: 'Add a wooden dock extending into the lake with a small rowboat tied to it',
aspect_ratio: 'match_input_image' // Maintains original aspect ratio
});
console.log('Edit run ID:', edit.run_id);
console.log('Status:', edit.live_status);
}
Edit with Custom Aspect Ratio
// Asynchronous edit (default)
const asyncEdit = await client.images.edit.fluxKontext.json({
original_run_id: 'your-original-run-id',
prompt: 'Transform this into a cyberpunk scene with neon lights',
aspect_ratio: '1:1', // Change aspect ratio
seed: 123, // For consistent results
sync: false // Default - async operation
});
// Synchronous edit - waits for completion
const syncEdit = await client.images.edit.fluxKontext.json({
original_run_id: 'your-original-run-id',
prompt: 'Add dramatic lighting with golden hour colors',
aspect_ratio: '16:9',
seed: 456,
sync: true // Synchronous operation - returns completed result
});
console.log('Sync edit completed:', syncEdit.image_url); // Available immediately
FormData Editing (Alternative Method)
const edit = await client.images.edit.fluxKontext.formData({
original_run_id: 'your-original-run-id',
prompt: 'Add flying cars in the sky',
aspect_ratio: '16:9',
seed: 456
});
Synchronous vs Asynchronous Editing
Choose between immediate results or webhook-based processing:
// Synchronous - get results immediately (10-30 second wait)
const syncEdit = await client.images.edit.fluxKontext.json({
original_run_id: 'run-id',
prompt: 'Add a sunset sky',
sync: true // Waits for completion
});
console.log('Done:', syncEdit.image_url); // Available immediately
// Asynchronous - returns immediately, use webhooks/polling (default)
const asyncEdit = await client.images.edit.fluxKontext.json({
original_run_id: 'run-id',
prompt: 'Add a sunset sky',
sync: false // or omit (default)
});
console.log('Started:', asyncEdit.run_id); // Poll for completion
File Upload
Upload an Image
// Browser
const file = document.querySelector('input[type="file"]').files[0];
const upload = await client.images.upload({
file: file,
description: 'My uploaded image'
});
console.log(upload.data.image_url);
Upload a Mask
const maskUpload = await client.images.uploadMask({
file: maskFile,
description: 'Selection mask'
});
console.log(maskUpload.data.image_url);
Advanced Features
Character Generation
Generate consistent character images across different scenes using the Ideogram Character model:
// JSON method with URL reference
const response = await client.images.character.json({
prompt: 'A hero standing tall in a castle courtyard',
character_reference_image: 'https://example.com/my-character.jpg',
rendering_speed: 'Quality', // Quality mode for best results
style_type: 'Realistic',
aspect_ratio: '16:9',
resolution: '2048'
});
// FormData method with file upload (Browser)
const fileInput = document.getElementById('character-file');
const file = fileInput.files[0];
const response = await client.images.character.formData({
prompt: 'The same character in a battle scene',
character_reference_file: file,
style_type: 'Fiction',
magic_prompt_option: 'On' // Enhance the prompt automatically
});
// FormData method with Buffer (Node.js)
import fs from 'fs';
const characterBuffer = fs.readFileSync('./character-ref.png');
const response = await client.images.character.formData({
prompt: 'The character exploring a dungeon',
character_reference_file: characterBuffer,
rendering_speed: 'Turbo', // Faster generation
seed: 42 // For reproducible results
});
// Use existing run as character reference
const response = await client.images.character.json({
prompt: 'The character at a tavern',
character_reference_run_id: 'previous-run-uuid',
aspect_ratio: '1:1'
});
Reproducible Generation
Use the seed parameter to get consistent results:
const result1 = await client.images.generate({
prompt: 'A red apple',
model_type: 'flux-kontext-max',
seed: 42
});
const result2 = await client.images.generate({
prompt: 'A red apple',
model_type: 'flux-kontext-max',
seed: 42
});
// result1 and result2 will be nearly identical
Aspect Ratio Matching
When editing, use "match_input_image" to preserve the original image's aspect ratio:
const edit = await client.images.edit.fluxKontext.json({
original_run_id: 'original-run-id',
prompt: 'Change the lighting to golden hour',
aspect_ratio: 'match_input_image' // Automatically matches source
});
Error Handling
try {
const result = await client.images.generate({
prompt: 'A beautiful landscape',
model_type: 'flux-kontext-max'
});
if (result?.data?.image_url) {
console.log('Success:', result.data.image_url);
}
} catch (error) {
console.error('Generation failed:', error.message);
}
Monitoring Progress
For asynchronous operations, monitor progress using the run status:
const edit = await client.images.edit.fluxKontext.json({
original_run_id: 'run-id',
prompt: 'Add snow to the mountains',
sync: false // Async operation
});
// Check status periodically
// Check status periodically
const checkStatus = async (runId) => {
  const runData = await client.images.getRun(runId);
  console.log('Status:', runData.data.live_status);
  console.log('Progress:', runData.data.progress);
  if (runData.data.live_status === 'completed') {
    console.log('Final image:', runData.data.image_url);
    clearInterval(interval); // stop polling once the run is done
  }
};
// Poll every 5 seconds until completion
const interval = setInterval(() => {
  checkStatus(edit.run_id);
}, 5000);
Best Practices
- Model Selection:
- Use Flux Kontext Max for high-quality generation and editing
- Use WAN workflow for fine-tuned models and LoRAs
- Workflow Choice:
- Default workflow: Standard generation with base models
- WAN workflow: Optimized for fine-tuned models with better batch processing
- Aspect Ratios: Choose appropriate aspect ratios for your use case
- Seeds: Use seeds for reproducible results in production
- Error Handling: Always wrap API calls in try-catch blocks
- Sync vs Async: Use sync for testing/simple cases, async for production
- Progress Monitoring: Implement proper status checking for async operations
- Fine-tuned Models:
- Always use model_id with the WAN workflow
- Adjust strength_model (0-2) to control model influence
- Default batch size of 2 provides better quality (see the combined sketch below)
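Putting several of these practices together, a hedged sketch of a production-style call (the pollRun helper below is hypothetical; it reuses client.images.getRun and the live_status field shown in Monitoring Progress):
// Hypothetical helper that polls a run until completion or timeout
async function pollRun(client, runId, { intervalMs = 5000, timeoutMs = 120000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const runData = await client.images.getRun(runId);
    if (runData.data.live_status === 'completed') return runData.data;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Run ${runId} did not complete within ${timeoutMs}ms`);
}

try {
  const result = await client.images.generate({
    prompt: 'A portrait in my fine-tuned style',
    model_id: 'your-model-uuid', // Fine-tuned models: always pass model_id with WAN
    workflow_type: 'wan',
    strength_model: 1.2,         // 0-2: control model influence
    seed: 42                     // Reproducible results in production
  });
  const completed = await pollRun(client, result.data.run_id);
  console.log('Final image:', completed.image_url);
} catch (error) {
  console.error('Generation failed:', error.message);
}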
For more detailed information, see the API Reference.