ImagineoAI SDK API Reference
Overview
This document describes all available SDK methods, types, and endpoint shapes for both browser and Node.js environments. All types are strictly defined in src/types.ts and validated with Zod.
Classes & Functions
ImagineoAIClient
Constructor
new ImagineoAIClient(apiUrl: string, authOptions?: ImagineoAIClientAuthOptions, options?: { debug: boolean })
- apiUrl: string — The base URL for the ImagineoAI API (e.g., https://api.imagineoai.com)
- authOptions: { apiKey: string } | { getToken: () => Promise<string | null> } (optional)
- options: { debug: boolean } (optional)
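Example
A minimal construction sketch; the URL and API key values are placeholders:
import { ImagineoAIClient } from '@imagineoai/javascript';

// API key auth with debug logging enabled
const client = new ImagineoAIClient(
  'https://api.imagineoai.com',
  { apiKey: 'sk-...' },
  { debug: true }
);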
Methods
⚠️ Migration Note: Methods are now grouped:
- Image methods: client.images.upload, client.images.generateRun, client.images.describe
- Prompt methods: client.prompts.enhance, client.prompts.list, etc.
Flat methods like uploadImage, generateImageRun, describe, and enhancePrompt are removed.
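Example
A before/after sketch of the rename; the payload field names are abbreviated assumptions:
// Before (removed flat methods):
// await client.uploadImage({ file });
// await client.enhancePrompt({ prompt });

// After (grouped namespaces):
await client.images.upload({ file });
await client.prompts.enhance({ prompt });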
Edit
client.edit.maskImage(originalImageUrl: string, maskImageUrl: string, editServerUrl?: string): Promise<Blob | NodeJS.ReadableStream>
- Submits two image URLs (original and mask) to the edit server for masking/inpainting.
- Sends a JSON payload:
{
  "original_image_url": "https://...",
  "mask_image_url": "https://..."
}
- Returns:
  - In the browser: a PNG Blob
  - In Node.js: a PNG NodeJS.ReadableStream
- Throws: On validation, network, or API error.
Example (Browser)
const client = new ImagineoAIClient(apiUrl, { apiKey: 'sk-...' });
const pngBlob = await client.edit.maskImage(
"https://example.com/original.png",
"https://example.com/mask.png"
);
// Use the Blob as needed (e.g., display, download)
Example (Node.js)
const fs = require('fs');
const { ImagineoAIClient } = require('@imagineoai/javascript/server');
const client = new ImagineoAIClient(apiUrl, { apiKey: 'sk-...' });
const pngStream = await client.edit.maskImage(
"https://example.com/original.png",
"https://example.com/mask.png"
);
pngStream.pipe(fs.createWriteStream("./output.png"));
client.edit.getLayers(runId: string): Promise<GetLayersResponse>
- Fetches all layers associated with a given run ID.
- Returns: GetLayersResponse containing an array of Layer objects.
- Throws: On validation, network, or API error.
Example (Browser & Node.js)
const client = new ImagineoAIClient(apiUrl, { apiKey: 'sk-...' });
const layersResponse = await client.edit.getLayers('run-id');
// Use layersResponse.data.layers
Images
- client.images.upload(input: UploadImageRequest): Promise<UploadImageResponse>
- client.images.uploadMask(input: UploadImageRequest): Promise<UploadImageResponse>
- client.images.generate(input: GenerateImageInput): Promise<GenerateImageOutput | null>
- client.images.describe(input: DescribePredictionRequest): Promise<DescribePredictionResponse>
- client.images.combine(input: CombineImageInput): Promise<GenerateImageOutput>
- client.images.edit.openai.json(input: ImageEditRequest): Promise<ImageEditResponse>
- client.images.edit.openai.formData(input: ImageEditFormDataRequest): Promise<ImageEditResponse>
- client.images.edit.fluxKontext.json(input: FluxKontextEditRequest): Promise<FluxKontextEditResponse>
- client.images.edit.fluxKontext.formData(input: FluxKontextEditFormDataRequest): Promise<FluxKontextEditResponse>
- client.images.edit.nanobanana.json(input: NanobananaEditRequest): Promise<GenerateImageOutput>
- client.images.edit.nanobanana.formData(input: NanobananaEditFormDataRequest): Promise<GenerateImageOutput>
- client.images.edit.gemini25.json(input: Gemini25EditRequest): Promise<GenerateImageOutput>
- client.images.edit.gemini25.formData(input: Gemini25EditFormDataRequest): Promise<GenerateImageOutput>
- client.images.combine.nanobanana.json(input: NanobananaCombineRequest): Promise<GenerateImageOutput>
- client.images.combine.nanobanana.formData(input: NanobananaCombineFormDataRequest): Promise<GenerateImageOutput>
- client.images.combine.gemini25.json(input: Gemini25CombineRequest): Promise<GenerateImageOutput>
- client.images.combine.gemini25.formData(input: Gemini25CombineFormDataRequest): Promise<GenerateImageOutput>
- client.images.character.json(input: CharacterGenerationRequest): Promise<CharacterGenerationResponse>
- client.images.character.formData(input: CharacterGenerationFormDataRequest): Promise<CharacterGenerationResponse>
- client.images.reference.json(input: ReferenceGenerationRequest): Promise<ReferenceGenerationResponse>
- client.images.reference.formData(input: ReferenceGenerationFormDataRequest): Promise<ReferenceGenerationResponse>
- client.images.genRemove(input: GenerativeRemoveRequest): Promise<GenerativeRemoveResponse>
- client.images.backgroundRemove(input: BackgroundRemoveRequest): Promise<BackgroundRemoveResponse>
- client.images.extract(input: ExtractRequest): Promise<ExtractResponse>
Model Types
The SDK supports the following model types for image generation:
Flux Dev (Default)
- Type: flux-dev
- Description: Default high-quality image generation model using the Flux Dev architecture
- Required Parameters:
  - prompt: string - The text prompt for image generation
- Optional Parameters:
  - reference_image_url?: string - URL of an image to use as reference
  - width?: number - Width of the generated image
  - height?: number - Height of the generated image
Google Imagen4
- Type: google-imagen4
- Description: Google's latest image generation model with advanced capabilities
- Required Parameters:
  - prompt: string - The text prompt for image generation
  - aspect_ratio: string - Aspect ratio of the generated image (e.g., '16:9', '1:1', '9:16')
- Optional Parameters:
  - reference_image_url?: string - URL of an image to use as reference for style transfer
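Example
A minimal generate sketch for this model, selecting it via the model_type field of GenerateImageInput (defined below); the prompt is a placeholder:
const result = await client.images.generate({
  prompt: 'A lighthouse at dusk',
  model_type: 'google-imagen4',
  aspect_ratio: '16:9', // required for Google Imagen4
});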
Flux Kontext Max
- Type: flux-kontext-max
- Description: Advanced image generation and editing model with superior quality and detail
- Required Parameters:
  - prompt: string - The text prompt for image generation
- Optional Parameters:
  - aspect_ratio?: string - Aspect ratio of the generated image (defaults to "1:1")
    - Supported ratios: "1:1", "16:9", "9:16", "4:3", "3:4", "3:2", "2:3", "4:5", "5:4", "21:9", "9:21", "2:1", "1:2"
  - seed?: number - Random seed for reproducible generation (integer ≥ 0)
- Editing Support: Flux Kontext Max also supports advanced image editing via client.images.edit.fluxKontext methods
- Special Features:
  - High-quality image generation and editing
  - Deterministic generation with seed parameter (see the sketch below)
  - "match_input_image" aspect ratio option for editing
OpenAI
- Type: openai
- Description: OpenAI's DALL-E image generation model
- Optional Parameters:
  - reference_image_url?: string - URL of an image to use as reference
  - width?: number - Width of the generated image
  - height?: number - Height of the generated image
WAN Image 2.1
- Type: wan-image-2.1
- Description: WAN Image generation model version 2.1 with enhanced capabilities
- Parameters:
  - prompt: string - The text prompt for image generation
  - strength_model?: number - Model strength (0-2, default 1)
  - batch_size?: number - Number of images to generate (default 2)
WAN Image 2.2
- Type: wan-image-2.2
- Description: Latest WAN Image generation model with dual LoRA support
- Parameters:
  - prompt: string - The text prompt for image generation
  - lora_low_name?: string - Low LoRA model name
  - lora_low_strength?: number - Low LoRA strength (0-2, default 0.5)
  - lora_high_name?: string - High LoRA model name
  - lora_high_strength?: number - High LoRA strength (0-2, default 1.0)
Flux Kontext Multi
- Type: flux-kontext-multi
- Description: Multi-image combination and generation model
- Special Features:
- Combines multiple input images
- Advanced style transfer capabilities
- Supports deterministic generation with seed parameter
Nanobanana
- Type: nanobanana
- Description: Replicate-based model for advanced image generation, combination, and editing
- Required Parameters:
  - prompt: string - The text prompt for image generation
- Optional Parameters:
  - reference_image_url?: string - URL of reference image for style guidance
- Special Features:
- Style transfer through reference images
- Image combination capabilities
- Image editing support
- Both JSON and FormData input formats
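Example
A hedged sketch of the JSON edit call; NanobananaEditRequest is not defined in this document, so the field names here are assumptions:
// Hypothetical request shape; check src/types.ts for the authoritative schema.
const edited = await client.images.edit.nanobanana.json({
  prompt: 'Make the sky stormy',
  image_url: 'https://example.com/photo.jpg', // assumed field name
});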
Qwen Image Edit
- Type: qwen-image-edit
- Description: Advanced text-based image editing model powered by ComfyDeploy
- Required Parameters:
  - prompt: string - The text prompt describing the desired edit
  - width: number - Image width (64-2048)
  - height: number - Image height (64-2048)
- Optional Parameters:
  - lora_1?: string - Optional LoRA model for style enhancement
  - strength_model?: number - LoRA strength (0-2, default: 1)
  - negative_prompt?: string - Text describing what to avoid in the generation
- Special Features:
- AI-powered image editing with natural language
- Support for negative prompts for precise control
- LoRA model integration for style customization
- Seamless integration with existing generate workflow
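Example
A sketch of the Qwen parameters flowing through the generate workflow; how this model is selected is an assumption, since the model_type union in GenerateImageInput below does not list qwen-image-edit explicitly:
// Assumption: qwen-image-edit is reachable through client.images.generate.
const result = await client.images.generate({
  prompt: 'Replace the background with a sunset beach',
  negative_prompt: 'blurry, low quality',
  width: 1024,
  height: 1024,
});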
Gemini 2.5 Flash Image Preview
- Type: gemini-2.5
- Description: Google's Gemini 2.5 Flash Image Preview model for advanced multimodal image processing
- Required Parameters:
  - prompt: string - The text prompt for image generation/editing
- Optional Parameters:
  - aspect_ratio?: string - Aspect ratio of the generated image (e.g., '16:9', '1:1', '4:3')
  - reference_image_url?: string - URL of reference image for context (generation)
  - image_urls?: string[] - Array of 2-10 image URLs (combination)
  - run_ids?: string[] - Array of 2-10 run IDs (combination)
  - mask_url?: string - URL of mask image for inpainting (editing)
- Special Features:
- Advanced multimodal understanding
- Image combination with intelligent blending (2-10 images)
- Precise editing with optional mask support
- Support for multiple input formats (URLs, run IDs, files)
- High-quality output with customizable aspect ratios
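Example
A sketch of a two-image combination with gemini25; the fields follow the parameter list above, though the full Gemini25CombineRequest shape is otherwise an assumption:
const combined = await client.images.combine.gemini25.json({
  prompt: 'Blend these into a single cohesive scene',
  image_urls: [
    'https://example.com/foreground.png',
    'https://example.com/background.png',
  ], // 2-10 images supported
  aspect_ratio: '16:9',
});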
Cloudinary AI Features
Generative Remove
client.images.genRemove(input: GenerativeRemoveRequest): Promise<GenerativeRemoveResponse>
- Description: AI-powered object removal from images using text prompts
- Required Parameters:
  - run_id or image_url: Source image identifier
  - prompt: string - Description of objects to remove
- Optional Parameters:
  - remove_shadow?: boolean - Remove shadows of removed objects (default: false)
  - multiple?: boolean - Detect and remove multiple instances (default: true)
- Response: Contains generated image URL with objects removed
Background Remove
client.images.backgroundRemove(input: BackgroundRemoveRequest): Promise<BackgroundRemoveResponse>
- Description: Automatic background removal with AI detection
- Required Parameters:
  - run_id or image_url: Source image identifier
- Response: Contains transparent PNG with background removed
Smart Extract
client.images.extract(input: ExtractRequest): Promise<ExtractResponse>
- Description: Intelligent object extraction using text prompts
- Required Parameters:
  - run_id or image_url: Source image identifier
  - prompt: string - Description of objects to extract
- Optional Parameters:
  - multiple?: boolean - Extract multiple objects (default: true)
  - mode?: 'mask' | 'extract' - Output mode (default: 'extract')
- Response: Contains extracted objects or masks
Example Usage:
// Remove objects
const removed = await client.images.genRemove({
run_id: "previous_run_id",
prompt: "remove all cars",
remove_shadow: true
});
// Remove background
const bgRemoved = await client.images.backgroundRemove({
image_url: "https://example.com/photo.jpg"
});
// Extract objects
const extracted = await client.images.extract({
run_id: "previous_run_id",
prompt: "extract the person",
multiple: false
});
Prompts
- client.prompts.enhance(input: EnhancePromptRequest): Promise<EnhancePromptResponse>
- client.prompts.list(): Promise<ListPromptsResponse> (stub)
- client.prompts.get(id: string): Promise<GetPromptResponse> (stub)
- client.prompts.update(id: string, input: UpdatePromptRequest): Promise<UpdatePromptResponse> (stub)
- client.prompts.delete(id: string): Promise<DeletePromptResponse> (stub)
Example:
const client = new ImagineoAIClient(apiUrl, { apiKey: 'sk-...' });
const result = await client.images.generate({
prompt: 'A cat in a spacesuit',
width: 512,
height: 512,
model_id: 'uuid-model-id',
});
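A hedged enhance sketch; EnhancePromptRequest and its response are not defined in this document, so the prompt field is an assumption:
// Hypothetical field name; check src/types.ts for the authoritative schema.
const enhanced = await client.prompts.enhance({ prompt: 'a cat in a spacesuit' });
console.log(enhanced); // inspect the response shape returned by your API version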
Standalone Upload
upload(input: UploadImageRequest): Promise<UploadImageResponse>
- Browser: import from @imagineoai/javascript
- Node: import from @imagineoai/javascript/server
uploadMask
Uploads an image to be used specifically as a mask for inpainting or editing workflows.
Signature:
client.images.uploadMask(input: UploadImageRequest): Promise<UploadImageResponse>
- input: UploadImageRequest
  - file: The mask image file (must be a File or Blob in the browser, or Buffer/Readable in Node.js)
  - description (optional): A description or label for the mask
  - parent_id (optional): ID of the parent run to link this upload to
- Returns: UploadImageResponse (see below for the new structure)
- Throws: On validation, network, or API error.
Example (Browser)
const client = new ImagineoAIClient(apiUrl, { apiKey: 'sk-...' });
const maskFile = new File([maskBlob], 'mask.png', { type: 'image/png' });
const result = await client.images.uploadMask({ file: maskFile, description: 'Foreground mask' });
if (result.success) {
console.log(result.data.image_url);
}
Example (Node.js)
const fs = require('fs');
const { ImagineoAIClient } = require('@imagineoai/javascript/server');
const client = new ImagineoAIClient(apiUrl, { apiKey: 'sk-...' });
const maskBuffer = fs.readFileSync('./mask.png');
const result = await client.images.uploadMask({ file: maskBuffer, description: 'Foreground mask' });
if (result.success) {
console.log(result.data.image_url);
}
Notes:
- The SDK automatically sets the isMask flag and handles the correct Content-Type for uploads.
- In the browser, only File or Blob types are accepted for the mask.
- In Node.js, you may use a Buffer or a readable stream.
- The API endpoint is /api/v1/upload.
backgroundRemove
Removes the background from an image using Cloudinary's AI-powered background removal.
Signature:
client.images.backgroundRemove(input: BackgroundRemoveRequest): Promise<BackgroundRemoveResponse>
- input: BackgroundRemoveRequest
  - image_url (optional): URL of the image to process
  - run_id (optional): ID of an existing run to use as source (automatically sets the parent relationship)
- Returns: BackgroundRemoveResponse
  - url: URL of the processed image
  - runId: ID of the new run
  - parentRunId (optional): ID of the parent run if using run_id
- Note: When using run_id, the parent-child relationship is automatically established
Example
// Using run_id (automatic parent tracking)
const fromRun = await client.images.backgroundRemove({
  run_id: 'abc-123' // This becomes the parent
});
console.log(fromRun.parentRunId); // 'abc-123'

// Using an image URL directly
const fromUrl = await client.images.backgroundRemove({
  image_url: 'https://example.com/image.jpg'
});
generativeRemove
Removes specified objects from images using Cloudinary's generative AI.
Signature:
client.images.generativeRemove(input: GenerativeRemoveRequest): Promise<GenerativeRemoveResponse>
- input: GenerativeRemoveRequest
  - image_url (optional): URL of the image
  - run_id (optional): ID of an existing run (automatically sets the parent relationship)
  - prompt: What to remove (e.g., "people", "text", "cars")
  - remove_shadow (optional): Remove shadows of removed objects (default: true)
  - multiple (optional): Remove all instances (default: true)
- Returns: GenerativeRemoveResponse
Example
const result = await client.images.generativeRemove({
run_id: 'existing-run-id',
prompt: 'all people',
remove_shadow: true,
multiple: true
});
extract
Extracts specific objects from images using Cloudinary AI.
Signature:
client.images.extract(input: ExtractRequest): Promise<ExtractResponse>
- input: ExtractRequest
  - image_url or run_id: Source image
  - prompt: What to extract (e.g., "phone", "person", "car")
  - mode (optional): 'content' (default) or 'mask'
  - preserve_alpha (optional): Preserve transparency (default: true)
  - multiple (optional): Extract all instances (default: false)
  - invert (optional): Extract everything except the prompt (default: false)
- Returns: ExtractResponse
Example
const result = await client.images.extract({
run_id: 'source-run-id',
prompt: 'the main subject',
mode: 'content',
preserve_alpha: true
});
reframe
Reframes images to different aspect ratios using Luma AI, with automatic parent tracking when using run_id.
Signatures:
// JSON method
client.images.reframe.json(input: ReframeRequest): Promise<ReframeResponse>
// FormData method (for file uploads)
client.images.reframe.formData(input: ReframeFormDataRequest): Promise<ReframeResponse>
- input: ReframeRequest or ReframeFormDataRequest
  - prompt: Description of how to reframe
  - image_url, run_id, or image_file: Source image (run_id automatically becomes the parent)
  - aspect_ratio (optional): Target ratio (e.g., "16:9", "1:1", "9:16")
  - model (optional): "photon-flash-1" (faster) or "photon-1" (higher quality)
  - Additional positioning parameters available
- Returns: ReframeResponse
Example
// JSON method with automatic parent tracking
const reframed = await client.images.reframe.json({
  run_id: 'source-run-id', // Automatically becomes the parent
  prompt: 'Expand the scene with natural landscape',
  aspect_ratio: '21:9',
  model: 'photon-1'
});

// FormData method for direct file upload
const fromFile = await client.images.reframe.formData({
  image_file: imageFile,
  prompt: 'Convert to square format',
  aspect_ratio: '1:1'
});
Parent-Child Relationship Tracking
The SDK now supports comprehensive parent-child relationship tracking for image operations:
Automatic Tracking
- Background Removal: When using run_id, the parent relationship is automatically established
- Generative Remove: Similarly tracks the parent when using run_id
- Extract: Parent tracking with run_id
- Reframe: Automatically sets the parent when using run_id
Manual Tracking
- Upload: Use the parent_id parameter to link uploads to existing runs
Example Workflow
// 1. Original image
const original = await client.images.generate({ prompt: "A landscape" });
// 2. Remove background (automatic parent tracking)
const noBg = await client.images.backgroundRemove({
run_id: original.data.run_id
});
// noBg.parentRunId === original.data.run_id
// 3. Reframe with automatic parent tracking
const reframed = await client.images.reframe.json({
run_id: noBg.runId, // Automatically becomes the parent
prompt: "Expand to panoramic view",
aspect_ratio: "21:9"
});
// 4. Upload edited version with parent link
const edited = await client.images.upload({
file: editedFile,
parent_id: reframed.run_id // Manual parent for uploads
});
Types
UploadImageResponse (updated)
Returned by both upload and uploadMask methods.
export type UploadImageResponse = {
success: boolean;
message: string;
data: {
id: string;
image_url: string;
run_id: string;
created_at: string;
parent_id?: string; // Optional: ID of parent run if linked
};
};
- success: Indicates operation status.
- message: Provides context or error details.
- data: Contains the uploaded image's metadata.
- parent_id (optional): The ID of the parent run if this upload is linked to another image.
ImagineoAIClientAuthOptions
export type ImagineoAIClientAuthOptions =
| { apiKey: string }
| { getToken: () => Promise<string | null> };
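A sketch of token-based auth, assuming your app exposes an async token getter (the fetchSessionToken helper here is hypothetical):
const client = new ImagineoAIClient('https://api.imagineoai.com', {
  // Called by the SDK to obtain a bearer token; return null if no session exists.
  getToken: async () => {
    const token = await fetchSessionToken(); // hypothetical helper from your auth layer
    return token ?? null;
  },
});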
UploadImageRequest
export interface UploadImageRequest {
file: UploadFileType;
description?: string;
parent_id?: string; // Optional: ID of parent run to link this upload to
}
UploadFileType
export type UploadFileType =
| File // browser
| Blob // browser
| Buffer // node
| Readable // node
| any; // fallback for formdata-node File/Blob in node, checked at runtime
FluxKontextEditRequest
export type FluxKontextEditRequest = {
original_run_id: string;
prompt: string;
aspect_ratio?: "1:1" | "16:9" | "9:16" | "4:3" | "3:4" | "3:2" | "2:3" | "4:5" | "5:4" | "21:9" | "9:21" | "2:1" | "1:2" | "match_input_image";
seed?: number;
model: "flux-kontext-max";
sync?: boolean;
};
FluxKontextEditFormDataRequest
export interface FluxKontextEditFormDataRequest {
original_run_id: string;
prompt: string;
aspect_ratio?: "1:1" | "16:9" | "9:16" | "4:3" | "3:4" | "3:2" | "2:3" | "4:5" | "5:4" | "21:9" | "9:21" | "2:1" | "1:2" | "match_input_image";
seed?: number;
model?: "flux-kontext-max";
sync?: boolean;
}
FluxKontextEditResponse
export type FluxKontextEditResponse = {
run_id: string;
user_id: string;
created_at: string;
updated_at: string;
image_url: string | null;
inputs: any;
live_status: string | null;
status: string;
progress: number;
run_type: string;
parent_run_id?: string | null;
};
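A sketch of a Flux Kontext edit built from the request type above; the run ID and prompt are placeholders:
const edit = await client.images.edit.fluxKontext.json({
  original_run_id: 'source-run-id',
  prompt: 'Change the car color to matte black',
  aspect_ratio: 'match_input_image', // keep the source image's ratio
  model: 'flux-kontext-max',
  sync: true, // assumption: wait for the run to finish rather than polling
});
console.log(edit.status, edit.image_url);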
CharacterGenerationRequest
export type CharacterGenerationRequest = {
prompt: string;
character_reference_image?: string; // URL or base64
character_reference_run_id?: string; // UUID to use existing run image
rendering_speed?: "Default" | "Turbo" | "Quality";
style_type?: "Auto" | "Fiction" | "Realistic";
magic_prompt_option?: "On" | "Off" | "Auto";
aspect_ratio?: "1:1" | "16:9" | "9:16" | "4:3" | "3:4" | "3:2" | "2:3";
resolution?: "auto" | "720" | "1024" | "1280" | "2048";
seed?: number;
image?: string; // Optional background/context image
mask?: string; // Optional mask for inpainting
};
CharacterGenerationFormDataRequest
export interface CharacterGenerationFormDataRequest {
prompt: string;
character_reference_file?: File | Buffer; // File in browser, Buffer in Node.js
character_reference_image?: string; // URL or base64
character_reference_run_id?: string; // UUID to use existing run image
rendering_speed?: "Default" | "Turbo" | "Quality";
style_type?: "Auto" | "Fiction" | "Realistic";
magic_prompt_option?: "On" | "Off" | "Auto";
aspect_ratio?: "1:1" | "16:9" | "9:16" | "4:3" | "3:4" | "3:2" | "2:3";
resolution?: "auto" | "720" | "1024" | "1280" | "2048";
seed?: number;
image?: string;
mask?: string;
}
CharacterGenerationResponse
export type CharacterGenerationResponse = {
success: boolean;
message: string;
data: {
run_id: string;
status?: string;
live_status?: string;
progress?: number;
image_url?: string | null;
created_at?: string;
updated_at?: string;
run_type?: "character-generation";
};
code?: string;
details?: any;
};
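A character generation sketch built from CharacterGenerationRequest above; the reference run ID is a placeholder:
const character = await client.images.character.json({
  prompt: 'The character hiking through a pine forest',
  character_reference_run_id: 'existing-run-uuid', // placeholder UUID
  rendering_speed: 'Quality',
  style_type: 'Realistic',
  aspect_ratio: '3:2',
});
if (character.success) {
  console.log(character.data.run_id, character.data.image_url);
}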
GenerateImageInput
The GenerateImageInput type supports the following parameters:
export interface GenerateImageInput {
// Required
prompt: string;
// Model Selection
model_type?: 'openai' | 'google-imagen4' | 'flux-kontext-max' | 'comfydeploy';
model_id?: string; // UUID of model in Turso database (for custom/fine-tuned models)
// Workflow Selection
workflow_type?: 'default' | 'wan' | 'wan-image-2.2'; // Default: 'default'
// Dimensions (OpenAI/ComfyDeploy)
width?: number; // 64-2048
height?: number; // 64-2048
// Aspect Ratio (Google Imagen4, Flux Kontext Max)
aspect_ratio?: string; // Required for Google Imagen4
// Model Parameters
strength_model?: number; // 0-2, default: 1 (WAN workflow only)
model_weight?: number; // 0-1, default: 0.7
guidance?: number; // 0-100, default: 7.5
// WAN Image 2.2 Dual LoRA Parameters
lora_low_name?: string; // Required for wan-image-2.2 workflow
lora_low_strength?: number; // 0-2, default: 0.5
lora_high_name?: string; // Required for wan-image-2.2 workflow
lora_high_strength?: number; // 0-2, default: 1.0
// Other Parameters
reference_image_url?: string; // For image editing/style transfer
seed?: number; // For reproducible generation (Flux Kontext Max)
negative_prompt?: string;
scheduler?: string;
steps?: number;
cfg_scale?: number;
batch_size?: number;
batch_count?: number;
upscale_factor?: number;
// Webhook
webhook_url?: string;
// Metadata
metadata?: Record<string, any>;
}
WAN Workflow Parameters
When using workflow_type: 'wan' for fine-tuned models:
- Required: model_id (Turso DB UUID), width, height
- Optional: strength_model (controls model strength, 0-2, default: 1)
- The system automatically:
  - Fetches the model from the database using model_id
  - Checks user permissions for the model
  - Uses the model's comfy_deploy_id as the LoRA name
  - Sets the batch size to 2 for better quality
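Example
A WAN workflow sketch; the model_id UUID is a placeholder for a fine-tuned model in your database:
const result = await client.images.generate({
  prompt: 'Portrait of the trained subject in golden-hour light',
  workflow_type: 'wan',
  model_id: '00000000-0000-0000-0000-000000000000', // placeholder Turso DB UUID
  width: 1024,
  height: 1024,
  strength_model: 1.2, // 0-2; raises the fine-tuned model's influence
});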
WAN Image 2.2 Workflow Parameters
When using workflow_type: 'wan-image-2.2' for dual LoRA generation:
- Required:
  - model_id (Turso DB UUID)
  - width, height
  - lora_low_name (path to the low-strength LoRA)
  - lora_high_name (path to the high-strength LoRA)
- Optional:
  - lora_low_strength (0-2, default: 0.5)
  - lora_high_strength (0-2, default: 1.0)
  - batch_size (1-4, default: 1)
- The SDK enforces validation:
- Throws an error if either LoRA name is missing
- Both LoRAs must be specified together
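Example
A dual-LoRA sketch; the LoRA paths and model_id are placeholders. As noted above, the SDK throws if only one LoRA name is provided:
const result = await client.images.generate({
  prompt: 'Cinematic street scene at night',
  workflow_type: 'wan-image-2.2',
  model_id: '00000000-0000-0000-0000-000000000000', // placeholder UUID
  width: 1280,
  height: 720,
  lora_low_name: 'loras/subject_low.safetensors',   // placeholder path
  lora_low_strength: 0.5,
  lora_high_name: 'loras/subject_high.safetensors', // placeholder path
  lora_high_strength: 1.0,
  batch_size: 2,
});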
GenerateImageOutput
See src/types.ts for full Zod schemas and type definitions.
CombineImageInput / GenerateImageOutput
- CombineImageInput is used for combining multiple images (files or URLs) into one output image. Requires at least two images. Optional parameters include mode (e.g., 'side-by-side', 'overlay'), output_format, etc.
- Output is the same as GenerateImageOutput. See src/types.ts for the full Zod schema and type definitions.
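A hedged combine sketch; the exact CombineImageInput field names (image_urls, mode) are assumptions based on the description above:
// Field names are assumptions; consult src/types.ts for the authoritative schema.
const combined = await client.images.combine({
  image_urls: [
    'https://example.com/left.png',
    'https://example.com/right.png',
  ], // at least two images required
  mode: 'side-by-side',
});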
ReferenceGenerationRequest
export type ReferenceGenerationRequest = {
prompt: string;
images: Array<{
image_reference: string; // URL, base64, or run ID
tag: string; // Tag name (without @ prefix)
}>;
aspect_ratio?: "16:9" | "9:16" | "4:3" | "3:4" | "1:1" | "21:9";
resolution?: "720p" | "1080p";
};
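A reference-generation sketch built from the type above; URLs are placeholders, and referencing tags with an @ prefix in the prompt is an assumption inferred from the tag comment:
const result = await client.images.reference.json({
  prompt: 'Put @hero in front of @castle at sunset', // assumed @-tag syntax
  images: [
    { image_reference: 'https://example.com/hero.png', tag: 'hero' },
    { image_reference: 'https://example.com/castle.png', tag: 'castle' },
  ],
  aspect_ratio: '16:9',
  resolution: '1080p',
});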
ReferenceGenerationFormDataRequest
export interface ReferenceGenerationFormDataRequest {
prompt: string;
images: string; // JSON string of ReferenceImage[]
aspect_ratio?: "16:9" | "9:16" | "4:3" | "3:4" | "1:1" | "21:9";
resolution?: "720p" | "1080p";
}
ReferenceGenerationResponse
export type ReferenceGenerationResponse = {
success: boolean;
message: string;
data: {
run_id: string;
status?: string;
live_status?: string;
progress?: number;
image_url?: string | null;
created_at?: string;
updated_at?: string;
run_type?: "reference-generation";
};
code?: string;
details?: any;
};
Error Handling
- All methods may throw if the API returns an error or the input is invalid.
- API error shape:
export type ApiError = {
message: string;
code?: string;
details?: any;
}
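A minimal handling sketch; whether thrown errors carry the ApiError shape directly is an assumption:
try {
  await client.images.generate({ prompt: 'A cat in a spacesuit' });
} catch (err) {
  // Assumption: errors surface the ApiError fields shown above.
  const apiError = err as { message: string; code?: string; details?: unknown };
  console.error(apiError.code, apiError.message);
}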
Type Guards
- isNodeBuffer(f: unknown): f is Buffer
- isNodeReadable(f: unknown): f is Readable
- isBrowserFile(f: unknown): f is File
- isBrowserBlob(f: unknown): f is Blob
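A sketch of using the guards to branch on UploadFileType; the import path is an assumption, since the document does not say where the guards are exported from:
import { isBrowserFile, isNodeBuffer } from '@imagineoai/javascript'; // assumed export path

function describeFile(f: unknown): string {
  if (isBrowserFile(f)) return `browser File: ${f.name}`;
  if (isNodeBuffer(f)) return `node Buffer of ${f.length} bytes`;
  return 'unsupported input';
}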
Zod and OpenAPI Integration
- All schemas are defined with Zod.
- See /docs/architecture.md for how Zod schemas are used for both validation and OpenAPI docs.