🤖 AI Features
Index
Available Providers
Provider Configuration
Text Generation
Object Generation
Agents with Tools
Image Generation
Image Editing
Speech-to-Text
Practical Examples
Available Providers
| Provider | Models | Vision | Tools | API Key Required |
| --- | --- | --- | --- | --- |
| Rork AI | Default | ✅ | ✅ | ❌ |
| OpenAI | GPT-4.5, GPT-4o, o3, o3-mini, o1, o1-mini | ✅ | ✅ | ✅ |
| Anthropic | Claude Sonnet 4, Claude 3.7 Sonnet, Claude 3.5 Sonnet/Haiku, Claude 3 Opus | ✅ | ✅ | ✅ |
| Google Gemini | Gemini 2.5 Pro/Flash, Gemini 2.0 Flash/Flash Lite | ✅ | ✅ | ✅ |
| OpenRouter | GPT-4.5, o3, Claude 4, Gemini 2.5, Llama 4, DeepSeek R1/V3, Qwen 3, Mistral Large 2 | ✅ | ✅ | ✅ |
Note: The SDK @rork-ai/toolkit-sdk is pre-installed. Do not install it manually.
Provider Configuration
Setting Up API Keys
```typescript
import { View, Text, TextInput, Switch } from 'react-native';
import { useAIProviderSettings } from '@/contexts/AIContext';

function AISettings() {
  const {
    currentProvider,
    allProviders,
    setProvider,
    setApiKey,
    removeApiKey,
    hasApiKey,
  } = useAIProviderSettings();

  const handleSetOpenAIKey = async (key: string) => {
    await setApiKey('openai', key);
  };

  return (
    <View>
      {allProviders.map(provider => (
        <View key={provider.id}>
          <Text>{provider.name}</Text>
          <Text>{provider.description}</Text>
          {provider.requiresApiKey && (
            <TextInput
              placeholder="Enter API Key"
              secureTextEntry
              onChangeText={(key) => setApiKey(provider.id, key)}
            />
          )}
          <Switch
            value={currentProvider === provider.id}
            onValueChange={() => setProvider(provider.id)}
            disabled={provider.requiresApiKey && !hasApiKey(provider.id)}
          />
        </View>
      ))}
    </View>
  );
}
```
Switching Providers
```typescript
import { useAI } from '@/contexts/AIContext';

function MyComponent() {
  const { setProvider, setModel, currentProviderConfig } = useAI();

  const switchToOpenAI = async () => {
    // Switch to OpenAI
    await setProvider('openai');
    // Select a specific model
    setModel('gpt-4o-mini');
  };

  // Get available models for the current provider
  console.log(currentProviderConfig?.models);
}
```
Available Models by Provider
OpenAI
gpt-4.5-preview - Latest flagship model with enhanced reasoning
gpt-4o - Multimodal model, fast and capable
gpt-4o-mini - Fast and affordable
o3 - Advanced reasoning model
o3-mini - Efficient reasoning model
o1 - Original reasoning model
o1-mini - Smaller reasoning model
Anthropic
claude-sonnet-4-20250514 - Latest Claude 4 model, best performance
claude-3-7-sonnet-20250219 - Claude 3.7, excellent balance
claude-3-5-sonnet-20241022 - Claude 3.5 Sonnet, reliable
claude-3-5-haiku-20241022 - Fastest Claude model
claude-3-opus-20240229 - Most capable Claude 3
Google Gemini
gemini-2.5-pro - Latest Pro model, 1M context
gemini-2.5-flash - Latest Flash model, fast with 1M context
gemini-2.0-flash - Stable 2.0 Flash
gemini-2.0-flash-lite - Lightweight, fastest
OpenRouter
openai/gpt-4.5-preview - GPT-4.5 via OpenRouter
openai/o3-mini - o3 Mini via OpenRouter
anthropic/claude-sonnet-4 - Claude Sonnet 4 via OpenRouter
anthropic/claude-3.7-sonnet - Claude 3.7 via OpenRouter
google/gemini-2.5-pro - Gemini 2.5 Pro via OpenRouter
meta-llama/llama-4-maverick - Llama 4 Maverick (1M context)
meta-llama/llama-4-scout - Llama 4 Scout (512K context)
deepseek/deepseek-r1 - DeepSeek R1 reasoning model
deepseek/deepseek-v3 - DeepSeek V3
qwen/qwen-3-235b - Qwen 3 235B
mistralai/mistral-large-2 - Mistral Large 2
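When switching providers programmatically, it can help to keep a per-provider default model in one place. A minimal sketch, using provider ids and model names from the lists above (the `DEFAULT_MODELS` map and `defaultModelFor` helper are illustrative, not part of the SDK):

```typescript
// Hypothetical helper: map each provider id to a sensible default model.
// Provider ids and model names mirror the lists above; this is not SDK API.
type ProviderId = 'openai' | 'anthropic' | 'gemini' | 'openrouter';

const DEFAULT_MODELS: Record<ProviderId, string> = {
  openai: 'gpt-4o-mini',                   // fast and affordable
  anthropic: 'claude-3-5-haiku-20241022',  // fastest Claude
  gemini: 'gemini-2.0-flash-lite',         // lightweight, fastest
  openrouter: 'openai/o3-mini',
};

function defaultModelFor(provider: ProviderId): string {
  return DEFAULT_MODELS[provider];
}
```

You could call `setModel(defaultModelFor('openai'))` right after `setProvider('openai')` so a stale model name from the previous provider is never sent.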
Text Generation
Basic Usage
```typescript
import { useAI } from '@/contexts/AIContext';

function MyComponent() {
  const { generateText } = useAI();

  const handleGenerate = async () => {
    // Simple prompt (uses current provider)
    const summary = await generateText('Summarize this text: ...');

    // With messages
    const response = await generateText({
      messages: [
        { role: 'user', content: 'Explain React Native in 2 sentences' },
      ],
    });

    // With a specific provider and options
    const customResponse = await generateText({
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'Hello!' },
      ],
      options: {
        provider: 'openai',
        model: 'gpt-4o-mini',
        temperature: 0.7,
        maxTokens: 1000,
      },
    });
  };
}
```
With Images (Vision)
```typescript
import { useAI } from '@/contexts/AIContext';

const { generateText } = useAI();

const description = await generateText({
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image' },
        { type: 'image', image: 'data:image/jpeg;base64,...' },
      ],
    },
  ],
  options: {
    provider: 'anthropic',
    model: 'claude-3-5-sonnet-20241022',
  },
});
```
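If you capture photos as raw base64 (e.g. from an image picker), you can wrap them into the multimodal message shape shown above. A small illustrative helper (`visionMessage` is our name, not SDK API):

```typescript
// Hypothetical helper: build a vision message from a prompt and raw base64
// image data. The message shape matches the docs; the helper itself is ours.
type ContentPart =
  | { type: 'text'; text: string }
  | { type: 'image'; image: string };

function visionMessage(prompt: string, base64: string, mime = 'image/jpeg') {
  return {
    role: 'user' as const,
    content: [
      { type: 'text', text: prompt },
      { type: 'image', image: `data:${mime};base64,${base64}` },
    ] as ContentPart[],
  };
}
```

The resulting object can be passed directly in the `messages` array of `generateText`.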
Text Generation Hook
```typescript
import { View, Text } from 'react-native';
import { useTextGeneration } from '@/contexts/AIContext';
// Button is assumed to be an app-level component that accepts a `loading` prop.

function MyComponent() {
  const { generateText, isLoading, error, currentProvider } = useTextGeneration();

  const handleGenerate = async () => {
    const result = await generateText('Write a slogan for my app');
    console.log(result);
  };

  return (
    <View>
      <Text>Using: {currentProvider}</Text>
      <Button
        title="Generate"
        onPress={handleGenerate}
        loading={isLoading}
      />
      {error && <Text style={{ color: 'red' }}>{error.message}</Text>}
    </View>
  );
}
```
Object Generation
Basic Usage
```typescript
import { useAI } from '@/contexts/AIContext';
import { z } from 'zod';

const { generateObject } = useAI();

const recipeSchema = z.object({
  name: z.string(),
  ingredients: z.array(z.object({
    name: z.string(),
    amount: z.string(),
  })),
  steps: z.array(z.string()),
  cookingTime: z.number(),
});

const recipe = await generateObject(
  [{ role: 'user', content: 'Give me a pasta carbonara recipe' }],
  recipeSchema
);
// recipe is typed as { name: string, ingredients: [...], ... }
```
With Specific Provider
```typescript
const recipe = await generateObject(
  [{ role: 'user', content: 'Give me a pasta carbonara recipe' }],
  recipeSchema,
  {
    provider: 'gemini',
    model: 'gemini-2.5-pro',
    temperature: 0.5,
  }
);
```
With Images (Vision)

```typescript
import { useAI } from '@/contexts/AIContext';
import { z } from 'zod';

const { generateObject } = useAI();

const receiptSchema = z.object({
  storeName: z.string(),
  date: z.string(),
  items: z.array(z.object({
    name: z.string(),
    price: z.number(),
  })),
  total: z.number(),
});

const receiptData = await generateObject(
  [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Extract the data from this receipt' },
        { type: 'image', image: receiptImageBase64 },
      ],
    },
  ],
  receiptSchema,
  { provider: 'openai', model: 'gpt-4o' }
);
```
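Because image-based extraction can misread numbers, a cheap post-processing step is to verify that the line items add up to the reported total. A sketch, assuming the `receiptSchema` shape above (the `totalsMatch` helper is our addition, not SDK API):

```typescript
// Illustrative sanity check: do extracted line items sum to the reported
// total (within a small tolerance in cents)? `ReceiptData` mirrors
// receiptSchema from the example above.
interface ReceiptData {
  storeName: string;
  date: string;
  items: { name: string; price: number }[];
  total: number;
}

function totalsMatch(receipt: ReceiptData, toleranceCents = 1): boolean {
  const sum = receipt.items.reduce((acc, item) => acc + item.price, 0);
  return Math.abs(sum - receipt.total) * 100 <= toleranceCents;
}
```

If the check fails, you might re-prompt the model or flag the receipt for manual review.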
Agents with Tools
Create an Agent
```typescript
import { useState } from 'react';
import { View, TextInput, FlatList, Button } from 'react-native';
import { createRorkTool, useRorkAgent } from '@rork-ai/toolkit-sdk';
import { z } from 'zod';
// useTodos, MessageBubble, and styles are assumed to be defined elsewhere in the app.

function TodoAssistant() {
  const [input, setInput] = useState('');
  const { todos, addTodo, toggleTodo, deleteTodo } = useTodos();

  const { messages, sendMessage, error } = useRorkAgent({
    tools: {
      addTodo: createRorkTool({
        description: 'Add a new task',
        zodSchema: z.object({
          title: z.string().describe('Task title'),
          priority: z.enum(['low', 'medium', 'high']).optional(),
          dueDate: z.string().optional().describe('Date in ISO format'),
        }),
        execute(input) {
          addTodo({
            title: input.title,
            priority: input.priority || 'medium',
            dueDate: input.dueDate,
          });
          return { success: true, message: `Task "${input.title}" added` };
        },
      }),
      completeTodo: createRorkTool({
        description: 'Mark a task as completed',
        zodSchema: z.object({
          todoId: z.string().describe('Task ID'),
        }),
        execute(input) {
          toggleTodo(input.todoId);
          return { success: true };
        },
      }),
      listTodos: createRorkTool({
        description: 'List all tasks',
        zodSchema: z.object({}),
        execute() {
          return { todos };
        },
      }),
    },
  });

  const handleSend = () => {
    if (input.trim()) {
      sendMessage(input);
      setInput('');
    }
  };

  return (
    <View style={styles.container}>
      <FlatList
        data={messages}
        renderItem={({ item }) => <MessageBubble message={item} />}
        keyExtractor={(item) => item.id}
      />
      <View style={styles.inputContainer}>
        <TextInput
          value={input}
          onChangeText={setInput}
          placeholder="Type a message..."
          style={styles.input}
        />
        <Button title="Send" onPress={handleSend} />
      </View>
    </View>
  );
}
```
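Under the hood, an agent loop matches each model-issued tool call to the `execute` function registered under that name; `useRorkAgent` does this for you. A stripped-down sketch of the dispatch step (names and shapes here are ours, not SDK internals):

```typescript
// Minimal sketch of tool-call dispatch: look up the named tool and run its
// execute() with the model-provided input. Real SDKs also validate the
// input against the tool's schema before executing.
type Tool = { description: string; execute: (input: any) => unknown };

function dispatchToolCall(
  tools: Record<string, Tool>,
  name: string,
  input: unknown,
): unknown {
  const tool = tools[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.execute(input);
}
```

The tool's return value is sent back to the model as the tool result, which is why each `execute` above returns a small serializable object.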
Image Generation
API Endpoint
```
POST https://toolkit.rork.com/images/generate/
Content-Type: application/json

{
  "prompt": "An astronaut cat in space, cartoon style",
  "size": "1024x1024"  // optional: 1024x1024, 1024x1792, 1792x1024
}
```

Response:

```json
{
  "image": {
    "base64Data": "...",
    "mimeType": "image/png"
  },
  "size": "1024x1024"
}
```
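The `base64Data` and `mimeType` fields combine into a data URI that an `<Image>` component can display directly. A tiny helper, assuming the response shape above:

```typescript
// Turn the endpoint's image payload into a data URI for <Image source={{ uri }}>.
// The interface mirrors the documented response shape.
interface GeneratedImage {
  base64Data: string;
  mimeType: string;
}

function toDataUri(img: GeneratedImage): string {
  return `data:${img.mimeType};base64,${img.base64Data}`;
}
```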
Using the Hook
```typescript
import { useState } from 'react';
import { View, Image } from 'react-native';
import { useImageGeneration } from '@/contexts/AIContext';
// Button is assumed to be an app-level component that accepts a `loading` prop.

function ImageGenerator() {
  const { generateImage, isGenerating, generateError } = useImageGeneration();
  const [image, setImage] = useState<string | null>(null);

  const handleGenerate = async () => {
    const result = await generateImage(
      'Professional avatar, blue gradient background',
      '1024x1024'
    );
    setImage(`data:${result.image.mimeType};base64,${result.image.base64Data}`);
  };

  return (
    <View>
      <Button
        title="Generate Avatar"
        loading={isGenerating}
        onPress={handleGenerate}
      />
      {image && (
        <Image source={{ uri: image }} style={{ width: 200, height: 200 }} />
      )}
    </View>
  );
}
```
Image Editing
API Endpoint
```
POST https://toolkit.rork.com/images/edit/
Content-Type: application/json

{
  "prompt": "Change the background to a sunset on the beach",
  "images": [
    { "type": "image", "image": "base64..." }
  ],
  "aspectRatio": "16:9"  // optional
}
```

Response:

```json
{
  "image": {
    "base64Data": "...",
    "mimeType": "image/png",
    "aspectRatio": "16:9"
  }
}
```
Using the Hook
```typescript
import { useImageGeneration } from '@/contexts/AIContext';

function ImageEditor() {
  const { editImage, isEditing } = useImageGeneration();

  const handleEdit = async (originalImage: string) => {
    const result = await editImage(
      'Add a rainbow in the sky',
      [{ type: 'image', image: originalImage }],
      '16:9'
    );
    return `data:${result.image.mimeType};base64,${result.image.base64Data}`;
  };
  // ...
}
```
Speech-to-Text
API Endpoint
```
POST https://toolkit.rork.com/stt/transcribe/
Content-Type: multipart/form-data

FormData:
- audio: File (mp3, wav, m4a, etc.)
- language: string (optional)
```

Response:

```json
{
  "text": "Transcribed text",
  "language": "en"
}
```
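The multipart body can be assembled with the standard `FormData` API. A sketch using a `Blob` so the snippet also runs outside React Native (in RN you would typically append a `{ uri, name, type }` object for the file instead); the field names match the endpoint description above:

```typescript
// Build the multipart request body for the transcription endpoint.
// Field names ("audio", "language") match the docs; the filename is an
// illustrative placeholder.
function buildTranscriptionForm(audio: Blob, language?: string): FormData {
  const form = new FormData();
  form.append('audio', audio, 'recording.m4a');
  if (language) form.append('language', language);
  return form;
}
```

The returned form can be posted with `fetch(url, { method: 'POST', body: form })`; the runtime sets the multipart boundary header for you.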
Using the Hook
```typescript
import { useSpeechToText } from '@/contexts/AIContext';

function VoiceInput() {
  const { transcribeAudio, isLoading, error } = useSpeechToText();

  const handleTranscribe = async (audioUri: string) => {
    const result = await transcribeAudio(audioUri, 'en');
    console.log('Transcribed:', result.text);
  };
  // ...
}
```
Recording Component
```typescript
import { useState, useRef } from 'react';
import { Audio } from 'expo-av';
import { useSpeechToText } from '@/contexts/AIContext';

export function useVoiceRecording() {
  const [isRecording, setIsRecording] = useState(false);
  const recording = useRef<Audio.Recording | null>(null);
  const { transcribeAudio, isLoading } = useSpeechToText();

  const startRecording = async () => {
    try {
      await Audio.requestPermissionsAsync();
      await Audio.setAudioModeAsync({
        allowsRecordingIOS: true,
        playsInSilentModeIOS: true,
      });
      const { recording: newRecording } = await Audio.Recording.createAsync(
        Audio.RecordingOptionsPresets.HIGH_QUALITY
      );
      recording.current = newRecording;
      setIsRecording(true);
    } catch (error) {
      console.error('Failed to start recording:', error);
    }
  };

  const stopRecording = async (): Promise<string | null> => {
    if (!recording.current) return null;
    try {
      await recording.current.stopAndUnloadAsync();
      const uri = recording.current.getURI();
      recording.current = null;
      setIsRecording(false);
      if (uri) {
        const result = await transcribeAudio(uri);
        return result.text;
      }
      return null;
    } catch (error) {
      console.error('Failed to stop recording:', error);
      setIsRecording(false);
      return null;
    }
  };

  return {
    isRecording,
    isTranscribing: isLoading,
    startRecording,
    stopRecording,
  };
}
```
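When appending the recorded file to `FormData` yourself, you may need an audio MIME type derived from the recording URI. An illustrative helper (not SDK API) covering the formats the endpoint accepts:

```typescript
// Hypothetical helper: infer the audio MIME type from a recording URI's
// file extension, falling back to a generic binary type.
function audioMimeFromUri(uri: string): string {
  const ext = uri.split('.').pop()?.toLowerCase() ?? '';
  const map: Record<string, string> = {
    mp3: 'audio/mpeg',
    wav: 'audio/wav',
    m4a: 'audio/m4a',
  };
  return map[ext] ?? 'application/octet-stream';
}
```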
Practical Examples
1. Multi-Provider Chat
```typescript
import { useState } from 'react';
import { View, Text, TextInput, FlatList } from 'react-native';
import { Picker } from '@react-native-picker/picker';
import { useAI, useAIProviderSettings } from '@/contexts/AIContext';
// The Message type, Button component, and styles are assumed to be defined elsewhere in the app.

function AIChat() {
  const { generateText, isGeneratingText } = useAI();
  const { currentProvider, setProvider, availableProviders } = useAIProviderSettings();
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState('');

  const sendMessage = async () => {
    if (!input.trim()) return;
    const newMessages = [...messages, { role: 'user' as const, content: input }];
    setMessages(newMessages);
    setInput('');
    const response = await generateText({ messages: newMessages });
    setMessages([...newMessages, { role: 'assistant' as const, content: response }]);
  };

  return (
    <View style={{ flex: 1 }}>
      <Picker
        selectedValue={currentProvider}
        onValueChange={(value) => setProvider(value)}
      >
        {availableProviders.map(p => (
          <Picker.Item key={p.id} label={p.name} value={p.id} />
        ))}
      </Picker>
      <FlatList
        data={messages}
        renderItem={({ item }) => (
          <View style={item.role === 'user' ? styles.userMsg : styles.aiMsg}>
            <Text>{item.content as string}</Text>
          </View>
        )}
      />
      <View style={styles.inputRow}>
        <TextInput value={input} onChangeText={setInput} />
        <Button title="Send" onPress={sendMessage} loading={isGeneratingText} />
      </View>
    </View>
  );
}
```
2. Sentiment Analysis with Provider Selection
```typescript
import { useAI } from '@/contexts/AIContext';
import { z } from 'zod';
// The AIProvider type is assumed to be exported from the AI context.

const sentimentSchema = z.object({
  sentiment: z.enum(['positive', 'negative', 'neutral']),
  confidence: z.number().min(0).max(1),
  keywords: z.array(z.string()),
});

function SentimentAnalyzer() {
  const { generateObject } = useAI();

  const analyze = async (text: string, provider: AIProvider = 'openai') => {
    return generateObject(
      [{ role: 'user', content: `Analyze the sentiment: "${text}"` }],
      sentimentSchema,
      { provider, model: provider === 'openai' ? 'gpt-4o-mini' : undefined }
    );
  };
  // ...
}
```
3. Compare Providers
```typescript
import { useAI } from '@/contexts/AIContext';
// The AIProvider type is assumed to be exported from the AI context.

function ProviderComparison() {
  const { generateText } = useAI();

  const compareProviders = async (prompt: string) => {
    const providers: AIProvider[] = ['openai', 'anthropic', 'gemini'];
    const results = await Promise.allSettled(
      providers.map(provider =>
        generateText({
          messages: [{ role: 'user', content: prompt }],
          options: { provider },
        })
      )
    );
    return providers.map((provider, i) => {
      const r = results[i];
      return {
        provider,
        result: r.status === 'fulfilled' ? r.value : (r.reason as Error).message,
      };
    });
  };
  // ...
}
```
Implementation Checklist
API Keys Setup