AI & Intelligence Nodes
Integrate AI providers like OpenAI, Claude, Gemini, and Ollama into your workflows
DeepChain lets you add AI superpowers to your workflows. Whether you need text generation, analysis, classification, or custom reasoning, these nodes connect you to the best AI models available.
AI Agent Node
When to use: When you need AI to generate text, answer questions, analyze content, classify data, or reason through a problem.
The AI Agent Node is your gateway to multiple AI providers. One node, many models. You configure which provider and model to use, give it a prompt, and it returns the AI's response.
Configuration

```yaml
provider: openai | anthropic | google | ollama
model: gpt-4 | claude-3-sonnet | gemini-pro | llama2
prompt: "Summarize: {{ input.text }}"
system_instructions: "You are a helpful assistant."
temperature: 0.7
max_tokens: 1000
tools: []  # function-calling tool definitions (see below)
```
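If the provider supports function calling, `tools` takes a list of tool definitions. A minimal sketch, assuming an OpenAI-style function schema; the tool name and fields are hypothetical, and DeepChain's exact format may differ:

```yaml
tools:
  - name: get_order_status                # hypothetical tool
    description: "Look up an order's shipping status by order ID."
    parameters:                            # JSON Schema describing the arguments
      type: object
      properties:
        order_id:
          type: string
      required: [order_id]
```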
Supported AI Providers
| Provider | Best Models | Key Features | Cost |
|---|---|---|---|
| OpenAI | gpt-4o, gpt-4-turbo, gpt-3.5-turbo | Fast, reliable, function calling, JSON mode | $0.30-$30/1M tokens |
| Anthropic | claude-3-opus, sonnet, haiku | Long context (200K), very accurate | $0.80-$15/1M tokens |
| Google | gemini-pro, gemini-flash | Multimodal (images), fast | $0.075-$2/1M tokens |
| Ollama | llama2, mistral, neural-chat | Local, free, no API calls | Free (self-hosted) |
Example 1: Summarize Customer Feedback
When to use: Process lots of customer reviews or feedback quickly.
Incoming data:

```json
{
  "review": "The product is absolutely amazing! Love the design and quality. Only wish it came in more colors. Overall very happy with my purchase and would recommend to friends."
}
```

AI Agent Configuration:

```yaml
provider: openai
model: gpt-3.5-turbo
prompt: |
  Summarize this review in one sentence:
  {{ input.review }}
system_instructions: "You are a concise summarization expert. Be brief."
temperature: 0.3
max_tokens: 100
```

Output:

```json
{
  "summary": "Customer loves the product's design and quality but wishes for more color options."
}
```
Example 2: Classify Support Tickets
When to use: Automatically route support tickets by priority or category.
Incoming ticket:

```json
{
  "ticket_id": "TKT-001",
  "subject": "Payment failed on checkout",
  "description": "I tried to buy your premium plan but my credit card was declined. Error message says try again later."
}
```

AI Agent Configuration:

```yaml
provider: anthropic
model: claude-3-haiku
prompt: |
  Classify this support ticket:
  Subject: {{ input.subject }}
  Description: {{ input.description }}
  Return JSON with:
  - category: "billing", "technical", "account", "other"
  - priority: "critical", "high", "normal", "low"
  - confidence: 0-1
system_instructions: "You are a support ticket classifier. Return only valid JSON."
temperature: 0.1
max_tokens: 100
```

Output:

```json
{
  "category": "billing",
  "priority": "high",
  "confidence": 0.95
}
```
Then use an If or Switch node to route on the result: critical → alert a manager, high → respond within 1 hour, normal → next business day, and so on. For example:
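A minimal sketch of the routing step; the Switch node field names here are illustrative, not DeepChain's confirmed schema:

```yaml
# Switch node (hypothetical field names)
switch_on: "{{ ai_agent.output.priority }}"
cases:
  critical: alert_manager_branch
  high: one_hour_response_branch
  normal: next_day_branch
default: triage_queue_branch
```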
Example 3: Generate Email Response
When to use: Auto-generate personalized customer emails.
Incoming customer inquiry:

```json
{
  "customer_name": "Alice",
  "question": "Do you offer student discounts?",
  "tone": "friendly"
}
```

AI Agent Configuration:

```yaml
provider: openai
model: gpt-4o
prompt: |
  Write a {{ input.tone }} email response to this student inquiry:
  Customer: {{ input.customer_name }}
  Question: {{ input.question }}
  Email body only (no subject, no greeting). Keep under 150 words.
system_instructions: |
  You are a friendly customer service representative.
  Include relevant information about our student program.
temperature: 0.7
max_tokens: 200
```

Output:

```json
{
  "response": "Great question! Yes, we absolutely offer a 20% student discount on all annual plans. Just verify your enrollment status with a valid .edu email address or student ID. The discount applies immediately after verification. If you have any other questions, feel free to reach out. Thanks for choosing us!"
}
```
Example 4: Extract Data from Text (Structured Output)
When to use: Parse semi-structured text into clean data.
Incoming email:

```json
{
  "email_body": "Hi, I'd like to cancel my subscription. I've been a customer since January 2023. Please send confirmation to alice@example.com. Thanks!"
}
```

AI Agent Configuration:

```yaml
provider: anthropic
model: claude-3-opus
prompt: |
  Extract the following information from this email:
  {{ input.email_body }}
  Return JSON with:
  - action: "cancel", "update", "other"
  - customer_email: string
  - start_date: ISO date or null
system_instructions: "Extract only information explicitly mentioned. Return valid JSON."
temperature: 0.0
max_tokens: 100
```

Output:

```json
{
  "action": "cancel",
  "customer_email": "alice@example.com",
  "start_date": "2023-01-01"
}
```
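Even with `temperature: 0.0`, treat extracted fields as untrusted until checked. A minimal downstream validation, assuming the Code Node runs JavaScript/TypeScript (the function and field names are illustrative):

```typescript
// Reject extractions that don't match the schema we asked for.
const ALLOWED_ACTIONS = ["cancel", "update", "other"];

function validateExtraction(data: { action: string; customer_email: string }) {
  if (!ALLOWED_ACTIONS.includes(data.action)) {
    throw new Error(`Unexpected action: ${data.action}`);
  }
  // Cheap sanity check that the extracted email looks like an email.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(data.customer_email)) {
    throw new Error(`Invalid email: ${data.customer_email}`);
  }
  return data;
}
```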
Example 5: Local AI with Ollama (Free)
When to use: You want free AI without API calls or external dependencies.
Incoming data:

```json
{
  "user_input": "What's the capital of France?"
}
```

AI Agent Configuration:

```yaml
provider: ollama
model: mistral
prompt: "{{ input.user_input }}"
system_instructions: "You are a helpful assistant."
temperature: 0.7
max_tokens: 500
```
Ollama runs locally on your machine. No API costs, no rate limits, no external dependencies. Perfect for sensitive data or high-volume workflows!
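Before pointing the node at a local model, it helps to confirm Ollama is actually serving it. Ollama exposes an HTTP API on `localhost:11434` by default; a quick smoke test in TypeScript, assuming the `mistral` model has already been pulled:

```typescript
// Call the local Ollama server directly (default port 11434).
const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "mistral",
    prompt: "What's the capital of France?",
    stream: false, // return a single JSON object instead of a token stream
  }),
});
const { response } = await res.json();
console.log(response); // e.g. "The capital of France is Paris."
```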
Tip: Use `temperature: 0.0-0.3` for factual tasks (classification, extraction). Use `temperature: 0.7-1.0` for creative tasks (writing, brainstorming).
Cost Comparison
Scenario: Process 1,000 customer reviews (avg 200 tokens each)
| Provider | Model | Cost |
|---|---|---|
| OpenAI | gpt-3.5-turbo | ~$0.30 |
| Google | gemini-flash | ~$0.15 |
| Anthropic | claude-3-haiku | ~$0.80 |
| Ollama | mistral (local) | Free |
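These estimates follow directly from the token math: 1,000 reviews × ~200 tokens ≈ 0.2M tokens, so cost ≈ 0.2 × the model's blended per-million-token rate (the ~$0.30 gpt-3.5-turbo figure implies roughly $1.50/1M tokens). Prompt instructions and output tokens add a small overhead on top.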
Scenario: Extract data from 10,000 support tickets (avg 300 tokens)
| Provider | Model | Cost |
|---|---|---|
| OpenAI | gpt-4-turbo | ~$30 |
| Anthropic | claude-3-sonnet | ~$24 |
| Google | gemini-pro | ~$6 |
| Ollama | llama2 (local) | Free |
Learning Node
When to use: When you want to train a model on patterns in your data or make predictions.
The Learning Node handles pattern recognition and optimization. Train it on historical data, then use it to make predictions on new data.
Configuration

```yaml
mode: train | predict | optimize
model_type: classification | regression | clustering
```
Example: Predict Customer Churn
Training (one time):

```yaml
mode: train
model_type: classification
training_data:
  - features: { account_age: 12, monthly_spend: 150, support_tickets: 2, last_login: 5 }
    label: churned
  - features: { account_age: 24, monthly_spend: 500, support_tickets: 0, last_login: 1 }
    label: retained
```

Prediction (ongoing):

```yaml
mode: predict
model_type: classification
input_features: { account_age: 8, monthly_spend: 50, support_tickets: 10, last_login: 30 }
```

Returns: `{ prediction: "churned", confidence: 0.87 }`
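The prediction output pairs naturally with an If node for follow-up actions. A sketch with illustrative condition syntax, not DeepChain's confirmed schema:

```yaml
# If node (hypothetical field names and expression syntax)
condition: "{{ learning.prediction == 'churned' && learning.confidence > 0.8 }}"
true_branch: send_retention_offer
false_branch: continue_normal_flow
```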
Contextual Memory Node
When to use: When you want to remember previous interactions with a user (for chatbots, personalized workflows, etc.).
The Contextual Memory Node stores and retrieves context across workflow executions. Build stateful, context-aware workflows.
Configuration

```yaml
operation: store | retrieve | clear
memory_type: short_term | long_term
key: "conversation_{{ input.session_id }}"
```
Example: Chatbot with Memory
First message:

```json
{
  "session_id": "USER-123",
  "message": "My name is Alice and I like coffee"
}
```

Memory Node (store):

```yaml
operation: store
memory_type: long_term
key: "conversation_{{ input.session_id }}"
```

Second message (5 minutes later):

```json
{
  "session_id": "USER-123",
  "message": "Tell me a joke about my favorite drink"
}
```

Memory Node (retrieve):

```yaml
operation: retrieve
memory_type: long_term
key: "conversation_{{ input.session_id }}"
```

Returns: `{ name: "Alice", preference: "coffee", ... }`
Pass this to the AI Agent node so it knows Alice's context!
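One way to wire that together. The `{{ memory.value }}` placeholder is an assumption about how the Memory Node exposes retrieved data; check your node's actual output variable:

```yaml
# AI Agent node, fed with the retrieved memory
provider: openai
model: gpt-4o
prompt: |
  Known facts about this user: {{ memory.value }}

  User message: {{ input.message }}
system_instructions: "Use the known facts to personalize your reply."
temperature: 0.7
max_tokens: 300
```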
Workflow Assistant Node
When to use: When you want AI to suggest optimizations or improvements to your workflow.
The Workflow Assistant Node analyzes your workflow's execution data and suggests improvements: finding bottlenecks, optimizing loops, recommending caching, and more.
Configuration

```yaml
analysis_type: performance | security | cost
```

Example: Performance Analysis

```yaml
analysis_type: performance
workflow_execution_data: { total_duration: 45000, node_timings: {...}, ... }
```
Returns suggestions like:
- "HTTP Request to API-X is slow (8s). Consider caching results."
- "Loop has 10,000 iterations. Consider batch processing instead."
- "Database query has no index on user_id. Add index to speed up 40% of queries."
Common AI Patterns
Pattern 1: Batch Classification
```
Start (with array of items)
  ↓
Loop (for each item)
  ├─ AI Agent (classify item)
  ├─ Switch (route by classification)
  ├─ Database (store with classification)
  └─ Next item
  ↓
Log summary
```
Pattern 2: Intelligent Routing
```
Start (with user request)
  ↓
AI Agent (understand intent and extract info)
  ↓
Set (store extracted info as variables)
  ↓
Switch (route by detected intent)
  ├─ Path 1: Handle cancellation request
  ├─ Path 2: Handle billing inquiry
  └─ Path 3: Handle technical support
```
Pattern 3: Chatbot with Memory
```
Start (user message)
  ↓
Memory (retrieve conversation history)
  ↓
AI Agent (respond with context)
  ↓
Memory (store new message in history)
  ↓
Email/Notification (send response)
```
Prompt Engineering Tips
Good Prompts
- Be specific about format: "Return JSON with fields: name, email, priority"
- Give examples: "Examples: urgent→1, normal→2, low→3"
- Add constraints: "Response under 100 words", "Use only approved categories"
Bad Prompts
- Too vague: "Analyze this text" (analyze what? return what?)
- Missing format: "Respond about this email" (what format?)
- Ambiguous: "Is this good?" (good for what purpose?)
Temperature Settings
| Temperature | Use Case | Examples |
|---|---|---|
| 0.0 | Deterministic, factual | Classification, extraction, coding |
| 0.3-0.5 | Slightly creative, consistent | Summarization, data generation |
| 0.7 | Creative, balanced | Writing, brainstorming, emails |
| 1.0+ | Very creative, varied | Poetry, creative writing |
Cost Optimization Strategies
1. Start with cheaper models - Use `gpt-3.5-turbo` or `gemini-flash` instead of premium models. Only upgrade to `gpt-4` if quality isn't sufficient.
2. Use shorter prompts - Every token costs. "Summarize this" is cheaper than "Please analyze and create a detailed summary of..."
3. Cache results - Store AI responses using the Cache Storage Node. Avoid re-running expensive analysis.
4. Batch processing - Process multiple items in one prompt rather than one at a time: "Classify these 10 items" costs less than "Classify item 1" × 10 (see the sketch after this list).
5. Run locally - Use Ollama for internal workflows where cost matters more than cutting-edge performance.
6. Set max_tokens appropriately - Don't set 4000 tokens if you only need 200. Saves money and latency.
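For instance, a batched version of the Example 2 classifier handles ten tickets in one call instead of ten (the prompt wording is illustrative, and assumes `input.tickets` holds the serialized batch):

```yaml
provider: openai
model: gpt-3.5-turbo
prompt: |
  Classify each of the following support tickets.
  Return a JSON array with one object per ticket, each with fields:
  ticket_id, category, priority, confidence.

  Tickets:
  {{ input.tickets }}
system_instructions: "You are a support ticket classifier. Return only valid JSON."
temperature: 0.1
max_tokens: 1000
```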
Error Handling
AI can be unpredictable. Always add error handling:
```
AI Agent
  ├─ Success → Process response
  └─ Error → Log error, send alert, use fallback
```
Use try/catch in a Code Node (see the sketch after this list) to handle:
- Invalid JSON responses
- Timeouts
- Rate limit errors
- Malformed output
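A minimal sketch, assuming the Code Node runs JavaScript/TypeScript and receives the AI Agent's raw reply as a string (variable and type names are illustrative):

```typescript
// Parse and validate an AI response, falling back safely on bad output.
interface Classification {
  category: string;
  priority: string;
  confidence: number;
}

function parseAiResponse(raw: string): Classification {
  try {
    const parsed = JSON.parse(raw); // throws on invalid JSON
    // Guard against well-formed JSON that is missing expected fields.
    if (typeof parsed.category !== "string" || typeof parsed.confidence !== "number") {
      throw new Error("response JSON is missing expected fields");
    }
    return parsed as Classification;
  } catch (err) {
    // Log and fall back to a safe default so the workflow can continue.
    console.error("AI response could not be parsed:", err);
    return { category: "other", priority: "normal", confidence: 0 };
  }
}
```

Pairing this with a retry branch (re-prompting the model with the parse error) is a common refinement before falling back for good.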
Next Steps
- Need to make decisions? Use If/Switch nodes to route based on AI output
- Extracting structured data? Follow up with Data Parser or Code Node
- Sending results? Use Email or Notification nodes
- Want more context? Check out Contextual Memory Node above
- Building a chatbot? Combine AI Agent + Memory nodes for stateful conversations