Make lightweight AI requests with direct model access: cheaper than an AI agent, but without its built-in context.
| Feature | What it does |
| --- | --- |
| Output References | Use the AI response as input for other actions in your workflow |
| Model Selection | Choose different models based on task complexity and cost |
| Temperature Control | Fine-tune creativity vs. consistency for different use cases |
| Token Management | Control costs by limiting response length |
| Batch Processing | Make multiple LLM calls for different data points |
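Concretely, a direct call boils down to a single request payload that combines these settings. The sketch below builds one in the common OpenAI-style chat-completions shape; the model name and parameter values are illustrative assumptions, not settings prescribed by this page.

```python
def build_llm_request(system_prompt, user_input, *,
                      model="gpt-4o-mini",   # illustrative model name (assumption)
                      temperature=0.2,       # lower = more consistent, higher = more creative
                      max_tokens=150):       # caps response length, and therefore cost
    """Build a chat-completion-style request payload (OpenAI-like shape, assumed)."""
    return {
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
    }

payload = build_llm_request(
    "Classify the sentiment of the text as positive, negative, or neutral.",
    "The new release fixed every bug I reported.",
)
```

Because the payload is plain data, the same builder works whether you send it through an SDK or a raw HTTP request, and it makes the cost levers (model, temperature, max tokens) explicit at the call site.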
- **Write Clear System Prompts**: With no built-in context, the model only knows what you tell it; spell out the role, output format, and constraints in the system prompt.
- **Consider Context Trade-offs**: Direct calls don't carry agent memory or tools, so include any background the task needs in the prompt itself.
- **Optimize for Cost**: Pick a smaller model for simple tasks and cap response length with token limits.
- **Test Different Temperatures**: Use low temperatures for consistent, factual output and higher ones for creative tasks.
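Batch processing and output references combine naturally: make one lightweight call per data point, then reference the collected outputs in a follow-up call. The sketch below assumes a hypothetical `call_llm` helper standing in for your provider's SDK; the prompts and review strings are invented for illustration.

```python
def call_llm(prompt, temperature=0.0, max_tokens=60):
    """Hypothetical stand-in for a direct model call; swap in your provider's SDK."""
    # A real implementation would send `prompt` to the model and return its text.
    return f"summary of: {prompt[:30]}"

# Batch processing: one lightweight call per data point.
reviews = ["Great battery life", "Screen cracked on day one", "Does the job"]
summaries = [call_llm(f"Summarize in five words: {r}") for r in reviews]

# Output reference: feed the batch results into a follow-up call.
themes = call_llm("List common themes in: " + "; ".join(summaries))
```

Keeping each per-item call short (low `max_tokens`, small model) and reserving the larger call for the aggregation step is a common way to hold down cost across a batch.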