Model: Select a model or use Auto to automatically choose the best model
System Prompt: Instructions that guide the model’s behavior
Prompts: The main input to the model, made up of system, user, and assistant prompts; these can include variables from previous steps
Response Format: Text or JSON (use JSON when expecting structured data)
Temperature: Controls randomness (lower values = more deterministic responses)
JSON Editor: Optional editor for configuring messages as JSON (see the sketch after this list)
Web Search: Allow the model to search the web before generating a response, giving it access to the latest information. Not all models support web search; this option only appears if the selected model supports it.
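If you use the JSON Editor, the configuration is a list of messages. The snippet below is a minimal sketch that assumes the editor follows the common role/content message shape; the field names and message text are illustrative, not Cortex's definitive schema.

[
  { "role": "system", "content": "You are a support assistant. Answer concisely." },
  { "role": "user", "content": "Summarize the customer's latest message." }
]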
You might see some Tools and Iteration options in the model step settings. These are related to Workflow Agents, which we will cover in the Agents section.
// When response format is Text
modelStep.output.message; // Text string response

// When response format is JSON
modelStep.output.message.propertyName; // Direct access to JSON properties
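As an example of the JSON case, suppose the response format is set to JSON and the prompt asks the model to return an object with sentiment and confidence fields. A later code step could then read those properties directly. The property names below are illustrative; they depend entirely on what your prompt asks the model to return.

// Illustrative only: assumes the prompt asked the model for JSON like
// { "sentiment": "negative", "confidence": 0.93 }
const sentiment = modelStep.output.message.sentiment;
const confidence = modelStep.output.message.confidence;

// Flag high-confidence negative results for follow-up
if (sentiment === "negative" && confidence > 0.8) {
  return { escalate: true, confidence };
}
return { escalate: false, confidence };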
Cortex allows you to connect your own AI provider accounts to use their models in your workflows. To connect your own AI provider:
Click “Connect Provider” in the model selection dropdown
Select a provider from the modal (OpenAI, Anthropic, Google AI, Azure AI, xAI, Open Router, Groq, etc.)
Enter the required configuration details (e.g., API Key)
Click “Test Connection” to verify your credentials
Click “Add Integration” to complete the setup
Once connected, all models from that provider will be available in the model selector. You can manage all your integrations, including AI providers, in the Cortex Settings.
// Transform data from previous steps
const items = httpStep.output.body.items;
const filteredItems = items.filter((item) => item.price > 100);

// Return any data type - becomes the step output
return {
  filtered: filteredItems,
  count: filteredItems.length,
};
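Because the returned value becomes the step output, later steps can reference it by the step's name. The snippet below is a sketch that assumes the code step above is named codeStep; substitute the actual name of your step.

// In a later code step; "codeStep" is an assumed name for the step above
const expensiveItems = codeStep.output.filtered;
return { summary: `Found ${codeStep.output.count} items over $100`, expensiveItems };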