Selecting a provider
Go to Edit → Project Settings → Plugins → Node to Code and set the LLM Provider field to your preferred option. Each provider has its own settings section under Node to Code | LLM Services in the same panel, where you can enter API keys, choose a model, and adjust provider-specific options.

Cloud providers
Cloud providers offer the strongest models available today. They require an account and API key from the respective service, and each request is billed based on token usage.

OpenAI
GPT-4.1, o3, o4 Mini, and more. o4 Mini is recommended for the best price-to-performance ratio.
Anthropic Claude
Claude 4 Sonnet is the default provider and model — strong results across a wide range of Blueprint graphs.
Google Gemini
Gemini 2.5 Flash is recommended for the best price-to-performance ratio; Gemini 2.5 Pro for maximum capability.
DeepSeek
DeepSeek R1 is a cost-effective reasoning model. Strong results at a competitive price point.
Local providers
Local providers run models directly on your machine. No API key is required and no data is sent to external servers — ideal for proprietary codebases or air-gapped environments.

Ollama
Run open-source models locally via Ollama. Default model: qwen3:32b. Full configuration control, including context window, sampling parameters, and more.

LM Studio
Load any GGUF model in LM Studio and connect via its local OpenAI-compatible server. Default endpoint: http://localhost:1234.

Provider comparison
| Provider | API key required | Runs locally | Default model |
|---|---|---|---|
| OpenAI | Yes | No | o4 Mini |
| Anthropic | Yes | No | Claude 4 Sonnet |
| Gemini | Yes | No | Gemini 2.5 Flash Preview |
| DeepSeek | Yes | No | DeepSeek R1 |
| Ollama | No | Yes | qwen3:32b |
| LM Studio | No | Yes | qwen3-32b |
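LM Studio's local server speaks the OpenAI chat-completions format, so any OpenAI-style client can talk to it. Below is a minimal sketch of building such a request body, assuming the default endpoint above and the qwen3-32b model name from the comparison table; the `/v1/chat/completions` path follows the OpenAI convention, and the system prompt is purely illustrative:

```python
import json

# Assumed endpoint: LM Studio's default local server (http://localhost:1234)
# plus the OpenAI-style chat-completions path.
ENDPOINT = "http://localhost:1234/v1/chat/completions"


def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> str:
    """Serialize an OpenAI-style chat request body as JSON."""
    body = {
        "model": model,
        "messages": [
            # Illustrative system prompt, not the plugin's actual prompt.
            {"role": "system", "content": "You translate Blueprint graphs to C++."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }
    return json.dumps(body)


payload = build_chat_request("qwen3-32b", "Convert this Blueprint graph to C++.")
print(json.loads(payload)["model"])  # → qwen3-32b
```

POSTing this body to `ENDPOINT` with a JSON content type is all a local request takes; no API key header is needed, which is what makes the local providers suitable for air-gapped setups.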