The Node to Code button is not visible in the Blueprint Editor toolbar
Translation fails with an API error
Check the following in order:
- Verify your API key — Go to Edit → Project Settings → Node to Code → LLM Services → [Provider] → API Key and confirm the key is entered correctly with no extra spaces.
- Verify the correct provider is selected — The active provider in LLM Provider must match the API key you entered.
- Check your internet connection — Cloud providers (OpenAI, Anthropic, Gemini, DeepSeek) require an active internet connection.
- Check your API account — Ensure your account has credits and the selected model is available on your plan.
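Extra whitespace pasted along with a key is a frequent culprit. A quick standalone check (a hypothetical helper, not part of the plugin) can flag it before you re-enter the key:

```python
def check_api_key(key: str) -> list[str]:
    """Return a list of likely problems with a pasted API key."""
    problems = []
    if not key:
        problems.append("key is empty")
    if key != key.strip():
        problems.append("key has leading or trailing whitespace")
    if any(ch.isspace() for ch in key.strip()):
        problems.append("key contains embedded whitespace")
    return problems

print(check_api_key(" sk-example-123 "))  # flags the surrounding spaces
```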
Set the logging Min Severity to `Info` and check the Output Log in the Unreal Editor for the full error message.
Translation returns empty or malformed output
This can happen when the Blueprint is very complex or the model has trouble with the schema. Try these steps:
- Reduce translation depth — Set Translation Depth to `0` (default). Higher depths send significantly more tokens and can exceed context windows.
- Test with a simpler Blueprint — Try a small function with just a few nodes to confirm the pipeline works.
- Switch to a more capable model — For complex Blueprints, try Claude 4 Sonnet or Gemini 2.5 Pro instead of a smaller/cheaper model.
- Check for unsupported node types — Uncommon node types may produce unexpected output. See Supported Node Types.
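If you capture the raw response (for example from the Output Log), a small triage script can tell an empty reply apart from a malformed one. This sketch assumes the response body is JSON, which may not match the plugin's actual schema:

```python
import json

def triage_response(raw: str) -> str:
    """Classify a raw LLM response body as empty, malformed, or parseable."""
    if not raw.strip():
        return "empty"
    try:
        json.loads(raw)
        return "parseable JSON"
    except json.JSONDecodeError:
        return "malformed"

print(triage_response(""))                        # empty
print(triage_response('{"code": "int x = 1;"'))   # malformed (unclosed brace)
print(triage_response('{"code": "int x = 1;"}'))  # parseable JSON
```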
Translation is slow or times out
LLM responses can take time, especially for large Blueprints or reasoning models. The request timeout is set to 3600 seconds (1 hour) to accommodate complex translations.

To speed things up:
- Reduce translation depth — Each additional depth level can multiply token count.
- Select fewer nodes — Translate a subset of your Blueprint at a time rather than the entire graph.
- Use a faster model — Gemini 2.5 Flash and o4 Mini are optimized for speed and cost without sacrificing quality.
- Remove large reference files — Large reference source files increase prompt size. Check Estimated Reference File Tokens in settings.
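The depth effect compounds: if each level pulls in roughly a constant factor more graph data, token count grows exponentially. A toy illustration (the fan-out factor is illustrative, not measured from the plugin):

```python
def rough_tokens(base_tokens: int, depth: int, fanout: float = 3.0) -> int:
    """Toy model: each depth level multiplies the Blueprint data sent."""
    return int(base_tokens * fanout ** depth)

for depth in range(4):
    print(depth, rough_tokens(2_000, depth))
# depth 0 stays at 2000 tokens; depth 3 already reaches 54000
```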
Token usage and costs are higher than expected
Several settings directly impact cost:
- Translation Depth — Each level multiplies the amount of Blueprint data sent. Keep at `0` unless you specifically need nested translation.
- Reference Source Files — Large `.h`/`.cpp` files add significant tokens. The Estimated Reference File Tokens field in settings shows the current total. Remove files you don't need.
- Model selection — Reasoning models (o1, o3, DeepSeek R1) use more output tokens. For most translations, o4 Mini, Gemini 2.5 Flash, or Claude 4 Sonnet offer the best cost-to-quality ratio.
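To sanity-check the Estimated Reference File Tokens figure yourself, a common rule of thumb for source code is roughly four characters per token; real tokenizers vary by model, so treat this as a rough estimate only:

```python
from pathlib import Path

CHARS_PER_TOKEN = 4  # rough rule of thumb; actual tokenizers vary by model

def estimate_reference_tokens(paths) -> int:
    """Approximate the prompt-token cost of reference source files by size."""
    total_chars = 0
    for p in map(Path, paths):
        if p.exists():
            total_chars += len(p.read_text(errors="ignore"))
    return total_chars // CHARS_PER_TOKEN

# A 12,000-character header is roughly 3,000 tokens under this heuristic.
```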
Ollama is not connecting
Confirm Ollama is running
Open a terminal and run `ollama serve`. If it is already running, you will see a message indicating the server is active.

Pull the model

Run `ollama pull qwen3:32b` (or whichever model you have configured) to ensure the model is downloaded locally.

Verify the endpoint

In Project Settings → Node to Code → LLM Services → Ollama, confirm the host and port match the Ollama server (default: `http://localhost:11434`).
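You can also probe the endpoint from outside the editor. A minimal reachability check using only the Python standard library (the URL is the default from the settings above; `/api/tags` is Ollama's endpoint for listing installed models):

```python
import urllib.error
import urllib.request

def endpoint_reachable(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at `url` within `timeout` seconds."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server answered, even if with an error status
    except (urllib.error.URLError, OSError):
        return False  # connection refused, DNS failure, or timeout

# Ollama's default local endpoint; lists installed models when reachable
print(endpoint_reachable("http://localhost:11434/api/tags"))
```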
LM Studio is not connecting
Open LM Studio and load a model
Launch LM Studio, go to the My Models tab, and load the model you want to use.
Verify the endpoint
The default server endpoint is `http://localhost:1234`. Confirm this matches the Server Endpoint setting in Project Settings → Node to Code → LLM Services → LM Studio.
The plugin is not available on my platform
Node to Code supports Win64 and Mac only; this is defined in NodeToCode.uplugin. It will not load on Linux, Android, iOS, or console platforms. If you need to ship to those platforms, note that the plugin is editor-only and is not included in packaged builds regardless of platform.
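For reference, the restriction looks roughly like this in the plugin descriptor (a sketch using Unreal's .uplugin module fields; the exact contents of NodeToCode.uplugin are assumed here, not copied):

```json
{
  "Modules": [
    {
      "Name": "NodeToCode",
      "Type": "Editor",
      "LoadingPhase": "Default",
      "PlatformAllowList": ["Win64", "Mac"]
    }
  ]
}
```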
Enabling detailed logging
To see full debug output in the Unreal Output Log:

- Go to Edit → Project Settings → Node to Code → Logging
- Set Min Severity to `Info`
- Reproduce the issue and check the Output Log (Window → Output Log) for `[NodeToCode]` entries
Getting more help
Discord Community
Ask questions and get help from the community and the developer
GitHub Issues
Report bugs or request features on GitHub