Open the settings panel from the Unreal Engine menu bar: Edit → Project Settings → Plugins → Node to Code. Settings are stored in Saved/Config/[Platform]/NodeToCode.ini (the settings class uses the Config = NodeToCode specifier). API keys are stored separately — see API Key Management.
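As a rough sketch, the saved file contains plain key/value entries for the settings described below. The section name here is an assumption based on standard Unreal config conventions, not confirmed from the plugin source:

```ini
; Saved/Config/Windows/NodeToCode.ini (section name is illustrative)
[/Script/NodeToCode.NodeToCodeSettings]
Provider=Anthropic
AnthropicModel=Claude 4 Sonnet
TargetLanguage=C++
TranslationDepth=0
MinSeverity=Info
```

Note that no API key appears in this file; keys live in the user secrets file.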

LLM provider

Provider
enum
default:"Anthropic"
The active LLM provider. All translation requests are sent to this provider.
| Value | Description |
| --- | --- |
| Anthropic | Claude models via Anthropic API |
| OpenAI | GPT and o-series models via OpenAI API |
| Gemini | Gemini models via Google AI API |
| DeepSeek | DeepSeek models via DeepSeek API |
| Ollama | Local models via Ollama |
| LMStudio | Local models via LM Studio |

LLM services

Model and credential settings for each provider. Only the section matching your selected provider is used for translations.

Anthropic

AnthropicModel
enum
default:"Claude 4 Sonnet"
The Anthropic model used for translation.
| Display name | API identifier |
| --- | --- |
| Claude 4 Opus | claude-opus-4-20250514 |
| Claude 4 Sonnet | claude-sonnet-4-20250514 |
| Claude 3.7 Sonnet | claude-3-7-sonnet-20250219 |
| Claude 3.5 Sonnet | claude-3-5-sonnet-20241022 |
| Claude 3.5 Haiku | claude-3-5-haiku-20241022 |
API Key
string
Your Anthropic API key. Stored in the user secrets file, not in NodeToCode.ini. See API Key Management.

OpenAI

OpenAI_Model
enum
default:"o4 Mini"
The OpenAI model used for translation.
| Display name | API identifier |
| --- | --- |
| o4 Mini | o4-mini |
| GPT-4.1 | gpt-4.1 |
| o3 | o3 |
| o3 Mini | o3-mini |
| o1 | o1 |
| o1 Preview | o1-preview-2024-09-12 |
| o1 Mini | o1-mini-2024-09-12 |
| GPT-4o | gpt-4o-2024-08-06 |
| GPT-4o Mini | gpt-4o-mini-2024-07-18 |
o1 Preview and o1 Mini do not support system prompts. All other OpenAI models listed here, including o1, o3, o4 Mini, and GPT-4.1, do support system (developer) prompts.
API Key
string
Your OpenAI API key. Stored in the user secrets file. See API Key Management.

Gemini

Gemini_Model
enum
default:"Gemini 2.5 Flash Preview"
The Gemini model used for translation.
| Display name | API identifier |
| --- | --- |
| Gemini 2.5 Pro Preview | gemini-2.5-pro-preview-05-06 |
| Gemini 2.5 Flash Preview | gemini-2.5-flash-preview-05-20 |
| Gemini 2.0 Flash | gemini-2.0-flash |
| Gemini 2.0 Flash-Lite Preview | gemini-2.0-flash-lite-preview-02-05 |
| Gemini 1.5 Flash | gemini-1.5-flash |
| Gemini 1.5 Pro | gemini-1.5-pro |
| Gemini 2.0 Pro Exp 02-05 | gemini-2.0-pro-exp-02-05 |
| Gemini 2.0 Flash Thinking Exp 01-21 | gemini-2.0-flash-thinking-exp-01-21 |
API Key
string
Your Gemini API key. Stored in the user secrets file. See API Key Management.

DeepSeek

DeepSeekModel
enum
default:"DeepSeek R1"
The DeepSeek model used for translation.
| Display name | API identifier |
| --- | --- |
| DeepSeek R1 | deepseek-reasoner |
| DeepSeek V3 | deepseek-chat |
API Key
string
Your DeepSeek API key. Stored in the user secrets file. See API Key Management.

Ollama

OllamaModel
string
default:"qwen3:32b"
The model name to use with Ollama. Must match a model you have pulled locally (e.g., llama3.2, mistral, qwen3:32b).
Ollama requires no API key. The plugin connects to your local Ollama instance. See the Ollama provider guide for setup instructions.

LM Studio

LMStudioModel
string
default:"qwen3-32b"
The model name as configured in LM Studio.
LMStudioEndpoint
string
default:"http://localhost:1234"
The base URL of your LM Studio server. Change this if you have configured LM Studio to run on a non-default port or a remote host.
LMStudioPrependedModelCommand
string
default:""
Optional text prepended to the start of every user message. Useful for model-specific commands such as /no_think to disable extended thinking on reasoning models. Leave blank if not needed.
LM Studio requires no API key. See the LM Studio provider guide for setup instructions.

Code generation

TargetLanguage
enum
default:"C++"
The output language for Blueprint translations.
| Value | Description |
| --- | --- |
| C++ | Unreal Engine C++ |
| Python | Python |
| JavaScript | JavaScript |
| C# | C# |
| Swift | Swift |
| Pseudocode | Language-agnostic pseudocode |
TranslationDepth
integer
default:"0"
Maximum depth for nested graph translation. When a Blueprint calls a function defined in another graph, the plugin can recursively translate those referenced graphs up to this depth.
  • 0 — No nested translation. Only the selected graph is translated.
  • 1–5 — Translate referenced graphs up to this many levels deep.
Higher depth values significantly increase token usage and API cost. Start at 0 and increase only when you need the full call hierarchy translated.
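The depth semantics can be illustrated with a small breadth-first sketch. The function and the graph representation here are hypothetical stand-ins for the plugin's internal behavior, not its actual code:

```python
def collect_graphs(refs, root, depth):
    """Return the set of graphs a given TranslationDepth would cover.

    refs maps each graph name to the graphs it calls (a toy model of
    Blueprint function references). Depth 0 yields only the selected
    graph; depth N follows call references up to N levels deep.
    """
    selected = {root}
    frontier = [root]
    for _ in range(depth):
        next_frontier = []
        for graph in frontier:
            for callee in refs.get(graph, []):
                if callee not in selected:
                    selected.add(callee)
                    next_frontier.append(callee)
        frontier = next_frontier
    return selected
```

Because each extra level can pull in every graph the previous level references, the context sent to the LLM can grow quickly with depth, which is why the cost warning above applies.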
ReferenceSourceFilePaths
FFilePath[]
A list of .h and .cpp source files provided as context to the LLM. Use this to supply your project’s coding conventions, base classes, or utility headers so the translated output matches your codebase style. Accepts C++ files (*.h, *.cpp).
EstimatedReferenceFileTokens
integer
Read-only. Displays the estimated token count for all currently configured reference files, calculated as character count / 4. Use this to gauge the context overhead added to every translation request.
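The same chars-divided-by-four heuristic is easy to reproduce outside the editor, for example to budget reference files before adding them. A minimal sketch (this is not the plugin's code, just the stated formula):

```python
from pathlib import Path

def estimate_reference_tokens(paths):
    """Estimate token overhead the way the setting does: total characters / 4."""
    total_chars = sum(len(Path(p).read_text(encoding="utf-8")) for p in paths)
    return total_chars // 4
```

Remember this overhead is added to every translation request, so it compounds with TranslationDepth.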
CustomTranslationOutputDirectory
FDirectoryPath
If set, translated output files are saved to this directory. If left blank, translations are saved to Saved/NodeToCode/Translations/ inside your project folder.
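The fallback behavior amounts to a simple path resolution. A hypothetical helper mirroring the rule described above (the function name and signature are illustrative):

```python
from pathlib import Path

def resolve_output_dir(custom_dir: str, project_dir: str) -> Path:
    """Use the custom directory when set; otherwise fall back to the
    default Saved/NodeToCode/Translations folder inside the project."""
    if custom_dir:
        return Path(custom_dir)
    return Path(project_dir) / "Saved" / "NodeToCode" / "Translations"
```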

Logging

MinSeverity
enum
default:"Info"
The minimum log severity level written to the Output Log under the LogNodeToCode category.
| Value | Description |
| --- | --- |
| Debug | Verbose diagnostic messages |
| Info | General operational messages |
| Warning | Potential issues that do not stop translation |
| Error | Failures that prevented translation from completing |
| Fatal | Unrecoverable errors |
Set to Warning or Error to reduce log noise in production.

Pricing

The plugin tracks per-request cost estimates using input and output token pricing for each cloud provider. Pricing maps are pre-populated with current rates and can be edited if pricing changes.
Ollama and LM Studio are local providers and always report a cost of $0.00.
| Setting | Description |
| --- | --- |
| OpenAIModelPricing | Input/output cost per 1M tokens for each OpenAI model |
| AnthropicModelPricing | Input/output cost per 1M tokens for each Anthropic model |
| GeminiModelPricing | Input/output cost per 1M tokens for each Gemini model |
| DeepSeekModelPricing | Input/output cost per 1M tokens for each DeepSeek model |
Cost estimates appear in the translation output window after each request.
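The estimate follows the standard per-million-token formula. A sketch, assuming the plugin combines the input and output rates from the pricing maps in the usual way:

```python
def estimate_cost(input_tokens, output_tokens,
                  input_price_per_m, output_price_per_m):
    """Estimated cost in dollars, given per-1M-token input/output prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000
```

For example, a request with 500k input tokens and 200k output tokens at $3/$15 per 1M tokens costs an estimated $4.50.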

Theming

The integrated code editor window uses per-language syntax highlighting themes. See Code Editor Themes for the full list of built-in themes and instructions for creating custom themes.
| Setting | Language |
| --- | --- |
| CPPThemes | C++ |
| PythonThemes | Python |
| JavaScriptThemes | JavaScript |
| CSharpThemes | C# |
| SwiftThemes | Swift |
| PseudocodeThemes | Pseudocode |
Each setting is a map of theme names to color definitions (FN2CCodeEditorColors). You can add custom entries to any map directly in Project Settings.
