LM Studio is free to use for local inference. Your Blueprint code and source files never leave your machine.
Setup
Download LM Studio
Go to lmstudio.ai and download the installer for your platform.
Load a model
Open LM Studio, search for a model (e.g., qwen3-32b), and download it. Once downloaded, load it into memory using the model selector.
Start the local server
In LM Studio, navigate to the Developer tab (or Local Server tab, depending on your version) and click Start Server. The server listens on http://localhost:1234 by default.
Configure in Unreal Engine
Open Edit → Project Settings → Plugins → Node to Code → LLM Services → LM Studio and set:
- Server Endpoint to your LM Studio server URL (default: http://localhost:1234)
- Model Name to the model identifier shown in LM Studio (default: qwen3-32b)
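Before configuring the plugin, you can confirm the server is actually reachable. LM Studio's local server exposes an OpenAI-compatible API, so a request to its /v1/models endpoint lists the currently loaded models. The sketch below is a standalone diagnostic using only the Python standard library; it is not part of the plugin:

```python
import json
import urllib.error
import urllib.request


def list_models(endpoint: str = "http://localhost:1234"):
    """Ask the LM Studio local server which models are loaded.

    Returns a list of model IDs, or None if the server is unreachable.
    """
    try:
        with urllib.request.urlopen(f"{endpoint}/v1/models", timeout=5) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError):
        return None


if __name__ == "__main__":
    models = list_models()
    if models is None:
        print("LM Studio server not reachable; did you click Start Server?")
    else:
        print("Loaded models:", models)
```

If your chosen model appears in the list, the Model Name setting below should use that exact identifier.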
Configuration
All LM Studio settings are under Node to Code → LLM Services → LM Studio in Project Settings.

| Setting | Default | Description |
|---|---|---|
| Server Endpoint | http://localhost:1234 | The base URL of your LM Studio local server. |
| Model Name | qwen3-32b | The model identifier to request. This must match the model loaded in LM Studio. |
| Prepended Model Command | (empty) | Text prepended to the start of every user message. |
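Like other Unreal project settings, these values are serialized to your project's config files. Assuming a typical settings class, the saved entries might look like the fragment below; the section name is hypothetical and will vary with the plugin's actual module and class names:

```ini
; Hypothetical section name -- check your project's Config directory
; for the real one written by the plugin.
[/Script/NodeToCode.NodeToCodeSettings]
ServerEndpoint=http://localhost:1234
ModelName=qwen3-32b
PrependedModelCommand=/no_think
```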
Prepended model command
Some models support special commands that control their behavior. Enter these in the Prepended Model Command field and the plugin will automatically insert the text at the beginning of each user message. A common example is /no_think, which disables extended thinking on reasoning models that support it.
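The prepending behavior can be sketched as follows. This is an illustrative assumption, not the plugin's actual implementation; in particular, the function name and the newline separator are made up for the example:

```python
def build_user_message(prepended_command: str, user_message: str) -> str:
    """Prepend an optional model command to a user message.

    Sketch only: the plugin's real joining behavior (separator,
    trimming) is an assumption here.
    """
    if not prepended_command:
        return user_message
    return f"{prepended_command}\n{user_message}"


# With Prepended Model Command set to /no_think, every request sent
# to the model starts with that command.
print(build_user_message("/no_think", "Convert this Blueprint graph."))
```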