The Model class represents a loaded ONNX Runtime GenAI model. It is the primary entry point for loading models and creating other components like tokenizers and generators.
Constructors
Model(string modelPath)
Creates a new Model instance from a model directory.
modelPath: Path to the directory containing the model files.
using Microsoft.ML.OnnxRuntimeGenAI;
using Model model = new Model("/path/to/model");
Model(Config config)
Creates a new Model instance from a configuration object.
config: Configuration object carrying model settings and execution provider options.
using Microsoft.ML.OnnxRuntimeGenAI;
using Config config = new Config("/path/to/model");
config.AppendProvider("cuda");
using Model model = new Model(config);
Methods
GetModelType()
Returns the type of the model (e.g., “phi3v”, “phi4mm”, “qwen2_5_vl”).
Returns: string - The model type identifier
string modelType = model.GetModelType();
Console.WriteLine($"Model type: {modelType}");
Dispose()
Releases the resources used by the model. Models implement IDisposable and should be disposed when no longer needed.
model.Dispose();
// Or use 'using' statement for automatic disposal
using Model model = new Model("/path/to/model");
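Because Model implements IDisposable, the using statement above is shorthand for a try/finally pattern. The sketch below illustrates standard C# disposal semantics rather than a GenAI-specific API; explicit Dispose() calls are useful when the model's lifetime does not fit a single lexical scope.

```csharp
using Microsoft.ML.OnnxRuntimeGenAI;

// Equivalent manual disposal pattern: the finally block guarantees the
// model's native resources are released even if an operation throws.
Model model = new Model("/path/to/model");
try
{
    string modelType = model.GetModelType();
    Console.WriteLine($"Model type: {modelType}");
}
finally
{
    model.Dispose();
}
```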
Complete Example
Here’s a complete example of loading a model and using it with a tokenizer and generator:
using Microsoft.ML.OnnxRuntimeGenAI;
string modelPath = "/path/to/model";
// Load model with default configuration
using Model model = new Model(modelPath);
// Get model type
string modelType = model.GetModelType();
Console.WriteLine($"Loaded model type: {modelType}");
// Create tokenizer from model
using Tokenizer tokenizer = new Tokenizer(model);
// Create generator params
using GeneratorParams generatorParams = new GeneratorParams(model);
generatorParams.SetSearchOption("max_length", 1024);
// Create generator
using Generator generator = new Generator(model, generatorParams);
// Model is automatically disposed when leaving the 'using' scope
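As a possible continuation of the example above, the generator can be driven token by token once input has been appended. This sketch assumes the Tokenizer.Encode, Generator.AppendTokenSequences, IsDone, GenerateNextToken, and GetSequence members of the C# API; the prompt template shown is a placeholder, since the correct template is model-specific.

```csharp
// Continuing from the example above: encode a prompt and run the
// generation loop until the model signals completion.
// NOTE: the prompt format below is a hypothetical placeholder; consult
// your model's documentation for its actual chat template.
using Sequences inputTokens = tokenizer.Encode("<|user|>Hello<|end|><|assistant|>");
generator.AppendTokenSequences(inputTokens);

while (!generator.IsDone())
{
    generator.GenerateNextToken();
}

// Decode the full output sequence (prompt plus generated tokens) to text
string output = tokenizer.Decode(generator.GetSequence(0));
Console.WriteLine(output);
```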
Example with Custom Configuration
Load a model with custom execution provider settings:
Source: examples/csharp/ModelChat/Program.cs:586-590
using Microsoft.ML.OnnxRuntimeGenAI;
string modelPath = "/path/to/model";
string executionProvider = "cuda";
// Create configuration
using Config config = new Config(modelPath);
config.ClearProviders();
config.AppendProvider(executionProvider);
// Create model from configuration
using Model model = new Model(config);
Console.WriteLine($"Model loaded with {executionProvider} provider");
See Also