H2O lets you export trained models as self-contained artifacts that score data without requiring a running H2O cluster. Two formats are available: MOJO and POJO.

What is a MOJO?

A MOJO (Model Object, Optimized) is a binary zip archive that contains the model’s parameters and tree structures in a compact, optimized format. MOJOs are read at runtime by generic tree-walking code in h2o-genmodel.jar, so they have no size restriction and no compilation step. Supported algorithms: GBM, DRF, GLM, GAM, GLRM, Deep Learning, K-Means, PCA, Stacked Ensembles, SVM, Word2Vec, Isolation Forest, XGBoost, CoxPH, RuleFit, and all AutoML-generated models.
MOJOs support only the default and enum categorical encodings. MOJO predict cannot parse values enclosed in double quotes (for example, "2").
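Because MOJO predict rejects quoted fields, one workaround is to rewrite the input CSV so quoting is removed before scoring. The helper below is a hypothetical sketch using only the Python standard library; `strip_quoted_numerics` is not part of the H2O API.

```python
import csv
import io

def strip_quoted_numerics(raw_csv: str) -> str:
    """Rewrite CSV text so quoted fields like "2" become bare values.

    csv.reader removes the quoting on read; QUOTE_MINIMAL re-adds quotes
    only where they are actually required (e.g. embedded commas).
    """
    out = io.StringIO()
    writer = csv.writer(out, quoting=csv.QUOTE_MINIMAL, lineterminator="\n")
    for row in csv.reader(io.StringIO(raw_csv)):
        writer.writerow(row)
    return out.getvalue()

print(strip_quoted_numerics('AGE,RACE\n"68","2"\n'))  # AGE,RACE / 68,2
```

Run the cleaned file through MOJO predict as usual.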

What is a POJO?

A POJO (Plain Old Java Object) is generated Java source code that embeds the model’s logic directly. The generated class extends GenModel from h2o-genmodel.jar and can be compiled and loaded into any JVM without any H2O dependency beyond that jar. Limitations: POJOs are not supported for source files larger than 1 GB. They are also not available for GLRM, Stacked Ensembles, or Word2Vec models.

When to use each

| | MOJO | POJO |
| --- | --- | --- |
| Size restriction | None | Max ~1 GB source |
| Compilation required | No | Yes (javac) |
| Cold scoring speed | 10–40× faster than POJO | Slower on first JVM run |
| Hot scoring speed | 2–3× faster than POJO | ≈10% faster for small binomial/regression models |
| Algorithm support | Broader (includes ensembles) | Narrower |
| Format | Binary .zip | Java .java source file |
For new deployments, prefer MOJO unless you have a specific reason to require source-level transparency.

The h2o-genmodel.jar dependency

Both formats require h2o-genmodel.jar at compile time and runtime. This jar is the only external dependency needed to score in production — no H2O cluster, no Hadoop, no network. Download it from a running H2O instance:
curl http://localhost:54321/3/h2o-genmodel.jar > h2o-genmodel.jar
For XGBoost MOJOs, also add h2o-genmodel-ext-xgboost:
Maven pom.xml
<dependency>
    <groupId>ai.h2o</groupId>
    <artifactId>h2o-genmodel-ext-xgboost</artifactId>
    <version>3.x.x.x</version>
</dependency>
<dependency>
    <groupId>ai.h2o</groupId>
    <artifactId>h2o-genmodel</artifactId>
    <version>3.x.x.x</version>
</dependency>
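If your build uses Gradle instead of Maven, the same coordinates apply; this fragment is an equivalent sketch (pin the version to your H2O release, as in the Maven example):

```groovy
dependencies {
    implementation 'ai.h2o:h2o-genmodel:3.x.x.x'
    implementation 'ai.h2o:h2o-genmodel-ext-xgboost:3.x.x.x'
}
```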

Downloading a MOJO

Step 1: Train a model

Train any supported model in Python or R.
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init()
h2o_df = h2o.load_dataset("prostate.csv")
h2o_df["CAPSULE"] = h2o_df["CAPSULE"].asfactor()

model = H2OGradientBoostingEstimator(
    distribution="bernoulli",
    ntrees=100,
    max_depth=4,
    learn_rate=0.1
)
model.train(
    y="CAPSULE",
    x=["AGE", "RACE", "PSA", "GLEASON"],
    training_frame=h2o_df
)
Step 2: Download the MOJO and genmodel jar

Use the full absolute path — relative paths are not reliable here.
modelfile = model.download_mojo(path="~/experiment/", get_genmodel_jar=True)
print("Model saved to " + modelfile)
# Model saved to /Users/user/GBM_model_python_1475248925871_888.zip
The get_genmodel_jar=True flag downloads h2o-genmodel.jar into the same directory.
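With both files on disk you can also score a CSV directly from the command line using the PredictCsv tool bundled in h2o-genmodel.jar, without writing any Java. The file names below are placeholders; check `java -cp h2o-genmodel.jar hex.genmodel.tools.PredictCsv` for the exact flags in your version:

```bash
java -cp h2o-genmodel.jar hex.genmodel.tools.PredictCsv \
    --mojo GBM_model_python_1475248925871_888.zip \
    --input input.csv \
    --output predictions.csv
```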

Downloading a POJO

import h2o
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

h2o.init()
h2o_df = h2o.import_file(
    "http://s3.amazonaws.com/h2o-public-test-data/smalldata/prostate/prostate.csv.zip"
)
h2o_df["CAPSULE"] = h2o_df["CAPSULE"].asfactor()

model = H2OGeneralizedLinearEstimator(family="binomial")
model.train(
    y="CAPSULE",
    x=["AGE", "RACE", "PSA", "GLEASON"],
    training_frame=h2o_df
)

h2o.download_pojo(model)
You can also click Download POJO on any model card in the H2O Flow web UI at http://localhost:54321.

Scoring with a MOJO in Java

Step 1: Write the scoring program

Create main.java in your experiment directory:
main.java
import java.io.*;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.prediction.*;
import hex.genmodel.MojoModel;

public class main {
  public static void main(String[] args) throws Exception {
    EasyPredictModelWrapper model = new EasyPredictModelWrapper(
        MojoModel.load("GBM_model_R_1475248925871_74.zip")
    );

    RowData row = new RowData();
    row.put("AGE", "68");
    row.put("RACE", "2");
    row.put("DCAPS", "2");
    row.put("VOL", "0");
    row.put("GLEASON", "6");

    BinomialModelPrediction p = model.predictBinomial(row);
    System.out.println("Prediction: " + p.label);
    System.out.print("Class probabilities: ");
    for (int i = 0; i < p.classProbabilities.length; i++) {
      if (i > 0) System.out.print(",");
      System.out.print(p.classProbabilities[i]);
    }
    System.out.println();
  }
}
Step 2: Compile

javac -cp h2o-genmodel.jar -J-Xms2g main.java
Older examples also pass -J-XX:MaxPermSize=128m; that option was removed in Java 8 and should be omitted on modern JDKs.
Step 3: Run

java -cp .:h2o-genmodel.jar main
Expected output:
Prediction: 0
Class probabilities: 0.8059929056296662,0.19400709437033375
To enable Shapley contributions or leaf node assignments, configure EasyPredictModelWrapper with a Config object:
EasyPredictModelWrapper.Config config = new EasyPredictModelWrapper.Config()
    .setModel(MojoModel.load("model.zip"))
    .setEnableLeafAssignment(true)
    .setEnableContributions(true);
EasyPredictModelWrapper model = new EasyPredictModelWrapper(config);
These features add computation time.

Visualizing a MOJO from the command line

Use PrintMojo to convert a MOJO to a human-readable graph (requires Graphviz):
# Download h2o.jar from https://www.h2o.ai/download/
java -cp h2o.jar hex.genmodel.tools.PrintMojo \
    --tree 0 \
    -i path/to/model.zip \
    -o model.gv \
    -f 20 \
    -d 3
dot -Tpng model.gv -o model.png
To produce a PNG directly without Graphviz (requires Java 8+):
java -cp h2o-genmodel.jar hex.genmodel.tools.PrintMojo \
    --tree 0 \
    -i path/to/model.zip \
    -o tree.png \
    --format png
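For automation, the PrintMojo invocation can be assembled in a script. This sketch only builds the command list (the jar and model paths are placeholders); pass it to `subprocess.run(cmd, check=True)` once h2o-genmodel.jar and the MOJO are actually in place.

```python
def print_mojo_cmd(mojo_path: str, out_path: str, tree: int = 0) -> list[str]:
    """Build the PrintMojo command that renders one tree straight to PNG."""
    return [
        "java", "-cp", "h2o-genmodel.jar",
        "hex.genmodel.tools.PrintMojo",
        "--tree", str(tree),
        "-i", mojo_path,
        "-o", out_path,
        "--format", "png",
    ]

cmd = print_mojo_cmd("path/to/model.zip", "tree.png")
print(" ".join(cmd))
```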

Frequently asked questions

Are MOJOs and POJOs thread safe?
Yes. Both MOJOs and POJOs are fully thread safe and can be shared across threads.

Can I score with a POJO inside a Spark job?
Yes. The POJO contains only scoring math with no Spark or H2O dependencies. Use it inside a Spark map that calls the POJO’s predict method for each row and writes the result to a new column.

Can I convert a MOJO to ONNX?
A subset of MOJOs can be converted using the onnxmltools Python package. Currently only GBM models are supported, and poisson/gamma/tweedie distributions and categorical splits are not supported. See the onnxmltools project for examples.

Why does compiling my POJO fail?
This happens when the generated .java source file exceeds 1 GB. Switch to MOJO format, which has no size restriction.

Can I score outside the JVM, for example from .NET?
GBM has a C++ runtime with a C# wrapper for .NET, but this is not part of the open-source offering and requires a support contract.
