All tools return JSON responses. Long-running operations (training, pipeline execution, digital twin sync) return task IDs; use wait_for_task or get_work_task to poll for completion.

Projects (8 tools)
Projects are the top-level organizational unit in Mimir. Each project groups pipelines, ontologies, ML models, digital twins, and storage configurations.

list_projects
List all projects in the platform.
Parameters:
- status (optional): Filter by status (active, archived, draft)
Returns: project objects with id, name, description, version, status, tags, created_at, updated_at.
get_project
Get details of a specific project.
Parameters:
- id (required): Project ID
create_project
Create a new project.
Parameters:
- name (required): Project name (3-50 characters: alphanumerics, hyphens, underscores)
- description (optional): Human-readable description
- version (optional): Semantic version string (e.g., 1.0.0)
- tags (optional): Comma-separated list of tags (e.g., production,ml,iot)
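The name rule can be pre-checked client-side before calling create_project. The regex below is a sketch of the documented rule (3-50 alphanumerics, hyphens, underscores); the server's exact validation may differ.

```python
import re

# Client-side sanity check mirroring the documented create_project name
# rule: 3-50 characters, limited to alphanumerics, hyphens, underscores.
NAME_RE = re.compile(r"^[A-Za-z0-9_-]{3,50}$")

def is_valid_project_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None
```

For example, `iot-fleet_v2` passes, while `ab` (too short) and names containing spaces do not.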
update_project
Update an existing project’s metadata.
Parameters:
- id (required): Project ID
- description (optional): New description
- version (optional): New version (e.g., 2.0.0)
- status (optional): New status (active, archived, draft)
- tags (optional): Comma-separated replacement tag list
delete_project
Delete (archive) a project.
Parameters:
- id (required): Project ID
Returns: {"success": true}
add_project_component
Associate a component with a project.
Parameters:
- project_id (required): Project ID
- component_type (required): One of pipeline, ontology, ml_model, digital_twin, storage
- component_id (required): ID of the component to associate
Returns: {"success": true}
remove_project_component
Remove a component association from a project.
Parameters:
- project_id (required): Project ID
- component_type (required): One of pipeline, ontology, ml_model, digital_twin, storage
- component_id (required): ID of the component to disassociate
Returns: {"success": true}
clone_project
Clone an existing project under a new name.
Parameters:
- id (required): Source project ID
- new_name (required): Name for the cloned project
Pipelines (6 tools)
Pipelines define ordered sequences of data processing steps (ingestion → processing → output).

list_pipelines
List pipelines, optionally filtered by project.
Parameters:
- project_id (optional): Filter by project ID
get_pipeline
Get details of a specific pipeline.
Parameters:
- id (required): Pipeline ID
create_pipeline
Create a new pipeline.
Parameters:
- project_id (required): Project ID
- name (required): Pipeline name
- type (required): One of ingestion, processing, output
- steps (required): JSON array of pipeline steps
- description (optional): Pipeline description
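A sketch of a `steps` payload. The step schema is not specified in this reference, so the field names below (`name`, `plugin`, `config`) are illustrative assumptions only:

```python
import json

# Hypothetical step objects for create_pipeline's `steps` parameter.
# Field names here are guesses for illustration, not the platform schema.
steps = [
    {"name": "fetch", "plugin": "http_input",
     "config": {"url": "https://example.com/data.json"}},
    {"name": "normalize", "plugin": "json_transform",
     "config": {"flatten": True}},
    {"name": "persist", "plugin": "storage_output",
     "config": {"storage_id": "storage-123"}},
]

# The tool expects a JSON array, so serialize before passing as `steps`.
steps_json = json.dumps(steps)
```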
update_pipeline
Update an existing pipeline.
Parameters:
- id (required): Pipeline ID
- description (optional): New description
- steps (optional): Replacement JSON array of steps
- status (optional): New status (active, inactive, archived)
delete_pipeline
Delete a pipeline.
Parameters:
- id (required): Pipeline ID
Returns: {"success": true}
execute_pipeline
Enqueue a pipeline for asynchronous execution.
Parameters:
- id (required): Pipeline ID
Returns: a task ID; use wait_for_task or get_work_task to poll for completion.
Schedules (5 tools)
Schedules are cron-based triggers that enqueue pipelines on a recurring basis.

list_schedules
List schedules, optionally filtered by project.
Parameters:
- project_id (optional): Filter by project ID
get_schedule
Get details of a specific schedule.
Parameters:
- id (required): Schedule ID
create_schedule
Create a new cron-based schedule.
Parameters:
- project_id (required): Project ID
- name (required): Schedule name
- cron_schedule (required): Cron expression (e.g., "0 * * * *" for hourly)
- pipeline_ids (required): Comma-separated list of pipeline IDs to trigger
- enabled (optional): Enable immediately (default true)
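For reference, a few common cron_schedule values (standard five-field cron: minute, hour, day of month, month, day of week), plus a cheap client-side sanity check:

```python
# Common cron expressions for `cron_schedule`.
CRON_EXAMPLES = {
    "every hour":       "0 * * * *",
    "every 15 minutes": "*/15 * * * *",
    "daily at 02:30":   "30 2 * * *",
    "Mondays at 09:00": "0 9 * * 1",
}

def looks_like_cron(expr: str) -> bool:
    # Sanity check only: five whitespace-separated fields. Proper
    # validation needs a cron parser; the platform's own checks are
    # authoritative.
    return len(expr.split()) == 5
```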
update_schedule
Update an existing schedule.
Parameters:
- id (required): Schedule ID
- name (optional): New name
- cron_schedule (optional): New cron expression
- pipeline_ids (optional): Comma-separated replacement list of pipeline IDs
- enabled (optional): Enable/disable (true or false)
delete_schedule
Delete a schedule.
Parameters:
- id (required): Schedule ID
Returns: {"success": true}
ML Models (7 tools)
ML models are trained on data from storage backends and used for inference. Supported types: decision tree, random forest, regression, neural network.

list_ml_models
List ML models for a project.
Parameters:
- project_id (required): Project ID
get_ml_model
Get details of a specific ML model.
Parameters:
- id (required): ML model ID
create_ml_model
Create a new ML model definition.
Parameters:
- project_id (required): Project ID
- ontology_id (required): Ontology ID that defines the model’s domain
- name (required): Model name
- type (required): One of decision_tree, random_forest, regression, neural_network
- description (optional): Model description
- config (optional): JSON training config
update_ml_model
Update an existing ML model’s metadata.
Parameters:
- id (required): ML model ID
- name (optional): New name
- description (optional): New description
- status (optional): New status (created, training, trained, failed, deprecated)
delete_ml_model
Delete an ML model.
Parameters:
- id (required): ML model ID
Returns: {"success": true}
train_ml_model
Start asynchronous training for an ML model.
Parameters:
- model_id (required): ML model ID
- storage_ids (optional): Comma-separated list of storage config IDs to use as training data
Returns: a task ID for the training job. Training runs asynchronously as a worker job.
run_inference
Enqueue an ML inference job.
Parameters:
- model_id (required): Trained ML model ID
- storage_id (required): Storage config ID containing data to run inference on
Returns: a task ID; use wait_for_task or get_work_task to poll for completion.
recommend_model
Recommend the best ML model type for a project based on its ontology and data.
Parameters:
- project_id (required): Project ID
- ontology_id (required): Ontology ID describing the data domain
Digital Twins (7 tools)
Digital twins are live in-memory entity graphs initialized from an ontology and synchronized from storage. Queryable via SPARQL.

list_digital_twins
List digital twins, optionally filtered by project.
Parameters:
- project_id (optional): Filter by project ID
get_digital_twin
Get details of a specific digital twin.
Parameters:
- id (required): Digital twin ID
create_digital_twin
Create a new digital twin.
Parameters:
- project_id (required): Project ID
- ontology_id (required): Ontology ID that defines the twin’s entity model
- name (required): Digital twin name
- description (optional): Description
update_digital_twin
Update an existing digital twin’s metadata.
Parameters:
- id (required): Digital twin ID
- name (optional): New name
- description (optional): New description
- status (optional): New status (active, inactive, archived)
delete_digital_twin
Delete a digital twin.
Parameters:
- id (required): Digital twin ID
Returns: {"success": true}
sync_digital_twin
Enqueue a digital twin sync job to update entities from storage.
Parameters:
- id (required): Digital twin ID
Returns: a task ID; use wait_for_task or get_work_task to poll for completion.
query_digital_twin
Execute a SPARQL query against a digital twin’s entity graph.
Parameters:
- id (required): Digital twin ID
- sparql_query (required): SPARQL query string
- limit (optional): Maximum number of results (default 100)
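A minimal sparql_query value might look like the following. The `ex:` namespace and the class/property names are illustrative; use whatever vocabulary the twin’s ontology actually defines.

```python
# Example SPARQL for query_digital_twin: fetch sensors and their last
# readings. Namespace and terms are placeholders, not platform built-ins.
sparql_query = """\
PREFIX ex: <http://example.org/ontology#>
SELECT ?sensor ?temp
WHERE {
  ?sensor a ex:TemperatureSensor ;
          ex:lastReading ?temp .
}
LIMIT 50
"""
```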
Ontologies (6 tools)
Ontologies define entity types, properties, and relationships in OWL/Turtle format.

list_ontologies
List ontologies for a project.
Parameters:
- project_id (required): Project ID
get_ontology
Get details of a specific ontology.
Parameters:
- id (required): Ontology ID
create_ontology
Create a new ontology.
Parameters:
- project_id (required): Project ID
- name (required): Ontology name
- content (required): OWL/Turtle ontology content as a string
- description (optional): Description
- version (optional): Version string (default 1.0.0)
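As a sketch, a tiny OWL/Turtle document suitable for the `content` parameter. The namespace and class names are invented for illustration:

```python
# Minimal Turtle ontology for create_ontology's `content` parameter.
# The ex: namespace and the Vehicle/Sensor classes are examples only.
content = """\
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex: <http://example.org/fleet#> .

ex:Vehicle a owl:Class ; rdfs:label "Vehicle" .
ex:Sensor a owl:Class ; rdfs:label "Sensor" .
ex:hasSensor a owl:ObjectProperty ;
    rdfs:domain ex:Vehicle ;
    rdfs:range ex:Sensor .
"""
```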
update_ontology
Update an existing ontology.
Parameters:
- id (required): Ontology ID
- name (optional): New name
- description (optional): New description
- version (optional): New version (e.g., 2.0.0)
- content (optional): Replacement OWL/Turtle content
- status (optional): New status (active, deprecated)
delete_ontology
Delete an ontology.
Parameters:
- id (required): Ontology ID
Returns: {"success": true}
generate_ontology_from_text
Generate an OWL ontology by extracting entity types and relationships from a text description.
Parameters:
- project_id (required): Project ID
- name (required): Name for the generated ontology
- text (required): Domain description text
extract_and_generate_ontology
Extract entities from storage backends and generate an OWL ontology. Diffs against any existing active ontology and flags for review if changes are detected.
Parameters:
- project_id (required): Project ID
- storage_ids (required): Comma-separated list of storage config IDs
- ontology_name (required): Name for the generated ontology
- include_structured (optional): Include structured data extraction (default true)
- include_unstructured (optional): Include unstructured/text extraction (default true)
extract_from_storage
Extract entities and relationships from one or more storage backends without generating an ontology.
Parameters:
- project_id (required): Project ID
- storage_ids (required): Comma-separated list of storage config IDs
- include_structured (optional): Include structured data extraction (default true)
- include_unstructured (optional): Include unstructured/text extraction (default true)
Storage (10 tools)
Storage tools manage backend configurations and CIR (Common Internal Representation) data operations.

list_storage_configs
List storage configurations for a project.
Parameters:
- project_id (required): Project ID
get_storage_config
Get a specific storage configuration.
Parameters:
- id (required): Storage config ID
create_storage_config
Create a new storage configuration.
Parameters:
- project_id (required): Project ID
- type (required): Storage plugin type (filesystem, postgresql, mysql, mongodb, s3, redis, elasticsearch, neo4j)
- config (required): JSON object with plugin-specific config
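Illustrative `config` payloads for two plugin types. The exact keys each plugin accepts are not listed in this reference, so the field names below are assumptions:

```python
import json

# Hypothetical plugin-specific configs; key names are guesses for
# illustration, not the documented plugin schemas.
postgres_config = {
    "host": "db.internal",
    "port": 5432,
    "database": "mimir",
    "user": "mimir_app",
    "password": "REDACTED",
}
s3_config = {"bucket": "mimir-data", "region": "eu-west-1", "prefix": "raw/"}

# The tool takes `config` as a JSON object; serialize if your client
# passes parameters as strings.
config_json = json.dumps(postgres_config)
```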
update_storage_config
Update a storage configuration.
Parameters:
- id (required): Storage config ID
- config (optional): JSON object with updated plugin-specific config
- active (optional): Set active state (true or false)
Returns: {"success": true}
delete_storage_config
Delete a storage configuration.
Parameters:
- id (required): Storage config ID
Returns: {"success": true}
store_data
Store one or more CIR (Common Internal Representation) records.
Parameters:
- storage_id (required): Storage config ID
- data (required): JSON array of CIR objects
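The CIR record schema is not spelled out in this reference; the shape below (entity_type / id / properties) is a guess for illustration only:

```python
import json

# Hypothetical CIR records for store_data's `data` parameter.
# Field names are illustrative assumptions, not the documented CIR schema.
records = [
    {"entity_type": "sensor", "id": "sensor-001",
     "properties": {"unit": "celsius", "location": "hall-a"}},
    {"entity_type": "sensor", "id": "sensor-002",
     "properties": {"unit": "celsius", "location": "hall-b"}},
]
data = json.dumps(records)  # the tool expects a JSON array
```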
retrieve_data
Retrieve CIR records from a storage backend.
Parameters:
- storage_id (required): Storage config ID
- entity_type (optional): Filter by entity type
- limit (optional): Maximum number of records (default 100)
update_data
Update CIR records matching a query.
Parameters:
- storage_id (required): Storage config ID
- query (required): JSON CIRQuery to select records
- updates (required): JSON CIRUpdate with fields to set
delete_data
Delete CIR records matching a query.
Parameters:
- storage_id (required): Storage config ID
- query (required): JSON CIRQuery to select records for deletion
storage_health
Check whether a storage backend is reachable and healthy.
Parameters:
- storage_id (required): Storage config ID
Tasks (3 tools)
Work tasks represent asynchronous jobs (pipeline execution, ML training/inference, digital twin sync).

list_work_tasks
List work tasks in the queue.
Parameters:
- status (optional): Filter by status (queued, scheduled, spawned, executing, completed, failed, timeout, cancelled)
- type (optional): Filter by type (pipeline_execution, ml_training, ml_inference, digital_twin_update)
get_work_task
Get the current state of a specific work task.
Parameters:
- id (required): Work task ID
wait_for_task
Poll a work task until it reaches a terminal state (completed, failed, timeout, cancelled) or the timeout expires.
Parameters:
- id (required): Work task ID
- timeout_seconds (optional): Maximum seconds to wait (default 300, max 600)
Returns:
- If a terminal state is reached: the work task object with its final status
- If the timeout expires:
{"task": {...}, "timed_out": true, "message": "task did not reach a terminal state within the timeout"}
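The two wait_for_task outcomes can be distinguished by the timed_out flag. A small helper, assuming the response has already been parsed into a dict:

```python
# Separate the two wait_for_task outcomes described above: a bare task
# object on terminal completion, or a {"task", "timed_out", "message"}
# envelope when the timeout expires.
def unwrap_wait_result(result: dict) -> tuple[dict, bool]:
    """Return (task_object, timed_out)."""
    if result.get("timed_out"):
        return result["task"], True
    return result, False
```

For example, a terminal response comes back as-is with `timed_out=False`, while a timeout envelope yields the inner task and `timed_out=True`.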
System (1 tool)
health_check
Check the health status of the Mimir AIP platform.
Parameters: None

Common Patterns
Chaining Operations
Many workflows require chaining multiple tool calls: for example, create_project, then create_pipeline, then execute_pipeline, then wait_for_task on the returned task ID.

Handling Async Operations
Tools that return a task_id are asynchronous:
- execute_pipeline
- train_ml_model
- run_inference
- sync_digital_twin
Use wait_for_task, or poll get_work_task, to track progress.
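As a sketch, the execute-then-poll loop might look like this. `call_tool` is a hypothetical stand-in for however your MCP client invokes tools, and the `task_id` response field name is an assumption; both are stubbed below so the example is self-contained:

```python
import time

def call_tool(name: str, **params) -> dict:
    # Stand-in for a real MCP client call; replace with your client's
    # invocation method. Stubbed so this sketch runs on its own.
    if name == "execute_pipeline":
        return {"task_id": "task-42"}
    if name == "get_work_task":
        return {"id": params["id"], "status": "completed"}
    raise ValueError(f"unknown tool: {name}")

TERMINAL = {"completed", "failed", "timeout", "cancelled"}

def run_pipeline_and_wait(pipeline_id: str, poll_seconds: float = 2.0) -> dict:
    # Enqueue the pipeline, then poll get_work_task until the task
    # reaches a terminal state.
    task_id = call_tool("execute_pipeline", id=pipeline_id)["task_id"]
    while True:
        task = call_tool("get_work_task", id=task_id)
        if task["status"] in TERMINAL:
            return task
        time.sleep(poll_seconds)
```

In practice a single wait_for_task call replaces the polling loop for waits under its 600-second cap.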
Error Handling
All tools return errors as a JSON error message. Common errors:
- "id is required": Missing required parameter
- "project not found": Referenced resource doesn’t exist
- "config must be valid JSON": Malformed JSON parameter
- "storage backend unreachable": External system error
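A defensive wrapper for these responses. The envelope assumed here (a JSON object carrying an "error" key) is an inference from the messages above, not a documented schema:

```python
# Raise on error responses so failures surface immediately instead of
# being treated as data. Assumes errors arrive as {"error": "<message>"}.
def check_response(response: dict) -> dict:
    if "error" in response:
        raise RuntimeError(f"tool call failed: {response['error']}")
    return response
```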