@mo.cache caches a function's return value based on the function's arguments, closed-over values, and the notebook code.
Usage
Function Decorator
Context Manager
Parameters
- pin_modules: If True, the cache will be invalidated if module versions differ.
- loader: The loader to use for the cache. Defaults to MemoryLoader.
- name: The name of the cache, used to set the saving path. To manually invalidate the cache, change the name.
Benefits over functools.cache
mo.cache is similar to functools.cache, but with three key benefits:
- mo.cache persists its cache even if the cell defining the cached function is re-run, as long as the code defining the function and its ancestors (excluding comments and formatting) has not changed.
- mo.cache keys on closed-over values in addition to function arguments, preventing the accumulation of hidden state associated with functools.cache.
- mo.cache does not require its arguments to be hashable (only pickleable), meaning it can work with lists, sets, NumPy arrays, PyTorch tensors, and more.
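The closed-over-values point can be seen with a small stdlib-only comparison: functools.cache keys only on arguments, so it silently returns a stale result after a closed-over value changes, whereas mo.cache would recompute. The names here are illustrative:

```python
import functools

multiplier = 2


@functools.cache
def scaled(x: int) -> int:
    # functools.cache keys only on x, not on the closed-over multiplier.
    return x * multiplier


first = scaled(10)   # 20
multiplier = 3
second = scaled(10)  # still 20: hidden stale state, not 30

print(first, second)
```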
mo.cache obtains these benefits at the cost of slightly higher overhead than functools.cache, so it is best used for expensive functions.
Like functools.cache, mo.cache is thread-safe.
The cache has an unlimited maximum size. To limit the cache size, use @mo.lru_cache. mo.cache is slightly faster than mo.lru_cache, but in most applications the difference is negligible.
Async Functions
mo.cache automatically detects and supports async functions:
Context Manager
The mo.cache context manager lets you delimit a block of code in which variables will be cached to memory when they are first computed.
By default, the cache is stored in memory and is not persisted across kernel runs. For persistent caching, use mo.persistent_cache.