# Execute and cache
MyST-NB can automatically run and cache notebooks contained in your project using jupyter-cache. Notebooks can either be run each time the documentation is built, or cached locally so that re-runs occur only when code cells have changed.
Execution and caching behaviour is controlled with configuration at a global or per-file level, as outlined in the configuration section. See the sections below for a description of each configuration option and their effect.
## Notebook execution modes
To trigger the execution of notebook pages, use the global `nb_execution_mode` configuration key, or the per-file `execution_mode` key:
| Mode | Description |
|---|---|
| `off` | Do not execute the notebook |
| `force` | Always execute the notebook (before parsing) |
| `auto` | Execute notebooks with missing outputs (before parsing) |
| `cache` | Execute notebook and store/retrieve outputs from a cache |
| `inline` | Execute the notebook during parsing (allows for variable evaluation) |
By default this is set to:
```python
nb_execution_mode = "auto"
```
This will only execute notebooks that are missing at least one output. If a notebook has all of its outputs populated, then it will not be executed.
To force the execution of all notebooks, regardless of their outputs, change the above configuration value to:
```python
nb_execution_mode = "force"
```
To cache execution outputs with jupyter-cache, change the above configuration value to:
```python
nb_execution_mode = "cache"
```
See Cache execution outputs for more information.
To execute notebooks inline during parsing, change the above configuration value to:
```python
nb_execution_mode = "inline"
```
This allows for the use of `eval` roles/directives to embed variables, evaluated from the execution kernel, inline in the Markdown.
See Inline variable evaluation (eval) for more information.
To turn off notebook execution, change the above configuration value to:
```python
nb_execution_mode = "off"
```
## Exclude notebooks from execution
To exclude certain file patterns from execution, use the following configuration:
```python
nb_execution_excludepatterns = ['list', 'of', '*patterns']
```
Any file that matches one of the items in `nb_execution_excludepatterns` will not be executed.
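As a concrete illustration (the patterns below are hypothetical, adapt them to your own project layout), you might skip everything in a drafts folder as well as any notebook marked as long-running:

```python
# conf.py -- hypothetical exclusion patterns
nb_execution_excludepatterns = [
    "drafts/*",       # skip every notebook in a drafts folder
    "*-long.ipynb",   # skip notebooks whose names mark them as long-running
]
```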
## Cache execution outputs
As mentioned above, you can cache the results of executing a notebook page by setting:
```python
nb_execution_mode = "cache"
```
In this case, when a page is executed, its outputs will be stored in a local database.
This allows you to be sure that the outputs in your documentation are up-to-date, while saving time by avoiding unnecessary re-execution.
It also allows you to store your `.ipynb` files (or their `.md` equivalent) in your `git` repository without their outputs, but still leverage a cache to save time when building your site.
**Tip:** You should only use this option when notebooks have deterministic execution outputs:

- You use the same environment to run them (e.g. the same installed packages)
- They run no non-deterministic code (e.g. random numbers)
- They do not depend on external resources (e.g. files or network connections) that change over time
When you re-build your site, the following will happen:

- Notebooks that have not seen changes to their code cells or metadata since the last build will not be re-executed. Instead, their outputs will be pulled from the cache and inserted into your site.
- Notebooks that have any change to their code cells will be re-executed, and the cache will be updated with the new outputs.
By default, the cache will be placed in the parent of your build folder.
Generally, this is in `_build/.jupyter_cache`, and it will also be specified in the build log, e.g.

```text
Using jupyter-cache at: ./docs/_build/.jupyter_cache
```
You may also specify a path to the location of a jupyter cache you’d like to use:
```python
nb_execution_cache_path = "path/to/mycache"
```
The path should point to an empty folder, or a folder where a jupyter cache already exists.
Once you have run the documentation build, you can inspect the contents of the cache with the following command:
```console
$ jcache notebook -p docs/_build/.jupyter_cache list
```
See jupyter-cache for more information.
## Execute with a different kernel name
If you require your notebooks to run with a kernel other than the one specified in the actual files, you can set global aliases, e.g.
```python
nb_kernel_rgx_aliases = {"oth.*": "python3"}
```
The mapping keys are regular expressions so, for example, `oth.*` will match any kernel name starting with `oth`.
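For instance, a mapping like the following (the kernel names are hypothetical) would redirect legacy and environment-specific kernel names to a single installed kernel:

```python
# conf.py -- hypothetical kernel-name aliases
nb_kernel_rgx_aliases = {
    "python2": "python3",       # run notebooks written for a python2 kernel on python3
    "conda-env-.*": "python3",  # map any conda-env-* kernel name to python3 as well
}
```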
## Executing in temporary folders
By default, the current working directory (cwd) that a notebook runs in will be the directory it is located in.
However, you can set `nb_execution_in_temp=True` in your `conf.py` to change this behaviour such that, for each execution, a temporary directory will be created and used as the cwd.
## Execution timeout
The execution of notebooks is managed by `nbclient`.
The `nb_execution_timeout` sphinx option defines the maximum time (in seconds) each notebook cell is allowed to run.
If the execution takes longer, an exception will be raised.
The default is 30 s, so in cases of long-running cells you may want to specify a higher value.
The timeout option can also be set to `None` or `-1` to remove any restriction on execution time.
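For example, a `conf.py` might raise the limit globally, or disable it altogether (the value below is just an illustration):

```python
# conf.py -- illustrative timeout settings
nb_execution_timeout = 120   # allow each cell up to 120 seconds
# nb_execution_timeout = -1  # alternatively, remove the limit entirely
```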
This global value can also be overridden per notebook by adding this to your notebook's metadata:
```json
{
    "metadata": {
        "execution": {
            "timeout": 30
        }
    }
}
```
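If you would rather not edit the JSON by hand, a small sketch using `nbformat` can set the same metadata from a script (the notebook file name here is hypothetical):

```python
# Sketch: set a per-notebook execution timeout with nbformat
# "analysis.ipynb" is a hypothetical file name
import nbformat

nb = nbformat.read("analysis.ipynb", as_version=4)
nb.metadata.setdefault("execution", {})["timeout"] = 120  # seconds per cell
nbformat.write(nb, "analysis.ipynb")
```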
## Raise errors in code cells
In some cases, you may want to intentionally show code that doesn’t work (e.g., to show the error message). You can achieve this at “three levels”:
1. Globally, by setting `nb_execution_allow_errors=True` in your `conf.py`.
2. Per notebook (overrides global), by adding this to your notebook's metadata:
```json
{
    "metadata": {
        "execution": {
            "allow_errors": true
        }
    }
}
```
3. Per cell, by adding a `raises-exception` tag to your code cell. This can be done via a Jupyter interface, or via the `{code-cell}` directive like so:
```{code-cell}
:tags: [raises-exception]
print(thisvariabledoesntexist)
```
Which produces:
```
print(thisvariabledoesntexist)
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
Cell In[1], line 1
----> 1 print(thisvariabledoesntexist)

NameError: name 'thisvariabledoesntexist' is not defined
```
## Error reporting: Warning vs. Failure
When an error occurs in a context where `nb_execution_allow_errors=False`, the default behaviour is for this to be reported as a warning.
This warning will simply be logged and not cause the build to fail unless `sphinx-build` is run with the `-W` option.
If you would like unexpected execution errors to cause a build failure rather than a warning, regardless of the `-W` option, you can achieve this by setting `nb_execution_raise_on_error=True` in your `conf.py`.
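For instance, a strict setup (shown purely as an illustration) would keep unexpected errors disallowed and turn any such error into an immediate build failure:

```python
# conf.py -- illustrative "strict" error handling
nb_execution_allow_errors = False   # unexpected errors are not tolerated
nb_execution_raise_on_error = True  # and any such error fails the build immediately
```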
## Execution statistics
As notebooks are executed, certain statistics are stored in a dictionary, and saved on the sphinx environment object in `env.metadata[docname]`.
You can access this in a post-transform in your own sphinx extensions (a minimal sketch is given at the end of this section), or use the built-in `nb-exec-table` directive:
```{nb-exec-table}
```
which produces:
| Document | Modified | Method | Run Time (s) | Status |
|---|---|---|---|---|
|  | 2024-10-02 11:45 | cache | 4.5 | ✅ |
|  | 2024-10-02 11:45 | cache | 2.16 | ✅ |
|  | 2024-10-02 11:45 | cache | 1.34 | ✅ |
|  | 2024-10-02 11:45 | cache | 4.17 | ✅ |
|  | 2024-10-02 11:45 | cache | 1.33 | ✅ |
|  | 2024-10-02 11:45 | cache | 2.38 | ✅ |
|  | 2024-10-02 11:45 | cache | 3.21 | ✅ |
|  | 2024-10-02 11:45 | cache | 2.4 | ✅ |
|  | 2024-10-02 11:45 | inline | 1.89 | ✅ |
|  | 2024-10-02 11:45 | cache | 3.01 | ✅ |
|  | 2024-10-02 11:45 | cache | 1.21 | ✅ |