This tutorial explains how to build a GPU‑ready container image using cotainr, install PyTorch and other Python packages inside it, and expose it as a custom Jupyter kernel in an HPC environment using Open OnDemand.
1. Create the Conda Environment File (env.yml)
Create a file named env.yml with the following content:
name: torch-gpu
channels:
  - pytorch
  - nvidia
  - conda-forge
  - defaults
dependencies:
  - python=3.10
  - pip
  - pytorch
  - torchvision
  - torchaudio
  - pytorch-cuda=12.1
  - ipykernel
  # optional packages
  - pip:
      - timm
      - torchinfo
      - thop
      - "flwr[simulation]"
      - flwr-datasets
Save it in your home directory, e.g. ~/tutorial/env.yml.
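Indentation matters here: if the pip packages are not nested under the `- pip:` entry, conda will try to resolve them as conda packages and fail. As a quick sanity check, the nesting can be verified with plain string parsing (a hypothetical helper, no PyYAML required):

```python
# Hypothetical helper: extract the pip sub-list from an environment file by
# indentation, to confirm the packages are nested under "- pip:".
def pip_packages(env_text: str) -> list[str]:
    pkgs, pip_indent = [], None
    for line in env_text.splitlines():
        stripped = line.strip()
        indent = len(line) - len(line.lstrip())
        if stripped == "- pip:":
            pip_indent = indent
            continue
        if pip_indent is not None:
            if stripped.startswith("- ") and indent > pip_indent:
                pkgs.append(stripped[2:].strip().strip('"'))
            elif stripped and not stripped.startswith("#"):
                break  # left the pip block
    return pkgs

env_text = """\
dependencies:
  - pip
  - pip:
      - timm
      - "flwr[simulation]"
"""
print(pip_packages(env_text))  # ['timm', 'flwr[simulation]']
```

If the pip entries are accidentally placed at the same indent level as `- pip:`, the helper returns an empty list, which is exactly the failure mode to watch for.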
2. Build the Apptainer .sif Image Using cotainr
Use a CUDA runtime base image so the container ships the CUDA user-space libraries matching pytorch-cuda=12.1:
cotainr build pytorchkernel.sif \
  --base-image=docker://nvidia/cuda:12.1.1-runtime-ubuntu22.04 \
  --accept-licenses \
  --conda-env=~/tutorial/env.yml
cotainr will install Miniforge, create the Conda environment torch-gpu, and install all required packages.
3. Create the Kernel Specification
Create the folder:
mkdir -p ~/.local/share/jupyter/kernels/torch-gpu-apptainer
Create kernel.json inside it:
{
  "argv": [
    "/home/YOURUSER/.local/share/jupyter/kernels/torch-gpu-apptainer/init_kernel.sh",
    "-f",
    "{connection_file}"
  ],
  "display_name": "PyTorch GPU (Apptainer)",
  "language": "python"
}
Replace YOURUSER with your username. The path must be absolute: Jupyter launches argv directly without a shell, so ~ is not expanded.
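To avoid editing YOURUSER by hand, the spec can also be generated from Python, which fills in the absolute path automatically (a sketch, assuming the kernel directory from the previous step):

```python
# Sketch: generate kernel.json with the absolute path to init_kernel.sh
# resolved from the current user's home directory.
import json
import os

kernel_dir = os.path.expanduser("~/.local/share/jupyter/kernels/torch-gpu-apptainer")
spec = {
    "argv": [
        os.path.join(kernel_dir, "init_kernel.sh"),
        "-f",
        "{connection_file}",
    ],
    "display_name": "PyTorch GPU (Apptainer)",
    "language": "python",
}
os.makedirs(kernel_dir, exist_ok=True)
with open(os.path.join(kernel_dir, "kernel.json"), "w") as f:
    json.dump(spec, f, indent=2)
```

Run it once on the login node; the resulting kernel.json is identical to the hand-written version above.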
4. Create the init_kernel.sh Script
Create the file:
~/.local/share/jupyter/kernels/torch-gpu-apptainer/init_kernel.sh
Add:
#!/bin/bash
# Forward Jupyter's arguments (-f {connection_file}) to an ipykernel running
# inside the container; --nv exposes the host GPU driver, --cleanenv avoids
# leaking host environment variables into the container.
SIF="/home/YOURUSER/tutorial/pytorchkernel.sif"
exec apptainer exec --cleanenv --nv "$SIF" conda run -n torch-gpu python -m ipykernel "$@"
Make it executable:
chmod +x ~/.local/share/jupyter/kernels/torch-gpu-apptainer/init_kernel.sh
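The executable bit is easy to forget, and without it the kernel fails to start with a "Permission denied" error. A self-contained sketch of the equivalent chmod from Python, using a temporary directory rather than your real kernel path:

```python
# Sketch: write a launcher script, set its executable bit, and verify it.
# Substitute your real kernel path in practice; YOURUSER is a placeholder.
import os
import stat
import tempfile

script = """#!/bin/bash
SIF="/home/YOURUSER/tutorial/pytorchkernel.sif"
exec apptainer exec --cleanenv --nv "$SIF" conda run -n torch-gpu python -m ipykernel "$@"
"""

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "init_kernel.sh")
with open(path, "w") as f:
    f.write(script)

# Equivalent of `chmod +x`: add execute bits for user, group, and other.
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
print(os.access(path, os.X_OK))  # True
```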
5. Launch JupyterLab via Open OnDemand
- Open your Open OnDemand portal.
- Go to Interactive Apps → JupyterLab.
- Start a session with GPU resources.
6. Create a New Notebook and Select the Custom Kernel
In JupyterLab:
- Open the Launcher.
- Choose “PyTorch GPU (Apptainer)”.
- Run a test:
import torch
print(torch.cuda.is_available())        # should print True on a GPU node
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
If everything works, your containerized kernel is ready.
Done!
You now have a reproducible, GPU-enabled Apptainer image and a custom Jupyter kernel fully integrated with Open OnDemand.