Linux (conda): ImportError “undefined symbol: iJIT_NotifyEvent” when importing PyTorch in a conda env (fix without VTune)

Context
On Ubuntu 25.04 (but this likely applies to other distros), inside an Anaconda/Miniconda environment, importing PyTorch (2.5.1, CUDA 12.1 build) failed with:

ImportError: .../torch/lib/libtorch_cpu.so: undefined symbol: iJIT_NotifyEvent

Outside conda (system Python/pip), PyTorch imported fine. So this looked conda-specific rather than a general PyTorch bug.

Environment
• OS: Ubuntu 25.04
• GPU: NVIDIA GeForce RTX 4070 Ti SUPER
• Driver: recent 570.xx
• Conda env: Python 3.11, PyTorch 2.5.1, pytorch-cuda=12.1
• Symptom arises at import time when torch tries to load libtorch_cpu.so

Diagnosis (short)
The symbol iJIT_NotifyEvent (and friends like iJIT_GetNewMethodID) comes from Intel’s ITT API (VTune). In this conda env, libtorch_cpu.so expected those symbols but conda did not provide libittnotify.so at runtime. Installing “ittapi” from conda-forge didn’t surface a usable libittnotify.so in the env’s lib path, so the dynamic loader still couldn’t resolve the symbols.

Two approaches worked:

  1. Preferred: if your conda package provides a proper libittnotify.so, ensure it’s on the loader path when you import torch.

  2. Practical fallback: build a tiny no-op (stub) libittnotify.so that defines the missing symbols (safe unless you actually use VTune/ITT profiling), and preload it for the env.

What I did (reproducible steps)

  1. Make sure your conda env is active

    conda activate ml

  2. (Optional) Check there’s no execstack problem in the env’s libs

    find "$CONDA_PREFIX/lib" -maxdepth 1 -type f -name "*.so" -print0 | xargs -0 execstack -q | grep '^X ' || echo "No execstack"

If you see any X-flagged libs (rare nowadays), clear them:

execstack -c "$CONDA_PREFIX/lib/libtorch_cpu.so"
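If `execstack` isn't installed, `readelf` from binutils answers the same question: an executable stack shows up as a `GNU_STACK` program header with an `E` in its flags. A quick sketch (`/bin/ls` is just an arbitrary example binary):

```shell
# Inspect the GNU_STACK program header; flags "RW" mean non-executable,
# "RWE" would mean an executable stack.
readelf -lW /bin/ls | grep GNU_STACK
```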

  3. Try the official ITT package first (may be sufficient in some envs)

conda install -c conda-forge ittapi -y

Then look for the actual library:

find "$CONDA_PREFIX" -type f -name "libittnotify.so*" 2>/dev/null

• If you find it under $CONDA_PREFIX/lib, try import torch.
• If you don’t find it (or import still fails), proceed to the next step and build the stub.

  4. Build a minimal stub libittnotify.so inside the env

Create a tiny C file with the required symbols (all no-ops):

cat > "$CONDA_PREFIX/lib/itt_stub.c" <<'EOF'
/* No-op stubs for the ITT/JIT-profiling symbols that libtorch_cpu.so
 * references. Safe as long as you don't use VTune/ITT profiling.
 * Empty parameter lists keep this valid C in any standard mode; the
 * dynamic linker matches symbol names only, not signatures. */
int iJIT_NotifyEvent() { return 0; }
int iJIT_NotifyEventW() { return 0; }
int iJIT_IsProfilingActive(void) { return 0; }
unsigned int iJIT_GetNewMethodID(void) { return 1; }
unsigned int iJIT_GetNewMethodIDEx(void) { return 1; }
int iJIT_NotifyEventStr() { return 0; }
int iJIT_NotifyEventEx() { return 0; }
EOF

Compile it:

gcc -shared -fPIC -O2 -o "$CONDA_PREFIX/lib/libittnotify.so" "$CONDA_PREFIX/lib/itt_stub.c"

Verify it’s there:

ls -l "$CONDA_PREFIX/lib/libittnotify.so"

  5. Make the env automatically preload the stub on activation

Create conda activation hooks inside the env:

mkdir -p "$CONDA_PREFIX/etc/conda/activate.d" "$CONDA_PREFIX/etc/conda/deactivate.d"

Activation hook:

cat > "$CONDA_PREFIX/etc/conda/activate.d/itt_preload.sh" <<'EOF'
export _OLD_LD_PRELOAD="${LD_PRELOAD:-}"
export LD_PRELOAD="$CONDA_PREFIX/lib/libittnotify.so${LD_PRELOAD:+:$LD_PRELOAD}"
EOF

Deactivation hook:

cat > "$CONDA_PREFIX/etc/conda/deactivate.d/itt_preload.sh" <<'EOF'
export LD_PRELOAD="$_OLD_LD_PRELOAD"
unset _OLD_LD_PRELOAD
EOF
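The save/prepend/restore dance the two hooks perform can be simulated anywhere with throwaway paths (no conda needed; LD_PRELOAD is unset again at the end so nothing is actually preloaded):

```shell
export LD_PRELOAD="/existing/lib.so"   # pretend something was already preloaded
# activate.d: remember the old value, then prepend the stub
export _OLD_LD_PRELOAD="${LD_PRELOAD:-}"
export LD_PRELOAD="/fake/env/lib/libittnotify.so${LD_PRELOAD:+:$LD_PRELOAD}"
after_activate="$LD_PRELOAD"
# deactivate.d: restore exactly what was there before
export LD_PRELOAD="$_OLD_LD_PRELOAD"
unset _OLD_LD_PRELOAD
after_deactivate="$LD_PRELOAD"
unset LD_PRELOAD                       # cleanup for this demo only
echo "$after_activate"                 # /fake/env/lib/libittnotify.so:/existing/lib.so
echo "$after_deactivate"               # /existing/lib.so
```

The `${LD_PRELOAD:+:$LD_PRELOAD}` expansion appends a colon-separated tail only when LD_PRELOAD was already non-empty, so the hook never leaves a dangling `:`.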

Open a new shell and reactivate the env:

conda activate ml

  6. Test

python - <<'PY'
import torch
print("PyTorch:", torch.__version__)
print("CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
print("GPU:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "n/a")
PY

Expected outcome (example)

PyTorch: 2.5.1
CUDA build: 12.1
CUDA available: True
GPU: NVIDIA GeForce RTX 4070 Ti SUPER

Notes and caveats
• The stub is a harmless no-op replacement only for loading; unless you actually use Intel VTune/ITT profiling, those functions aren’t needed at runtime.
• This workaround is env-local: the stub lives in $CONDA_PREFIX/lib, and the LD_PRELOAD is set only when this env is active.
• If your conda packages ever start shipping a proper libittnotify.so at a well-known location, you can remove the stub and activation hooks.
• If import fails with a different missing symbol (e.g., iJIT_GetNewMethodID), make sure your stub defines it (the snippet above already includes the common set).
• If you explicitly installed the conda-forge ittapi and it does provide a shared library on your system, you can prefer that over the stub—just ensure it’s discoverable by the loader (e.g., in $CONDA_PREFIX/lib or via LD_LIBRARY_PATH/LD_PRELOAD).
• For completeness, I also verified there were no lingering execstack flags and confirmed the NVIDIA driver/toolkit versions matched the PyTorch CUDA build (pytorch-cuda=12.1).

Why post this on a conda/Anaconda forum rather than PyTorch?
Because the issue appeared only inside a conda environment, while the same PyTorch build imported fine under a non-conda Python. It looks like a packaging/runtime-linking edge case in some conda setups where ITT symbols aren’t present by default.

Happy to hear cleaner/official ways to surface libittnotify.so in conda—this no-op shim was the most robust, fast fix for me.

In my previous post, I described a workaround using a stub version of libittnotify.so (a minimal no-op shared library that only defined the missing iJIT_... symbols).
That workaround worked fine when running Python from Bash inside the Conda environment ml, but PyCharm still treated the same environment as broken: SDK validation failed and PyTorch couldn’t be installed.

Later, I switched to the official Intel ITT API source instead of the stub.
The repository (https://github.com/intel/ittapi) normally builds only static libraries (.a files):
libittnotify.a and libjitprofiling.a.
I didn’t need both — libittnotify.a alone contains all required symbols.

So after building with:

git clone https://github.com/intel/ittapi
cd ittapi
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_POSITION_INDEPENDENT_CODE=ON .
make -j$(nproc)

I manually created a shared library from the first archive:

gcc -shared -fPIC -Wl,--whole-archive libittnotify.a -Wl,--no-whole-archive -ldl -lpthread -o libittnotify.so

and then installed it system-wide:

sudo cp libittnotify.so /usr/local/lib/
sudo ldconfig
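The `-Wl,--whole-archive` pair is essential here. A linker only pulls objects out of a `.a` when they satisfy an undefined reference, and nothing in an otherwise empty shared library references the ITT functions, so without the flag the resulting `.so` would export no `iJIT_*` symbols at all. A self-contained sketch (temp dir, hypothetical one-function archive):

```shell
# Link the same archive into a shared library with and without
# --whole-archive, then compare the exported dynamic symbols.
d=$(mktemp -d)
cat > "$d/f.c" <<'EOF'
unsigned int iJIT_GetNewMethodID(void) { return 1; }
EOF
gcc -c -fPIC "$d/f.c" -o "$d/f.o"
ar rcs "$d/libdemo.a" "$d/f.o"
gcc -shared -o "$d/without.so" "$d/libdemo.a"
gcc -shared -o "$d/with.so" -Wl,--whole-archive "$d/libdemo.a" -Wl,--no-whole-archive
nm -D "$d/with.so" | grep iJIT_GetNewMethodID              # present
nm -D "$d/without.so" | grep iJIT_GetNewMethodID || echo "missing"
```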

After that, I created a new Conda environment (ml1), selected it as the project interpreter in PyCharm, and installed torch directly from within the IDE.
This time the installation succeeded, PyTorch imported cleanly, and CUDA was detected, with no more iJIT_NotifyEvent errors.
Apparently, the previous environment (ml) stayed broken because it was created before the library was available system-wide.


This is awesome work, thanks for posting. It fixed my issue.