Over the past few days, I’ve been trying to set up a clean and stable PyTorch environment for my research work. I read online that the best and most stable approach is to install PyTorch with conda, so that conda manages any versions of CUDA, NumPy, or other libraries that might clash with what’s already in my environment.
To avoid all of that, I’m hoping to install PyTorch using Conda instead.
I’m still struggling to get the correct commands and channels set up. Previously, the official PyTorch guide offered a conda option for installation, but it is now gone and only pip3 is available. I’m not entirely sure how to handle this while avoiding accidentally mixing pip and conda packages in the same environment.
Thanks in advance — hoping to finally get a stable setup running!
Hello @Victor16 !
Thank you for reaching out here. How are you installing conda? And would you also mind sharing what platform, architecture, and GPU you are working with?
If you are using the Miniconda or Anaconda Distribution installers, your channel config will automatically point at the Anaconda “defaults” channels. Here is a link to the Anaconda main channel, which is part of “defaults”: main
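As a quick check, you can inspect which channels your install is actually configured to use before installing anything (assuming a standard Anaconda/Miniconda setup):

```shell
# Show the channels conda will search, in priority order
conda config --show channels

# Show which config file each setting comes from
conda config --show-sources
```
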
Regardless, the commands below can be used to create a fresh conda environment and then install the GPU variant of PyTorch.
conda create -n pytorch_env python=3.13 -c defaults
conda activate pytorch_env
conda install pytorch-gpu -c defaults
You can verify the installation with something like this from within the environment.
import torch
print(torch.__version__)
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "No GPU")
Happy to chat further if you have any other questions!
Hello Tyler,
Thank you for your response. I’m using the Windows 11 distribution of Anaconda installed from the official website (x86_64). My GPU is a 5060 Ti with Driver Version 591.74 and CUDA Version 13.1. I’m also using VS Code as my IDE.
Unfortunately, this still hasn’t resolved the issue. I followed your instructions (except for using Python 3.12), and the output I received was:
InvalidArchiveError("Error with archive C:\Users\jwb20148\AppData\Local\anaconda3\pkgs\pytorch-2.7.0-gpu_cuda128")
I then switched to the installation command provided on the PyTorch website:
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130
With this approach, I successfully installed PyTorch in the virtual environment. I also conda-installed matplotlib, scikit-learn, pandas, and OpenCV (through conda-forge).
Unfortunately, my concerns were justified: there is now a conflict. The kernel crashes whenever I try to import PyTorch. VS Code reports:
Error:
The Kernel crashed while executing code in the current cell or a previous cell.
Please review the code in the cell(s) to identify a possible cause of the failure.
Click here for more info.
View Jupyter log for further details.
Jupyter log here:
16:17:44.748 [error] Disposing session as kernel process died ExitCode: 3, Reason: OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
At this point I’m unsure how to proceed, so any further guidance would be greatly appreciated.
@Victor16 ,
Thanks for the added context. We are currently working to build PyTorch 2.8, 2.9, and 2.10 this quarter; this is a major priority for us.
The latest version we have available on the main channel today for Windows CUDA is 2.7. PyTorch 2.7 does not support CUDA >= 13.0 which is likely causing this problem.
Could you try the same command installing from conda-forge? conda-forge is a community-driven repository of conda packages, and they are currently ahead of us on PyTorch releases. These packages are hosted at conda-forge | Anaconda.org.
conda create -n pytorch_env python=3.12 -c conda-forge
conda activate pytorch_env
conda install pytorch-gpu -c conda-forge
You can verify the installation the same way:
import torch
print(torch.__version__)
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "No GPU")
@Victor16 Some additional info.
The team did some additional investigation. Based on this error:
16:17:44.748 [error] Disposing session as kernel process died ExitCode: 3, Reason: OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
It seems like there is more than one OpenMP implementation installed in your environment.
That pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130 command is probably what triggered the OpenMP error; the upstream PyTorch wheels likely bundle their own copies of the Intel OpenMP DLLs, which can clash with the OpenMP runtime already present in the conda environment.
The InvalidArchiveError is normally a sign that the package download was corrupted.
I would just suggest deleting the environment (conda remove --all -n <env name>), running conda clean -ay, and then trying to recreate the environment with the commands mentioned above.
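If you need a temporary stopgap while you rebuild, the unsafe workaround that the OMP hint itself mentions is to set KMP_DUPLICATE_LIB_OK before anything imports an OpenMP runtime; per the message, this may crash or silently produce incorrect results, so treat it as a last resort:

```python
import os

# Must be set before importing torch (or any other MKL/OpenMP-backed
# library), since the duplicate runtime is loaded at import time.
# Unsafe per the OMP hint: may crash or silently give wrong results.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
```
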
Your preinstalled CUDA 13.1 should actually be irrelevant. Conda will install the necessary CUDA packages into the conda environment based on your drivers. The Anaconda main channel currently has up to PyTorch 2.7, and conda-forge has up to PyTorch 2.10.
Hello Tyler,
Thank you and your team for the support. Unfortunately, this approach doesn’t work either; however, on the bright side, I may have some new clues.
- The command conda install pytorch-gpu -c defaults fails with InvalidArchiveError("Error with archive C:.....\AppData\Local\anaconda3\pkgs\pytorch-2.7.0-gpu_cuda128") even in a brand-new environment. Therefore I believe the pytorch and pytorch-gpu packages are incompatible with something even when I stick entirely to the defaults channel.
- Using conda install pytorch-gpu -c conda-forge, I successfully installed PyTorch 2.10.0, and it works.
However, for the project I’m currently running, I need other libraries like pandas, matplotlib, and scikit-learn (all of them very popular and widely used). I tried installing them separately and had the following findings. Note that in every case below, PyTorch was installed first in the environment and verified to work.
- Installing matplotlib didn’t lead to any issues: I ran the script with the pandas and scikit-learn calls removed, and it worked fine.
- However, installing either scikit-learn or pandas causes a problem. Installing them into an environment that already has PyTorch changes the following libraries, which then breaks the kernel:
During pandas installation with conda install pandas
The following packages will be REMOVED:
libblas-3.11.0-5_hf2e6a31_mkl
libcblas-3.11.0-5_h2a3cdd5_mkl
liblapack-3.11.0-5_hf9ab0e9_mkl
libmagma-2.9.0-hb6a17ea_3
vcomp14-14.44.35208-h818238b_34
The following packages will be UPDATED:
cudnn conda-forge::cudnn-9.10.2.21-h32ff316~ --> pkgs/main::cudnn-9.17.0.29-h0faa65f_0
libabseil conda-forge::libabseil-20250512.1-cxx~ --> pkgs/main::libabseil-20250814.1-cxx17_hcd311fc_0
libcudnn conda-forge::libcudnn-9.10.2.21-hca89~ --> pkgs/main::libcudnn-9.17.0.29-he1bb4ab_0
libcudnn-dev conda-forge::libcudnn-dev-9.10.2.21-h~ --> pkgs/main::libcudnn-dev-9.17.0.29-he1bb4ab_0
libprotobuf conda-forge::libprotobuf-6.31.1-hdcda~ --> pkgs/main::libprotobuf-6.33.0-h2a56892_1
vc conda-forge::vc-14.3-h41ae7f8_34 --> pkgs/main::vc-14.42-haa95532_5
The following packages will be SUPERSEDED by a higher-priority channel:
libtorch conda-forge::libtorch-2.10.0-cuda128_~ --> pkgs/main::libtorch-2.7.0-gpu_cuda128_h95907a7_302
libzlib conda-forge::libzlib-1.3.1-h2466b09_2 --> pkgs/main::libzlib-1.3.1-h02ab6af_0
llvm-openmp conda-forge::llvm-openmp-21.1.8-h4fa8~ --> pkgs/main::llvm-openmp-20.1.8-h29ce207_0
mkl conda-forge::mkl-2025.3.0-hac47afa_455 --> pkgs/main::mkl-2025.0.0-h5da7b33_930
numpy conda-forge::numpy-2.4.1-py313hce7ae6~ --> pkgs/main::numpy-2.4.1-py313h050da96_0
pytorch conda-forge::pytorch-2.10.0-cuda128_m~ --> pkgs/main::pytorch-2.7.0-gpu_cuda128_py313h0e49702_302
pytorch-gpu conda-forge::pytorch-gpu-2.10.0-cuda1~ --> pkgs/main::pytorch-gpu-2.7.0-gpu_cuda12_hb8205d0_302
tbb conda-forge::tbb-2022.3.0-h3155e25_2 --> pkgs/main::tbb-2022.0.0-h214f63a_0
vc14_runtime conda-forge::vc14_runtime-14.44.35208~ --> pkgs/main::vc14_runtime-14.44.35208-h4927774_10
During scikit-learn installation with conda install anaconda::scikit-learn==1.8.0
The following packages will be REMOVED:
libblas-3.11.0-5_hf2e6a31_mkl
libcblas-3.11.0-5_h2a3cdd5_mkl
liblapack-3.11.0-5_hf9ab0e9_mkl
libmagma-2.9.0-hb6a17ea_3
vcomp14-14.44.35208-h818238b_34
The following packages will be UPDATED:
cudnn conda-forge::cudnn-9.10.2.21-h32ff316~ --> pkgs/main::cudnn-9.17.0.29-h0faa65f_0
libabseil conda-forge::libabseil-20250512.1-cxx~ --> pkgs/main::libabseil-20250814.1-cxx17_hcd311fc_0
libcudnn conda-forge::libcudnn-9.10.2.21-hca89~ --> pkgs/main::libcudnn-9.17.0.29-he1bb4ab_0
libcudnn-dev conda-forge::libcudnn-dev-9.10.2.21-h~ --> pkgs/main::libcudnn-dev-9.17.0.29-he1bb4ab_0
libprotobuf conda-forge::libprotobuf-6.31.1-hdcda~ --> pkgs/main::libprotobuf-6.33.0-h2a56892_1
vc conda-forge::vc-14.3-h41ae7f8_34 --> pkgs/main::vc-14.42-haa95532_5
The following packages will be SUPERSEDED by a higher-priority channel:
libtorch conda-forge::libtorch-2.10.0-cuda128_~ --> pkgs/main::libtorch-2.7.0-gpu_cuda128_h95907a7_302
libzlib conda-forge::libzlib-1.3.1-h2466b09_2 --> pkgs/main::libzlib-1.3.1-h02ab6af_0
llvm-openmp conda-forge::llvm-openmp-21.1.8-h4fa8~ --> pkgs/main::llvm-openmp-20.1.8-h29ce207_0
mkl conda-forge::mkl-2025.3.0-hac47afa_455 --> pkgs/main::mkl-2025.0.0-h5da7b33_930
numpy conda-forge::numpy-2.4.1-py313hce7ae6~ --> pkgs/main::numpy-2.4.1-py313h050da96_0
pytorch conda-forge::pytorch-2.10.0-cuda128_m~ --> pkgs/main::pytorch-2.7.0-gpu_cuda128_py313h0e49702_302
pytorch-gpu conda-forge::pytorch-gpu-2.10.0-cuda1~ --> pkgs/main::pytorch-gpu-2.7.0-gpu_cuda12_hb8205d0_302
tbb conda-forge::tbb-2022.3.0-h3155e25_2 --> pkgs/main::tbb-2022.0.0-h214f63a_0
vc14_runtime conda-forge::vc14_runtime-14.44.35208~ --> pkgs/main::vc14_runtime-14.44.35208-h4927774_10
Briefly looking over the logs, I notice that both try to supersede PyTorch 2.10 with PyTorch 2.7, which couldn’t be installed in the first place.
TL;DR
For anyone facing similar problems and looking for an immediate solution: I temporarily solved the problem by installing older versions of the packages, namely pandas 2.2.3, PyTorch 2.6.0, and scikit-learn 1.5.2, by running the commands below in a conda environment with Python 3.10:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
followed by:
conda install matplotlib ipykernel pandas==2.2.3
conda install anaconda::scikit-learn==1.5.1
Hello Tyler,
I am having similar problems to Victor’s. For reference, my setup also has an RTX 5060 Ti, with CUDA 12.9 and Driver Version 576.88. I tried both of your suggestions:
conda install pytorch-gpu -c defaults
and
conda install pytorch-gpu -c conda-forge
but every time I print torch.cuda.get_device_name(0) I get the following:
NVIDIA GeForce RTX 5060 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_89 sm_90 compute_90.
If you want to use the NVIDIA GeForce RTX 5060 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(
'NVIDIA GeForce RTX 5060 Ti
Additionally, conda-forge installed PyTorch 2.6.0 and conda defaults installed PyTorch 2.7.0.
Hello Ioannis,
I would recommend the following which seem to be working for me:
- Update the NVIDIA drivers using the Studio version (for stability)
- Create a conda environment
- Install PyTorch using the official website’s Get Started page. Since after the update you will have CUDA 13.1, use the CUDA 13.0 option on the website:
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130
- Install older versions of the libraries you might need, like scikit-learn and pandas, which are compatible with the version of PyTorch you installed (2.10). I chose:
conda install matplotlib ipykernel pandas==2.2.3 etc.
- As a rule of thumb, after running the command
conda install <arbitrary package>
avoid adding -y, so that you can manually check whether what you are installing wants to interfere with (update, remove, supersede, …) the packages you already installed. In one of my earlier replies to this post, you can see how, after I tried to install the latest release of pandas, the solver tried to amend (supersede or even remove) a bunch of packages that PyTorch added (I’m sure PyTorch installed them, because it was the only package previously installed in that environment). Once you have manually checked the packages, you can confirm or abort the installation and thereby protect the functionality of your environment.
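A related trick, assuming a reasonably recent conda: you can preview the solver’s plan without touching the environment at all by adding --dry-run, and only run the real install once the plan looks safe:

```shell
# Preview what the solver would add/remove/supersede, without installing
conda install --dry-run pandas

# If the plan leaves the PyTorch stack alone, run the install for real
conda install pandas
```
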
Hi @Victor16 ,
The reason packages are being downgraded here is that you are combining channels. You got PyTorch from conda-forge by using the -c flag, then pulled other packages from main. They likely came from main because the -c flag was not present to specify conda-forge; when -c isn’t given, conda falls back to the channel priority in your conda config.
When you do this, the solver will overwrite many of the conda-forge packages with packages from main, causing this downgrading issue. Mixing dependencies across main and conda-forge can cause problems.
You can find conda documentation on managing channel configuration and priority here - Managing channels — conda 25.11.2.dev93 documentation
We recommend you update your channel priority or use the -c flag consistently when installing the additional packages into the conda environment to avoid these issues.
The solution - pick one channel and stick with it:
Option 1: Use conda-forge for everything (has newer PyTorch)
conda create -n myenv python=3.13
conda install pytorch-gpu pandas scikit-learn matplotlib -c conda-forge
# Always use -c conda-forge when adding packages later
Option 2: Set conda-forge as default priority
conda config --add channels conda-forge
conda config --set channel_priority strict
Then you don’t need -c conda-forge on every command.
About mixing pip and conda:
Your workaround of using pip for PyTorch and conda for other packages works temporarily, but can cause dependency conflicts long-term. The single-channel approach is more reliable.
About the InvalidArchiveError:
If it persists after conda clean -ay, it might be a Windows permissions issue. Try running your terminal as administrator when creating the environment.
Hi @ioannis
Good news - you can ignore that warning message.
Our engineering team confirmed that while the warning mentions sm_120 incompatibility, CUDA 12.8 does support compute capability 12.0. The warning appears because PyTorch wasn’t explicitly compiled with sm_120 in its architecture list, but CUDA’s forward compatibility allows your RTX 5060 Ti to work anyway.
Evidence it’s working: At the end of that warning message, you should see your GPU name displayed correctly: “NVIDIA GeForce RTX 5060 Ti”. This confirms PyTorch detected and can use your GPU.
Why the warning exists: We may have missed updating the architecture list when we updated from CUDA 12.4 to CUDA 12.8. The team is investigating this to remove the warning in future builds.
What you should do: Proceed with your work - the GPU is functional despite the warning. Performance should be normal, though we’ll ensure future PyTorch builds explicitly include sm_120 to eliminate this warning.
If you notice any actual GPU computation failures (not just warnings), please let us know.
You should also take a look at my recommendations to Victor about mixing channels and using -c and channel priority.
Hello Tyler,
Thank you for your help over the last couple of days! The latter option now works. The only difference I noticed between what you recommended and the temporary solution from my other post is that your recommendation runs on CUDA 12.8 instead of CUDA 13.0 (which is the case for the pip3 install version from the post above).
When I set conda-forge as default priority in an arbitrary env (e.g. env_1) using the commands provided
conda config --add channels conda-forge
conda config --set channel_priority strict
does that modification apply only to the specific environment I’m working in ( env_1), while leaving the base environment untouched? Or does it update the channel priority for my entire Anaconda installation?
Additionally, if I clone env_1 after changing its channel priority, will the new environment automatically inherit the same conda‑forge priority settings, or will it revert to the default channel configuration?
Just two more small things. Does Anaconda recommend or require installation of the CUDA toolkit (nvcc)? Currently I do not have it installed and everything seems to work fine; however, what is the official recommendation regarding that?
Secondly, I found that when I run conda env list I end up with duplicate environments:

C:\Users\jwb20148>conda env list
# conda environments:
# * → active
# + → frozen
base                 C:\Users\jwb20148\AppData\Local\anaconda3
base_torch_env       C:\Users\jwb20148\AppData\Local\anaconda3\envs\base_torch_env
torch130_pandas      C:\Users\jwb20148\AppData\Local\anaconda3\envs\torch130_pandas
base_torch_env       C:\Users\jwb20148\AppData\Local\anaconda3\envs\base_torch_env
torch130_pandas      C:\Users\jwb20148\AppData\Local\anaconda3\envs\torch130_pandas
Hi @Victor16 ,
Great to hear you got it working! I will try my best to address your remaining questions. Happy to help.
1. Channel configuration scope (global vs. environment-specific)
The commands you used:
conda config --add channels conda-forge
conda config --set channel_priority strict
These modify your global user configuration file (~/.condarc or C:\Users\<username>\.condarc on Windows), which affects all environments including base - not just the environment you were in when you ran the commands.
If you want environment-specific channel settings, you have two options: run conda config with the --env flag while the environment is active (which writes to a .condarc inside that environment’s directory), or create a .condarc file in the environment’s directory by hand.
Documentation: Using the .condarc configuration file Also see: conda config command reference
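For example, a minimal sketch of scoping the channel settings to one environment with the --env flag (the environment name is just an example); this writes to a .condarc inside the active environment’s prefix instead of your user-level file:

```shell
conda activate env_1
conda config --env --add channels conda-forge
conda config --env --set channel_priority strict

# Confirm which file each setting came from
conda config --show-sources
```
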
2. Cloning environments and channel priority
Channel configuration is not part of an environment’s package specification - it’s stored separately in .condarc files. When you clone an environment with conda create --clone, you’re copying packages only. The clone will use whatever channel configuration exists in your .condarc at that time, not any configuration from the source environment.
Documentation: Managing environments - Cloning
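As a minimal sketch (names are examples): the clone copies packages only, and any later installs into it resolve against whatever .condarc is in effect at that time:

```shell
# Copies the installed packages of env_1 into a new environment env_2
conda create --name env_2 --clone env_1
```
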
3. CUDA Toolkit (nvcc) recommendation
You’re correct that things work fine without installing the full CUDA toolkit. The cudatoolkit packages installed by conda with PyTorch only include runtime libraries needed to execute CUDA operations - they don’t include development tools like nvcc. You may need additional packages if you are doing work outside of the normal workflows using PyTorch like compiling custom CUDA C/C++ code or building PyTorch extensions that require compilation. Basically, it depends what you are trying to do.
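As a quick check, you can see whether a CUDA compiler is even on your PATH; the runtime-only packages conda installs alongside PyTorch do not provide one:

```python
import shutil

# nvcc is only needed for compiling custom CUDA C/C++ or building
# PyTorch extensions; running prebuilt CUDA kernels does not require it.
print("nvcc on PATH:", shutil.which("nvcc") is not None)
```
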
4. This seems tied to conda, which is an OSS project we help steward. The best way to get help would be to create a bug report on GitHub for the maintainers to review. You can find instructions here - conda/CONTRIBUTING.md at main · conda/conda · GitHub. I messaged the team and they weren’t familiar with this issue; providing as much detail as possible in the GitHub issue would help them investigate.