Monday, December 15, 2025

miniforge wahab, part1

Below is a no-sudo, Miniforge-based setup that works cleanly on the Wahab cluster at ODU. This assumes a typical HPC environment (login node + compute nodes, shared home filesystem).


1. Choose an install location (home directory)


Use a path under your home directory so no admin privileges are needed:

cd ~
mkdir -p software
cd software


2. Download Miniforge (Linux, x86_64)


Wahab nodes are Linux-based. Use Miniforge3:

wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh

If wget is unavailable, use:

curl -LO https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
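
If the release page on GitHub publishes a SHA-256 checksum for the installer, you can compare it against the local hash before running anything:

sha256sum Miniforge3-Linux-x86_64.sh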


3. Install Miniforge (no sudo)


Run the installer and point it to your home directory:

bash Miniforge3-Linux-x86_64.sh

When prompted:

  • Install location → something like

/home/<your_username>/software/miniforge3


  • Initialize Miniforge → yes


If you prefer not to auto-modify shell files, answer no and do it manually (shown below).
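
The installer can also run fully non-interactively. The -b (batch mode: no prompts, no shell-file changes) and -p (install prefix) flags are standard for Miniforge/Miniconda installers:

# unattended install into ~/software/miniforge3
bash Miniforge3-Linux-x86_64.sh -b -p "$HOME/software/miniforge3"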


4. Initialize Conda manually (recommended on HPC)


If you answered no to auto-init, add this to your ~/.bashrc (or ~/.bash_profile, depending on which file Wahab's login shells read):

# Miniforge
export PATH="$HOME/software/miniforge3/bin:$PATH"

Then reload:

source ~/.bashrc

Verify:

conda --version
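
If conda activate later complains that your shell has not been configured, source Conda's shell hook from ~/.bashrc instead of (or in addition to) the PATH export above:

# Miniforge shell hook (defines the conda activate function)
source "$HOME/software/miniforge3/etc/profile.d/conda.sh"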


5. Configure Conda for HPC use (important)


Disable auto-activation of base and use conda-forge only:

conda config --set auto_activate_base false
conda config --add channels conda-forge
conda config --set channel_priority strict
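
To confirm the settings took effect, inspect the resulting ~/.condarc or query conda directly:

conda config --show channels
conda config --show channel_priority
cat ~/.condarc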


6. Create a virtual environment


Example for Python 3.10:

conda create -n hqin_env python=3.10

Activate:

conda activate hqin_env

Confirm:

which python
python --version
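
If you want the environment to be reproducible on another machine, the same setup can be described in a YAML file (the file name and package list below are just an example) and built with conda env create:

# environment.yml
name: hqin_env
channels:
  - conda-forge
dependencies:
  - python=3.10
  - numpy

conda env create -f environment.yml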


7. Install packages (safe for no-sudo)


Examples:

conda install numpy scipy pandas scikit-learn matplotlib
conda install pytorch torchvision torchaudio cpuonly -c pytorch

If Wahab has GPUs and CUDA modules:

conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

(Load the CUDA module first if Wahab uses environment modules.)
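
A quick sanity check that PyTorch actually sees a GPU (run this on a GPU node, not the login node):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"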


8. Using Miniforge inside Slurm jobs


In your Slurm script, always initialize Conda explicitly:

#!/bin/bash
#SBATCH --job-name=hqin_test
#SBATCH --time=01:00:00
#SBATCH --mem=16G

source $HOME/software/miniforge3/etc/profile.d/conda.sh
conda activate hqin_env

python your_script.py

This avoids common “conda not found” errors on compute nodes.
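
Submit and monitor the job as usual (run_hqin.sh is just an example file name for the script above):

sbatch run_hqin.sh
squeue -u $USER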


9. Optional: faster environments with mamba


Miniforge includes mamba:

mamba create -n hqin_env2 python=3.11 numpy pandas

mamba solves and installs environments much faster than classic conda, which helps on slow shared filesystems.
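
mamba mirrors the familiar conda subcommands, so it drops into the same workflow, e.g.:

mamba install -n hqin_env2 scikit-learn matplotlib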


10. Common Wahab-specific tips

  • Never install under /usr, /opt, or /shared unless explicitly allowed.

  • Keep environments small; many tiny envs > one giant env.

  • If storage quotas are tight:

conda clean --all


  • If Wahab uses Lmod, run the following before activating conda to avoid library conflicts (a combined example is shown after this list):

module purge
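
Putting the last two tips together, a clean interactive session might look like this (module names and the environment name are just examples):

module purge
source "$HOME/software/miniforge3/etc/profile.d/conda.sh"
conda activate hqin_env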


Possible follow-ups for a later post:

  • Tailoring this to GPU partitions on Wahab

  • A ready-to-use Slurm template for AI / genomics workflows

  • Mirroring this setup on AWS / ODU GPU clusters for consistency

 


