Friday, December 26, 2025

*** miniforge3: build conda environments (YOLO 11n run passed)


On Wahab, you need to switch to bash first, then source the conda hook:

source ~/miniforge3/etc/profile.d/conda.sh 

conda create -n venv_yolo python=3.10 -y
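The post does not show the install step for the environment, so the package list below is an assumption; installing ultralytics via pip pulls in torch as a dependency, which is enough for the inference job later in this post.

```shell
# Activate the new environment (assumes conda.sh was sourced as above)
conda activate venv_yolo

# Assumed install step: ultralytics brings in torch and its CUDA wheels
pip install ultralytics

# Quick sanity check that the imports resolve inside the env
python -c "import torch, ultralytics; print(torch.__version__, ultralytics.__version__)"
```

If the cluster GPUs need a specific CUDA build of torch, the pip line may need the matching index URL; that detail is site-specific and not shown in the original notes.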

With ChatGPT and Gemini it took a while to get a workable Slurm job with a GPU request and the miniforge environment.

It took about 14 conversation turns in ChatGPT 5.2plus to get a functional Slurm job that uses miniforge to run YOLO 11n inference.


==== The working Slurm job is below ====
#!/bin/bash
#SBATCH --job-name=yolo11n_inf_gpu
#SBATCH --partition=timed-gpu
#SBATCH --time=00:10:00
#SBATCH --mem=8G
#SBATCH --cpus-per-task=4
#SBATCH --gres=gpu:1
#SBATCH -o test_yolo11n_gpu.%j.out
#SBATCH -e test_yolo11n_gpu.%j.err

set -euxo pipefail

enable_lmod
module purge
module load container_env
module load python3/2024.2-py310

crun bash -lc '
  set -eo pipefail
  set -x

  # Ensure we run in the directory the job was submitted from (~/yolo_carla)
  cd "$SLURM_SUBMIT_DIR"
  echo "PWD=$(pwd)"
  ls -lh bus.jpg || true

  # Avoid inheriting container conda state
  unset CONDA_SHLVL CONDA_PREFIX CONDA_DEFAULT_ENV CONDA_PROMPT_MODIFIER CONDA_PREFIX_1

  source "$HOME/miniforge3/etc/profile.d/conda.sh"
  set +u
  conda activate venv_yolo
  set -u

  echo "=== GPU CHECK ==="
  echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
  nvidia-smi -L
  nvidia-smi

  echo "=== TORCH CUDA CHECK ==="
  python - <<'"'"'PY'"'"'
import torch

print("torch:", torch.__version__)
print("cuda available:", torch.cuda.is_available())
print("device count:", torch.cuda.device_count())
if torch.cuda.is_available():
    print("gpu0:", torch.cuda.get_device_name(0))
PY

  echo "=== YOLO11N INFERENCE (GPU) ==="
  if [ ! -f yolo11n.pt ]; then
    echo "ERROR: yolo11n.pt not found in $PWD"
    exit 2
  fi
  if [ ! -f bus.jpg ]; then
    echo "ERROR: bus.jpg not found in $PWD"
    exit 3
  fi

  python - <<'"'"'PY'"'"'
from ultralytics import YOLO
import ultralytics

print("ultralytics:", ultralytics.__version__)
model = YOLO("yolo11n.pt")

# Run on GPU 0
res = model.predict(source="bus.jpg", device=0, imgsz=640, conf=0.25, verbose=False)

r0 = res[0]
n = 0 if r0.boxes is None else len(r0.boxes)
print("boxes:", n)

out_img = "yolo11n_bus_pred.jpg"
r0.save(filename=out_img)
print("saved:", out_img)
PY
'
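For completeness, submitting and checking the job looks roughly like this. The script file name here is an assumption (the post never names the file); the output file name pattern comes from the #SBATCH -o line, with %j replaced by the job ID.

```shell
# Submit the job script (file name is an assumption)
sbatch test_yolo11n_gpu.slurm

# Watch the queue until the job starts and finishes
squeue -u "$USER"

# Inspect stdout/stderr afterwards; <jobid> is the number sbatch printed
cat test_yolo11n_gpu.<jobid>.out
cat test_yolo11n_gpu.<jobid>.err
```

On success the .out file should end with the boxes count and the saved yolo11n_bus_pred.jpg line from the inference step above.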


