Using the GPU partition for GPU-Intensive Jobs

Created by Bent Petersen, Modified on Mon, 3 Mar at 4:02 PM by Bent Petersen

The gpuqueue partition is designed for GPU-based computations, such as machine learning, deep learning, and GPU-accelerated bioinformatics tools.


Key Features

    •    Max Runtime: Up to 14 days (336 hours).

    •    Available GPUs: NVIDIA A100 GPUs (up to 4 per node).

    •    Max CPU Allocation: Limited by the GPUs available to the job.

    •    Memory Limit: Subject to available GPU and node memory.

    •    Priority: Standard fair-share based scheduling.

    •    Best for: TensorFlow, PyTorch, CUDA, GPU-accelerated computations.


How to Submit a Job to the GPU Queue


To submit a GPU job, specify the gpuqueue partition and request GPU resources:


sbatch --partition=gpuqueue --qos=normal --gres=gpu:1 --cpus-per-task=8 --mem=64G --time=12:00:00 --wrap="your_gpu_command_here"
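For longer or multi-step jobs, the same request can also be written as a batch script instead of a --wrap one-liner. The sketch below mirrors the options above; the job name, log pattern, and module line are illustrative assumptions, so adjust them to your site's setup:

```shell
#!/bin/bash
#SBATCH --partition=gpuqueue
#SBATCH --qos=normal
#SBATCH --gres=gpu:1                 # request 1 GPU (up to 4 per node)
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G
#SBATCH --time=12:00:00
#SBATCH --job-name=my_gpu_job        # illustrative name
#SBATCH --output=my_gpu_job_%j.log   # %j expands to the Slurm job ID

# Load your site's CUDA/toolchain modules here if required (site-specific)
# module load cuda

your_gpu_command_here
```

Save the script (for example as my_gpu_job.sh) and submit it with sbatch my_gpu_job.sh.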

Best Practices

    •    Always request GPUs with --gres=gpu:N (replace N with 1-4 as needed).

    •    Ensure your application supports GPU acceleration before using this queue.

    •    Do not use the GPU queue for CPU-only jobs; they belong in cpuqueue.
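Before committing to a long run, you can confirm that your code actually sees a GPU from a short interactive allocation. This is a sketch: the srun flags mirror the sbatch example above, and the final check assumes PyTorch is installed in your environment.

```shell
# Start a short interactive shell on a GPU node (flags mirror the sbatch example)
srun --partition=gpuqueue --gres=gpu:1 --cpus-per-task=4 --mem=16G --time=00:10:00 --pty bash

# Inside the allocation: list the GPU(s) Slurm granted to the job
nvidia-smi

# If you use PyTorch, verify it can reach the GPU (assumes PyTorch is installed)
python -c "import torch; print(torch.cuda.is_available())"
```

If nvidia-smi shows no devices or the PyTorch check prints False, the job would run on CPU only and belongs in cpuqueue instead.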
