The gpuqueue partition is designed for GPU-based computations, such as machine learning, deep learning, and GPU-accelerated bioinformatics tools.
Key Features
• Max Runtime: Up to 14 days (336 hours).
• Available GPUs: NVIDIA A100 GPUs (up to 4 per node).
• Max CPU Allocation: scales with the number of GPUs requested per job.
• Memory Limit: bounded by the memory available on the allocated GPU(s) and node.
• Priority: Standard fair-share based scheduling.
• Best for: TensorFlow, PyTorch, CUDA, GPU-accelerated computations.
How to Submit a Job to the GPU Queue
To submit a GPU job, specify the gpuqueue partition and request GPU resources:
sbatch --partition=gpuqueue --qos=normal --gres=gpu:1 \
       --cpus-per-task=8 --mem=64G --time=12:00:00 \
       --wrap="your_gpu_command_here"
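For longer workflows, the same resource request can be written as a batch script instead of a --wrap one-liner. This is a sketch: the job name, output path, and module line are placeholders you should adapt to your site and workload.

```shell
#!/bin/bash
#SBATCH --partition=gpuqueue
#SBATCH --qos=normal
#SBATCH --gres=gpu:1           # request 1 GPU (up to 4 per node)
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G
#SBATCH --time=12:00:00        # max is 14 days (336:00:00)
#SBATCH --job-name=gpu_job     # hypothetical job name
#SBATCH --output=gpu_job_%j.log

# Load your environment here; module names are site-specific assumptions:
# module load cuda

your_gpu_command_here
```

Save the script (e.g., as gpu_job.sh) and submit it with sbatch gpu_job.sh.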
Best Practices
• Always request GPUs with --gres=gpu:N (replace N with 1-4 as needed).
• Ensure your application supports GPU acceleration before using this queue.
• Do not use the GPU queue for CPU-only jobs; they belong in cpuqueue.
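Before committing a long run to the queue, it can help to confirm your job actually sees a GPU. The short test job below is a sketch: it assumes nvidia-smi is on the path of GPU nodes (standard for NVIDIA systems), and the optional PyTorch check assumes torch is installed in your environment.

```shell
# Submit a short test job that prints the GPU(s) Slurm allocated via --gres.
# If nvidia-smi fails here, your job is not seeing a GPU and a long run
# would silently fall back to CPU (or crash).
sbatch --partition=gpuqueue --gres=gpu:1 --time=00:05:00 \
       --wrap="nvidia-smi"

# Optional framework-level check (assumes PyTorch is installed):
sbatch --partition=gpuqueue --gres=gpu:1 --time=00:05:00 \
       --wrap="python -c 'import torch; print(torch.cuda.is_available())'"
```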