====== Cluster Hardware Overview ======

This page provides an overview of the hardware and partitions available on the cluster. The cluster is equipped with modern Intel CPUs and NVIDIA GPUs to support a wide range of high-performance and GPU-accelerated workloads.

===== Partitions =====

The cluster is divided into several partitions, each with different CPU and GPU resources. All partitions currently have **no time limit** for running jobs.

^ Partition ^ CPU Cores (Threads) ^ GPU(s) ^ Notes ^
| **unite** | Up to 192 (384) | 5 x NVIDIA Tesla T4 | General-purpose, high-throughput jobs |
| **gpu_nm** | Up to 32 (64) | 1 x NVIDIA Tesla T4 | GPU-accelerated jobs, smaller scale |
| **gpu_ai** | Up to 192 (384) | 5 x NVIDIA Tesla T4 | AI/ML workloads, high-performance |
| **a40** | Up to 64 (128) | 2 x NVIDIA A40 | AI/visualization, GPU-intensive |

===== CPU Specifications =====

The cluster features several generations of high-performance Intel Xeon processors.

  * **8 x Intel(R) Xeon(R) Gold 6148 @ 2.40GHz**
    - 20 cores / 40 threads per CPU
    - Suitable for parallel and memory-intensive tasks
  * **2 x Intel(R) Xeon(R) Gold 6142 @ 2.60GHz**
    - 16 cores / 32 threads per CPU
    - Higher base frequency for faster single-thread performance
  * **2 x Intel(R) Xeon(R) Platinum 8358 @ 2.60GHz**
    - 32 cores / 64 threads per CPU
    - High core density for large parallel workloads

===== GPU Specifications =====

The cluster supports GPU-accelerated workloads using NVIDIA Tesla and A40 GPUs.

  * **5 x NVIDIA Tesla T4**
    - 16 GB GDDR6 memory
    - Optimized for inference and lightweight training
    - Supported in the `unite`, `gpu_nm`, and `gpu_ai` partitions
  * **2 x NVIDIA A40**
    - 48 GB GDDR6 memory each
    - Excellent for large model training, rendering, and visualization
    - Available in the `a40` partition

===== Summary =====

This cluster provides a flexible and powerful environment for both CPU and GPU workloads. The modular partition setup allows users to select resources best suited for their tasks, whether that’s compute-heavy simulations, AI training, or smaller-scale GPU processing.
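
===== Verifying Allocated Resources =====

The tables above describe what each partition offers; the sketch below shows one way a job could confirm the CPU threads and GPUs it was actually granted once it starts. This is a minimal sketch only: it assumes Python 3 on a Linux compute node and, for the GPU part, that PyTorch is installed in the job's environment. Neither assumption is guaranteed by this page, and the exact scheduler commands for requesting a partition are documented elsewhere.

<code python>
import os

# CPU check: on Linux, sched_getaffinity reports the logical CPUs
# (hardware threads) this process may run on, i.e. what was actually
# granted to the job rather than the node's full thread count.
allocated_threads = len(os.sched_getaffinity(0))
print(f"Logical CPUs available to this job: {allocated_threads}")

# GPU check (assumes PyTorch is available in the job environment).
# torch.cuda only sees the GPUs exposed to this job, so a job on the
# a40 partition should report NVIDIA A40 devices with roughly 48 GB,
# and the T4 partitions roughly 16 GB per device.
try:
    import torch
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        mem_gb = props.total_memory / 1024**3
        print(f"GPU {i}: {props.name}, {mem_gb:.1f} GB")
except ImportError:
    print("PyTorch not available; skipping GPU check")
</code>

If the reported devices do not match the partition table above, re-check the partition and GPU count requested when the job was submitted.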