University Deploys 'Magus' – High-Performance Computing Cluster
Shiv Nadar Institution of Eminence added another feather to its cap with the launch of 'Magus' - High-Performance Computing Cluster.
The state-of-the-art supercomputer is designed to meet the growing demands of data-intensive scientific and engineering research. The new-generation High-Performance Computing (HPC) Cluster facilitates and aids research by providing powerful computing resources for processing large volumes of data quickly and accurately, and has been ranked 32nd among India's high-performance supercomputers.
The HPC cluster will be used across many research areas, including engineering, physics, chemistry, AI, and ML. It can be used to simulate physical phenomena, analyze and process large datasets, and develop new algorithms and software.
Such computing resources are becoming essential in higher education, as they enable researchers to solve complex problems that would otherwise be impossible to process on conventional computers. The cluster will also allow students to gain hands-on experience with advanced computing technologies and to simulate real-world scenarios, helping them better understand the underlying concepts.
"We are delighted to offer our faculty and students access to the new High-Performance Computing Cluster. With this advanced system, our researchers can perform complex simulations, more easily analyze large datasets, and enhance their research. The facility represents a significant investment in research at our Institution. We hope that our research will continue to impact the world in a meaningful way," said Dr. Ananya Mukherjee, Vice-Chancellor, Shiv Nadar IoE.
Rajesh Dawar, Director – IT, Shiv Nadar IoE, added, "This cluster's design, conceptualization, and project deployment were initiated during the pandemic and involved collaboration between various original equipment manufacturers (OEMs), chip makers, and system integrators. The Institution has witnessed significant improvement in core-to-core performance and capacity with the launch of the new cluster. The High-Performance Computing Cluster features the latest hardware and software technologies, including the latest processors, memory, and storage devices."
Deepak Agrawal, Project and Data Centre Lead, highlighted the specifications and said, "The system is designed to be flexible and scalable, allowing it to grow and adapt to the changing needs of researchers."
The key specifications of the new cluster are:
- Total cores: 8,064 (AMD)
- 57 compute nodes, 2 login nodes, and 2 master nodes
- 12 high-memory nodes with local SSD
- 2 GPGPU nodes with 4 x A100 Tesla GPUs
- 4 GB memory per core for compute nodes, for a total of 29,184 GB
- 16 GB memory per core for high-memory nodes, for a total of 12,288 GB
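The published per-core and total-memory figures are internally consistent, and together they imply how the 8,064 cores are split between the two node classes. A quick back-of-the-envelope check (using only the numbers listed above):

```python
# Consistency check on the published cluster specs.
TOTAL_CORES = 8064

# Compute nodes: 4 GB per core, 29,184 GB total memory
compute_cores = 29184 // 4    # implied compute-node cores

# High-memory nodes: 16 GB per core, 12,288 GB total memory
high_mem_cores = 12288 // 16  # implied high-memory-node cores

# The two implied counts should add up to the published total.
assert compute_cores + high_mem_cores == TOTAL_CORES
print(compute_cores, high_mem_cores)  # 7296 768
```

Dividing further, 7,296 cores across 57 compute nodes works out to 128 cores per node, and 768 cores across 12 high-memory nodes to 64 cores per node, though the per-node configuration is not stated explicitly in the release.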
LINPACK Benchmark Results:
- Theoretical peak (Rpeak): 320.4 TFLOPS
- Observed maximum (Rmax): 228.65 TFLOPS
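The ratio of the observed Rmax to the theoretical Rpeak gives the cluster's LINPACK efficiency, which can be computed directly from the two figures above:

```python
# LINPACK efficiency: fraction of theoretical peak actually achieved.
rpeak = 320.4   # TFLOPS, theoretical peak
rmax = 228.65   # TFLOPS, observed LINPACK result

efficiency = rmax / rpeak * 100
print(f"LINPACK efficiency: {efficiency:.1f}%")  # 71.4%
```

An efficiency in this range is typical for CPU-based clusters on the HPL benchmark, where memory bandwidth and interconnect overheads keep sustained performance below the theoretical peak.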