The Akka cluster comprises 672 nodes, each equipped with two low-power Intel Xeon quad-core L5420 CPUs and 16 GB of RAM, for a total of 5376 cores and 10.7 TB of RAM. The interconnects are 10 Gbps Cisco InfiniBand with full bisection bandwidth, plus Gigabit Ethernet. Attached to the system is 1 PB of fast disk storage. Akka has a theoretical peak performance of 53.8 teraflops and has reached 46.04 teraflops with the HPL (High-Performance LINPACK) benchmark (85.6% efficiency).
In addition, Akka was ranked 16th on the Green500 list of the most energy-efficient supercomputers in the world, which serves as a complementary view to the TOP500 list.
At the time it went into production, Akka was the most powerful supercomputer in Europe, and the second most powerful in the world, to provide a dual-boot environment capable of running both the Windows HPC Server 2008 and Linux operating systems.
Akka is located in a new machine room with a revolutionary design optimized for high-density computing clusters. Organized in 12 racks holding 48 BladeCenters with 14 blades each, Akka is a fairly compact and space-efficient system.
"The choice of low-power processors and the highly energy-efficient design of our new machine room show our commitment to becoming a green data center."
Professor Bo Kågström
A small part of the new cluster uses IBM Power microprocessors and Cell Broadband Engines. The Power and Cell blades will primarily be used for the development of new parallel algorithms.
"This is the first supercomputer in Sweden with both Linux and Windows operating systems. It will be very exciting to see how new results can be achieved by combining and utilizing these different hardware and operating systems."
Professor Bo Kågström
The Akka system has been in production with Linux since 2008-06-25.
From the beginning, Akka ran CentOS 5, but in early 2014 it was changed to Ubuntu 12.04 (Precise Pangolin). At the same time, the batch system was changed from Torque to Slurm. You can read more about Slurm on its official website or under batch systems.
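With the move to Slurm, jobs are submitted via `sbatch` rather than Torque's `qsub`. As a minimal illustrative sketch (job name, account, and resource values are hypothetical placeholders, not site policy), a batch script might look like:

```shell
#!/bin/bash
# Example Slurm batch script (illustrative values only)
#SBATCH --job-name=example      # job name shown in the queue
#SBATCH --ntasks=8              # number of MPI tasks
#SBATCH --time=00:30:00         # wall-clock time limit (hh:mm:ss)
#SBATCH --output=example.%j.out # stdout file, %j expands to the job ID

# Launch the program under Slurm's process manager
srun ./my_program
```

The script would be submitted with `sbatch script.sh`, and the queue inspected with `squeue`; these are standard Slurm commands, but consult the HPC2N batch system documentation for the site-specific options.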
For more information, please contact HPC2N Support.