High Performance Computing: Meet Charlie

Bigelow Laboratory for Ocean Sciences has a High Performance Computer Cluster designed to handle a diverse range of scientific data processing needs. The compute component of the cluster is a shared-memory supercomputer with 160 processor cores and 1.28TB of RAM, scalable to a maximum of 4,096 cores and 64TB of RAM. The data warehouse component consists of over 200TB of high-performance, highly available storage served via a multitude of open protocols, allowing a great deal of flexibility in the environment to support all operating systems seamlessly. The networking component uses the NUMALink 6 interconnect to connect all compute nodes at speeds of 56Gbps, while client-side networking is served via 10GbE. Bigelow Laboratory for Ocean Sciences currently has a 500Mbps connection to the Internet to facilitate fast transfers of large datasets, with the ability to scale to 1Gbps on both the public Internet and Internet2.

The High Performance Computer Cluster was supported by a grant from the National Science Foundation's Division of Biological Infrastructure.

The Cluster has extensive memory, storage space, bandwidth, and networking capabilities that allow it to handle a diverse range of scientific data processing needs.

Its compute component is a shared-memory supercomputer with 160 processor cores and 1.28 terabytes of memory, which can be scaled to higher computing power as processing needs grow. Its data warehouse component consists of more than 300 terabytes of high-performance, highly available storage, allowing a great deal of flexibility to seamlessly support all operating systems at the Laboratory. These computing capabilities are also available to external clients who need fast, accurate processing of huge amounts of data without investing in a supercomputer of their own.
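
To give a rough sense of the kind of work a shared-memory system like this supports, the sketch below fans a simple per-record computation out across the cores of a single large-memory machine. The use of Python's multiprocessing module, the placeholder compute_statistic function, and the synthetic dataset are illustrative assumptions, not a description of the Cluster's actual software stack or workflows.

```python
# Illustrative sketch only: assumes Python's standard multiprocessing module
# on a large shared-memory node; the function and data are placeholders,
# not part of the Cluster's actual workflow.
import math
import multiprocessing as mp


def compute_statistic(record):
    """Placeholder per-record computation (e.g., one sample in a dataset)."""
    return math.sqrt(sum(x * x for x in record))


def process_dataset(records, workers=None):
    """Distribute a per-record computation across the cores of one node.

    On a shared-memory system, every worker sees the same RAM, so large
    datasets can be processed in place rather than partitioned across
    separate machines.
    """
    workers = workers or mp.cpu_count()
    with mp.Pool(processes=workers) as pool:
        return pool.map(compute_statistic, records)


if __name__ == "__main__":
    # Small synthetic dataset; on a many-core node the worker count could
    # be raised toward the number of available cores.
    data = [[float(i), float(i + 1), float(i + 2)] for i in range(1000)]
    results = process_dataset(data, workers=8)
    print(f"processed {len(results)} records")
```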

The Cluster’s architecture is designed to be flexible and to adapt to changing processing needs. Its vast, readily scalable capacity can be accessed via a multitude of open protocols. Networking is made possible using the NUMALink 6 interconnect to connect all compute nodes, while client-side networking is served via 10GbE. The current 500Mbps connection to the Internet facilitates fast transfers of large datasets, with the capacity to be scaled up to 1Gbps on both the public Internet and Internet2.
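
For a sense of what those connection speeds mean in practice, the back-of-the-envelope calculation below estimates idealized transfer times at the current 500Mbps rate and the 1Gbps rate the connection can scale to. The 1 TB dataset size is an arbitrary example, and the figures ignore protocol overhead, so real transfers will take somewhat longer.

```python
# Back-of-the-envelope transfer-time estimate; the 1 TB dataset size is an
# arbitrary example and real throughput will be lower due to protocol overhead.
def transfer_hours(dataset_terabytes, link_megabits_per_second):
    """Return the idealized time in hours to move a dataset over a link."""
    bits = dataset_terabytes * 1e12 * 8              # terabytes -> bits
    seconds = bits / (link_megabits_per_second * 1e6)
    return seconds / 3600


for rate_mbps in (500, 1000):                        # current 500Mbps, future 1Gbps
    print(f"1 TB at {rate_mbps} Mbps: ~{transfer_hours(1, rate_mbps):.1f} hours")
```

At these rates, a 1 TB dataset moves in roughly 4.4 hours over the current link and about 2.2 hours at 1Gbps.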