Supercomputing, or advanced high-performance computing, is a way of solving extremely complex, data-laden problems by harnessing the concentrated processing power of many computers working in parallel. Applications range from genomics to astronomical calculations to drug discovery.
So far, however, these impressive machines have been held back by storage platforms with rigid frameworks that force users to choose between customizable features and high availability. Now, Virginia Tech researchers have come to the rescue.
Performing at the exascale
A first-of-its-kind storage framework, called BespoKV, gives supercomputing, or high-performance computing (HPC), data systems the flexibility of customizable key-value storage without sacrificing performance. In fact, the researchers claim their system could one day help HPC reach the noteworthy goal of performing at the exascale (1 billion billion calculations per second).
As a point of reference, the best systems in operation today function at the petaflop scale (a quadrillion calculations per second). In practical terms, the new framework gives such machines a supremely efficient storage layer.
The key to this efficiency is that the system serves data from a fast in-memory store located near the computation, rather than from the distant storage servers commonly used today. The result is a system that completes requests with very high performance.
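The idea of preferring a nearby in-memory store over a far-away storage server can be sketched in a few lines. This is an illustrative model only, not BespoKV's actual API: the class name, methods, and the dictionary standing in for the remote server are all assumptions made for the example.

```python
# Hypothetical sketch (not BespoKV's real interface): a key-value store that
# answers reads from a nearby in-memory store and falls back to a simulated
# far-away storage server only when the key is not cached locally.

class TieredKVStore:
    def __init__(self, remote):
        self.local = {}        # fast in-memory store near the computation
        self.remote = remote   # slow, far-away storage server (simulated as a dict)

    def put(self, key, value):
        self.local[key] = value
        self.remote[key] = value   # write through to the backing store

    def get(self, key):
        if key in self.local:      # fast path: no trip to the remote server
            return self.local[key]
        value = self.remote[key]   # slow path: fetch from the remote server
        self.local[key] = value    # keep a local copy for later reads
        return value

remote = {"genome:chr1": "ACGTACGT"}
store = TieredKVStore(remote)
first = store.get("genome:chr1")    # first read goes to the remote store
second = store.get("genome:chr1")   # repeat read is served from memory
```

Because repeat reads never leave local memory, request latency drops sharply once data is resident, which is the behavior the researchers exploit.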
“I got interested in key-value systems because this very fundamental and simple storage platform has not been exploited in high-performance computing systems, where it can provide a lot of benefits,” said Ali Anwar, first author on the paper being presented and a recent Virginia Tech graduate now at IBM Research, in a statement. “BespoKV is a novel framework that can enable HPC systems to provide a lot of flexibility and performance and not be chained to rigid storage design.”
Via: Virginia Tech