Now, as an ever-increasing amount of information is
collected and stored, scientific data management confronts a problem: the
software that manages data on the latest generation of supercomputers was not
designed for the scalability requirements expected in the coming years. In
fact, in less than a decade, supercomputers are expected to be two orders of
magnitude faster than today's machines.
"Today, these applications are encountering big
problems of performance and scalability due to the tremendous increase of data
as a result of better instruments, the growing ubiquity of sensors and greater
connectivity between devices," explained professor Florin Isaila, from the
ARCOS in the Department of Computer Science. "These days, a radical
redesign of the computational infrastructures and management software is
necessary to adapt them to the new model of science, which is based on the
massive prepared of data."
The objective of the project, "Cross-Layer Abstractions
and Run-time for I/O Software Stack of Extreme-scale systems" (CLARISSE),
is to boost the performance, scalability, programmability and robustness of the
data management of scientific applications to underpin the design of
next-generation supercomputers.
Historically, data management software has been developed
in layers, with little coordination in the global management of resources.
"Nowadays, this lack of coordination is one of the biggest obstacles to
increasing the productivity of current systems. With CLARISSE, we are researching
solutions to these problems through the design of new mechanisms for
coordinating the data management of the different layers," said Professor
Isaila.
Jesús Carretero, the project's principal researcher, UC3M full
professor and head of ARCOS, explained: "At present, ARCOS is actively
involved in several initiatives around the world to redesign the management
software of future supercomputers, including the coordination of the CLARISSE
project and the research collaboration network NESUS."