Emerging Open Storage Systems and Solutions
for Data Intensive Computing 2021


Part of: HPDC’21 (Stockholm/Virtual)


Zoom sessions provided by Whova

Jun 25th, 2021

More details coming soon!

Storage systems have been continuously evolving to keep pace with the changing needs and assumptions of applications, compute architectures, hardware heterogeneity, and storage device technologies. Parallel file systems and the associated data management tools and middleware for HPC have been evolving for the last couple of decades. However, there have been many developments in new infrastructure paradigms, such as object storage, that are showing very strong signs of viability for new classes of use cases from data-intensive extreme-scale computing, AI and deep learning, etc. This is coupled with new data and storage management techniques in the realm of scientific computing that are applicable to handling extremely large volumes of data never seen before; these techniques are typically closely tied to the new infrastructure paradigms. Both the incremental and the disruptive approaches potentially have a path to the future, but we need to look carefully at the new disruptive approaches in relation to the needs of the use cases and the features and capabilities that they offer.

Storage, I/O and data management are becoming ever more relevant for high-performance parallel and distributed computing systems, considering the data-intensive nature of the applications and workflows deployed on them. Data-driven scientific innovation is happening alongside the proliferation of new infrastructures such as edge computing, sensors and IoT, which are now part of the scientific workflow, feeding data into core HPC data centres and their storage infrastructures in different ways. How can data storage and data management be handled effectively in such new environments? What are the new assumptions? Such questions are ever more pressing and deserve consideration.

The workshop will focus on these emerging open storage systems developed by the community (including their usage, illustrated by application examples) and on data management solutions that address I/O and storage for upcoming scientific applications. The workshop will look into the differences between these approaches and the problems for which these systems are most applicable given today's I/O needs, so that we can carry the best lessons from these solutions into the future.

We have invited experts working on storage systems and technologies (including users) to speak about their experiences, and we look forward to very engaging discussions with them!

Talk 1: DAOS: Nextgen Storage Stack for AI, Big Data and HPC Convergence (45 Minutes)

Speaker Bio: Johann Lombardi is a senior principal engineer in the Cloud & Enterprise Solution Group (CESG) at Intel. He started working on Lustre in 2003 and led the sustaining team in charge of worldwide Lustre file system support for more than 5 years. He then transitioned to research programs (Fast Forward, ESSIO, CORAL & Path Forward) to lead the development of DAOS, a storage stack for exascale HPC, Big Data and AI.

Link to the talk [Provided after the event]


Talk 2: CORTX: An Object storage platform for the Data Intensive Era (45 Minutes)

Speaker Bio: Dr. Nikita Danilov has 20 years of experience in distributed storage and file systems. He holds a PhD in mathematics (MIPT). Nikita was an architect of the Lustre file system, which runs on 9 out of the top 10 supercomputers. He then became more ambitious and joined Clusterstor (acquired by Xyratex, in turn acquired by Seagate) to build an object store that scales to infinity! He still tries to find time to code, but mostly does reviews, presentations and designs for Motr.

Link to the talk [Provided after the event]


Talk 3: Rucio: open-source scientific data management (45 Minutes)

Speaker Bio: Dr. Martin Barisits is a CERN staff member and currently the project leader for the scientific data management system Rucio. In this role he focuses on the evolution of the software architecture to meet the needs of ATLAS data taking during the HL-LHC era, as well as the needs of other scientific experiments using Rucio. An ATLAS member for more than 10 years, he has worked in ATLAS computing in different roles, focusing on leading the design and development of distributed systems. He is one of the architects of the distributed data management system Rucio.
Martin holds a PhD with distinction in computer science and an MSc in computational intelligence, both from the Vienna University of Technology.

Link to the talk [Provided after the event]


Talk 4: Object storage combined with time parallelization - a unique pathway to Exascale (30 Minutes)

Speaker Bio: Debasmita Samaddar is the Exascale Algorithm Specialist at the Culham Centre for Fusion Energy (CCFE), UK. Her responsibilities include leading the development of algorithms targeting exascale machines. Debasmita earned her BSc in Physics from the University of Calcutta, India, an MSc from the University of Delaware, US, and a PhD in Physics from the University of Alaska Fairbanks, US. She worked at the ITER Organization in Cadarache, France, as a Monaco Postdoctoral Fellow. Debasmita is enthusiastic about applications of novel algorithms and approaches that enable simulations on supercomputers to address many of the world's critical problems and important scientific questions. She strongly believes in a collaborative approach to solving problems and works closely with experts across a variety of disciplines in both academia and industry.

Link to the talk [Provided after the event]


Talk 5: Global Memory Abstraction solutions for Emerging storage systems (30 Minutes)

Speaker Bio: Philippe Couvée has been working in HPC R&D for more than 20 years. He leads a team of 17 researchers and engineers developing products that facilitate data access from large supercomputers. His recent focus is on data-centric solutions that combine caching and acceleration techniques with advanced instrumentation and data analytics. He also teaches computer architecture and system programming at CNAM.

Link to the talk [Provided after the event]

The talks will be followed by a panel discussion (30 Minutes). Timings can be adjusted based on the needs of the workshop.

Workshop organizers:

The workshop will be organised and co-ordinated by Dr. Sai Narasimhamurthy, who co-ordinates the Sage2 EC FETHPC (Future and Emerging Technologies) research project (Grant Agreement No. 800999). The workshop is organised as a dissemination activity supported by the Sage2 project.

Dr. Sai Narasimhamurthy is Engineering Director at Seagate Systems, working on research and development for next-generation storage systems and responsible for EU R&D for the Seagate Systems business. Sai also currently holds the position of vice-chair of industry for the ETP4HPC organisation and co-leads the storage and I/O working group developing ETP4HPC's Strategic Research Agenda (SRA). He has actively led and contributed to many European R&D consortia (SAGE, Sage2, Maestro, etc.) in the area of HPC, focused on I/O and storage. Previously (2005–2009), Sai was CTO and co-founder at 4Blox, Inc., in the area of Storage Area Networks. During his doctoral dissertation at Arizona State University (2001–2005), Sai worked on Storage Area Networking protocols, focusing on solutions for bulk data transfer over IP networks.

Please contact sai.narasimhamurthy@seagate.com for any further information on the workshop.