Industrial talks

Joost Kouwenhoven, Simtech

Stateful Applications in Containers: Why and How

Software-defined storage (SDS) is rapidly increasing in popularity. The market surpassed $2 billion in 2016 and is estimated to reach nearly $20 billion by 2024. At the same time, Gartner predicts that by 2022 more than 75% of global organizations will be running containerized applications in production. Among companies that have adopted containers, the top issues encountered include a lack of sufficient data management services and a lack of persistent storage for key applications. Leveraging the growing popularity of containers, several vendors have started offering container-native storage as a solution to these issues.

While traditional storage can support monolithic applications in containers through plug-ins, large-scale projects built on microservices architectures require better integration with container orchestration systems. Container-native storage solutions are designed for the level of portability, availability, and performance required by such projects. Unlike more generic solutions, container-native storage enables creating fine-grained data services. To reap the full benefits of the cloud and build true cloud-native applications, it is essential to understand container-native storage and the ecosystem around it.

In this presentation, we will discuss the benefits of moving stateful applications into containers and why cloud-native, software-defined storage is essential in this journey. To help companies make informed decisions, we select three of the most popular container-native storage solutions: StorageOS, GlusterFS, and Portworx. We evaluate their capabilities and, based on the results, provide insights and recommendations.

Valeriu Codreanu, SURFsara

The Convergence of HPC and AI on Intel® Based Supercomputers

Motivated by the rapid development of Deep Learning (DL) algorithms and frameworks, we have started to witness the convergence of High Performance Computing (HPC) with Machine Learning (ML). This opens the possibility of addressing complex, data-intensive, real-world problems that were previously considered unsolvable.

In this talk we will present several use-cases, ranging from synthetic to science-driven, for image classification and generation tasks using both 2-D and 3-D data. The focus will be on scale-out behavior and best practices, while also detailing the bottlenecks encountered in the various use-cases. Working jointly with Intel through the Intel Parallel Computing Centers (IPCC), we will present SURFsara’s collaborations with CERN, GENCI, DellEMC, CINES, the Max Planck Institute, and RUMC-NKI, demonstrating how large-memory HPC systems help real-world AI applications in both the training and inference phases. We will also show how the HPC-ML convergence aims to significantly reduce the compute requirements and runtime of HPC simulation applications.

We will focus on multiple real use-cases, all performed on large CPU-based supercomputers:

(1) Replacing or displacing the compute-intensive Monte-Carlo simulations of High Energy Physics particle showers with 3D-GANs, together with CERN.
(2) Detecting thoracic pathologies from chest X-ray images with very high accuracy, scaled to 256 nodes.
(3) Classifying a very large, TByte-sized dataset covering 300K classes of worldwide plants on 1,000+ Xeon nodes.
(4) Large-image inference, training, and generation for medical datasets, with the Max Planck Institute and RUMC-NKI (Netherlands Cancer Institute).
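
Several of these use-cases rely on data-parallel scale-out of training across many nodes. As a purely illustrative sketch of that pattern (the abstract does not specify the framework or code used), the following hypothetical example distributes Keras training with Horovod; every worker trains on its own data and gradients are averaged across workers:

import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

# Illustrative only: a generic data-parallel training sketch, not the setup used in the talk.
hvd.init()  # one process per node (or per socket), launched with mpirun/srun/horovodrun

# Synthetic stand-in for an image-classification dataset.
x = np.random.rand(512, 64, 64, 3).astype(np.float32)
y = np.random.randint(0, 10, size=(512,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Common scale-out heuristic: scale the learning rate with the number of workers.
opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size())
# Wrap the optimizer so gradients are averaged across workers via allreduce.
opt = hvd.DistributedOptimizer(opt)

model.compile(loss="sparse_categorical_crossentropy", optimizer=opt, metrics=["accuracy"])

callbacks = [
    # Make sure all workers start from identical initial weights.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

# Only rank 0 prints progress, to avoid duplicated log output.
model.fit(x, y, batch_size=64, epochs=2,
          callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)

Such a script would typically be launched with something like horovodrun -np <workers> python train.py, with each process reading its own shard of the data.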

Ben van Werkhoven, Netherlands eScience Center

Testing and Auto-Tuning GPU code with Kernel Tuner

A very common problem in GPU programming is that some combination of thread block dimensions and other code optimization parameters, such as tiling or unrolling factors, results in dramatically better performance than other kernel configurations. Obtaining highly efficient kernels therefore often requires searching vast and discontinuous spaces consisting of all possible combinations of values for all tunable parameters. This talk presents Kernel Tuner, an easy-to-use tool for testing and auto-tuning GPU code with support for many search optimization algorithms that accelerate the tuning process. Kernel Tuner introduces the application of many new solvers and global optimization algorithms to auto-tuning GPU applications. We illustrate how Kernel Tuner can be used in a wide range of application scenarios, such as auto-tuning pipelines of GPU kernels.
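
To give a flavour of the workflow, here is a minimal sketch (not part of the talk abstract) of tuning a toy vector-add kernel with Kernel Tuner's tune_kernel; the kernel, problem size, and block-size values are chosen purely for illustration:

import numpy as np
from kernel_tuner import tune_kernel

# Toy CUDA kernel; block_size_x is a tunable parameter that Kernel Tuner
# inserts as a compile-time constant and uses as the thread block dimension.
kernel_string = """
__global__ void vector_add(float *c, const float *a, const float *b, int n) {
    int i = blockIdx.x * block_size_x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}
"""

size = 10000000
a = np.random.randn(size).astype(np.float32)
b = np.random.randn(size).astype(np.float32)
c = np.zeros_like(a)
n = np.int32(size)
args = [c, a, b, n]

# The search space: every listed value of every tunable parameter can be tried,
# or explored with one of the supported search optimization strategies.
tune_params = {"block_size_x": [32, 64, 128, 256, 512, 1024]}

# Compiles, runs, and benchmarks each configuration, reporting the best-performing one.
results, env = tune_kernel("vector_add", kernel_string, size, args, tune_params)

Realistic tuning runs typically add further parameters (tiling factors, loop unrolling, work per thread) and, for large search spaces, an optimization strategy rather than exhaustive search.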