Keynote speakers

Confirmed Keynote Speakers:

Maarten van Steen

Bio:

Maarten van Steen is currently Scientific Director of the Digital Society Institute at the University of Twente, and as such is involved in the strategic management of the university’s digitalization research. In addition to many memberships in national and international committees, he currently chairs the ICT research Platform Netherlands (IPN). He has published extensively in the field of networked computer systems, with an emphasis on wireless systems as well as more traditional distributed systems. In recent years, his research has shifted toward security and privacy preservation, as well as incorporating data-analytics solutions into systems research. He is well known for the textbook “Distributed Systems”, co-authored with Andrew Tanenbaum and recently updated to its 3rd edition. His research is characterized by questions regarding the scale and simplicity of systems-oriented solutions.

Where blockchains fail (and why HPC is of no help)

Abstract:

Blockchains have become immensely popular and are high on national and international research and innovation agendas. This seems to be caused in part by the numerous interesting applications, combined with the promise of full decentralization and high scalability (among others). However, there are some fundamental problems with blockchains, notably when it comes to scalability. In this presentation I will focus on these problems and argue that we need to temper expectations concerning blockchains until some of their fundamental issues have been adequately addressed. As computer scientists, we have a special responsibility, as the hype around blockchains at points seems to be truly unfounded.
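A back-of-the-envelope sketch (illustrative only, not part of the talk) captures the core scalability issue: in a fully replicated blockchain, every node validates every transaction, so aggregate throughput is capped by a single node’s validation rate, and adding nodes does not raise it the way partitioning does in HPC workloads.

```python
# Illustrative model only (not from the talk): why full replication
# caps blockchain throughput regardless of the number of nodes.

def replicated_throughput(node_rates_tps):
    """Every node validates every transaction: the slowest node caps the system."""
    return min(node_rates_tps)

def partitioned_throughput(node_rates_tps):
    """Idealized HPC-style partitioning: work divides across nodes."""
    return sum(node_rates_tps)

nodes = [1000.0] * 100  # 100 nodes, each able to validate 1,000 tx/s

print(replicated_throughput(nodes))   # 1000.0 tx/s, however many nodes we add
print(partitioned_throughput(nodes))  # 100000.0 tx/s under ideal partitioning
```

Under this deliberately simplified model, throwing HPC hardware at a fully replicated chain buys per-node speed but no horizontal scalability.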

Henri Bal

Bio:

Henri Bal is Full Professor (Chair) in the Computer Systems department at the Vrije Universiteit Amsterdam (VU). Bal is a member of the Academia Europaea and the winner of the Euro-Par 2014 Achievement Award. He has coordinated the Distributed ASCI Supercomputer (DAS) project for the past 14 years; DAS has been used for over 100 PhD theses and numerous papers and awards. He is also an invited member of the Scientific Committee of the related French project Grid’5000. Prof. Bal is the current scientific director of the ASCI research school, which trains Ph.D. students in computer and imaging technology in the Netherlands and thus provides an excellent national counterpart to the ExtremeDC project.

Prof. Bal’s research focuses on programming environments for large-scale distributed systems. He studies the underlying fundamental problems, but always uses current technology and real-world applications to apply his solutions. The technology has changed over the years from cluster computers to grids, clouds, hybrid systems, mobile systems, and sensors. Applications from numerous domains have been addressed over the years, including search algorithms, model checking, multimedia, the semantic web, bioinformatics, astronomy, climate modelling, digital forensics, and e-health.

Bal is the author of 3 books (including Modern Compiler Design, 2012) and over 175 published articles. He has been the promotor of 27 PhD students, including Amazon.com CTO Werner Vogels. He is an Associate Editor of the leading journal of his field, IEEE TPDS, and program vice-chair for CCGrid 2018. His group produced well-known programming environments for datacenters, such as the Orca language (for cluster computers), MagPIe (for multi-clusters), Ibis (for hybrid systems), WebPIE (distributed reasoning), and SWAN (for smartphone-based sensors). The Ibis distributed programming software is now being deployed by the Netherlands eScience Center, which uses it for various application domains.

High Performance Computing for Distributed Sensing Applications

Abstract:

The field of distributed sensing is evolving rapidly. Numerous applications use smartphones and wearables for sensing human health, buildings, air quality, traffic, and safety. Internet of Things (IoT) infrastructures are becoming widely available, leading to billions of inexpensive sensor devices on the Internet. This development will come with many new challenges for Computer Science, as sensor applications typically deal with highly dynamic, widely distributed data and often need fast response times.
This talk will first sketch some relevant technological developments in sensors and IoT communication technology. The wide diversity of sensor and communication technology calls for transparent mechanisms to process sensor data easily and in a uniform way. The extreme distribution of sensors and the many hierarchy levels in modern distributed computing systems make the scheduling of computations highly complex. Also, higher-level analytics tools should be adapted to handle dynamic (streaming) sensor data. The talk will discuss these challenges and illustrate them with examples from our recent research on programming systems and applications.
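As a toy illustration of the transparency argument (the names below are hypothetical, not taken from SWAN, Ibis, or any other system mentioned here), a single uniform reading type lets one processing pipeline consume heterogeneous sensors without knowing device specifics:

```python
# Illustrative sketch (hypothetical names): a uniform Reading type lets
# one pipeline process heterogeneous sensor streams in the same way.
from dataclasses import dataclass
from typing import Iterator
import random, time

@dataclass
class Reading:
    sensor_id: str
    kind: str        # e.g. "accel" or "pm2.5"
    value: float
    timestamp: float

class PhoneAccelerometer:
    def readings(self) -> Iterator[Reading]:
        while True:
            yield Reading("phone-42", "accel", random.gauss(0.0, 1.0), time.time())

class AirQualityStation:
    def readings(self) -> Iterator[Reading]:
        while True:
            yield Reading("station-7", "pm2.5", random.uniform(5, 40), time.time())

def process(stream: Iterator[Reading], n: int) -> None:
    """The pipeline sees only Reading objects, never device specifics."""
    for _, r in zip(range(n), stream):
        print(f"{r.kind:6s} {r.value:8.3f} from {r.sensor_id}")

process(PhoneAccelerometer().readings(), 3)
process(AirQualityStation().readings(), 3)
```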

Lydia Chen

Bio:

Lydia Y. Chen is an Associate Professor in the Department of Computer Science at the Delft University of Technology. Prior to joining TU Delft, she was a research staff member at the IBM Zurich Research Lab from 2007 to 2018. She received her Ph.D. from the Pennsylvania State University and her B.A. from the National Taiwan University, in 2006 and 2002, respectively. Her research interests center around dependability management, resource allocation, and privacy enhancement for large-scale data processing systems and services. More specifically, her work focuses on developing stochastic and machine learning models, and applying these techniques to application domains such as datacenters and AI systems.

She has published more than 80 papers in journals, e.g., IEEE Transactions on Parallel and Distributed Systems and IEEE Transactions on Services Computing, and conference proceedings, e.g., INFOCOM, SIGMETRICS, DSN, and EuroSys. She was a co-recipient of the best paper awards at DSN’14, ICAC’16, and ICAC’17. She received a TU Delft technology fellowship in 2018. She was program co-chair for the Middleware 2017 Industry Track and IEEE ICAC 2019, and track vice-chair for ICDCS 2018. She has served on the editorial boards of IEEE Transactions on Services Computing and IEEE Transactions on Network and Service Management. She is a Senior Member of the IEEE.

Machine Learning for Resource Management

Abstract:

The practice of collecting big performance data has changed how infrastructure providers model and manage their systems over the past decade. There has been a methodological shift from domain-knowledge-based models, e.g., queueing and simulation, to data-driven models, e.g., machine learning. I will present this game change for resource management, from workload characterization and dependability prediction to sprinting policies, with examples from IBM datacenters. I will conclude the talk with future directions for performance models and challenging resource management problems in machine learning clusters.
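A toy sketch of this methodology shift (illustrative only, not from the talk; the M/M/1 assumption and all numbers are invented): a queueing model needs the service rate as domain knowledge, while a data-driven model infers the same latency curve from measurements alone.

```python
# Toy illustration (not from the talk) of domain-knowledge vs. data-driven
# latency models for a single server, assuming M/M/1-like behavior.
import numpy as np

mu = 100.0  # assumed service rate (requests/s), known to the queueing model

def mm1_latency(util):
    """M/M/1 mean response time: T = 1 / (mu * (1 - rho))."""
    return 1.0 / (mu * (1.0 - util))

# "Measured" data: latency samples at various utilizations, with noise.
rng = np.random.default_rng(0)
utils = rng.uniform(0.1, 0.9, size=200)
lat = mm1_latency(utils) * rng.normal(1.0, 0.05, size=200)

# Data-driven alternative: fit latency as a linear function of 1/(1 - rho),
# learning the service rate implicitly from observations.
X = 1.0 / (1.0 - utils)
slope = float(np.dot(X, lat) / np.dot(X, X))  # least squares through the origin

rho = 0.8
print(f"queueing model: {mm1_latency(rho) * 1e3:.2f} ms")
print(f"learned model:  {slope / (1.0 - rho) * 1e3:.2f} ms")
```

The data-driven fit needs no knowledge of mu, but it also inherits whatever biases the measurements contain, which is part of what makes the shift interesting.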

Cristina Abad

Bio:

Cristina L. Abad is a Professor in the Department of Electrical Engineering and Computer Science at Escuela Superior Politecnica del Litoral in Ecuador, where she leads the Distributed Systems Research Lab and co-directs the Big Data Research Group. She received her Ph.D. in 2014 from the University of Illinois at Urbana-Champaign. For three years during her Ph.D., she was a member of the Hadoop Core Team at Yahoo, where she worked on improving the performance of HDFS and had the opportunity to contribute to the Apache Hadoop codebase. Her research interests lie at the intersection of distributed systems and performance engineering. In particular, her contributions focus on designing and building distributed systems that can self-adapt to workload changes and maximize performance. She is particularly interested in improving the systems supporting Big Data applications and cloud computing architectures. Her international funding sources include VLIR-UOS, Google, Microsoft, Amazon Web Services, and AT&T Labs Research. She has received a Fulbright Fellowship, a UIUC CS Excellence Fellowship, and two Google Faculty Research Awards. She is a member of IEEE, ACM, SPEC RG, and USENIX.

Caching: past, present and future

Abstract:

Caching data close to its consumers is a time-proven technique that reduces latency, increases throughput, and improves overall performance. Even though caching has been used in computers for more than 50 years, there are still open challenges surrounding these techniques. In this talk, I will summarize some key milestones in caching research, describe some of the most exciting recent advances, and discuss current open challenges.
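As a concrete baseline for the techniques under discussion, here is a minimal LRU cache, a textbook eviction policy that long predates the recent advances the talk covers (an illustrative sketch, not tied to the speaker’s own systems):

```python
# Minimal LRU cache: a classic eviction policy, included only to ground
# the discussion; modern caching research goes well beyond this.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")          # "a" is now the most recently used entry
cache.put("c", 3)       # capacity exceeded: evicts "b"
print(cache.get("b"))   # None
print(cache.get("a"))   # 1
```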

Matei Ripeanu

Bio:

Matei Ripeanu is a Professor at the University of British Columbia, where he works together with a fantastic group of students.

Artemis: Proactive Defences Against Large-scale Automated Cyber Intrusions

Abstract:

Attacks on large socio-technical systems (e.g., e-mail, online social networks, complex ecosystems of online services like those offered by Google) are increasing in frequency, scale, and complexity. One of the key vectors in such attacks is automated social engineering, which relies on unsafe decisions by individual users, e.g., following a phishing link, opening a malicious attachment, or accepting a friendship request from a social bot. As a case in point, one such attack, phishing, is currently the fastest-growing online crime and causes over $1B in financial losses yearly.

The orthodox paradigm for defending against automated social-engineering attacks is reactive and victim-agnostic: defences generally focus on identifying the attacks or the attackers in order to block them.

Our project rests on two hypotheses. First, we postulate that it is possible to identify, even if imperfectly, the vulnerable user population, that is, the users who are likely to fall victim to such attacks. Second, we postulate that, once identified, information about the vulnerable population can be used in multiple ways to improve system resilience: for example, (i) to establish more comprehensive system-wide defences; (ii) to nudge users towards making better decisions; and (iii) to achieve faster and more accurate detection of compromised assets, leading to more effective remediation of large-scale attacks. This talk will present our progress in testing these two hypotheses, as well as the elements of a high-performance distributed infrastructure we have built to this end.
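A rough sketch of how the first hypothesis could be operationalized (purely hypothetical; the features, weights, and threshold below are invented for illustration and are not Artemis internals): score users by behavioural signals and direct additional safeguards at the highest-risk fraction.

```python
# Purely illustrative sketch of hypothesis 1: rank users by an invented
# vulnerability score and target defences at the riskiest fraction.
# All features and weights are hypothetical, not from the Artemis project.

def vulnerability_score(user: dict) -> float:
    return (0.5 * user["past_phish_clicks"]
            + 0.3 * user["accepts_unknown_requests"]
            + 0.2 * (1.0 - user["security_training_score"]))

users = [
    {"id": "u1", "past_phish_clicks": 2, "accepts_unknown_requests": 1, "security_training_score": 0.2},
    {"id": "u2", "past_phish_clicks": 0, "accepts_unknown_requests": 0, "security_training_score": 0.9},
    {"id": "u3", "past_phish_clicks": 1, "accepts_unknown_requests": 1, "security_training_score": 0.5},
]

ranked = sorted(users, key=vulnerability_score, reverse=True)
at_risk = ranked[: max(1, len(ranked) // 3)]  # e.g., nudge the top third
for u in at_risk:
    print(f"extra safeguards for {u['id']} (score {vulnerability_score(u):.2f})")
```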