Friday 12 June

Chair

  • Sem Borst – Eindhoven University of Technology
  • Jan-Pieter Dorsman – University of Amsterdam

Societal infrastructure networks continue to grow in size and complexity, and tend to be subject to significant uncertainty and stochastic variability.  This raises an urgent need for algorithms that provide excellent performance in large and complex networks and achieve high levels of efficiency in the presence of randomness.  Given the huge size of these networks, it is particularly critical that these algorithms involve only a limited amount of computational effort and information exchange, so that they can be sustained in large-scale operations.  This track features several talks on methods for analyzing and designing such scalable algorithms in a variety of settings.

Speakers

R. Srikant, University of Illinois at Urbana-Champaign

The Drift Method

Abstract: A useful technique to obtain bounds on the performance of algorithms for stochastic systems is to set the steady-state drift of an appropriately chosen Lyapunov function equal to zero. Over the last few years, several new mathematical tools have been introduced to make the technique broadly applicable to many regimes of interest, such as the heavy-traffic regime, the mean-field regime, and the small parameter regime. We will present an overview of these new tools, with an emphasis on applications to data centers, cloud computing systems, and reinforcement learning. Based on joint work with Atilla Eryilmaz and Siva Theja Maguluri on switch scheduling, joint work with Lei Ying on TD learning, and the work of Lei Ying on load balancing.
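As a minimal illustration of the zero-drift technique (an illustrative textbook-style example, not taken from the talk): for a discrete-time single-server queue with arrivals $a_t$ (mean $\lambda$) and service offers $s_t$ (mean $\mu > \lambda$), both independent of the queue length $q_t$, one can take the Lyapunov function $V(q) = q^2$ and set its steady-state drift to zero.

```latex
% Queue dynamics, with unused service u_t satisfying u_t q_{t+1} = 0:
q_{t+1} = (q_t + a_t - s_t)^+ = q_t + a_t - s_t + u_t .
% Since u_t q_{t+1} = 0 implies u_t (q_t + a_t - s_t) = -u_t^2,
q_{t+1}^2 = (q_t + a_t - s_t)^2 - u_t^2 .
% Setting the steady-state drift \mathbb{E}[V(q_{t+1}) - V(q_t)] to zero:
0 = 2\,\mathbb{E}[q]\,(\lambda - \mu)
    + \mathbb{E}\!\left[(a_t - s_t)^2\right]
    - \mathbb{E}\!\left[u_t^2\right],
% and dropping the nonnegative term \mathbb{E}[u_t^2] yields the bound
\mathbb{E}[q] \le \frac{\mathbb{E}\!\left[(a - s)^2\right]}{2(\mu - \lambda)} .
```

The bound becomes tight in the heavy-traffic limit $\lambda \uparrow \mu$, which is one of the regimes mentioned in the abstract.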

Richard Combes, CentraleSupélec

On efficient estimation of Markov chain mixing time

Abstract: In this talk we consider the important problem of estimating the mixing properties of a Markov chain with a large state space from sample paths. This problem is central to Markov Chain Monte Carlo (MCMC), a fundamental tool in many branches of science, including statistical physics, computer science, performance evaluation, and Bayesian statistics, to name but a few. We propose the UCPI (Upper Confidence Power Iteration) algorithm for this problem: a low-complexity algorithm that estimates the spectral gap in a computationally efficient manner. This is in stark contrast with most known methods, which cannot be applied to large state spaces.  We also discuss how to adapt our approach to parallel implementation.
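To make the spectral-gap idea concrete, here is a minimal power-iteration sketch (illustrative only: UCPI additionally estimates the transition matrix from a sample path and wraps the iteration in upper confidence bounds, none of which is reproduced here):

```python
import numpy as np

def spectral_gap(P, iters=200):
    """Estimate the spectral gap 1 - |lambda_2| of a transition matrix P
    by power iteration. Row vectors whose entries sum to zero form a
    subspace invariant under v -> v @ P (because rows of P sum to one),
    and on that subspace the iteration converges to the second-largest
    eigenvalue modulus |lambda_2|."""
    n = P.shape[0]
    v = np.zeros(n)
    v[0], v[1] = 1.0, -1.0          # any nonzero vector with zero entry sum
    v /= np.linalg.norm(v)
    lam2 = 0.0
    for _ in range(iters):
        w = v @ P                   # stays in the zero-sum subspace
        lam2 = np.linalg.norm(w)    # ratio ||w|| / ||v|| with ||v|| = 1
        v = w / lam2
    return 1.0 - lam2

# Two-state chain with eigenvalues 1 and 0.7, so the true gap is 0.3.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
gap = spectral_gap(P)
```

For a reversible chain, the mixing time scales with the relaxation time 1/gap, up to a logarithmic factor in the minimum stationary probability.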

Céline Comte, Eindhoven University of Technology

Adaptive Load Balancing under Assignment Constraints

Abstract: The design and analysis of load-balancing algorithms has been a popular research topic over the past three decades. The fundamental challenge consists of making the best assignment decision upon the arrival of each job, despite uncertainty about the characteristics of ongoing and future jobs. However, the growing use of cloud-computing platforms for data-intensive applications has added further constraints on load balancing. Not only does the large scale of the system call for decentralized algorithms that perform well in the presence of many dispatchers, but data locality also restricts the set of servers to which each job can be assigned.
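As an illustrative toy (the names and the greedy rule are assumptions, not the talk's algorithm), assignment constraints can be modeled as a compatibility relation between job types and servers:

```python
def assign(job_type, queues, compat):
    """Pick a server for an incoming job under assignment constraints
    (sketch only): data locality means each job type can run only on
    its compatible servers; here we greedily join the shortest
    compatible queue."""
    candidates = compat[job_type]           # servers allowed for this type
    return min(candidates, key=lambda s: queues[s])

# Job type 'a' may only use servers 0 and 2; server 2 is the least loaded.
queues = [5, 0, 1]
compat = {'a': [0, 2], 'b': [1]}
server = assign('a', queues, compat)
```

Note that server 1 is idle but incompatible with type 'a', which is exactly the tension between load balancing and locality that the abstract describes.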

Kristen Gardner, Amherst College

Scalable Load Balancing in the Presence of Heterogeneous Servers

Abstract: Heterogeneity is becoming increasingly ubiquitous in modern large-scale computer systems. Developing good load balancing policies for systems whose resources have varying speeds and capabilities is crucial in achieving low response times for jobs using these systems. Indeed, how best to dispatch jobs to servers is a classical and well-studied problem in the queueing literature.

Yet much of the previous work on large-scale systems assumes that the servers are homogeneous; unfortunately, policies that perform well in the homogeneous setting can cause unacceptably poor performance—or even instability—in heterogeneous systems. In this work, we propose new, heterogeneity-aware, power-of-d-choices versions of the Join-the-Idle-Queue and Join-the-Shortest-Queue policies. Unlike their heterogeneity-unaware counterparts, our policies use server speed information both when choosing which servers to query and when deciding where (among the queried servers) to dispatch jobs. We derive response time analyses that are exact as the number of servers approaches infinity, under standard assumptions. Our results demonstrate that our policies achieve excellent performance, often outperforming well-known dispatching rules including the heterogeneity-aware Shortest Expected Delay policy.
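The flavor of such a policy can be sketched as follows (a toy version under assumed details, not the exact policies analyzed in the talk): server speeds bias both which servers are queried and how ties among the queried servers are broken.

```python
import random

def dispatch(queues, speeds, d, rng=random):
    """Speed-aware power-of-d dispatch (illustrative sketch): query d
    distinct servers sampled with probability proportional to speed,
    then join the shortest queue among those queried, breaking ties in
    favor of the faster server."""
    n = len(queues)
    queried = set()
    while len(queried) < min(d, n):
        i = rng.choices(range(n), weights=speeds, k=1)[0]
        queried.add(i)
    return min(queried, key=lambda i: (queues[i], -speeds[i]))

# With d = n every server gets queried, so the choice is deterministic:
# server 1 is idle, so the job goes there.
server = dispatch([4, 0, 3], [1.0, 2.0, 1.0], d=3)
```

With d smaller than the number of servers, the speed-proportional sampling makes fast servers more likely to be queried, which is the heterogeneity-aware querying step the abstract refers to.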

Stefan Schmid, University of Vienna

From Structure to Optimization: Toward Demand-Aware Optical Networks

Bio: Stefan Schmid is a Full Professor at the University of Vienna, Austria. He received his MSc and PhD from ETH Zurich and has held positions as a Postdoc at TU Munich and the University of Paderborn, as a Senior Research Scientist at T-Labs in Berlin, and as an Associate Professor at Aalborg University, Denmark. He received the IEEE Communications Society ITC Early Career Award 2016.