Programme
All times are British Summer Time (BST).
All talks will take place on Zoom; links have been sent via email to all registered participants.
Click on a talk for abstracts or other information.
Monday, May 31
11:00 — 12:00  Contributed talks session 1 
12:00 — 12:30  Contributed talks Q&A (Gather) 
12:30 — 13:30  Welcome Meet & Greet (Gather) 
13:30 — 14:30 
Rachel Cummings — Differential Privacy in Practice: Policy Decisions and User Expectations
Differential privacy (DP) is a parameterized notion of database privacy that gives a mathematically rigorous worst-case bound on the maximum amount of information that can be learned about an individual's data from the output of a computation. The privacy community has developed a diverse toolkit of DP algorithms, which are now being used in practice by major tech companies and government agencies. The widespread deployment of these algorithms brings about new policy challenges, such as how to choose the value of the privacy parameters, and how to explain the mathematically nuanced privacy guarantees to non-experts. This talk will first give an introduction to differential privacy, will then survey recent behavioral studies measuring the response of users to varying differential privacy parameters and the efficacy of non-technical DP descriptions, and will conclude with a discussion of open problems and future challenges in bringing the theory of differential privacy into practice. 
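As a minimal illustration of the DP setting described in this abstract (my own sketch, not material from the talk; the function name and parameters are invented for the example), the classic Laplace mechanism releases a numeric query with noise scaled to sensitivity/ε:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value plus Laplace(sensitivity/epsilon) noise.

    For a query whose output changes by at most `sensitivity` when one
    individual's record changes, this gives epsilon-differential privacy;
    smaller epsilon means stronger privacy and larger noise.
    """
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on (-1/2, 1/2)
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

The privacy-parameter question raised in the talk shows up directly here: ε is the single knob trading accuracy for privacy.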
14:30 — 14:45  Break 
14:45 — 15:30 
Steve Hanneke — Proper Learning, Helly Number, and an Optimal SVM Bound
The classical PAC sample complexity bounds are stated for any Empirical Risk Minimizer (ERM) and contain an extra logarithmic factor log(1/ϵ) which is known to be necessary for ERM in general. It was recently shown by Hanneke (2016) that the optimal sample complexity of PAC learning for any VC class C does not include this log factor and is achieved by a particular improper learning algorithm, which outputs a specific majority-vote of hypotheses in C. This leaves the question of when this bound can be achieved by proper learning algorithms, which are restricted to always output a hypothesis from C. In this work we aim to characterize the classes for which the optimal sample complexity can be achieved by a proper learning algorithm. We identify that these classes can be characterized by the dual Helly number, a combinatorial parameter that arises in discrete geometry and abstract convexity. In particular, under general conditions on C, we show that the dual Helly number is bounded if and only if there is a proper learner that obtains the optimal dependence on ϵ. As a further implication of our techniques, we resolve a long-standing open problem posed by Vapnik and Chervonenkis (1974) on the performance of the Support Vector Machine by proving that the sample complexity of SVM in the realizable case is Θ((n/ϵ) + (1/ϵ)log(1/δ)), where n is the dimension. This gives the first optimal PAC bound for halfspaces achieved by a proper learning algorithm, which moreover is computationally efficient. Joint work with Olivier Bousquet, Shay Moran, and Nikita Zhivotovskiy, which appeared at COLT 2020. 
Nathan Klein — A (Slightly) Improved Approximation Algorithm for Metric TSP
I will describe recent joint work with Anna R. Karlin and Shayan Oveis Gharan in which we show a randomized (3/2 − ε)-approximation algorithm for metric TSP, for some ε > 10^{−36}. This slightly improves over the classical 3/2-approximation algorithm due to Christofides [1976] and Serdyukov [1978]. I will briefly discuss the major ingredients of our analysis, which relies on foundational work in the geometry of polynomials and the structure of near-minimum cuts. 

Elaine Shi — Optimal Oblivious RAM and Sorting Circuits for Short Keys
Oblivious RAM (ORAM), first proposed by Goldreich and Ostrovsky in 1987, is an algorithmic technique for provably hiding a program's access patterns. To date, ORAM has been applied in multiple fields, including cryptography, secure processors, secure cloud outsourcing, secure databases, and blockchains. In their original work, Goldreich and Ostrovsky proved a log n lower bound on the overhead of any ORAM scheme, where n is the space of the program. A long-standing open problem is: can we construct an optimal ORAM scheme with logarithmic overhead, matching the well-known lower bound? We resolve this long-standing open question. Assuming the existence of one-way functions, we construct an optimal ORAM scheme called OptORAMa which achieves O(log n) overhead. To achieve this result, we unravel an intimate connection between optimal ORAM and sorting circuits for short keys. Sorting circuit complexity is another long-standing question in the theoretical computer science literature. In a landmark result in 1983, Ajtai, Komlós, and Szemerédi constructed a sorting circuit with O(n log n) comparator gates. While this is optimal in the comparison-based model (even for 1-bit keys), it remains unclear whether we can construct o(n log n) sorting circuits if we forego the comparison-based restriction: no such construction is known, and proving any superlinear unconditional lower bound for sorting circuits is beyond the reach of current techniques. Inspired by the techniques behind OptORAMa, we show that using non-comparison-based techniques, one can construct a circuit of size O(nk) that correctly sorts k-bit keys. When k = o(log n), our result overcomes the n log n barrier for sorting circuits. We prove that, assuming the Li–Li network coding conjecture, our circuit size is optimal for any choice of k. 
To the best of our knowledge, we are the first to use non-comparison-based techniques to obtain a nontrivial sorting circuit result. Our result also leads to other interesting byproducts; for example, the median can be computed by a linear-sized circuit (cf. the textbook linear-time median algorithm of Blum et al., which works only in the RAM model and cannot easily be converted to a circuit). Joint work with Gilad Asharov, Ilan Komargodski, Wei-Kai Lin, Kartik Nayak, and Enoch Peserico 
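To make the comparison-based model mentioned in this abstract concrete (a textbook baseline of my own, not the construction from the talk): an odd-even transposition network sorts n keys with Θ(n²) fixed compare-exchange gates, and — like any circuit — its gate sequence never depends on the data:

```python
def odd_even_transposition_sort(keys):
    """Sort via a fixed comparator network.

    Runs n rounds of compare-exchange gates on fixed wire pairs,
    Theta(n^2) gates in total. The network is data-oblivious: which
    pairs get compared never depends on the key values.
    """
    a = list(keys)
    n = len(a)
    for rnd in range(n):
        start = rnd % 2  # alternate between even and odd adjacent pairs
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:  # one 2-input comparator gate
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

The AKS result cited above achieves O(n log n) gates in this same gate model; the talk's point is that dropping the comparator restriction can beat even that for short keys.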

Santosh Vempala — Solving Sparse Linear Systems Faster than Matrix Multiplication
Can linear systems be solved faster than matrix multiplication? A celebrated line of work shows that the important special case of Laplacian linear systems can be solved in nearly linear time. In the general setting, however, the bit complexity of solving an n-by-n linear system Ax = b has been n^{ω}, where ω < 2.372864 is the matrix multiplication exponent. Even for sparse linear systems with poly(n) condition number, which arise throughout scientific computing, the famous Conjugate Gradient/Lanczos methods require Ω(n) bits for intermediate numbers; improving the resulting n^{2}·nnz(A) complexity has been an open problem. We present an algorithm that solves linear systems with sparse A asymptotically faster than matrix multiplication for any ω > 2. This speedup holds for any input matrix A with o(n^{ω−1}/log(κ(A))) nonzeros, where κ(A) is the condition number of A. For poly(n)-conditioned matrices with O(n) nonzeros, and the current value of ω, the bit complexity of our algorithm to solve to within any 1/poly(n) error is O(n^{2.331645}). Our algorithm can be viewed as an efficient randomized implementation of the block Krylov method via recursive low-displacement-rank factorizations. It is inspired by the algebraic algorithm of [Eberly–Giesbrecht–Giorgi–Storjohann–Villard ISSAC 06, 07] for inverting matrices over finite fields. In our analysis of numerical stability, we develop matrix anti-concentration techniques to lower bound the smallest singular value and the smallest gap in singular values of semi-random matrices. Joint work with Richard Peng 
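For context on the Conjugate Gradient baseline this abstract improves upon, here is the textbook CG iteration in exact arithmetic (my own sketch, not the new algorithm; dense row lists stand in for a sparse format):

```python
def conjugate_gradient(A, b, iters=50, tol=1e-12):
    """Solve A x = b for symmetric positive definite A (list of rows)
    with the textbook Conjugate Gradient iteration. Each step costs
    one matrix-vector product, i.e. O(nnz(A)) arithmetic operations."""
    n = len(b)

    def matvec(v):
        return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

    x = [0.0] * n
    r = b[:]          # residual b - A x (x starts at 0)
    p = r[:]          # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

The bit-complexity issue raised in the abstract is invisible here: in floating point each intermediate number is cheap, but a rigorous analysis needs Ω(n) bits per number, which is what the new algorithm circumvents.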

15:30 — 15:50  Q&A session (Zoom breakout rooms) 
16:00 — 17:00  Women in Algorithms Event (Organizer: Monika Henzinger) 
Tuesday, June 1
11:00 — 12:00  Contributed talks session 2 
12:00 — 12:30  Contributed talks Q&A (Gather) 
12:30 — 13:30  Break 
13:30 — 14:30 
Venkatesan Guruswami — Recent Progress on Binary Deletion-Correcting Codes
In the worst-case (bit-)deletion noise model, a subset of up to t arbitrarily chosen bits is deleted from a sequence of n codeword bits. Crucially, the locations of the deleted bits are not known to the receiver, who receives a subsequence of the transmitted bitstring. The goal is to design codes of low redundancy that allow recovery of the deleted bits and the original codeword. The study of deletion-correcting codes is quite old, dating back to optimal single-deletion codes in the 60s and positive-rate codes to correct t = Ω(n) deletions in the 90s. However, many basic questions remained open, and our understanding of deletion-correcting codes significantly lagged the vast literature concerning error-correcting codes for bit flips. After a long hiatus, there has been much progress on deletion-correcting codes in the last 6-7 years, covering the regimes where the number of deletions t is a small constant, a small constant fraction of n, or a large proportion of n, as well as the list-decoding model. The talk will survey some of this progress. The focus will be on binary codes for bit deletions, but time permitting we will also mention results for larger alphabets as well as the closely related "insdel" model where symbols may be both inserted and deleted. 
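The optimal single-deletion codes from the 60s mentioned in this abstract are the Varshamov-Tenengolts (VT) codes. A brute-force sketch (my own illustration) checks their defining property: deletion balls of distinct codewords never intersect, so any single deletion can be corrected:

```python
from itertools import product

def vt_codewords(n, a=0):
    """Varshamov-Tenengolts code VT_a(n): binary words x_1..x_n with
    sum(i * x_i) congruent to a (mod n+1). Corrects a single deletion
    with only ~log2(n) bits of redundancy."""
    return [w for w in product((0, 1), repeat=n)
            if sum(i * x for i, x in enumerate(w, start=1)) % (n + 1) == a]

def deletion_ball(w):
    """All words obtainable from w by deleting exactly one symbol."""
    return {w[:i] + w[i + 1:] for i in range(len(w))}
```

Disjoint deletion balls are exactly the combinatorial statement that a received (n-1)-bit subsequence determines the transmitted codeword uniquely.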
14:30 — 14:45  Break 
14:45 — 15:30 
Mohsen Ghaffari — Distributed Derandomization and Network Decomposition
This talk provides a brief overview of a recent line of work on an efficient distributed derandomization method [Rozhoň and Ghaffari STOC 2020; Ghaffari, Harris, and Kuhn FOCS 2018; Ghaffari, Kuhn, and Maus STOC 2017]. Concretely, the derandomization method shows that any (locally checkable) graph problem that admits a polylogarithmic-time randomized distributed algorithm also admits a polylogarithmic-time deterministic distributed algorithm. This result resolved several central and decades-old open problems in distributed graph algorithms. Joint work with David G. Harris, Fabian Kuhn, Yannic Maus, and Vaclav Rozhon 
Euiwoong Lee — The Karger-Stein Algorithm is Optimal for k-cut
In the k-cut problem, we are given an edge-weighted graph and want to find the least-weight set of edges whose deletion breaks the graph into k connected components. It is easy to see that the elegant randomized contraction algorithm of Karger and Stein for global min-cut (k = 2) can be naturally extended to general k, with a running time of O(n^{2k−2}). Using tree packings, Thorup gave a deterministic algorithm with the same running time. In this work, we show that for any fixed k ≥ 2, the Karger-Stein algorithm outputs any fixed minimum k-cut with probability at least Ω(n^{−k}). This also gives an extremal bound of O(n^{k}) on the number of minimum k-cuts in an n-vertex graph and an algorithm to compute a minimum k-cut in similar running time. Both are essentially tight. The first main ingredient in our result is a fine-grained analysis of how the graph shrinks, and how the average degree evolves, under the Karger-Stein process. The second ingredient is an extremal result bounding the number of cuts of size at most (2 − δ)OPT/k, using the Sunflower lemma. Joint work with Anupam Gupta and Jason Li 
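A minimal sketch of the random contraction step underlying the analysis in this abstract (plain Karger for the unweighted k = 2 case, my own toy code, not the k-cut variant from the talk):

```python
import random

def karger_min_cut(edges, n_vertices, trials=200, rng=random):
    """Estimate the global min cut of an unweighted graph by repeated
    random edge contraction. A single contraction run finds a fixed
    min cut with probability Omega(1/n^2), hence the repetition."""
    best = float("inf")
    for _ in range(trials):
        comp = list(range(n_vertices))   # component label of each vertex
        alive = list(edges)              # edges between distinct components
        components = n_vertices
        while components > 2:
            u, v = rng.choice(alive)     # pick a random surviving edge
            cu, cv = comp[u], comp[v]
            comp = [cu if c == cv else c for c in comp]  # contract v into u
            components -= 1
            # drop edges that became self-loops after the contraction
            alive = [e for e in alive if comp[e[0]] != comp[e[1]]]
        best = min(best, len(alive))     # surviving edges cross the 2-way cut
    return best
```

The Karger-Stein refinement re-branches the contraction instead of restarting from scratch; the talk's result pins down exactly how well this process can do for general k.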

Shachar Lovett — Recent Progress on the Sunflower Conjecture, and Connections to CS
The sunflower conjecture is an open problem in combinatorics, originally posed by Erdős and Rado in 1960, with little progress until recently. I will describe recent work which gets closer to proving the conjecture. The techniques involved build upon surprising connections to computer science, in particular to the study of DNFs in computational complexity. I will describe these connections, as well as potential new applications of these techniques in CS. Joint work with Ryan Alweiss, Kewen Wu, and Jiapeng Zhang 

John Wright — MIP* = RE
Quantum complexity theory gives us a powerful lens through which to study basic resources and properties that arise in quantum computing. One of the most beguiling of these is the mysterious phenomenon of quantum entanglement, in which two far-apart quantum systems can exhibit correlations that no classical system can reproduce. To study this, researchers in 2004 introduced the complexity class MIP*, which connected entanglement to a classic notion in complexity theory known as multi-prover interactive proofs. Since then, determining the power of MIP* has remained a major open problem in the field of quantum complexity theory. In this talk, I will describe recent work giving a solution to this problem: MIP* = RE, the complexity class containing the halting problem and those languages which reduce to it. This shows that entanglement is a resource of almost unimaginable power, as it can be used to solve problems which are undecidable. The proof involves new techniques that allow a classical verifier to use entanglement to delegate increasingly complex computations to two quantum provers. I will also describe the deep and surprising connections that MIP* has to two other major open problems, Tsirelson's problem from entanglement theory and Connes' embedding problem from operator algebras, and show how this result leads to a resolution of both problems in the negative. Joint work with Zhengfeng Ji, Anand Natarajan, Thomas Vidick, and Henry Yuen. 

15:30 — 15:50  Q&A session (Zoom breakout rooms) 
16:00 — 17:00  Junior/senior meetup (Gather) 
Wednesday, June 2
11:00 — 12:00  Contributed talks session 3 
12:00 — 12:30  Contributed talks Q&A (Gather) 
12:30 — 13:30  Break 
13:30 — 14:15 
Shalev Ben-David — A New Minimax Theorem for Randomized Algorithms
We introduce a new type of minimax theorem for randomized algorithms, one that gives a hard distribution well-suited for the analysis of small-bias randomized algorithms. A version of our minimax theorem applies in multiple randomized models of computation: query complexity, communication complexity, randomized circuit models, approximate polynomial degree, approximate rank, and various quantum models. I will briefly discuss these results, as well as the role that proper scoring rules (from statistics) play in our work. Joint work with Eric Blais. 
Omri Ben-Eliezer — A Framework for Adversarially Robust Streaming Algorithms
We investigate the adversarial robustness of streaming algorithms; an algorithm is considered robust if its performance guarantees hold even when the stream is chosen by an adaptive adversary, whose future stream updates may depend on previous outputs of the algorithm. Surprisingly little was known until recently about the adversarial robustness of streaming algorithms. Deterministic algorithms are always robust, but in many cases they are inefficient. Randomized algorithms tend to be very efficient, but are they robust? For many of the most widely used algorithms, the answer is either negative or unknown. How, then, can one devise principled methods to obtain streaming algorithms that are both robust and efficient? We address this question by developing a simple and efficient framework that transforms existing algorithms for many important streaming problems, such as norm estimation, distinct elements, heavy hitters, and entropy estimation, into robust ones, while incurring only a small overhead in space complexity. Since this work was first published in early 2020, a flurry of subsequent work on adversarially robust streaming has been carried out; in the talk I will briefly survey what is currently known about the area, as well as some intriguing open questions. Joint work with Rajesh Jayaram, David Woodruff and Eylon Yogev. 

Siyao Guo — Data Structures Meet Cryptography: 3SUM with Preprocessing
This work shows several connections between data structure problems and cryptography against preprocessing attacks. Our results span data structure upper bounds, cryptographic applications, and data structure lower bounds, as summarized next. First, we apply Fiat-Naor inversion, a technique with cryptographic origins, to obtain a data structure upper bound. In particular, our technique yields a suite of algorithms with space S and (online) time T for a preprocessing version of the N-input 3SUM problem where S^{3}·T = O(N^{6}). This disproves a strong conjecture (Goldstein et al., WADS 2017) that there is no data structure solving this problem with S = N^{2−δ} and T = N^{1−δ} for any constant δ > 0. Secondly, we show an equivalence between lower bounds for a broad class of (static) data structure problems and one-way functions in the random oracle model that resist a very strong form of preprocessing attack. Concretely, given a random function F: [N] → [N] (accessed as an oracle), we show how to compile it into a function G^{F}: [N^{2}] → [N^{2}] which resists S-bit preprocessing attacks that run in query time T where ST = O(N^{2−ε}) (assuming a corresponding data structure lower bound on 3SUM). In contrast, a classical result of Hellman tells us that F itself can be inverted more easily, say with N^{2/3}-bit preprocessing in N^{2/3} time. We also show that much stronger lower bounds follow from the hardness of k-SUM. Our results can be equivalently interpreted as security against adversaries that are very non-uniform, or have large auxiliary input, or as security in the face of a powerfully backdoored random oracle. Thirdly, we give non-adaptive lower bounds for 3SUM which match the best known lower bounds for static data structure problems. Moreover, we show that our lower bound generalizes to a range of geometric problems, such as three points on a line, polygon containment, and others. 
Joint work with Alexander Golovnev, Thibaut Horel, Sunoo Park, and Vinod Vaikuntanathan 
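As a point of reference for the space/time trade-off in this abstract (my own naive baseline, not the Fiat-Naor-based construction; the class name and query convention are invented for the example): storing all pairwise sums gives S = O(N²) space and T = O(N) query time for a preprocessing 3SUM variant where a query asks whether a target equals a + b + c:

```python
class ThreeSumIndex:
    """Naive preprocessing for 3SUM-style queries: O(N^2) space,
    O(N) expected query time via hashing.

    query(t): is t = a + b + c for some a, b, c in the input set
    (repetitions allowed)?
    """

    def __init__(self, values):
        self.values = list(values)
        # Precompute every pairwise sum once, at preprocessing time.
        self.pair_sums = {a + b for a in self.values for b in self.values}

    def query(self, t):
        # For each candidate third element c, look up t - c among pair sums.
        return any(t - c in self.pair_sums for c in self.values)
```

The talk's upper bound improves on exactly this kind of trivial trade-off curve, achieving S³T = O(N⁶).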

Saurabh Sawlani — Near-Optimal Fully Dynamic Densest Subgraph
Dense subgraph discovery is an important primitive for many real-world applications such as community detection, link spam detection, distance query indexing, and computational biology. We approach the densest subgraph problem by framing its dual as a graph orientation problem, which we solve using an augmenting-path-like adjustment technique. We give the first fully dynamic algorithm which maintains a (1 − ε)-approximate densest subgraph in worst-case time poly(log n, 1/ε) per update. Our result improves upon the previous best approximation factor of (1/4 − ε) for fully dynamic densest subgraph [Bhattacharya et al., STOC '15]. Joint work with Junxing Wang 
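For contrast with the dynamic algorithm in this abstract, the classical static baseline is greedy peeling (commonly attributed to Charikar), which gives a 1/2-approximation to the maximum density |E(S)|/|S| (my own sketch, not the talk's method):

```python
def densest_subgraph_peeling(n, edges):
    """Greedy peeling for densest subgraph: repeatedly delete a
    minimum-degree vertex and return the best density |E|/|V| seen
    along the way. A classical static 1/2-approximation."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(range(n))
    m = len(edges)
    best = m / n
    while len(alive) > 1:
        u = min(alive, key=lambda x: len(adj[x]))  # minimum-degree vertex
        m -= len(adj[u])                           # its edges disappear
        alive.remove(u)
        for w in adj[u]:
            adj[w].discard(u)
        adj[u].clear()
        best = max(best, m / len(alive))
    return best
```

Rerunning this from scratch after every edge update is what a fully dynamic algorithm with poly(log n, 1/ε) update time avoids.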

14:15 — 14:35  Q&A session (Zoom breakout rooms) 
14:35 — 14:45  Break 
14:45 — 15:45 
Nika Haghtalab — Foundations of ML for a Social and Strategic World
When machine learning systems interact directly with people, they affect the way people behave. In recent years, corporations and government organizations have been steadily increasing their use of machine learning techniques to automate business, social, and economic decisions. These consequential decisions take place in a complex social and economic context, where the learners and the objects of learning are often people or organizations that are impacted by the learning algorithm and, in return, can take actions that influence the learning process. A research area at the intersection of machine learning, algorithmic economics, and theoretical computer science has emerged in recent years to build a foundation for designing learning algorithms that perform well in the presence of everyday societal and economic forces, and for ensuring the integrity of the social and economic forces that arise from the use of machine learning systems. In this talk, I will survey recent progress in this field, discuss new trends and important directions for the future, and highlight connections to well-established research areas including online learning, smoothed analysis, and auctions. 
16:00 — 17:00  Business meeting (Zoom) 
Thursday, June 3
13:30 — 14:15 
Pawel Gawrychowski — Minimum Cut in O(m log^{2} n) Time
In the (global) minimum cut problem, we are given an undirected edge-weighted graph on n nodes and m edges and seek a partition of the nodes into two nonempty parts that minimizes the total weight of edges connecting the parts. In 1996, Karger designed a randomized algorithm that solves this problem in O(m log^{3} n) time (correct with high probability). Since then, there was essentially no progress on obtaining a faster algorithm for sparse graphs (though there was some exciting progress on deterministic and faster algorithms for the unweighted case). I will provide an overview of some recent results that obtain faster algorithms for the general weighted case, in particular running in O(m log^{2} n) or O(m log n + n^{1+ε}) time. 
Zhiyi Huang — Online Correlated Selection and Its Applications in Online Matching Problems
This work initiates the study of Online Correlated Selection (OCS). Consider a set of ground elements and a sequence of pairs of elements which arrive one at a time. On the arrival of such a pair, the algorithm must select one of the two elements, each with probability half. For example, suppose that we select with a fresh random bit every time. Then, after an element appears in k pairs, it will have been selected at least once with probability 1 − 2^{−k}. Can we do better? Concretely, for any 0 ≤ 𝛾 ≤ 1, a 𝛾-semi-OCS is an algorithm which guarantees that an element is chosen at least once with probability 1 − 2^{−k}(1 − 𝛾)^{k−1} after appearing in k pairs. For example, the naïve independent selection algorithm above is a 0-semi-OCS; on the other hand, 𝛾 = 1 corresponds to perfect negative correlation, in the sense that an element is selected in its second appearance if and only if it is not selected in its first. We show that the largest possible 𝛾 for which a 𝛾-semi-OCS exists is 𝛾 = 1/2. Further, we consider the more difficult notion of a 𝛾-OCS, which considers arbitrary subsets of appearances of an element; in other words, they need not be the first k appearances. We introduce a 𝛾-OCS for some 𝛾 > 0.1. As applications of the OCS, we obtain the first online algorithms that break the long-standing 0.5 barrier in the edge-weighted online matching problem (a.k.a. Display Ads) and in AdWords. Joint work with Matthew Fahrbach, Runzhou Tao, and Morteza Zadimoghaddam 
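A quick simulation (my own illustration, with invented function names) of the naive baseline described in this abstract: resolving each pair with a fresh fair coin leaves a fixed element unselected after k appearances with probability exactly 2^{−k}, which is the γ = 0 case of the semi-OCS guarantee:

```python
import random

def unselected_prob_naive(k, trials, rng):
    """Empirical probability that a fixed element is never selected in
    any of its k appearances when each pair is resolved by a fresh
    fair coin. This is the gamma = 0 baseline; a gamma-semi-OCS would
    shrink it to 2^-k * (1 - gamma)^(k-1) via negative correlation."""
    misses = 0
    for _ in range(trials):
        # The element is missed only if it loses every one of its k coin flips.
        if all(rng.random() < 0.5 for _ in range(k)):
            misses += 1
    return misses / trials
```

The point of an OCS is to correlate the coins across appearances so this miss probability decays strictly faster than 2^{−k}.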

Shay Moran — An Equivalence Between Private PAC Learning and Online Learning
Let H be a concept class. We prove that H can be PAC-learned by an (approximate) differentially private algorithm if and only if it has finite Littlestone dimension. This implies an equivalence between online learnability and private PAC learnability. Joint work with Noga Alon, Mark Bun, Roi Livni, and Maryanthe Malliaris 

Eva Rotenberg — Fully-Dynamic Planarity Testing in Polylogarithmic Time
Given a dynamic graph subject to insertions and deletions of edges, a natural question is whether the graph presently admits a planar embedding. We give a deterministic fully-dynamic algorithm for general graphs, running in amortized O(log^{3} n) time per edge insertion or deletion, that maintains a bit indicating whether or not the graph is presently planar. Joint work with Jacob Holm 

14:15 — 14:35  Q&A session (Zoom breakout rooms) 
14:35 — 14:45  Break 
14:45 — 15:45 
Yin Tat Lee — Convex Optimization and Dynamic Data Structures
In the last four years, there have been many breakthroughs in optimization, such as solving linear programs as fast as matrix multiplication, solving dense max-flow in nearly linear time, and fast optimization algorithms on low-treewidth graphs. All of these algorithms are based on a careful combination of optimization techniques and dynamic data structures. In this talk, we will explain the framework underlying all the recent breakthroughs.
16:00 — 17:00  Farewell social (Gather) 
Accepted contributed presentations
Each contributed presentation will consist of a 3½-minute talk. A Q&A session follows each contributed talks session, where participants will be able to talk with the speakers in Gather.
Monday 11:00 — 12:00
1  Mo Tiwari, Martin Jinye Zhang, James Mayclin, Sebastian Thrun, Chris Piech and Ilan Shomorony — BanditPAM: Almost Linear Time k-Medoids Clustering via Multi-Armed Bandits 
2  Arturs Backurs, Avrim Blum and Neha Gupta — Active Local Learning 
3  Meike Neuwohner — An Improved Approximation Algorithm for the Maximum Weight Independent Set Problem in d-Claw Free Graphs 
4  Susanne Albers, Arindam Khan and Leon Ladewig — Best Fit Bin Packing with Random Order Revisited 
5  Arindam Khan and Madhusudhan Reddy — On guillotine separability of squares and rectangles 
6  Arindam Khan, Arnab Maiti, Amatya Sharma and Andreas Wiese — On Guillotine Separable Packings for the Two-dimensional Geometric Knapsack Problem 
7  Lars Rohwedder and Andreas Wiese — A (2+epsilon)-approximation algorithm for preemptive weighted flow time on a single machine 
8  Kunal Agrawal, Michael A. Bender, Rathish Das, William Kuszmaul, Enoch Peserico and Michele Scquizzato — Tight Bounds for Parallel Paging and Green Paging 
9  Karl Bringmann and André Nusser — Fine-Grained Lower Bounds for Hausdorff Distance Under Translation 
10  Nithin Varma and Yuichi Yoshida — Average Sensitivity of Graph Algorithms 
11  Franziska Eberle, Ruben Hoeksma, Nicole Megow, Lukas Nölke, Kevin Schewior and Bertrand Simon — Speed-Robust Scheduling: Rocks, Bricks, and Sand 
12  Mohammad Akbarpour, Yeganeh Alimohammadi, Shengwu Li and Amin Saberi — The Value of Excess Supply in Spatial Matching Markets 
13  Jesper Nederlof, Jakub Pawlewicz, Céline Swennenhuis and Karol Węgrzycki — A Faster Exponential Time Algorithm for Bin Packing With a Constant Number of Bins via Additive Combinatorics 
14  Bartlomiej Dudek, Pawel Gawrychowski and Tatiana Starikovskaya — All Nontrivial Variants of 3LDT Are Equivalent 
15  Mobin Y. Jeloudar, Irene Lo, Tristan Pollner and Amin Saberi — Decentralized Matching in a Probabilistic Environment 
Tuesday 11:00 — 12:00
1  Amit Levi, Ramesh Pallavoor, Sofya Raskhodnikova and Nithin Varma — Erasure-Resilient Sublinear-Time Graph Algorithms 
2  Laurent Feuilloley, José Correa, Andrés Cristi, Alexandros Tsigonias-Dimitriadis and Tim Oosterwijk — The Secretary Problem with Independent Sampling 
3  Marc Roth, Johannes Schmitt and Philip Wellnitz — Counting Small Induced Subgraphs Satisfying Monotone Properties 
4  Diodato Ferraioli, Paolo Penna and Carmine Ventre — Two-way Greedy: Algorithms for Imperfect Rationality 
5  Daniel Lokshtanov, Saket Saurabh and Vaishali Surianarayanan — A Parameterized Approximation Scheme for Min k-Cut 
6  Spyros Angelopoulos — Online Search with a Hint 
7  Ronen Gradwohl, Niklas Hahn, Martin Hoefer and Rann Smorodinsky — Algorithms for Persuasion with Limited Communication 
8  Paul Duetting, Federico Fusco, Philip Lazos, Stefano Leonardi and Rebecca Reiffenhäuser — Efficient Two-Sided Markets with Limited Information 
9  Bernhard Haeupler, David Wajc and Goran Zuzic — Universally-Optimal Distributed Algorithms for Known Topologies 
10  Vincent Cohen-Addad, David Saulpic and Chris Schwiegelshohn — A New Coreset Framework for Clustering 
11  Bernhard Haeupler, D. Ellis Hershkowitz and Goran Zuzic — Tree Embeddings for Hop-Constrained Network Design 
12  Yeganeh Alimohammadi, Nima Anari, Kirankumar Shiragur and Thuy Duong Vuong — Fractionally Log-Concave and Sector-Stable Polynomials: Counting Planar Matchings and More 
13  Georgios Amanatidis, Federico Fusco, Philip Lazos, Stefano Leonardi and Rebecca Reiffenhäuser — Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint 
14  Ainesh Bakshi and Pravesh Kothari — Outlier-Robust Clustering of Non-Spherical Mixtures 
15  Richard Cole and Yixin Tao — On the Existence of Pareto Efficient and Envy Free Allocations 
Wednesday 11:00 — 12:00
1  Graham Cormode, Zohar Karnin, Edo Liberty, Justin Thaler and Pavel Veselý — Relative Error Streaming Quantiles 
2  Amir Abboud, Robert Krauthgamer and Ohad Trabelsi — Subcubic Algorithms for Gomory-Hu Tree in Unweighted Graphs 
3  Waldo Gálvez, Fabrizio Grandoni, Arindam Khan, Diego Ramirez and Andreas Wiese — Improved Approximation Algorithms for 2-Dimensional Knapsack: Packing into Multiple L-Shapes, Spirals, and More 
4  Vladimir Braverman, Robert Krauthgamer, Aditya Krishnan and Shay Sapir — Near-Optimal Entrywise Sampling of Numerically Sparse Matrices 
5  Waldo Gálvez, Fabrizio Grandoni, Afrouz Jabal Ameli, Klaus Jansen, Arindam Khan and Malin Rau — A Tight (3/2+ɛ) Approximation for Skewed Strip Packing 
6  Monika Henzinger, Stefan Neumann, Harald Räcke and Stefan Schmid — Tight Bounds for Online Graph Partitioning 
7  Roie Levin and David Wajc — Streaming Submodular Matching Meets the Primal-Dual Method 
8  Peter Gartland, Daniel Lokshtanov, Marcin Pilipczuk, Michał Pilipczuk and Paweł Rzążewski — Finding Large Induced Sparse Subgraphs in C_{t}-Free Graphs in Quasipolynomial Time 
9  Andreas S. Schulz and Claudio Telha Cornejo — Integer factorization and Riemann’s hypothesis: Why two-item joint replenishment is hard 
10  Bernadette CharronBost and Patrick LambeinMonette — Average consensus in spite of dynamic communications 
11  Katarzyna Paluch — New Approximation Algorithms for Maximum Asymmetric Traveling Salesman and Shortest Superstring 
12  Grant Schoenebeck and Fang-Yi Yu — Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach 
13  Daniel Dadush, László Végh and Bento Natura — Revisiting Tardos's Framework for Linear Programming: Faster Exact Solutions using Approximate Solvers 
14  Kush Bhatia, Peter Bartlett, Anca Dragan and Jacob Steinhardt — Agnostic learning with unknown utilities 
15  Parinya Chalermsook, Syamantak Das, Yunbum Kook, Bundit Laekhanukit, Yang P. Liu, Richard Peng, Mark Sellke and Daniel Vaz — Vertex Sparsification for Edge Connectivity 