Seminář z aproximačních a online algoritmů [Seminar on Approximation and Online Algorithms]

Petr Kolman, Jiří Sgall

Předchozí program semináře, zimní semestr 2015 [Previous seminar program, winter semester 2015]

11. 1. 2016 (Monday, 12:20, MFF UK Malá Strana S8)

Pavel Veselý: Packet Scheduling with Lookahead
(joint work with Martin Böhm, Marek Chrobak and Jiří Sgall)

Abstract: In online packet scheduling, packets arrive at a network switch, each with a deadline and a weight. At each integer time step the algorithm can schedule only one packet, and the goal is to maximize the total weight of scheduled packets. In other words, packet scheduling is the special case of throughput maximization on one machine in which all jobs have unit length. We study a version of the problem with lookahead, in which the algorithm sees all the packets arriving at time $t+1$ while scheduling a packet at time $t$. We design a $1.5$-competitive algorithm for the special case of 2-bounded instances, in which the deadline of each packet is at most two steps after the packet arrives. We also prove a lower bound of $1/4+\sqrt{17}/4 \approx 1.28078$, which holds already for 2-bounded instances.
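To make the model concrete, here is a minimal Python sketch of the setting (our illustration only, not the $1.5$-competitive algorithm from the talk; the function name and input format are made up for this example). It implements plain greedy, which deliberately ignores the lookahead; a lookahead-aware rule is exactly what the talk improves on:

    def greedy_with_lookahead(arrivals):
        """arrivals: dict mapping time step -> list of (weight, deadline)
        packets; deadlines are absolute time steps."""
        pending, schedule = [], []
        horizon = max(d for packets in arrivals.values() for _, d in packets)
        for t in range(horizon + 1):
            pending += arrivals.get(t, [])
            visible = arrivals.get(t + 1, [])  # the lookahead: seen, not yet schedulable
            pending = [(w, d) for (w, d) in pending if d >= t]  # drop expired packets
            if pending:
                # Plain greedy: transmit the heaviest pending packet, preferring
                # earlier deadlines on ties; `visible` is deliberately unused here,
                # while a lookahead-aware rule would consult it.
                best = max(pending, key=lambda p: (p[0], -p[1]))
                pending.remove(best)
                schedule.append((t, best))
        return schedule

For instance, greedy_with_lookahead({0: [(2, 0), (3, 1)], 1: [(5, 1)]}) returns [(0, (3, 1)), (1, (5, 1))]. The algorithm from the talk exploits the extra information in the lookahead set on 2-bounded instances.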

4. 1., 11. 1. 2016 (Monday, 12:20, MFF UK Malá Strana S8)

Martin Koutecký: Extension Complexity, MSO Logic, and Treewidth
(joint work with Petr Kolman and Hans Raj Tiwary)

Abstract: We consider the convex hull $P_\varphi(G)$ of all satisfying assignments of a given MSO formula $\varphi$ on a given graph $G$. We show that there exists an extended formulation of the polytope $P_\varphi(G)$ that can be described by $f(|\varphi|,\tau)\cdot n$ inequalities, where $n$ is the number of vertices in $G$, $\tau$ is the treewidth of $G$, and $f$ is a computable function depending only on $\varphi$ and $\tau$. In other words, we prove that the extension complexity of $P_\varphi(G)$ is linear in the size of the graph $G$, with a constant depending on the treewidth of $G$ and the formula $\varphi$. This provides a first meta-theorem about the extension complexity of polytopes related to a wide class of problems and graphs.
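For a concrete instance of this framework (our illustrative example, not taken from the abstract): dominating sets are MSO-definable via the formula

$\varphi(X) \;=\; \forall v\, \big( v \in X \;\lor\; \exists u\, (\mathrm{edge}(u,v) \land u \in X) \big),$

whose satisfying assignments on $G$ are exactly the dominating sets of $G$. The theorem then yields an extended formulation of the dominating set polytope with $f(|\varphi|,\tau)\cdot n$ inequalities on any $n$-vertex graph of treewidth $\tau$.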

21. 12. 2015 (Monday, 12:20, MFF UK Malá Strana S8)

Zdeněk Hanzálek (ČVUT FEL): Scheduling Problems in Time-triggered Systems

Abstract: Scheduling of messages on communication channels, or of computational tasks on processors, presents a challenging optimization problem with various constraints and optimization criteria. The talk introduces these combinatorial problems, their mathematical properties and computational complexity, and some algorithms for creating a static schedule for time-triggered systems. In one specific case we will consider an application of an on-line algorithm. A general model, based on Resource-Constrained Project Scheduling with Temporal Constraints, is presented first. The approach will be demonstrated on communication protocols such as Profinet IO IRT (an industrial Ethernet protocol standardized in IEC 61158), FlexRay (used in the automotive industry for safety-critical functionalities with multiple periods) and IEEE 802.15.4/ZigBee (beacon-enabled cluster-tree wireless sensor networks). Further, we will focus on a mixed-criticality scheduling problem where each task has a criticality given by a positive integer and the exact processing time of the task is not known. The schedule with different criticality levels is generated off-line, but its on-line execution switches among the criticality levels depending on the actual values of the processing times. Finally, we will concentrate on the cyclic scheduling problem used to decide the cluster activation in a wireless sensor network, while exploiting the polynomial-time complexity of a specific sub-problem. We will focus on "surprisingly nice" problem formulations via ILP, shortest paths and bin packing. Every listener following the talk to its end will get a little chocolate.

14. 12. 2015 (Monday, 12:20, MFF UK Malá Strana S8)

Martin Koutecký: Shifted Integer Programming: Totally Unimodular Extensions, and Applications
(joint work with Shmuel Onn)

Abstract: The so-called "shifted problem", introduced by Onn in 2015, is a natural generalization of several classes of problems, such as some graph partitioning problems or vulnerability problems. Generally, we are looking for a set of $n$ objects from some set $S$ (such as $s$-$t$ paths, spanning trees or dominating sets of a graph) such that their intersection is as small as possible. Specifically, in order to broadcast an important message over a network under threat, one might want to find in advance $n$ spanning trees such that as few edges as possible are used by all of them. Or, in order to solve the domatic number problem, one is looking for a set of $n$ dominating sets such that no two intersect.
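In symbols, the spanning-tree variant above reads as follows (our restatement, with $S \subseteq \{0,1\}^d$ the set of characteristic vectors of the admissible objects, e.g. spanning trees over the $d$ edges):

$\min_{x^1,\dots,x^n \in S} \; \big|\{\, j \in \{1,\dots,d\} : x^1_j = \dots = x^n_j = 1 \,\}\big|,$

i.e., minimize the number of coordinates used by all $n$ chosen objects; in the domatic number application one instead requires that no coordinate is shared by any two of them.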

It was shown recently by Kaibel, Onn and Sarrabezolles [1] that the shifted problem can be solved in polynomial time if the set $S$ is given by a totally unimodular system (i.e., $S = \{x \mid Ax = b,\ x \in \{0,1\}^d\}$ with $A$ totally unimodular). We show that this also holds when $S$ has a totally unimodular extended formulation, that is, $S = \{x \mid \exists y:\ A(x,y) = b,\ (x,y) \in \{0,1\}^{d+m}\}$. Together with a recent result of Kolman, Koutecký and Tiwary [2] (and a small observation), this implies that solving many problems (such as domatic number, triangle partition and others) on graphs of bounded treewidth is fixed-parameter tractable.

30. 11., 7. 12. 2015 (Monday, 12:20, MFF UK Malá Strana S8)

Aranyak Mehta, Bo Waggoner, Morteza Zadimoghaddam: Online Stochastic Matching with Unequal Probabilities. SODA 2015: 1388-1404.
(presented by Morteza Monemizadeh)

Abstract: The online stochastic matching problem is a variant of online bipartite matching in which edges are labeled with probabilities. A match will "succeed" with the probability along that edge; this models, for instance, the click of a user in search advertising. The goal is to maximize the expected number of successful matches. This problem was introduced by Mehta and Panigrahi (FOCS 2012), who focused on the case where all probabilities in the graph are equal. They gave a 0.567-competitive algorithm for vanishing probabilities, relative to a natural benchmark, leaving the general case as an open question.
This paper examines the general case where the probabilities may be unequal. We take a new algorithmic approach rather than generalizing that of Mehta and Panigrahi: our algorithm maintains, at each time, the probability that each offline vertex has succeeded thus far, and chooses assignments so as to maximize marginal contributions to these probabilities. When the algorithm does not observe the realizations of the edges, this approach gives a 0.5-competitive algorithm, which achieves the known upper bound for such "non-adaptive" algorithms. We then modify this approach to be "semi-adaptive": if the chosen target has already succeeded, choose the arrival's "second choice" instead (while still updating the probabilities non-adaptively). With one additional tweak to control the analysis, we show that this algorithm achieves a competitive ratio of 0.534 in the setting of unequal, vanishing probabilities. A "fully adaptive" version of this algorithm turns out to be identical to an algorithm proposed, but not analyzed, by Mehta and Panigrahi (2012); we do not manage to analyze it either, since it introduces too many dependencies between the stochastic processes. Our semi-adaptive algorithm thus can be seen as allowing analysis of the competitive ratio while still capturing the power of adaptivity.
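A minimal Python sketch of the non-adaptive rule described above, as we read it from the abstract (not code from the paper; the function name and input format are our own):

    def nonadaptive_matching(offline, arrivals):
        """offline: iterable of offline vertex ids.
        arrivals: list of dicts, one per online arrival, each mapping an
        offline vertex to the success probability of that edge."""
        q = {v: 0.0 for v in offline}   # probability that v has succeeded so far
        total = 0.0                     # expected number of successful matches
        for edges in arrivals:
            if not edges:
                continue
            # Marginal contribution of assigning this arrival to v: the edge
            # succeeds with probability edges[v], and this matters only if v
            # has not yet succeeded, which has probability 1 - q[v].
            v = max(edges, key=lambda u: (1.0 - q[u]) * edges[u])
            gain = (1.0 - q[v]) * edges[v]
            q[v] += gain    # non-adaptive update: realizations are never observed
            total += gain
        return total

For example, nonadaptive_matching(["a", "b"], [{"a": 0.5, "b": 0.3}, {"a": 0.5}]) assigns the first arrival to "a" and the second to "a" again with the reduced marginal gain. The semi-adaptive variant of the paper additionally observes whether the chosen target has already succeeded and falls back to a second choice, while still updating q non-adaptively.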

16. 11., 23. 11. 2015 (Monday, 12:20, MFF UK Malá Strana S8)

Deeparnab Chakrabarty, Sanjeev Khanna and Shi Li: On $(1,\varepsilon)$-Restricted Assignment Makespan Minimization. SODA 2015. Also http://arxiv.org/abs/1410.7506.
(presented by Pavel Veselý)

Abstract: Makespan minimization on unrelated machines is a classic problem in approximation algorithms. No polynomial-time $(2-\delta)$-approximation algorithm is known for the problem for any constant $\delta>0$. This is true even for certain special cases, most notably the restricted assignment problem, where each job has the same load on any machine but can be assigned to one from a specified subset. Recently, in a breakthrough result, Svensson [Svensson, 2011] proved that the integrality gap of a certain configuration LP relaxation is upper bounded by 1.95 for the restricted assignment problem; however, the rounding algorithm is not known to run in polynomial time.
In this paper we consider the $(1,\varepsilon)$-restricted assignment problem, where each job is either heavy ($p_j=1$) or light ($p_j=\varepsilon$), for some parameter $\varepsilon>0$. Our main result is a $(2-\delta)$-approximate polynomial-time algorithm for the $(1,\varepsilon)$-restricted assignment problem for a fixed constant $\delta>0$. Even for this special case, the best polynomial-time approximation factor known so far is 2. We obtain this result by rounding the configuration LP relaxation for this problem. A simple reduction from vertex cover shows that this special case remains NP-hard to approximate to within a factor better than $7/6$.
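For reference, the problem can be stated as follows (standard scheduling notation, not verbatim from the paper): each job $j$ has a size $p_j \in \{1, \varepsilon\}$ and a set $M_j$ of machines on which it may run, and the goal is an assignment $\sigma$ with $\sigma(j) \in M_j$ for all $j$ minimizing the makespan

$\max_i \; \sum_{j:\, \sigma(j) = i} p_j.$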

26. 10., 2. 11. 2015 (Monday, 12:20, MFF UK Malá Strana S8)

Elaine Levey, Thomas Rothvoss: A Lasserre-based $(1+\varepsilon)$-approximation for $Pm|p_j=1,\mathrm{prec}|C_{\max}$. http://arxiv.org/abs/1509.07808.
(presented by Martin Böhm)

Abstract: In a classical problem in scheduling, one has $n$ unit-size jobs with a precedence order, and the goal is to find a schedule of those jobs on $m$ identical machines so as to minimize the makespan. It is one of the remaining four open problems from the book of Garey & Johnson whether or not this problem is NP-hard for $m=3$. We prove that for any fixed $\varepsilon$ and $m$, a Sherali-Adams / Lasserre lift of the time-index LP with a slightly super-polylogarithmic number of rounds, $r = (\log n)^{\Theta(\log \log n)}$, provides a $(1+\varepsilon)$-approximation. This implies an algorithm that yields a $(1+\varepsilon)$-approximation in time $n^{O(r)}$. The previously best approximation algorithms guarantee a $(2-7/(3m+1))$-approximation in polynomial time for $m \ge 4$ and $4/3$ for $m=3$. Our algorithm is based on a recursive scheduling approach where in each step we reduce the correlation in the form of long chains. Our method adds to the rather short list of examples where hierarchies are actually useful to obtain better approximation algorithms.
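Spelling out the running time claim (our arithmetic, following the abstract): with $r = (\log n)^{\Theta(\log\log n)}$ rounds, the resulting algorithm runs in time

$n^{O(r)} \;=\; 2^{O(r \log n)} \;=\; 2^{(\log n)^{\Theta(\log\log n)}},$

since the extra $\log n$ factor is absorbed into the $\Theta(\log\log n)$ exponent; this is slightly super-quasi-polynomial, yet far below $2^{n^{\delta}}$ for any fixed $\delta > 0$.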

19. 10. 2015 (Monday, 12:20, MFF UK Malá Strana S8)

Guru Guruganesh, Laura Sanità, Chaitanya Swamy: Region-Growing and Combinatorial Algorithms for k-Route Cut Problems. http://arxiv.org/abs/1410.5105.
(presented by Jan Voborník)

Abstract: We study the k-route generalizations of various cut problems, the most general of which is the k-route multicut (k-MC) problem, wherein we have r source-sink pairs and the goal is to delete a minimum-cost set of edges to reduce the edge-connectivity of every source-sink pair to below k. The k-route extensions of multiway cut (k-MWC) and of the minimum s-t cut problem (k-(s,t)-cut) are defined similarly. We present various approximation and hardness results for these k-route cut problems that improve the state of the art for these problems in several cases. (i) For k-route multiway cut, we devise simple, but surprisingly effective, combinatorial algorithms that yield bicriteria approximation guarantees that markedly improve upon the previous-best guarantees. (ii) For k-route multicut, we design algorithms that improve upon the previous-best approximation factors by roughly an $O(\sqrt{\log r})$ factor when k=2, and for general k with unit costs and any fixed violation of the connectivity threshold k. The main technical innovation is a new, powerful region-growing lemma that allows us to perform region growing in a recursive fashion, even though the LP solution yields a different metric for each source-sink pair. (iii) We complement these results by showing that the k-route s-t cut problem is at least as hard to approximate as the densest-k-subgraph (DkS) problem on uniform hypergraphs.
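In symbols, the most general problem above reads as follows (our restatement of the abstract's definition): given $G=(V,E)$ with edge costs $c$, pairs $(s_1,t_1),\dots,(s_r,t_r)$, and a connectivity threshold $k$, k-MC asks for

$\min\Big\{ \sum_{e\in F} c_e \;:\; F\subseteq E,\ \lambda_{G-F}(s_i,t_i) < k \ \text{for } i=1,\dots,r \Big\},$

where $\lambda_H(s,t)$ denotes the $s$-$t$ edge-connectivity in $H$; setting $k=1$ recovers classical multicut.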

12. 10. 2015 (Monday, 12:20, MFF UK Malá Strana S8)

MohammadTaghi Hajiaghayi (University of Maryland at College Park):
Parameterized and Promised Streaming: Making Big Data More Accessible

Abstract: Big networks are constantly growing in both size and relevance: from social networks such as Google+, Facebook, and Twitter, to brain networks, gene regulatory networks, and health/disease networks. The traditional approach to analyzing such big datasets is to use powerful supercomputers (clusters), ideally large enough to store the data in main memory. The downsides to this approach are that many potential users of big data lack such powerful computational resources (e.g. point-of-sale Bitcoin blockchain analysis), and that it can be difficult to solve unexpected problems within such a large infrastructure (e.g. image analysis after the Boston Marathon bombing). Our main goal here is to enable the processing of huge datasets on computational devices with a limited amount of fast memory, connected to a relatively slow external data source. In particular, in this talk we introduce two new approaches, parameterized and promised streaming, for optimization (often NP-hard) problems over dynamic graph streams, in which edges can be both inserted and deleted. We emphasize that the approaches we introduce in this talk are also relevant in MapReduce and other distributed models and can easily be implemented in a variety of popular traditional models as well.