Seminar on Approximation and Online Algorithms

Petr Kolman, Jiří Sgall

Previous program of the seminar, spring semester 2015

18. 5. 2015 (Monday, 10:40, MFF UK Malá Strana S8)

Marcin Bienkowski: An Optimal Lower Bound for Buffer Management in Multi-Queue Switches. Algorithmica 68:426-447, 2012. Preliminary version SODA 2011.

Abstract: In the online packet buffering problem (also known as the unweighted FIFO variant of buffer management), we focus on a single network packet switching device with several input ports and one output port. This device forwards unit-size, unit-value packets from input ports to the output port. Buffers attached to input ports may accumulate incoming packets for later transmission; if they cannot accommodate all incoming packets, their excess is lost. A packet buffering algorithm has to choose from which buffers to transmit packets in order to minimize the number of lost packets and thus maximize the throughput.

We present a tight lower bound of e/(e−1)≈1.582 on the competitive ratio of the throughput maximization, which holds even for fractional or randomized algorithms. This improves the previously best known lower bound of 1.4659 and matches the performance of the algorithm Random Schedule. Our result contradicts the claimed performance of the algorithm Random Permutation; we point out a flaw in its original analysis.
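
To make the buffering model from the first paragraph concrete, here is a minimal toy simulator: m input buffers of capacity B, unit-size unit-value packets, one packet transmitted per step, and overflow lost. The greedy "longest queue first" policy used here is only a placeholder chosen for the illustration; it is not the Random Schedule or Random Permutation algorithm from the paper, and all names in the sketch are made up.

    # Toy simulator for the packet buffering model described above: m input
    # buffers of capacity B, unit-size unit-value packets, one transmission
    # per step.  The "serve a longest queue" policy is only an illustration,
    # not an algorithm from the paper.
    def simulate(arrivals, m, B):
        """arrivals[t][i] = number of packets arriving at buffer i in step t.
        Returns the total number of transmitted packets (the throughput)."""
        queues = [0] * m
        transmitted = 0
        for step in arrivals:
            # packets arrive; anything beyond the buffer capacity B is lost
            for i, a in enumerate(step):
                queues[i] = min(B, queues[i] + a)
            # transmit one packet from a longest non-empty queue
            i = max(range(m), key=lambda j: queues[j])
            if queues[i] > 0:
                queues[i] -= 1
                transmitted += 1
        # remaining buffered packets can still be sent after arrivals end
        transmitted += sum(queues)
        return transmitted

    # Example: two buffers of capacity 2 and a short burst of arrivals.
    print(simulate([[2, 2], [0, 2], [0, 2]], m=2, B=2))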

11. 5. 2015 (Monday, 10:40, MFF UK Malá Strana S8)

Jan Elffers, Mathijs de Weerdt: Scheduling with two non-unit task lengths is NP-complete.

Abstract: We consider the non-preemptive task scheduling problem with release times and deadlines on a single machine parameterized by the set of task lengths the tasks can have. The identical task lengths case is known to be solvable in polynomial time. We prove that the problem with two task lengths is NP-complete, except for the case in which the short jobs have unit task length, which was already known to be efficiently solvable.
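
As an illustration of the problem statement only, the following brute-force sketch decides whether tasks with release times, deadlines, and lengths can all be completed non-preemptively on a single machine. It is exponential and has nothing to do with the hardness proof; all names are chosen for the example.

    # Brute-force feasibility check for non-preemptive single-machine
    # scheduling with release times and deadlines (the problem above).
    # Exponential -- only meant to make the definition concrete.
    from itertools import permutations

    def feasible(tasks):
        """tasks: list of (release, deadline, length) triples.
        Returns True iff some order completes every task within its window."""
        for order in permutations(tasks):
            t = 0
            ok = True
            for r, d, p in order:
                start = max(t, r)          # wait for the release time if needed
                if start + p > d:          # task would miss its deadline
                    ok = False
                    break
                t = start + p
            if ok:
                return True
        return False

    # Two task lengths {2, 3}; tasks given as (release, deadline, length).
    print(feasible([(0, 5, 2), (0, 5, 3)]))   # True
    print(feasible([(0, 4, 2), (0, 4, 3)]))   # False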

27. 4., 4. 5. 2015 (Monday, 10:40, MFF UK Malá Strana S8)

Morteza Monemizadeh: Prophet Secretary
(joint work with Hossein Esfandiari, MohammadTaghi Hajiaghayi, and Vahid Liaghat)

Abstract: Optimal stopping theory is a powerful tool for analyzing scenarios such as online auctions in which we generally require optimizing an objective function over the space of stopping rules for an allocation process under uncertainty. Perhaps the most classic problems of stopping theory are the prophet inequality problem and the secretary problem. The classical prophet inequality states that by choosing the same threshold OPT/2 for every step, one can achieve the tight competitive ratio of 0.5. On the other hand, for the basic secretary problem, the optimal strategy is to ignore the first n/e elements of the sequence and use the maximum of these first n/e elements as the threshold for the rest of the sequence. This strategy achieves the optimal competitive ratio of 1/e ≈ 0.37.
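
A quick Monte Carlo sanity check of the secretary strategy just described (skip the first n/e candidates, then accept the first candidate beating everything seen so far); the code and its parameters are purely illustrative.

    # Monte Carlo check of the classical secretary strategy: skip the first
    # n/e candidates, then accept the first candidate better than everything
    # seen so far.  The empirical success probability approaches 1/e.
    import math, random

    def secretary_wins(n):
        values = list(range(n))
        random.shuffle(values)              # candidates arrive in random order
        k = int(n / math.e)                 # observation phase: first n/e arrivals
        threshold = max(values[:k], default=-1)
        for v in values[k:]:
            if v > threshold:               # first candidate beating the threshold
                return v == n - 1           # did we pick the overall best?
        return False                        # never accepted anyone

    trials = 100_000
    wins = sum(secretary_wins(100) for _ in range(trials))
    print(wins / trials, "vs 1/e =", 1 / math.e)   # roughly 0.37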

In this paper, we introduce prophet secretary, a natural combination of the prophet inequality problem and the secretary problem. One motivating example is the following. Consider a seller that has an item to sell on the market to a set of arriving customers. The seller knows the types of customers that may be interested in the item and has a price distribution for each type: the price offered by a customer of a given type is anticipated to be drawn from the corresponding distribution. However, the customers arrive in a random order. Upon the arrival of a customer, the seller makes an irrevocable decision about whether to sell the item at the offered price. We address the question of finding a strategy for selling the item at a high price.

We show that by using a single uniform threshold one cannot break the 0.5 barrier of the prophet inequality for the prophet secretary problem. However, we show that by using a different (non-adaptive) threshold for each step one can achieve a competitive ratio of (1 − 1/e) ≈ 0.63.

Our results improve the approximation guarantee of single-item sequential posted pricing mechanisms from 0.5 to (1 − 1/e) when the order of agents (customers) is chosen randomly.

We also consider the minimization variants of the prophet inequality problem. In particular, we show that, even for the simple case in which the numbers are drawn independently from identical distributions (i.i.d.), there is no constant-competitive online algorithm for the minimization variants of the prophet inequality and prophet secretary problems.

13. 4. 2015 (Monday, 10:40, MFF UK Malá Strana S8)

Andreas Björklund and Thore Husfeldt: Shortest Two Disjoint Paths in Polynomial Time. ICALP 2014 (best paper award).
(presented by Jan Voborník)

Abstract: Given an undirected graph and two pairs of vertices (s_i ,t_i) for i ∈ {1,2} we show that there is a polynomial time Monte Carlo algorithm that finds disjoint paths of smallest total length joining s_i and t_i for i ∈ {1,2} respectively, or concludes that there most likely are no such paths at all. Our algorithm applies to both the vertex- and edge-disjoint versions of the problem. Our algorithm is algebraic and uses permanents over the quotient ring Z_4[X]/(X^m) in combination with Mulmuley, Vazirani and Vazirani’s isolation lemma to detect a solution. We develop a fast algorithm for permanents over said ring by modifying Valiant’s 1979 algorithm for the permanent over Z_{2^t}.
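
For reference, the permanent that the algorithm manipulates is perm(A) = Σ_σ Π_i A[i, σ(i)], the determinant's sum over permutations without signs. The naive routine below computes it directly from this definition over the integers; it only pins down the object being discussed and has nothing in common with the paper's fast computation over Z_4[X]/(X^m).

    # Permanent of a square matrix, computed directly from its definition
    # (sum over all permutations, no signs).  Naive O(n * n!) reference only.
    from itertools import permutations

    def permanent(a):
        """Permanent of a square matrix a (list of lists) over the integers."""
        n = len(a)
        total = 0
        for sigma in permutations(range(n)):
            prod = 1
            for i in range(n):
                prod *= a[i][sigma[i]]
            total += prod
        return total

    print(permanent([[1, 1], [1, 1]]))   # 2: both permutations contribute 1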

30. 3., 20. 4. 2015 (Monday, 10:40, MFF UK Malá Strana S8)

Nikhil Bansal: Approximating independent sets in sparse graphs. SODA 2015.
(presented by Martin Böhm)

Abstract: We consider the maximum independent set problem on sparse graphs with maximum degree d. The best known result for the problem is an SDP-based O(d log log d / log d) approximation due to Halperin. It is also known that no o(d / log^2 d) approximation exists assuming the Unique Games Conjecture. We show the following two results:
(i) The natural LP formulation for the problem (recalled after this abstract), strengthened by O(log^4 d) levels of the mixed hierarchy, has an integrality gap of O~(d / log^2 d), where O~() ignores some log log d factors. However, our proof is non-constructive; in particular, it uses an entropy-based approach due to Shearer and does not give an O~(d / log^2 d) approximation algorithm with sub-exponential running time.
(ii) We give an O(d / log d) approximation based on polylog(d) levels of the mixed hierarchy that runs in n^O(1) exp(log^O(1) d) time, improving upon Halperin's bound by a modest log log d factor. Our algorithm is based on combining Halperin's approach with an idea used by Ajtai, Erdős, Komlós and Szemerédi to show that K_r-free, degree-d graphs have independent sets of size Omega_r(n log log d / d).
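
For orientation, the "natural LP formulation" in (i) is, in the usual reading, the standard edge-constraint relaxation of maximum independent set; this is written out here for convenience and is not a quote from the paper:

    maximize    \sum_{v \in V} x_v
    subject to  x_u + x_v \le 1        for every edge {u, v} \in E,
                0 \le x_v \le 1        for every vertex v \in V.

The hierarchy levels mentioned in (i) and (ii) then strengthen this basic relaxation.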


23. 3. 2015 (Monday, 10:40, MFF UK Malá Strana S8)

Yann Disser and Martin Skutella: The Simplex Algorithm is NP-mighty. SODA 2015. Also http://arxiv.org/abs/1311.5935.
(presented by Pavel Veselý)

Abstract: We propose to classify the power of algorithms by the complexity of the problems that they can be used to solve. Instead of restricting to the problem a particular algorithm was designed to solve explicitly, however, we include problems that, with polynomial overhead, can be solved 'implicitly' during the algorithm's execution. For example, we allow a decision problem to be solved by suitably transforming the input, executing the algorithm, and observing whether a specific bit in its internal configuration ever switches during the execution. We show that the Simplex Method, the Network Simplex Method (both with Dantzig's original pivot rule), and the Successive Shortest Path Algorithm are NP-mighty, that is, each of these algorithms can be used to solve any problem in NP. This result casts a more favorable light on these algorithms' exponential worst-case running times. Furthermore, as a consequence of our approach, we obtain several novel hardness results. For example, for a given input to the Simplex Algorithm, deciding whether a given variable ever enters the basis during the algorithm's execution and determining the number of iterations needed are both NP-hard problems. Finally, we close a long-standing open problem in the area of network flows over time by showing that earliest arrival flows are NP-hard to obtain.

9. 3., 16. 3. 2015 (Monday, 10:40, MFF UK Malá Strana S8)

Gábor Braun, Sebastian Pokutta: The matching polytope does not admit fully-polynomial size relaxation schemes. SODA 2015. Also http://arxiv.org/abs/1403.6710.
(presented by Martin Koutecký)

Abstract: The groundbreaking work of Rothvoss [arXiv:1311.2369] established that every linear program expressing the matching polytope has an exponential number of inequalities (formally, the matching polytope has exponential extension complexity). We generalize this result by deriving strong bounds on the polyhedral inapproximability of the matching polytope: for fixed 0 < ε < 1, every polyhedral (1 + ε/n)-approximation requires an exponential number of inequalities, where n is the number of vertices. This is sharp given the well-known ρ-approximation of size O(n^(ρ/(ρ−1))) provided by the odd-sets of size up to ρ/(ρ−1). Thus matching is the first problem in P whose natural linear encoding does not admit a fully polynomial-size relaxation scheme (the polyhedral equivalent of an FPTAS); this provides a sharp separation from the polynomial-size relaxation scheme obtained, e.g., via the constant-sized odd-sets mentioned above.
Our approach reuses ideas from Rothvoss [arXiv:1311.2369]; however, the main lower bounding technique is different. While the original proof is based on the hyperplane separation bound (also called the rectangle corruption bound), we employ the information-theoretic notion of common information as introduced by Braun and Pokutta, which makes it possible to analyze perturbations of slack matrices. It turns out that the high extension complexity of the matching polytope stems from the same source of hardness as for the correlation polytope: a direct sum structure.
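
For orientation, Edmonds' linear description of the matching polytope, whose odd-set inequalities the abstract refers to, is the following (standard textbook material, not taken from the paper):

    x_e \ge 0                                  for every edge e \in E,
    \sum_{e \in \delta(v)} x_e \le 1           for every vertex v \in V,
    \sum_{e \in E[U]} x_e \le (|U| - 1) / 2    for every U \subseteq V with |U| \ge 3 odd,

where δ(v) denotes the edges incident to v and E[U] the edges with both endpoints in U. The ρ-approximation mentioned above keeps only the odd-set inequalities for sets U of size at most ρ/(ρ−1).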

23. 2. 2015, 2. 3. 2015 (Monday, 10:40, MFF UK Malá Strana S8)

Lukáš Folwarczný: Caching Is Hard: Even For Small Pages

Abstract: General caching (pages have arbitrary sizes and fault costs) in the offline setting has been studied since the 1990s. In 2010, it was shown that general caching is strongly NP-hard, even when all pages have fault cost one (this case is known as the fault model). However, pages of arbitrary size (e.g., as large as half of the cache) are needed to prove that the problem is hard. In real-life problems, pages are likely to be small in comparison with the size of the cache. We present a new reduction that proves the strong NP-hardness of caching in the fault model with page sizes restricted to {1, 2, 3}.
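
To make the fault model concrete, here is a tiny exhaustive solver sketch: each page has a size, every fault costs 1, and the cache holds at most C size units. The sketch assumes the variant in which a requested page must always be brought into the cache (no bypassing); it is exponential, only illustrates the problem, and is unrelated to the reduction from the talk. All names in it are made up.

    # Exhaustive solver for offline caching in the fault model: sizes are
    # arbitrary, every fault costs 1, the cache holds at most C size units,
    # and a requested page must be brought into the cache (no bypassing).
    from functools import lru_cache
    from itertools import combinations

    def min_faults(requests, sizes, C):
        requests = tuple(requests)

        @lru_cache(maxsize=None)
        def go(i, cache):
            if i == len(requests):
                return 0
            p = requests[i]
            if p in cache:
                return go(i + 1, cache)          # cache hit, no cost
            best = float("inf")
            others = list(cache)
            # choose any subset of cached pages to evict so that p fits
            for k in range(len(others) + 1):
                for evicted in combinations(others, k):
                    kept = tuple(sorted(set(others) - set(evicted)))
                    if sum(sizes[q] for q in kept) + sizes[p] <= C:
                        best = min(best, 1 + go(i + 1, tuple(sorted(kept + (p,)))))
            return best

        return go(0, ())

    sizes = {"a": 1, "b": 2, "c": 3}
    print(min_faults(["a", "b", "c", "a", "b"], sizes, C=4))   # 4 faults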