## Accepted papers

Integral Mixed Unit Interval Graphs

We characterize, in terms of infinitely many minimal forbidden induced subgraphs, the graphs that have intersection representations using unit intervals with open or closed ends such that all ends of the intervals are integral. Furthermore, we provide a quadratic-time algorithm that decides if a given interval graph admits such an intersection representation.

Algorithms for the Strong Chromatic Index of Halin Graphs,
Distance-hereditary Graphs and Maximal Outerplanar Graphs

We show that there exist linear-time algorithms that compute the strong chromatic index of Halin graphs, of maximal outerplanar graphs, and of distance-hereditary graphs.
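
To make the object being computed concrete (this is a brute-force illustration only, not the paper's linear-time algorithms, and the function names are our own), here is a sketch that computes the strong chromatic index of a tiny graph: edges must be coloured so that every colour class is an induced matching, i.e. no two edges of the same colour share a vertex or are joined by a third edge.

```python
from itertools import combinations, product

def strong_chromatic_index(n, edges):
    # two edges conflict if they are adjacent or joined by a third edge
    E = set(map(tuple, map(sorted, edges)))
    def conflict(e, f):
        if set(e) & set(f):
            return True
        return any(tuple(sorted((a, b))) in E for a, b in product(e, f))
    m = len(edges)
    # try k colours in increasing order; return the first k that works
    for k in range(1, m + 1):
        for col in product(range(k), repeat=m):
            if all(col[i] != col[j]
                   for i, j in combinations(range(m), 2)
                   if conflict(edges[i], edges[j])):
                return k
    return m

# in C5 every pair of edges is adjacent or joined by an edge, so all 5 differ
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
assert strong_chromatic_index(5, c5) == 5
```

The exponential search over colourings is only feasible for toy instances; the point of the paper is that for Halin, maximal outerplanar, and distance-hereditary graphs the index can be found in linear time.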

Making Profit in a Prediction Market

Suppose one would like to estimate the outcome distribution of an uncertain event in the future. One way to do this is to ask for a collective estimate from many people, and prediction markets can be used to achieve such a task. By selling securities corresponding to the possible outcomes, one can infer traders' collective estimate from the market price if it is updated properly. In this paper, we study prediction markets from the perspectives of both traders and market makers. First, we show that in any prediction market, a trader has a betting strategy which can guarantee a positive expected profit for him when his estimate about the outcome distribution is more accurate than that from the market price. Next, assuming traders play such a strategy, we propose a market which can update its price to converge quickly to the average estimate of all traders if the average estimate evolves smoothly. Finally, we show that a trader in our market can guarantee a positive expected profit when his estimate is more accurate than the average estimate of all traders if the average estimate again evolves in a smooth way.

Lower Bounds against Weakly Uniform Circuits

A family of Boolean circuits $\{C_n\}_{n\geq 0}$ is called \emph{$\gamma(n)$-weakly uniform} if there is a polynomial-time algorithm for deciding the direct-connection language of every $C_n$, given \emph{advice} of size $\gamma(n)$. This is a relaxation of the usual notion of uniformity, which allows one to interpolate between complete uniformity (when $\gamma(n)=0$) and complete non-uniformity (when $\gamma(n)> |C_n|$). Weak uniformity is essentially equivalent to \emph{succinctness} introduced by Jansen and Santhanam~\cite{JS11}. Our main result is that \Perm is not computable by polynomial-size $n^{o(1)}$-weakly uniform $\TC^0$ circuits. This strengthens the results by Allender~\cite{All99} (for \emph{uniform} $\TC^0$) and by Jansen and Santhanam~\cite{JS11} (for weakly uniform \emph{arithmetic} circuits of constant depth). Our approach is quite general, and can be used to extend to the ``weakly uniform'' setting all currently known circuit lower bounds proved for the ``uniform'' setting. For example, we show that \Perm is not computable by polynomial-size $(\log n)^{O(1)}$-weakly uniform threshold circuits of depth $o(\log\log n)$, generalizing the result by Koiran and Perifel~\cite{KP09}.

Complementary vertices and adjacency testing in polytopes

Our main theoretical result is that, if a simple polytope has a pair of complementary vertices (i.e., two vertices with no facets in common), then it has a second such pair. Using this result, we improve adjacency testing for vertices in both simple and non-simple polytopes: given a polytope in the standard form {x ∈ R^n | Ax = b and x ≥ 0} and a list of its V vertices, we describe an O(n) test to identify whether any two given vertices are adjacent. For simple polytopes this test is perfect; for non-simple polytopes it may be indeterminate, and instead acts as a filter to identify non-adjacent pairs. Our test requires an O(n^2 V + n V^2) precomputation, which is acceptable in settings such as all-pairs adjacency testing. These results improve upon the more general O(nV) combinatorial and O(n^3) algebraic adjacency tests from the literature.

On TC0 Lower Bounds for the Permanent

In this paper we consider the problem of proving lower bounds for the permanent. An ongoing line of research has shown super-polynomial lower bounds for slightly-non-uniform small-depth threshold and arithmetic circuits [All99, KP09, JS11, JS12]. We prove a new parameterized lower bound that includes each of the previous results as sub-cases. Our main result implies that the permanent does not have Boolean threshold circuits of the following kinds: (1) depth O(1), poly-log(n) bits of non-uniformity, and size s(n) such that for all constants c, s^{(c)}(n) < 2^n (the size s must satisfy another technical condition that is true of functions normally dealt with, such as compositions of polynomials, logarithms, and exponentials); (2) depth o(log log n), poly-log(n) bits of non-uniformity, and size n^{O(1)}; (3) depth O(1), n^{o(1)} bits of non-uniformity, and size n^{O(1)}. Our proof yields a new "either or" hardness result. One instantiation is that either NP does not have polynomial-size constant-depth threshold circuits that use n^{o(1)} bits of non-uniformity, or the permanent does not have polynomial-size general circuits.

Online Coloring of Bipartite Graphs With and Without Advice

In the online version of the well-known graph coloring problem, the vertices appear one after the other together with the edges to the already known vertices and have to be irrevocably colored immediately after their appearance. We consider this problem on bipartite, i.e., two-colorable graphs. We prove that 1.13747*log_2 n colors are necessary for any deterministic online algorithm to color any bipartite graph on n vertices, thus improving on the previously known lower bound of log_2 n + 1 for sufficiently large n. Recently, the advice complexity was introduced as a method for a fine-grained analysis of the hardness of online problems. We apply this method to the online coloring problem and prove (almost) tight linear upper and lower bounds on the advice complexity of coloring a bipartite graph online optimally or using 3 colors. Moreover, we prove that O(\sqrt{n}) advice bits are sufficient for coloring any graph on n vertices with at most log_2 n colors.

Dynamic Programming for $H$-minor-free Graphs (Extended Abstract)

We provide a framework for the design and analysis of dynamic programming algorithms for $H$-minor-free graphs with branchwidth at most $k$. Our technique applies to a wide family of problems where standard (deterministic) dynamic programming runs in $2^{O(k\cdot \log k)}\cdot n^{O(1)}$ steps, with $n$ being the number of vertices of the input graph. Extending the approach developed by the same authors for graphs embedded in surfaces, we introduce a new type of branch decomposition for $H$-minor-free graphs, called an {\em $H$-minor-free cut decomposition}, and we show that they can be constructed in $O_{h}(n^{3})$ steps, where the hidden constant depends exclusively on $H$. We show that the separators of such decompositions have connected packings whose behavior can be described in terms of a combinatorial object called an \emph{$\ell$-triangulation}. Our main result is that, when applied on $H$-minor-free cut decompositions, dynamic programming runs in $2^{O_h(k)}\cdot n^{O(1)}$ steps. This broadens substantially the class of problems that can be solved deterministically in {\em single-exponential} time for $H$-minor-free graphs.

On the Minimum Degree Hypergraph Problem with Subset Size two and the Red-Blue Hitting Set Problem with the Consecutive Ones Property

Let S be a set and let C_b (blue collection) and C_r (red collection) be two collections of subsets of S. The MDH problem is to find a subset S' of S such that S' \cap B \neq \emptyset for all B \in C_b and |S' \cap R| \leq k for all R \in C_r, where k is a given non-negative integer. The RBSC problem is to find a subset S' of S with S' \cap B \neq \emptyset for all B \in C_b which minimizes |\{R | R \in C_r, S' \cap R \neq \emptyset\}|. In this paper, improved algorithms are proposed for the MDH problem with k = 1 and all sets in C_b having size two, and for the RBSC problem with C_b \cup C_r having the consecutive ones property. For the first problem, we give an optimal O(|S| + |C_b| + \Sigma_{R \in C_r} |R|)-time algorithm, improving the previous O(|S| + |C_b| + \Sigma_{R \in C_r} |R|^2) bound by Dom et al. Our improvement is obtained by presenting a new representation of a dense directed graph, which may be of independent interest. For the second problem, we give an O(|C_b| + |C_r| lg |S| + |S| lg |S|)-time algorithm, improving the previous O(|C_b||S| + |C_r||S| + |S|^2) bound by Chang et al.

Ramsey numbers for line graphs and perfect graphs

Given two positive integers i and j, the Ramsey number R(i,j) is the smallest integer such that any graph on R(i,j) vertices contains a clique of size i or an independent set of size j. Ramsey numbers are notoriously hard to determine, and the exact values of R(i,j) are known only for very small values of i and j. Given this difficulty, it is natural to impose restrictions on the set of considered graphs. For any graph class G, the Ramsey number R_G(i,j) is the smallest integer such that any graph in G on at least R_G(i,j) vertices has a clique of size i or an independent set of size j. It is known that for planar graphs all Ramsey numbers can be determined by an exact formula, whereas for claw-free graphs there exist Ramsey numbers that are as difficult to determine as for arbitrary graphs. Surprisingly, these seem to be the only known results in this direction. We continue this line of research by giving exact formulas for determining all Ramsey numbers for various classes of graphs. Our main result is an exact formula for all Ramsey numbers for line graphs, which form a large subclass of claw-free graphs and are not perfect. In addition, we determine all Ramsey numbers for perfect graphs and for several subclasses of perfect graphs.
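
For intuition on the definition, the smallest non-trivial Ramsey number can be verified by exhaustive search; the following sketch (our own illustrative code, unrelated to the paper's formulas) confirms R(3,3) = 6 by exhibiting C5 as a witness that R(3,3) > 5 and checking all graphs on 6 vertices.

```python
from itertools import combinations

def has_clique_or_indep(edges, n, i, j):
    # does the graph contain a clique of size i or an independent set of size j?
    E = set(edges)  # edges as sorted (a, b) pairs with a < b
    def is_clique(vs):
        return all((a, b) in E for a, b in combinations(vs, 2))
    def is_indep(vs):
        return all((a, b) not in E for a, b in combinations(vs, 2))
    return any(is_clique(vs) for vs in combinations(range(n), i)) or \
           any(is_indep(vs) for vs in combinations(range(n), j))

# C5 witnesses R(3,3) > 5: no triangle and no independent set of size 3
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
assert not has_clique_or_indep(c5, 5, 3, 3)

# every graph on 6 vertices contains a triangle or I_3, hence R(3,3) = 6
pairs = list(combinations(range(6), 2))
assert all(
    has_clique_or_indep([p for k, p in enumerate(pairs) if mask >> k & 1], 6, 3, 3)
    for mask in range(1 << len(pairs))
)
```

The 2^15 graphs on 6 vertices are enumerated as bitmasks over the 15 possible edges; this brute force is exactly why exact formulas for restricted graph classes, as in the paper, are valuable.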

Geometric RAC Simultaneous Drawings of Graphs

In this paper, we study the geometric RAC simultaneous drawing problem: Given two planar graphs that share a common vertex set but have disjoint edge sets, a geometric RAC simultaneous drawing is a drawing in which (i) each graph is drawn planar, (ii) there are no edge overlaps, and (iii) crossings between edges of the two graphs occur at right angles. We first prove that two planar graphs admitting a geometric simultaneous drawing may not admit a geometric RAC simultaneous drawing. We further show that a cycle and a matching always admit a geometric RAC simultaneous drawing, which can be constructed in linear time. We also study a closely related problem according to which we are given a planar embedded graph G and the main goal is to determine a geometric drawing of G and its dual G^* (without the face-vertex corresponding to the external face) such that: (i) G and G^* are drawn planar, (ii) each vertex of the dual is drawn inside its corresponding face of G and, (iii) the primal-dual edge crossings form right angles. We prove that it is always feasible to construct such a drawing if the input graph is an outerplanar embedded graph.

Maximum number of minimal feedback vertex sets in chordal graphs and cographs

A feedback vertex set in a graph is a set of vertices whose removal leaves the remaining graph acyclic. Given the vast number of published results concerning feedback vertex sets, it is surprising that the related combinatorics appears to be so poorly understood. The maximum number of minimal feedback vertex sets in a graph on n vertices is known to be at most 1.864^n. However, no examples of graphs having 1.593^n or more minimal feedback vertex sets are known, which leaves a considerable gap between these upper and lower bounds on general graphs. In this paper, we close the gap completely for chordal graphs and cographs, two famous perfect graph classes that are not related to each other. We prove that for both of these graph classes, the maximum number of minimal feedback vertex sets is 10^(n/5) ~ 1.585^n, and there is a matching lower bound.
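
The extremal bound can be checked by hand on the smallest case: K5 is both chordal and a cograph, and a brute-force count (our own illustrative code, practical only for tiny graphs) finds exactly 10 = 10^{5/5} minimal feedback vertex sets, matching the bound stated above.

```python
from itertools import combinations

def is_forest(n, edges, removed):
    # union-find acyclicity check on the graph minus the removed vertices
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        if a in removed or b in removed:
            continue
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
    return True

def minimal_fvs_count(n, edges):
    # all feedback vertex sets, then keep those with no smaller FVS inside
    fvs = [frozenset(s) for r in range(n + 1)
           for s in combinations(range(n), r)
           if is_forest(n, edges, set(s))]
    sets = set(fvs)
    return sum(1 for s in fvs if all(s - {v} not in sets for v in s))

k5 = list(combinations(range(5), 2))
assert minimal_fvs_count(5, k5) == 10   # 10^(n/5) for n = 5
```

Since being a feedback vertex set is monotone under taking supersets, minimality only needs to be checked against sets obtained by dropping a single vertex.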

Formula Complexity of Ternary Majorities

It is known that any self-dual Boolean function can be decomposed into compositions of 3-bit majority functions. In this paper, we define a notion of a ternary majority formula, which is a ternary tree composed of nodes labeled by 3-bit majority functions and leaves labeled by literals. We study their complexity in terms of formula size. In particular, we prove upper and lower bounds for ternary majority formula size of several Boolean functions. To devise a general method to prove the ternary majority formula size lower bounds, we give an upper bound for the largest separation between ternary majority formula size and DeMorgan formula size.

Computing Partitions of Rectilinear Polygons with Minimum Stabbing Number

The stabbing number of a partition of a rectilinear polygon P into rectangles is the maximum number of rectangles stabbed by any axis-parallel line segment contained in P. We consider the problem of finding a rectangular partition with minimum stabbing number for a given rectilinear polygon P. First, we impose a conforming constraint on partitions: every vertex of every rectangle in the partition must lie on the polygon's boundary. We show that finding a conforming rectangular partition of minimum stabbing number is NP-hard for rectilinear polygons with holes. We present a rounding method based on a linear programming relaxation resulting in a polynomial-time 2-approximation algorithm. We give an O(n log n)-time algorithm to solve the problem exactly when P is a histogram (some edge in P can see every point in P) with n vertices. Next we relax the conforming constraint and show how to extend the first linear program to achieve a polynomial-time 2-approximation algorithm for the general problem, improving the approximation factor achieved by Abam, Aronov, de Berg, and Khosravi (ACM SoCG 2011).

Computing Shapley Value in Supermodular Coalitional Games

Coalitional games allow subsets (coalitions) of players to cooperate to receive a collective payoff. This payoff is then distributed ``fairly'' among the members of that coalition according to some division scheme. Various solution concepts have been proposed as reasonable schemes for generating fair allocations. The \emph{Shapley value} is one classic solution concept: player $i$'s share is precisely equal to $i$'s expected marginal contribution if the players join the coalition one at a time, in a uniformly random order. In this paper, we consider the class of supermodular games (sometimes called convex games), and give a fully polynomial-time randomized approximation scheme (FPRAS) to compute the Shapley value to within a $(1 \pm \varepsilon)$ factor in monotone supermodular games. We show that this result is tight in several senses: no deterministic algorithm can approximate the Shapley value as well, no randomized algorithm can do better, and both monotonicity and supermodularity are required for the existence of an efficient $(1 \pm \varepsilon)$-approximation algorithm. We also argue that, relative to supermodularity, monotonicity is a mild assumption, and we discuss how to transform supermodular games to be monotonic.
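
The random-order characterization of the Shapley value immediately suggests a Monte Carlo estimator, which is the natural starting point for sampling-based approximation schemes. The sketch below (our own illustration, not the paper's FPRAS analysis) compares the sampled estimate against exact enumeration on a toy game v(S) = |S|^2, which is monotone and supermodular.

```python
import random
from itertools import permutations
from math import factorial

def shapley_exact(n, v):
    # average marginal contribution over all n! join orders
    phi = [0.0] * n
    for order in permutations(range(n)):
        s = frozenset()
        for p in order:
            phi[p] += v(s | {p}) - v(s)
            s = s | {p}
    return [x / factorial(n) for x in phi]

def shapley_sample(n, v, samples, rng):
    # Monte Carlo: average marginal contribution over random join orders
    phi = [0.0] * n
    players = list(range(n))
    for _ in range(samples):
        rng.shuffle(players)
        s = frozenset()
        for p in players:
            phi[p] += v(s | {p}) - v(s)
            s = s | {p}
    return [x / samples for x in phi]

v = lambda S: len(S) ** 2        # toy monotone supermodular game
exact = shapley_exact(4, v)       # by symmetry, each player gets v(N)/n = 4
approx = shapley_sample(4, v, 2000, random.Random(0))
assert all(abs(a - e) < 0.5 for a, e in zip(approx, exact))
```

Exact enumeration costs n! evaluations, so the sampled version is what scales; the paper's contribution is quantifying how many samples suffice (and showing the structural assumptions this requires).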

Monotone paths in planar convex subdivisions

Consider a connected subdivision of the plane into $n$ convex faces where every vertex has degree at most $\Delta$. Then, starting from every vertex there is a path with at least $\Omega(\log_\Delta n)$ vertices that is monotone in some direction. This bound is the best possible. Consider now a connected subdivision of the plane into $n$ convex faces where exactly $k$ faces are unbounded. Then, there is a path with at least $\Omega(\log (n/k)/\log \log (n/k))$ vertices that is monotone in some direction. This bound is also the best possible. Our methods are constructive and lead to efficient algorithms for computing monotone paths of the lengths specified above.

Simultaneous Embeddings with Vertices Mapping to Pre-Specified Points

We discuss the problem of embedding graphs in the plane with restrictions on the vertex mapping. In particular, we introduce a technique for drawing planar graphs with a fixed vertex mapping that bounds the number of times edges bend. An immediate consequence of this technique is that any planar graph can be drawn with a fixed vertex mapping so that edges map to piecewise linear curves with at most $3n + O(1)$ bends each. By considering uniformly random planar graphs, we show that $2n + O(1)$ bends per edge is sufficient on average. To further utilize our technique, we consider simultaneous embeddings of $k$ uniformly random planar graphs with vertices mapping to a fixed, common point set. We explain how to achieve such a drawing so that edges map to piecewise linear curves with $O(n^{1-\frac{1}{k}})$ bends each, which holds with overwhelming probability. This result improves upon the previously best known result of $O(n)$ bends per edge for the case where $k \geq 2$. Moreover, we give a lower bound on the number of bends that matches our upper bound, proving our results are optimal.

The cost of bounded curvature

We study the motion-planning problem for a car-like robot whose turning radius is bounded from below by one and which is allowed to move in the forward direction only (Dubins car). For two robot configurations $\sigma, \sigma'$, let $\ell(\sigma, \sigma')$ be the length of a shortest bounded-curvature path from~$\sigma$ to~$\sigma'$. For $d \geq 0$, let $\ell(d)$ be the supremum of $\ell(\sigma, \sigma')$, over all pairs $(\sigma, \sigma')$ that are at Euclidean distance~$d$. We study the function $\dub(d) = \ell(d) - d$, which expresses the difference between the bounded-curvature path length and the Euclidean distance of its endpoints. We show that $\dub(d)$ decreases monotonically from $\dub(0) = 7\pi/3$ to $\dub(\ds) = 2\pi$, and is constant for $d \geq \ds$. Here $\ds \approx 1.5874$. We describe pairs of configurations that exhibit the worst case of $\dub(d)$ for every distance~$d$.

Equilibria of GSP for range auction

Position auction is a well-studied model for analyzing online auctions for internet advertisement, in which a set of advertisers bid for a set of slots in a search result page for placing their advertisement links. In particular, it was proved in \cite{Edelmanetal06,Varian06} that the Generalized Second Price (GSP) mechanism for position auction has many interesting properties. In this paper, we extend these results to range auction, in which a bidder may specify a range of slots he is interested in. We prove that GSP for range auction has an envy-free equilibrium, which is bidder optimal and has the minimum pay property. Further, this equilibrium is equal to the outcome of the Vickrey-Clarke-Groves mechanism. We also show that the total valuation of any equilibrium of GSP for range auctions is not far from the optimal; it is at least $1/2$ of the optimal.
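
For readers unfamiliar with GSP, here is a minimal sketch of the classical position-auction version that the paper generalizes (our own simplification: no slot ranges, unit click-through rates, dictionary encoding our own): the j-th highest bidder wins slot j and pays the (j+1)-th highest bid.

```python
def gsp(bids, slots):
    # bids: {bidder: bid}; slots: number of available slots
    order = sorted(bids, key=bids.get, reverse=True)
    outcome = {}
    for j in range(min(slots, len(order))):
        winner = order[j]
        # generalized second price: pay the next-highest bid (0 if none)
        price = bids[order[j + 1]] if j + 1 < len(order) else 0
        outcome[winner] = price
    return outcome

assert gsp({'a': 5, 'b': 3, 'c': 2}, 2) == {'a': 3, 'b': 2}
```

In the range-auction setting studied in the paper, each bidder additionally restricts which slots are acceptable, which is what makes the equilibrium analysis non-trivial.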

Speed Scaling for Maximum Lateness

We consider the power-aware problem of scheduling non-preemptively a set of jobs on a single speed-scaled processor so as to minimize the maximum lateness. We consider two variants of the problem: in the budget variant we aim at finding a schedule minimizing the maximum lateness for a given budget of energy, while in the aggregated variant our objective is to find a schedule minimizing a linear combination of maximum lateness and energy. We present polynomial-time algorithms for both variants of the problem without release dates and we prove that both become strongly NP-hard in the presence of arbitrary release dates. Moreover, we prove that, for arbitrary release dates, there is no O(1)-competitive online algorithm for the budget variant and we propose a 2-competitive one for the aggregated variant.

Multilevel Drawings of Clustered Graphs

The cluster adjacency graph of a flat clustered graph C(G,T) is the graph A whose vertices are the clusters in T and whose edges connect clusters containing vertices that are adjacent in G. A multilevel drawing of a clustered graph C consists of a straight-line c-planar drawing of C in which the clusters are drawn as convex regions and of a straight-line planar drawing of A such that each vertex a\in A is drawn in the cluster corresponding to a and such that no edge (a_1,a_2) in A intersects any cluster different from a_1 and a_2. In this paper, we show that every c-planar flat clustered graph admits a multilevel drawing.

Rainbow Colouring of Split and Threshold Graphs

A rainbow colouring of a connected graph is a colouring of the edges of the graph such that every pair of vertices is connected by at least one path in which no two edges are coloured the same. Such a colouring using the minimum possible number of colours is called an optimal rainbow colouring, and the minimum number of colours required is called the rainbow connection number of the graph. A graph is called chordal if every cycle of length more than 3 has a chord. Split graphs form a subclass of chordal graphs, defined by the property that their vertices can be partitioned into a clique and an independent set. A threshold graph is a split graph in which the neighbourhoods of the independent set vertices form a linear order under set inclusion. In this article, we show that: (1) the problem of deciding whether a graph can be rainbow coloured using 3 colours remains NP-complete even when restricted to the class of split graphs; however, any split graph can be rainbow coloured in linear time using at most one more colour than the optimum; (2) for every integer k >= 3, the problem of deciding whether a graph can be rainbow coloured using k colours remains NP-complete even when restricted to the class of chordal graphs; (3) for every positive integer k, threshold graphs with rainbow connection number exactly k can be characterised based on their degree sequence alone; further, we can optimally rainbow colour a threshold graph in linear time.
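
The definition can be made concrete with a small checker (our own illustrative code; it takes exponential time in general, consistent with the NP-completeness results above) that tests whether a given edge colouring rainbow-connects a graph.

```python
def is_rainbow_connected(n, coloured_edges):
    # coloured_edges: dict {(u, v): colour} with u < v
    adj = {u: [] for u in range(n)}
    for (u, v), c in coloured_edges.items():
        adj[u].append((v, c))
        adj[v].append((u, c))

    def rainbow_path_exists(u, target, used):
        # DFS over paths whose edge colours are pairwise distinct
        if u == target:
            return True
        for w, c in adj[u]:
            if c not in used and rainbow_path_exists(w, target, used | {c}):
                return True
        return False

    return all(rainbow_path_exists(u, v, frozenset())
               for u in range(n) for v in range(u + 1, n))

# a path on 4 vertices has a unique u-v path, so all edges need distinct colours
path = {(0, 1): 1, (1, 2): 2, (2, 3): 1}
assert not is_rainbow_connected(4, path)
path[(2, 3)] = 3
assert is_rainbow_connected(4, path)
```

Deciding whether some colouring with k colours exists is the hard problem; verifying a proposed colouring, as above, is comparatively easy for small graphs.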

Optimally solving a transportation problem using Voronoi diagrams

Let $S$ be a set of $n$ point sites in $R^d$. A bounded set $C \subset R^d$ is to be distributed among the sites $p \in S$ such that (i), each $p$ receives a subset $C_p$ of prescribed volume and (ii), the average distance of all points $z$ of $C$ from their respective sites $p$ is minimized. In our model, volume is quantified by a measure $\mu$, and the distance between a site $p$ and a point $z$ is given by a function $d_p(z)$. Under quite liberal technical assumptions on $C$ and on the functions $d_p(.)$ we show that a solution of minimum total cost can be obtained by intersecting with $C$ the Voronoi diagram of the sites in $S$, based on the functions $d_p(.)$ equipped with suitable additive weights. Moreover, this optimum partition is unique, up to subsets of $C$ of measure zero. Unlike the deep analytic methods of classical transportation theory, our proof is based on direct geometric arguments.

Induced subgraph isomorphism: Are some patterns substantially easier than others?

The complexity of the subgraph isomorphism problem where the pattern graph is of fixed size is well known to depend on the topology of the pattern graph. For instance, the larger the maximum independent set of the pattern graph is, the more efficient algorithms are known. The situation seems to be substantially different in the case of induced subgraph isomorphism for pattern graphs of fixed size. We present two results which provide evidence that no topology of an induced subgraph of fixed size can be easier to detect or count than an independent set of related size. We show that: (i) any fixed pattern graph that has a maximum independent set of size k that is disjoint from other maximum independent sets is not easier to detect as an induced subgraph than an independent set of size k. It follows in particular that an induced path on k vertices is not easier to detect than an independent set on \lceil k/2 \rceil vertices, and that an induced even cycle on k vertices is not easier to detect than an independent set on k/2 vertices. In view of linear-time upper bounds on induced paths of length three and four, our lower bound is tight. Similar corollaries hold for the detection of induced complete bipartite graphs and induced complete split graphs. (ii) For an arbitrary pattern graph H on k vertices with no isolated vertices, there is a simple subdivision of H, resulting from splitting each edge into a path of length four and attaching a distinct path of length three at each vertex of degree one, that is not easier to detect or count than an independent set on k vertices, respectively. Finally, we show that the so-called diamond, paw and C_4 are not easier to detect as an induced subgraph than an independent set on three vertices. Our result on C_4 resolves an open question stated in Eschen et al.

On the kernelization complexity of problems on graphs without long odd cycles

Several NP-hard optimization problems, like Maximum Independent Set, q-Coloring, and Max-Cut, are polynomial-time solvable on bipartite graphs. An equivalent characterization of bipartite graphs is that they are exactly the graphs that do not contain any odd-length cycle. Thus, a natural question is what happens to the complexity of these problems if we know that the length of the longest odd cycle is bounded by k. Let O_k denote the set of all graphs G such that the length of the longest odd cycle is upper bounded by k. Hsu, Ikura and Nemhauser [Math. Programming, 1981] studied the effect of avoiding long odd cycles for the Maximum Independent Set problem and showed that a maximum-sized independent set in an n-vertex graph G \in O_k can be found in time n^O(k). Later, Grotschel and Nemhauser [Math. Programming, 1984] did a similar study for Max-Cut and obtained an algorithm with running time n^O(k) on an n-vertex graph G \in O_k. In this paper, we revisit these problems together with q-Coloring and observe that all of these problems admit algorithms with running time O(c^k n^O(1)) on an n-vertex graph G \in O_k, thus showing that all these problems are fixed-parameter tractable when parameterized by the length of the longest odd cycle of the input graph. Following the recent trend in parameterized complexity, we also study the kernelization complexity of these problems. We show that Maximum Independent Set, q-Coloring for any fixed q >= 3, and Max-Cut do not admit a polynomial kernel unless coNP \subseteq NP/poly, when parameterized by k, the length of the longest odd cycle.
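
On bipartite graphs (the base case with no odd cycles at all), the polynomial solvability of Maximum Independent Set mentioned above follows from König's theorem: the maximum independent set has size n minus the size of a maximum matching. A minimal augmenting-path sketch, with our own function names:

```python
def max_independent_set_size_bipartite(left, right, edges):
    # König: alpha(G) = |V| - (maximum matching size) for bipartite G
    adj = {u: [] for u in left}
    for u, v in edges:
        adj[u].append(v)
    match = {}  # maps right-side vertex -> matched left-side vertex

    def augment(u, seen):
        # try to find an augmenting path starting at left vertex u
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    matching = sum(augment(u, set()) for u in left)
    return len(left) + len(right) - matching

# C6 with bipartition {0,2,4} / {1,3,5}: perfect matching of size 3, alpha = 3
edges = [(0, 1), (2, 1), (2, 3), (4, 3), (4, 5), (0, 5)]
assert max_independent_set_size_bipartite([0, 2, 4], [1, 3, 5], edges) == 3
```

The paper's point is that this tractability degrades gracefully: with odd cycles of length at most k, the problems stay fixed-parameter tractable in k, but polynomial kernels are unlikely.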

The Complexity of Unary Subset Sum

Given a stream of numbers and a number B, the subset sum problem deals with checking whether there exists a subset of the stream that adds to exactly B. The unary subset sum problem, USS, is the same problem when the input is encoded in unary. We prove that any p-pass randomized algorithm computing USS with error at most 1/3 must use space \Omega(B/p). We give a randomized \sqrt{Bn log(Bn)}-pass algorithm that computes USS with probability at least 2/3 using space O(\sqrt{Bn log(Bn)}(log B + log n)). We give a deterministic one-pass algorithm which, given an input stream and two parameters B and \epsilon, decides whether there exists a subset of the input stream that adds to a value in the range [(1 − \epsilon)B, (1 + \epsilon)B] using space O((log B)/\epsilon). We observe that USS is monotone (under a suitable encoding) and give a monotone NC^2 circuit for USS. We also show that any circuit using \epsilon-approximator gates for USS under this encoding needs \Omega(n/\log n) gates to compute the Disjointness function.
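
The deterministic approximate decision is in the spirit of the classical trim-based subset-sum approximation. The following one-pass sketch is our own simplification, not the paper's algorithm: it assumes the stream length n is known in advance and uses the textbook trimming parameter δ = ε/(2n), keeping a geometrically spaced list of representative achievable sums.

```python
def approx_subset_sum_in_range(stream, B, eps, n):
    # one pass; maintain a trimmed list of achievable subset sums <= (1+eps)B
    delta = eps / (2 * n)  # per-item trimming error (classic FPTAS choice)
    sums = [0]
    for x in stream:
        merged = sorted(set(sums) | {s + x for s in sums if s + x <= (1 + eps) * B})
        trimmed = []
        for s in merged:
            # keep s only if it is not within a (1+delta) factor of the last kept sum
            if not trimmed or s > trimmed[-1] * (1 + delta):
                trimmed.append(s)
        sums = trimmed
    return any((1 - eps) * B <= s <= (1 + eps) * B for s in sums)

assert approx_subset_sum_in_range([3, 5, 9], 12, 0.1, 3)     # {3, 9} sums to 12
assert not approx_subset_sum_in_range([3, 5, 9], 2, 0.1, 3)  # nothing near 2
```

Trimming keeps the list length logarithmic in B (roughly O((log B)/δ) entries), which is the mechanism behind small-space approximate decisions of this kind.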

Deep Coalescence Reconciliation with Unrooted Gene Trees: Linear Time Algorithms

Gene tree reconciliation problems infer the minimum number of evolutionary events that reconcile gene evolution within the context of a species tree. Here we focus on the deep coalescence (DC) problem, that is: given an unrooted gene tree and a rooted species tree, find a rooting of the gene tree that minimizes the number of DC events, or DC cost, when reconciling the gene tree with the species tree. We describe an O(n) time and space algorithm for the DC problem, where n is the size of the input trees, which improves on the time complexity of the best-known solution by a factor of n. Moreover, we provide an O(n) time and space algorithm that computes the DC scores for each rooting of the given gene tree. We also describe intriguing properties of the DC cost, which can be used to identify credible rootings in gene trees. Finally, we demonstrate the performance of our new algorithms in an empirical study using data from public databases.

Outerplanar graph drawings with few slopes

We consider straight-line outerplanar drawings of outerplanar graphs in which the segments representing edges are parallel to a small number of directions. We prove that $\Delta-1$ directions suffice for every outerplanar graph with maximum degree $\Delta\ge 4$. This improves the previous bound of $O(\Delta^5)$. The bound is tight: for every $\Delta\ge 4$ there is an outerplanar graph of maximum degree $\Delta$ which requires at least $\Delta-1$ distinct edge slopes for its outerplanar straight-line drawing.

On the $2$-Central Path Problem

In this paper we consider the following $2$-Central Path Problem (2CPP): Given a set of $m$ polygonal curves $\mathcal{P} =\{P_1,P_2,\ldots,P_m\}$ in the plane, find two curves $P_u$ and $P_l$, called {\em $2$-central paths}, that best represent all curves in $\mathcal{P}$. Despite its theoretical interest and wide range of practical applications, 2CPP has not been well studied. We first establish criteria that $P_u$ and $P_l$ ought to meet in order for them to best represent $\mathcal{P}$. In particular, we require that there exist parametrizations $f_u(t)$ and $f_l(t)$ ($t\in [a,b]$) of $P_u$ and $P_l$, respectively, such that the maximum distance from $\{f_u(t), f_l(t)\}$ to the curves in $\mathcal{P}$ is minimized. Then an efficient algorithm is presented to solve 2CPP under certain realistic assumptions. Our algorithm constructs $P_u$ and $P_l$ in $O(nm\log^4 n\, 2^{\alpha(n)}\alpha(n))$ time, where $n$ is the total complexity of $\mathcal{P}$ (i.e., the total number of vertices and edges), $m$ is the number of curves in $\mathcal{P}$, and $\alpha(n)$ is the inverse Ackermann function. Our algorithm uses the parametric search technique and is faster than arrangement-based algorithms (i.e., $\Omega(n^2)$) when $m \ll n$, as in most real applications.

F\'{a}ry's Theorem for 1-Planar Graphs

F\'{a}ry's theorem states that every {\em plane graph} can be drawn as a straight-line drawing. A plane graph is a graph embedded in a plane without edge crossings. In this paper, we extend F\'{a}ry's theorem to {\em non-planar} graphs. More specifically, we study the problem of drawing {\em 1-plane graphs} with straight-line edges. A 1-plane graph is a graph embedded in a plane with at most one crossing per edge. We give a characterisation of those 1-plane graphs that admit a straight-line drawing. The proof of the characterisation consists of a linear-time testing algorithm and a drawing algorithm. We also show that there are 1-plane graphs for which every straight-line drawing has exponential area. To the best of our knowledge, this is the first result to extend F\'{a}ry's theorem to non-planar graphs.

Contiguous minimum single-source-multi-sink cuts in weighted planar graphs

We present a fast algorithm for uniform sampling of contiguous minimum cuts separating a source vertex from a set of sink vertices in a weighted undirected planar graph with $n$ vertices embedded in the plane. The algorithm takes $O(n)$ time per sample, after an initial $O(n^3)$ preprocessing time during which the algorithm computes the number of all such contiguous minimum cuts. Contiguous cuts (that is, cuts where the cut set can be separated from the remaining vertices by a simply connected planar region whose boundary intersects only the cut edges) have applications in computer vision and medical imaging~\cite{BoykovV06,VicenteKR08}.

Constant Time Enumeration of Bounded-Size Subtrees in Trees and Its Application

There is a demand for efficient methods for discovering patterns in massive structured data in the form of graphs and trees. We study a special case of the k-subtree enumeration problem, originally introduced by Ferreira, Grossi, and Rizzi (ESA'11, 275-286, 2011), where the input graph is a tree of n vertices. Based on the reverse search technique, we present an efficient enumeration algorithm that lists all k-subtrees of an input tree with constant delay (i.e., worst-case O(1) time per subtree). This improves on the straightforward application of Ferreira et al.'s algorithm, which takes O(k) amortized time per subtree when the input is a tree.
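The object being enumerated can be made concrete with a brute-force baseline (our own illustration of the problem, not the paper's constant-delay reverse-search algorithm): a k-subtree of a tree is simply a connected set of k vertices.

```python
from itertools import combinations

def k_subtrees_naive(n, edges, k):
    """Enumerate all k-subtrees (connected k-vertex subsets) of a tree
    on vertices 0..n-1 by exhaustive search -- an exponential-time
    baseline for checking small cases, not the paper's algorithm."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def connected(subset):
        s = set(subset)
        seen = {next(iter(s))}
        stack = list(seen)
        while stack:  # BFS/DFS restricted to the chosen subset
            u = stack.pop()
            for w in (adj[u] & s) - seen:
                seen.add(w)
                stack.append(w)
        return seen == s

    return [set(c) for c in combinations(range(n), k) if connected(c)]

# On the path 0-1-2-3, the 2-subtrees are exactly the three edges.
subs = k_subtrees_naive(4, [(0, 1), (1, 2), (2, 3)], 2)
```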

Approximating the rainbow - better lower and upper bounds

In this paper we study the \emph{minimum rainbow subgraph problem}, motivated by applications in bioinformatics. The input of the problem is an undirected graph in which each edge is coloured with one of $p$ possible colours. The goal is to find a subgraph of minimum order (i.e., minimum number of vertices) that contains precisely one edge from each colour class. We give a $\min_q(\sqrt{2p}, q + \frac{\Delta}{e^{p q^2/\Delta n}})$-approximation algorithm using LP rounding, where $\Delta$ is the maximum degree of the input graph. In particular, this is a $\max(\sqrt{2n}, \sqrt{2\Delta\ln{\Delta}})$-approximation algorithm. On the other hand, we prove that there exists a constant $c$ such that the minimum rainbow subgraph problem admits no $c\ln{\Delta}$-approximation, unless ${\bf NP \subseteq TIME}(n^{O(\log{\log{n}})})$.

Online Knapsack Problem with Removal Cost

In this paper, we study online knapsack problems with removal cost. The input is a sequence of items $u_1,u_2,\dots,u_n$, each associated with a size. When $u_i$ arrives, an online algorithm may remove some items from the knapsack, paying their removal cost, in order to put $u_i$ into the knapsack, or it may reject $u_i$ without paying anything. The profit of an online algorithm is the sum of the sizes of the items in the knapsack minus the total removal cost incurred. Our goal is to make the profit as large as possible when the game ends. The removal cost models, for instance, a cancellation charge or a disposal fee. We prove tight bounds on the competitive ratio of these problems when the removal cost is either a unit cost or proportional to the size of the removed item.

An Improved Exact Algorithm for TSP in Degree-4 Graphs

The paper presents an $O^*(1.716^n)$-time polynomial-space algorithm for the traveling salesman problem in an $n$-vertex edge-weighted graph with maximum degree $4$, improving on both the $O^*(1.890^n)$-time polynomial-space algorithm of Eppstein and the $O^*(1.733^n)$-time exponential-space algorithm of Gebauer.

On the Advice Complexity of Tournaments

Advice complexity, introduced by Karp and Lipton [KL80], asks, for a given power of interpreter, how many bits of “help” suffice to accept a given language (where the advice depends only on the length of the input). Thus, this notion contains aspects of both informational complexity and computational complexity, and it captures non-uniform complexity. We are concerned with the connection between this notion and P-selective sets. A set is P-selective if, given two inputs, one can decide in polynomial time which of them is more likely to be a member of the set. This basic notion can be used, for example, to perform binary search over such sets and, more generally, says that the set can be decided with simple procedures. The main question we study in our paper is how complex the advice must be as a function of the power of the interpreter, from the standpoint of average-case complexity. In the case of deterministic interpreters, Ko [Ko83] proved that quadratic advice suffices, and Hemaspaandra and Torenvliet [HT96] showed that linear advice is required. A long-standing open problem, and our main interest, is whether P-selective sets can be accepted by deterministic polynomial-time Turing machines with linear-size advice. We prove that P-selective sets can be decided by a non-uniform probabilistic polynomial-time Turing machine using linear-size advice. This is the first sub-quadratic result for the class P-sel with bounded-error probabilistic machines. We emphasize the importance of this result: the shorter the advice required, the simpler the decision procedure for the set. As a consequence, several Karp-Lipton-type theorems are obtained. Our methods are based on several fundamental concepts of theoretical computer science, such as hardness amplification and von Neumann's Minimax Theorem. We utilize powerful recent advances in these areas and demonstrate surprising connections between them and the seemingly unrelated notion of selectivity.

A Linear Time Algorithm for Computing Minmax Regret 1-Median on a Tree with Positive Vertex Weights

In a model of the facility location problem, the uncertainty in the weight of a vertex is represented by an interval of weights, and the minmax regret is used as the objective. The best previously known algorithm for finding the minmax regret 1-median on trees with positive vertex weights takes O(n log n) time. We improve this to O(n), solving the open problem posed by Brodal et al. [2008].

External Memory Soft Heap, and Hard Heap, a Meldable Priority Queue

An external-memory version of the soft heap, which we call the ``\underline{E}xternal \underline{M}emory \underline{S}oft \underline{H}eap'' (EMSH), is presented. It supports {\tt Insert}, {\tt Findmin}, {\tt Deletemin} and {\tt Meld} operations and, as in the soft heap, it guarantees that the number of corrupt elements in it is never more than $\epsilon N$, where $N$ is the total number of items inserted and $\epsilon$ is a parameter called the error rate. The amortised I/O complexity of an {\tt Insert} is $O(\frac{1}{B} \log_{m}\frac{1}{\epsilon})$; {\tt Findmin}, {\tt Deletemin} and {\tt Meld} all have non-positive amortised I/O complexities. When we choose an error rate $\epsilon<1/N$, the EMSH stays devoid of corrupt nodes, and thus becomes a meldable priority queue that we call the ``hard heap''. The amortised I/O complexity of an {\tt Insert}, in this case, is $O(\frac{1}{B} \log_{m}\frac{N}{B})$, over a sequence of operations involving $N$ {\tt Insert}s; again, {\tt Findmin}, {\tt Deletemin} and {\tt Meld} all have non-positive amortised I/O complexities. If the inserted keys are all unique, a {\tt Delete} (by key) operation can also be performed at an amortised I/O complexity of $O(\frac{1}{B} \log_{m}\frac{N}{B})$. A balancing operation performed at appropriate intervals on a hard heap ensures that the number of I/Os performed by a sequence of $S$ operations on it is $O(\frac{S}{B}+\frac{1}{B}\sum_{i = 1}^{S} \log_{m}\frac{N_i}{B})$, where $N_i$ is the number of elements in the heap before the $i$th operation.

A remark on one-wayness versus pseudorandomness

Every pseudorandom generator is in particular a one-way function. If we only consider part of the output of the pseudorandom generator, is this part still one-way? Here is a general setting formalizing this question. Suppose $G:\{0,1\}^n\to\{0,1\}^{l(n)}$ is a pseudorandom generator with stretch $l(n)>n$. Let $M_R\in\{0,1\}^{m(n)\times l(n)}$ be a linear operator computable in polynomial time given randomness $R$. Consider the function $F(x,R)=(M_R\,G(x),R)$. We obtain the following results.
- There exists a pseudorandom generator such that for every constant $\mu<1$ and for an arbitrary polynomial-time computable $M_R\in\{0,1\}^{(1-\mu)n\times l(n)}$, $F$ is not one-way. Furthermore, our construction yields a tradeoff between the hardness of the pseudorandom generator and the output length $m(n)$. For example, given $\alpha=\alpha(n)$ and a $2^{cn}$-hard pseudorandom generator, we construct a $2^{\alpha cn}$-hard pseudorandom generator such that $F$ is not one-way, where $m(n)\leq\beta n$ and $\alpha+\beta=1-o(1)$.
- We show this tradeoff to be tight for 1-1 pseudorandom generators. That is, for any $G$ which is a $2^{\alpha n}$-hard 1-1 pseudorandom generator, if $\alpha+\beta=1+\epsilon$ then there is $M_R\in\{0,1\}^{\beta n\times l(n)}$ such that $F$ is an $\Omega(2^{\epsilon n})$-hard one-way function.

Restricted Max-Min Fair Allocations with Inclusion-free Intervals

We consider the restricted assignment version of the problem of fairly allocating a set of $m$ indivisible items to $n$ agents (also known as the Santa Claus problem). We study the variant where every item has some non-negative value and can be assigned only to an \textit{interval} of players (i.e., to a set of consecutive players); moreover, the intervals are inclusion-free. The goal is to distribute the items to the players; fair allocations in this context are those maximizing the minimum utility received by any agent. When every item can be assigned to any player, a PTAS is known~\cite{woeginger_ptas}. We present a $1/2$-approximation algorithm for this more general variant with inclusion-free intervals.

A simple $D^2$-sampling based PTAS for $k$-means and other Clustering problems

Given a set of points $P \subset \mathbb{R}^d$, the $k$-means clustering problem is to find a set of $k$ {\em centers} $C = \{c_1,...,c_k\}, c_i \in \mathbb{R}^d,$ such that the objective function $\sum_{x \in P} d(x,C)^2$, where $d(x,C)$ denotes the distance between $x$ and the closest center in $C$, is minimized. This is one of the most prominent objective functions studied with respect to clustering. $D^2$-sampling \cite{ArthurV07} is a simple non-uniform sampling technique for choosing points from a set of points. It works as follows: given a set of points $P \subseteq \mathbb{R}^d$, the first point is chosen uniformly at random from $P$. Subsequently, a point from $P$ is chosen as the next sample with probability proportional to the square of its distance to the nearest previously sampled point. $D^2$-sampling has been shown to have nice properties with respect to the $k$-means clustering problem. Arthur and Vassilvitskii \cite{ArthurV07} show that $k$ points chosen as centers from $P$ using $D^2$-sampling give an $O(\log{k})$ approximation in expectation. Ailon et al. \cite{AJMonteleoni09} and Aggarwal et al. \cite{AggarwalDK09} extended the results of \cite{ArthurV07} to show that $O(k)$ points chosen as centers using $D^2$-sampling give an $O(1)$ approximation to the $k$-means objective function with high probability. In this paper, we further demonstrate the power of $D^2$-sampling by giving a simple randomized $(1 + \epsilon)$-approximation algorithm that uses $D^2$-sampling at its core.
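The sampling procedure described in the abstract can be sketched directly (a minimal Python illustration of $D^2$-sampling itself; the function name, the distance helper and the toy points are ours, and this is not the paper's PTAS):

```python
import random

def d2_sample(points, k, dist2):
    """Choose k centers from `points` by D^2-sampling: the first center
    is uniform, and each subsequent center is chosen with probability
    proportional to its squared distance to the nearest center so far."""
    centers = [random.choice(points)]
    while len(centers) < k:
        # squared distance from every point to its closest chosen center
        weights = [min(dist2(p, c) for c in centers) for p in points]
        total = sum(weights)
        if total == 0:  # all remaining points coincide with centers
            centers.append(random.choice(points))
            continue
        r = random.uniform(0, total)
        acc = 0.0
        for p, w in zip(points, weights):  # roulette-wheel selection
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

# Toy example: 1-D points with squared Euclidean distance.
pts = [(0.0,), (0.1,), (5.0,), (5.1,), (10.0,)]
sq = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
centers = d2_sample(pts, 3, sq)
```

Because already-chosen centers have weight zero, far-away clusters are strongly favoured, which is the intuition behind the approximation guarantees cited above.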

An Improved Algorithm for Packing $T$-Paths in Inner Eulerian Networks

A digraph $G=(V,E)$ with a distinguished set $T\subseteq V$ of \emph{terminals} is called \emph{inner Eulerian} if for each $v \in V - T$ the numbers of arcs entering and leaving $v$ are equal. By a \emph{$T$-path} we mean a simple directed path connecting distinct terminals with all intermediate nodes in $V - T$. This paper concerns the problem of finding a maximum $T$-path packing, i.e. a maximum collection of arc-disjoint $T$-paths. Lomonosov established a min-max relation for this problem and gave a polynomial-time algorithm. The capacitated version was studied by Ibaraki, Karzanov, and Nagamochi, who came up with a strongly polynomial algorithm of complexity $O(\phi(V,E) \cdot \log T + V^2E)$ (hereinafter $\phi(n,m)$ denotes the complexity of a max-flow computation in a network with $n$ nodes and $m$ arcs). For unit capacities, the latter algorithm takes $O(\phi(V,E) \cdot \log T + VE)$ time, which is unsatisfactory since a max-flow can be found in $o(VE)$ time. For this case, we present an improved method that runs in $O(\phi(V,E) \cdot \log T + E \log V)$ time. Thus, plugging in, e.g., the max-flow algorithm of Dinic, the overall complexity reduces from $O(VE)$ to $O(\min(V^{2/3}E, E^{3/2}) \cdot \log T)$.

Unexplored Steiner Ratios in Geometric Networks

In this paper we extend the context of the Steiner ratio and examine the influence of Steiner points on the weight of a graph in two generalizations of the Euclidean minimum weight connected graph (MST). The studied generalizations are with respect to the weight function and the connectivity condition. First, we consider the Steiner ratio of the Euclidean minimum weight connected graph under the budget allocation model. The budget allocation model is a geometric version of a new model for weighted graphs introduced by Ben-Moshe et al. in [1]. It is known that adding auxiliary points, called Steiner points, to the initial point set may result in a lighter Euclidean minimum spanning tree. We show that this behavior changes under the budget allocation model. Apparently, Steiner points are not helpful in reducing the weight of the geometric minimum spanning tree under the budget allocation model (BMST), as opposed to the traditional model. An interesting relation between the BMST and the Euclidean square root metric reveals a somewhat surprising result: Steiner points are also redundant for reducing the weight of the minimum spanning tree in the Euclidean square root metric. Finally, we consider the Steiner ratio of geometric $t$-spanners. A geometric $t$-spanner is a geometric connected graph with the following strengthened connectivity requirement: any two nodes $p$ and $q$ are connected by a path of length at most $t \times |pq|$, where $|pq|$ is the Euclidean distance between $p$ and $q$. Surprisingly, the contribution of Steiner points to the weight of geometric $t$-spanners has not been a subject of research so far (to the best of our knowledge). Here, we show that the influence of Steiner points on reducing the weight of geometric spanner networks goes much further than what is known for trees.
[1] B. Ben-Moshe, E. Omri, and M. Elkin. Optimizing budget allocation in graphs. In 23rd Canadian Conference on Computational Geometry, pages 33--38, 2011.

Towards Optimal and Expressive Kernelization for d-Hitting Set

A sunflower in a hypergraph is a set of hyperedges with pairwise equal intersections. Sunflowers are a useful tool in polynomial-time data reduction for problems formalizable as d-Hitting Set, the problem of covering all hyperedges (of cardinality at most d) of a hypergraph by at most k vertices. Additionally, in fault diagnosis, sunflowers yield concise explanations for ``highly defective structures''. We provide a linear-time algorithm that, by finding sunflowers, transforms an instance of d-Hitting Set into an equivalent instance comprising at most O(k^d) hyperedges and vertices. In terms of parameterized complexity theory, we show a problem kernel with asymptotically optimal size (unless coNP $\subseteq$ NP/poly). We show that the number of vertices can be reduced to O(k^(d-1)) with additional processing in O(k^(1.5d)) time---nontrivially combining the sunflower technique with known problem kernels due to Abu-Khzam and Moser.

Partially Specified Nearest Neighbor Search

We study the Partial Nearest Neighbor Problem, which consists in preprocessing an $n$-point database $\db$ from a $\dim$-dimensional metric space into a data structure such that the following query can be answered efficiently: given a query vector $Q \in \nspace$ and an axes-aligned query subspace represented by $S \in \{0,1\}^{\dim}$, report a point $P \in \db$ with $\dist_S(Q,P) \leq \dist_S(Q,P')$ for all $P' \in \db$, where $\dist_S(Q,P)$ is the distance between $Q$ and $P$ in the subspace $S$. This problem formulation is naturally related to similarity search between feature vectors with respect to \emph{a subset} of the features. The problem is thus of great practical importance in bioinformatics, image recognition, business data mining, etc. However, due to the exponentially many subspaces, each changing distances significantly, the problem has considerable complexity. We present the first exact algorithms for the $\ell_2$- and $\ell_{\infty}$-metrics with linear space and sub-linear worst-case query time, give a simple approximation algorithm, and show experimentally that our approach performs well on real-world data.
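The query semantics can be illustrated with a naive linear scan (our own baseline sketch to make the problem statement concrete; the paper's contribution is a sub-linear data structure, not this scan):

```python
import math

def partial_nn(db, q, mask, metric="l2"):
    """Brute-force partial nearest neighbor: the distance between a
    database point and the query q is computed only over coordinates
    where mask[i] == 1 (the axes-aligned query subspace S)."""
    def dist(p):
        # coordinate differences restricted to the selected subspace
        coords = [abs(a - b) for a, b, m in zip(p, q, mask) if m]
        if metric == "l2":
            return math.sqrt(sum(c * c for c in coords))
        return max(coords, default=0.0)  # l_infinity metric
    return min(db, key=dist)

db = [(1.0, 9.0, 2.0), (4.0, 1.0, 8.0), (0.0, 0.0, 0.0)]
q = (1.0, 1.0, 9.0)
# Only the first and third coordinates participate in the distance.
best = partial_nn(db, q, (1, 0, 1))
```

The exponential blow-up mentioned in the abstract comes from the $2^{\dim}$ possible masks, each of which induces a different nearest-neighbor ordering.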

Geodesic Order Types

The geodesic between two points a and b in the interior of a simple polygon P is the shortest polygonal path inside P that connects a to b. It is thus the natural generalization of straight line segments on unconstrained point sets to polygonal environments. In this paper we use this extension to generalize the concept of the order type of a set of points in the Euclidean plane to geodesic order types. In particular, we show that, for any set S of points and an ordered subset B of at least four points, one can always construct a polygon P such that the points of B define the geodesic hull of S w.r.t. P, in the specified order. Moreover, we show that an abstract order type derived from the dual of the Pappus arrangement can be realized as a geodesic order type.

Multi-Pattern Matching with Bidirectional Indexes

We study multi-pattern matching in a scenario where the pattern set is to be matched against several texts, and hence indexing the pattern set is affordable. Such scenarios arise, e.g., in metagenomics, where the pattern set represents the DNA of several species and the goal is to find out which species are represented in the sample and in which quantity. We develop a generic search method that exploits bidirectional indexes both for the pattern set and for the texts, and we analyze the best- and worst-case running times of the method on a worst-case text. We show that finding the instance of the search method with minimum best-case running time on a worst-case text is NP-hard. The positive result is that an instance within a logarithmic factor of the minimum best-case running time can be found in polynomial time using a bidirectional index called the affix tree. We further show that affix trees can be simulated space-efficiently using a bidirectional variant of compressed suffix trees.

Stretch in Bottleneck Games

In bottleneck congestion games, the social cost is the worst congestion (bottleneck) on any resource, and each player selfishly minimizes the worst resource congestion in its strategy. We examine the price of anarchy with respect to the stretch, a measure of variation in the resource utilization among the strategy sets of the players. The stretch is particularly important in routing problems, since it compares the chosen path lengths with the respective shortest path lengths. We show that the price of anarchy in general bottleneck games is bounded by O(sm), where s is the stretch and m is the total number of resources. In linear bottleneck games, where the resource latencies are linearly proportional to the players' workloads, the price of anarchy improves to O(\sqrt{s m}). These bounds are asymptotically tight. For linear games with constant stretch we obtain a \Theta(\sqrt m) improvement over the previously best known bound.

A local algorithm for finding dense bipartite-like subgraphs

We give a local algorithm to extract dense bipartite-like subgraphs, which characterize cyber-communities in the Web [10]. We use the bipartiteness ratio of a set as the quality measure, as introduced by Trevisan [17]. Our algorithm, denoted FindDenseBipartite$(v, s, \theta)$, takes as input a starting vertex $v$, a volume target $s$ and a bipartiteness-ratio parameter $\theta$, and outputs an induced subgraph of $G$. It is guaranteed to have the following approximation performance: for any subgraph $S$ with partition $(L,R)$ and bipartiteness ratio $\theta$, there exists a subset $S_\theta\subseteq S$ with $\vol(S_\theta)\geq \vol(S)/9$ such that if the starting vertex $v\in S_\theta$, the algorithm FindDenseBipartite$(v,s,\theta)$ outputs a subgraph $(X,Y)$ with density $O(\sqrt{\theta})$. The running time of the algorithm is $O(s^2(\Delta+\log s))$, where $\Delta$ is the maximum degree of $G$, independent of the size of $G$.

Succinct Representations of Binary Trees for Range Minimum Queries

We provide several succinct representations of binary trees that can be used to represent the Cartesian tree of an array $A$ of size $n$. All our representations take the optimal $2n + o(n)$ bits of space in the worst case and support range minimum queries (RMQs) in $O(1)$ time. The first one is a modification of the representation of Farzan and Munro (SWAT 2008); a consequence of this result is that we can represent the Cartesian tree of a random permutation in $1.92n + o(n)$ bits in expectation. The second one is a new, and very simple, representation that also supports a number of other standard operations (including navigation) in $O(1)$ time. Finally, using a well-known transformation between binary trees and ordinal trees, we show yet another method to represent Cartesian trees: transform a Cartesian tree (a binary tree) into an ordinal tree, and then represent the ordinal tree, using ordinal-tree operations to effect operations on the Cartesian tree. This provides an alternative, and more natural, way to view the 2D-Min-Heap of Fischer and Heun (SICOMP 2011). Furthermore, we show that the preprocessing needed to output the data structure can be performed in linear time using $o(n)$ bits of extra working space, improving the result of Fischer and Heun, who use $n + o(n)$ bits of working space.
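The connection between Cartesian trees and RMQs that underlies all of these representations can be sketched in a few lines (the standard stack construction and a deliberately naive LCA, shown only to illustrate the equivalence, not the paper's succinct data structures):

```python
def cartesian_tree(a):
    """Build the (min-ordered) Cartesian tree of array a in O(n) with a
    stack. parent[i] is the parent position of i; the root has parent -1
    and a[parent[i]] <= a[i] for every non-root i."""
    parent = [-1] * len(a)
    stack = []  # positions on the rightmost spine, increasing values
    for i in range(len(a)):
        last = -1
        while stack and a[stack[-1]] > a[i]:
            last = stack.pop()     # i becomes the new parent of these
        if stack:
            parent[i] = stack[-1]  # i hangs off the spine
        if last != -1:
            parent[last] = i
        stack.append(i)
    return parent

def rmq_via_lca(parent, i, j):
    """RMQ(i, j) equals the lowest common ancestor of positions i and j
    in the Cartesian tree (naive ancestor walk, O(n) per query)."""
    anc = set()
    x = i
    while x != -1:
        anc.add(x)
        x = parent[x]
    x = j
    while x not in anc:
        x = parent[x]
    return x

a = [3, 1, 4, 1, 5, 9, 2, 6]
par = cartesian_tree(a)
idx = rmq_via_lca(par, 2, 5)  # position of the minimum of a[2..5]
```

Succinct RMQ structures replace the naive LCA walk here with an $O(1)$-time LCA over a $2n + o(n)$-bit encoding of exactly this tree.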