25th Annual Symposium on Computational Geometry

Accepted papers with abstracts

  • Micha Sharir and Roel Apfelbaum.
    An Improved Bound on the Number of Unit Area Triangles.

    We show that the number of unit-area triangles determined by a set of n points in the plane is O(n^{9/4 + ε}), for any ε > 0, improving the recent bound O(n^{44/19}) of Dumitrescu et al.

  • Boris Bukh, Jiri Matousek and Gabriel Nivasch.
    Lower Bounds for Weak epsilon-Nets and Stair-Convexity.

    A subset N of R^d is called a "weak epsilon-net" (with respect to convex sets) for a finite point set X in R^d if N intersects every convex set that contains at least ε|X| points of X. For every fixed d ≥ 2 and every r ≥ 1 we construct subsets X of R^d for which every weak 1/r-net has at least Ω(r log^{d-1} r) points; this is the first superlinear lower bound for weak epsilon-nets in a fixed dimension.

    The construction is a "stretched grid", i.e., the Cartesian product of d suitable fast-growing finite sequences, and convexity in this grid can be analyzed using "stair-convexity", a new variant of the usual notion of convexity.

    We also consider weak epsilon-nets for the diagonal of our stretched grid in R^d, d ≥ 3, which is an "intrinsically 1-dimensional" point set. In this case we exhibit slightly superlinear lower bounds (involving the inverse Ackermann function), showing that upper bounds by Alon, Kaplan, Nivasch, Sharir, and Smorodinsky (2008) are not far from the truth in the worst case.

    Using the stretched grid we also improve the known upper bound for the so-called "second selection lemma" in the plane by a logarithmic factor. That is, we obtain a set T of m triangles with vertices in an n-point set in the plane such that no point is contained in more than O(m^2 / (n^3 log(n^3 / m))) triangles of T.

  • Joachim Giesen, Balint Miklos, Mark Pauly and Camille Wormser.
    The Scale Axis Transform.

    We introduce the scale axis transform, a new skeletal shape representation for bounded open sets O ⊂ R^d. The scale axis transform induces a family of skeletons that captures the important features of a shape in a scale-adaptive way and yields a hierarchy of successively simplified skeletons. Its definition is based on the medial axis transform and the simplification of the shape under multiplicative scaling: the s-scaled shape O_s is the union of the medial balls of O with radii scaled by a factor of s. The s-scale axis transform of O is the medial axis transform of O_s, with radii scaled back by a factor of 1/s. We prove topological properties of the scale axis transform and we describe the evolution s ↦ O_s by defining the multiplicative distance function to the shape and studying properties of the corresponding steepest ascent flow. All our theoretical results hold for any dimension. In addition, using a discrete approximation, we present several examples of two-dimensional scale axis transforms that illustrate the practical relevance of our new framework.

  • Francis Y. L. Chin, Zeyu Guo and He Sun.
    Minimum Manhattan Network is NP-Complete.

    Given a set T of n points in R^2, a network G is said to be a Manhattan network on T if, for all p, q ∈ T, there exists a Manhattan path between p and q, consisting of horizontal and vertical line segments, all of which are in G. For a given network G, let the length of G, denoted by L(G), be the total length of all the segments in G. For a given point set T, the Minimum Manhattan Network Problem is to find a Manhattan network G on T with minimum L(G). Over the past ten years, whether this problem is NP-complete has been open, and a vast amount of research has been devoted to designing approximation algorithms for it.

    In this paper, we prove that this problem is strongly NP-complete, which implies that no FPTAS exists for it unless P = NP. The reduction is from the well-known 3-SAT problem and relies on six different gadgets. The validity of the reduction has been confirmed with a computer program.

  • Eyal Ackerman, Jacob Fox, János Pach and Andrew Suk.
    On Grids in Topological Graphs.

    A topological graph is a graph drawn in the plane with vertices represented by points and edges as arcs connecting its vertices. If the edges are drawn as straight-line segments, then the graph is geometric. A (k, l)-grid in a topological graph is a pair of edge subsets E_1 and E_2 such that |E_1| = k, |E_2| = l, and every edge in E_1 crosses every edge in E_2. It is known that for fixed constants k, l, every n-vertex topological graph with no (k, l)-grid has O(n) edges. We conjecture that this remains true even when: (1) considering grids with distinct vertices; or (2) the edges within each subset of the grid are required to be pairwise disjoint and the graph is geometric. These conjectures are shown to be true apart from log n and log^2 n factors, respectively. We also settle the second conjecture for the first nontrivial case k = 2, l = 1, and for convex geometric graphs. The latter result follows from a stronger statement that generalizes the celebrated Marcus-Tardos Theorem on excluded patterns in 0-1 matrices.

  • Mohammad Abam and Mark de Berg.
    Kinetic Spanners in R^d.

    We present a new (1 + ε)-spanner for sets of n points in R^d. Our spanner has size O(n / ε^{d-1}) and maximum degree O(log^d n). The main advantage of our spanner is that it can be maintained efficiently as the points move: Assuming the trajectories of the points can be described by bounded-degree polynomials, the number of topological changes to the spanner is O(n^2 / ε^{d-1}), and using a supporting data structure of size O(n log^d n) we can handle events in O(log^{d+1} n) time. Moreover, the spanner can be updated in O(log n) time if the flight plan of a point changes. This is the first kinetic spanner for points in R^d whose performance does not depend on the spread of the point set.

  • Naoki Katoh and Shin-ichi Tanigawa.
    A Proof of the Molecular Conjecture.

    A d-dimensional body-and-hinge framework is, roughly speaking, a structure consisting of rigid bodies connected by hinges in d-dimensional space, whose generic infinitesimal rigidity has been characterized in terms of the underlying multigraph independently by Tay and Whiteley as follows: A multigraph G can be realized as an infinitesimally rigid body-and-hinge framework by mapping each vertex to a body and each edge to a hinge if and only if (D - 1)G contains D edge-disjoint spanning trees, where D = (d + 1 choose 2) and (D - 1)G is the graph obtained from G by replacing each edge by D - 1 parallel edges. In 1984 they jointly posed a question about whether their combinatorial characterization can be further applied to a nongeneric case. Specifically, they conjectured that G can be realized as an infinitesimally rigid body-and-hinge framework if and only if G can be realized as one with the additional "hinge-coplanar" property, i.e., all the hinges incident to each body are contained in a common hyperplane. This conjecture is called the Molecular Conjecture due to the equivalence between the infinitesimal rigidity of "hinge-coplanar" body-and-hinge frameworks and that of bar-and-joint frameworks derived from molecules in 3 dimensions. The 2-dimensional case of the conjecture was proved by Jackson and Jordán in 2006. In this paper we prove this long-standing conjecture affirmatively for general dimension. Also, as a corollary, we obtain a combinatorial characterization of the 3-dimensional rigidity matroid of the bar-and-joint framework of the square of a graph.

  • Frédéric Chazal, David Cohen-Steiner, Marc Glisse, Leonidas Guibas and Steve Oudot.
    Proximity of Persistence Modules and their Diagrams.

    Topological persistence has proven to be a key concept for the study of real-valued functions defined over topological spaces. Its validity relies on the fundamental property that the persistence diagrams of nearby functions are close. However, existing stability results are restricted to the case of continuous functions defined over triangulable spaces.

    In this paper, we present new stability results that do not suffer from the above restrictions. Furthermore, by working at an algebraic level directly, we make it possible to compare the persistence diagrams of functions defined over different spaces, thus enabling a variety of new applications of the concept of persistence. Along the way, we extend the definition of persistence diagram to a larger setting, introduce the notions of discretization of a persistence module and associated pixelization map, define a proximity measure between persistence modules, and show how to interpolate between persistence modules, thereby lending a more analytic character to this otherwise algebraic setting. We believe these new theoretical concepts and tools shed new light on the theory of persistence, in addition to simplifying proofs and enabling new applications.

  • Mark de Berg, Herman Haverkort and Constantinos Tsirogiannis.
    Visibility Maps of Realistic Terrains have Linear Smoothed Complexity.

    We study the complexity of the visibility map of terrains whose triangles are fat, not too steep, and of roughly the same size. It is known that the complexity of the visibility map of such a terrain with n triangles is Θ(n^2) in the worst case. We prove that if the elevations of the vertices of the terrain are subject to uniform noise proportional to the edge lengths, then the worst-case expected (smoothed) complexity is only Θ(n). This explains why visibility maps of superlinear complexity are unlikely to be encountered in practice.

  • Gaiane Panina and Ileana Streinu.
    Flattening Single-Vertex Origami: The Non-Expansive Case.

    Very little is known about the foldability and reconfiguration of rigid origami: flat pieces of paper creased along internal edges forming a plane graph, allowed to bend along the creases while maintaining the rigid geometry of the faces. But no progress can be made without fully understanding the simplest case, which appears as a necessary step in any possible method: the single-vertex origami. For a vertex placed in the interior of the piece of paper, or for a boundary vertex incident to a paper angle of at most π, Streinu and Whiteley solved the problem by reducing it to the Carpenter's Rule Problem for closed spherical polygons. Their method, using expansive motions induced by pointed spherical pseudo-triangulation mechanisms, can be applied to unfold closed spherical polygons of total length less than 2π, and open ones of length less than π. It is known that polygons of length larger than 2π may not be reconfigurable.

    The remaining case, open polygons of length between π and 2π, is not directly amenable to the pseudo-triangulation technique, as it requires both contractive and expansive motions. Here, we settle the problem by showing that it is always possible to unfold, without self-collisions, a spherical bar-and-joint polygonal path of total length between π and 2π. The motion (necessarily partially non-expansive) can be carried out in discrete steps, and completed in finite time, for which we give precise bounds.

  • Oswin Aichholzer, Wolfgang Aigner, Franz Aurenhammer, Thomas Hackl, Bert Juettler, Margot Rabl and Elisabeth Pilgerstorfer.
    Divide-and-Conquer for Voronoi Diagrams Revisited.

    We show how to divide the edge graph of a Voronoi diagram into a tree that corresponds to the medial axis of a (modified) planar domain. This allows a division into base cases which, in the bottom-up phase, can be merged by trivial concatenation. The resulting construction algorithm - similar to Delaunay triangulation methods - is not bisector-based and merely computes dual links between the sites, its atomic steps being inclusion tests for sites in circles. This guarantees computational simplicity and numerical stability. Moreover, no part of the Voronoi diagram, once constructed, has to be discarded again. The algorithm works for polygonal and curved objects as sites, and in particular for circular arcs, which allows its extension to general free-form objects via stability-preserving and data-saving biarc approximations. The algorithm is randomized, with expected runtime O(n log n) under certain assumptions on the input data. Experiments substantiate efficient behavior even when these assumptions are not met. Applications to offset computations and motion planning for general objects are described.

  • Marc Pouget, Sylvain Lazard, Elias Tsigaridas, Fabrice Rouillier, Luis Peñaranda and Jinsan Cheng.
    On the Topology of Planar Algebraic Curves.

    We revisit the problem of computing the topology and geometry of a real algebraic plane curve. The topology is of prime interest but geometric information, such as the position of singular and critical points, is also relevant. A challenge is to compute efficiently this information for the given coordinate system even if the curve is not in generic position.

    Previous methods based on the cylindrical algebraic decomposition (CAD) use sub-resultant sequences and computations with polynomials with algebraic coefficients. A novelty of our approach is to replace these tools by Gröbner basis computations and isolation with rational univariate representations. This has the advantage of avoiding computations with polynomials with algebraic coefficients, even in non-generic positions. Our algorithm isolates critical points in boxes and computes a decomposition of the plane (which is not a CAD) by rectangular boxes. This decomposition also induces a new approach for computing an arrangement of polylines isotopic to the input curve. We also present an analysis of the complexity of our algorithm. An implementation of our algorithm demonstrates its efficiency, in particular on high-degree non-generic curves.

  • Vicente H. F. Batista, David L. Millman, Sylvain Pion and Johannes Singler.
    Parallel Geometric Algorithms for Multi-Core Computers.

    Computers with multiple processor cores using shared memory are now ubiquitous. In this paper, we present several parallel geometric algorithms that specifically target this environment, with the goal of exploiting the additional computing power. The d-dimensional algorithms we describe are (a) spatial sorting of points, as is typically used for preprocessing before using incremental algorithms, (b) kd-tree construction, (c) axis-aligned box intersection computation, and finally (d) bulk insertion of points in Delaunay triangulations for mesh generation algorithms or simply computing Delaunay triangulations. We show experimental results for these algorithms in 3D, using our implementations based on the Computational Geometry Algorithms Library (CGAL). This work is a step towards what we hope will become a parallel mode for CGAL, where algorithms automatically use the available parallel resources without requiring significant user intervention.

  • Pankaj Kumar Agarwal, Esther Ezra and Micha Sharir.
    Near-Linear Approximation Algorithms for Geometric Hitting Sets.

    Given a set system (X, ℜ), the hitting set problem is to find a smallest-cardinality subset H ⊆ X with the property that each range R ∈ ℜ has a non-empty intersection with H. We present near-linear time approximation algorithms for the hitting set problem under the following geometric settings: (i) ℜ is a set of planar regions with small union complexity. (ii) ℜ is a set of axis-parallel d-rectangles in d-space. In both cases X is either the entire d-dimensional space or a finite set of points in d-space. The approximation factors yielded by the algorithm are small; they are either the same as, or within an O(log n) factor of, the best factors known to be computable in polynomial time.

  • Sandor Fekete, Dietmar Fey, Marcus Komann, Alexander Kroeller, Marc Reichenbach and Christiane Schmidt.
    Distributed Vision with Smart Pixels.

    We study a problem related to computer vision: How can a field of sensors compute higher-level properties of observed objects deterministically in sublinear time, without accessing a central authority? This issue is not only important for real-time processing of images, but lies at the very heart of understanding how a brain may be able to function.

    In particular, we consider a square field of n "smart pixels" on a video chip that observe a B/W image. Each pixel can exchange low-level information with its immediate neighbors. We show that it is possible to compute the centers of gravity, along with a principal component analysis, of all connected components of the black grid graph in time O(√n), by developing appropriate distributed protocols that are modeled after sweepline methods.

    Our method is not only interesting from a philosophical and theoretical point of view; it is also useful in actual applications, such as controlling a robot arm that has to seize objects on a moving belt. We describe details of an implementation on an FPGA; the code has also been turned into a hardware design for an application-specific integrated circuit (ASIC).

  • Gunnar Carlsson, Vin de Silva and Dmitriy Morozov.
    Zigzag Persistent Homology and Real-valued Functions.

    We study the problem of computing zigzag persistence of a sequence of homology groups and consider a particular sequence derived from the levelsets of a real-valued function on a topological space. The result is a local, symmetric interval descriptor of the function. Our structural results establish a connection between the zigzag pairs in this sequence and extended persistence, and in the process resolve an open question associated with the latter. Our algorithmic results not only provide a way to compute zigzag persistence for any sequence of homology groups, but combined with our structural results give a novel algorithm for computing extended persistence. This algorithm is easily parallelizable and uses (asymptotically) less memory.

  • Mikael Vejdemo-Johansson and Vin de Silva.
    Persistent Cohomology and Circular Coordinates.

    Nonlinear dimensionality reduction (NLDR) algorithms such as Isomap, LLE and Laplacian Eigenmaps address the problem of representing high-dimensional nonlinear data in terms of low-dimensional coordinates which represent the intrinsic structure of the data. This paradigm incorporates the assumption that real-valued coordinates provide a rich enough class of functions to represent the data faithfully and efficiently. On the other hand, there are simple structures which challenge this assumption: the circle, for example, is one-dimensional but its faithful representation requires two real coordinates. In this work, we present a strategy for constructing circle-valued functions on a statistical data set. We develop a machinery of persistent cohomology to identify candidates for significant circle-structures in the data, and we use harmonic smoothing and integration to obtain the circle-valued coordinate functions themselves. We suggest that this enriched class of coordinate functions permits a precise NLDR analysis of a broader range of realistic data sets.

  • Andrea Vattani.
    k-means Requires Exponentially Many Iterations Even in the Plane.

    The k-means algorithm is a well-known method for partitioning n points that lie in d-dimensional space into k clusters. Its main features are simplicity and speed in practice. Theoretically, however, the best known upper bound on its running time (i.e. O(n^{kd})) is, in general, exponential in the number of points (when kd = Ω(n / log n)). Recently, Arthur and Vassilvitskii [3] showed a super-polynomial worst-case analysis, improving the best known lower bound from Ω(n) to 2^{Ω(√n)} with a construction in d = Ω(√n) dimensions. In [3] they also conjectured the existence of superpolynomial lower bounds for any d ≥ 2. Our contribution is twofold: we prove this conjecture and we improve the lower bound, by presenting a simple construction in the plane that leads to the exponential lower bound 2^{Ω(n)}.
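
    For reference, the objects these bounds count are the iterations of Lloyd's method, which k-means runs until the clustering stabilizes. A minimal sketch in Python (an illustration of the algorithm being analyzed, not the paper's construction; the sample points are placeholders):

        import random

        def kmeans(points, k, max_iter=10 ** 6):
            """Lloyd's k-means; each pass of the loop is one 'iteration'."""
            centers = random.sample(points, k)
            for it in range(max_iter):
                # Assignment step: attach every point to its nearest center.
                clusters = [[] for _ in range(k)]
                for p in points:
                    j = min(range(k),
                            key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
                    clusters[j].append(p)
                # Update step: move each center to its cluster's centroid.
                new_centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                               for i, cl in enumerate(clusters)]
                if new_centers == centers:      # converged: clustering is stable
                    return centers, it
                centers = new_centers
            return centers, max_iter

        pts = [(0.0, 0.0), (1.0, 0.0), (0.9, 1.1), (5.0, 5.0), (5.2, 4.8)]
        print(kmeans(pts, 2))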

  • Natasa Jovanovic, Jan Korst, Ramon Clout, Verus Pronk and Ludo Tolhuizen.
    Candle in the Woods: Asymptotic Bounds on Minimum Blocking Sets.

    We consider the problem of determining the minimum number N_d of unit disks required to block all rays emanating from a point P in two-dimensional space, where each disk has distance at least d to the point P and to any other disk. We study the asymptotic behavior of N_d as d tends to infinity. By deriving upper and lower bounds, we prove that π^2/16 ≤ lim_{d→∞} N_d / d^2 ≤ 18/π^2, where the upper bound is based on establishing an interesting link between unit disks positioned on a regular triangular grid and Farey sequences from number theory. By positioning the point P as well as the centers of the disks on the grid points of such a triangular grid, we create hexagonal rings of disks around P. We prove that we need exactly d - 1 of these hexagons to block all rays emanating from P.

  • Bernard Chazelle and Wolfgang Mulzer.
    Computing Hereditary Convex Structures.

    Color the n vertices of a convex polytope P in R^3 red and blue. Can we compute the convex hull of each color class in o(n log n) time? What if we have k > 2 colors? What if the colors are random? Consider an arbitrary query halfspace and call the vertices of P inside it blue: can the convex hull of the blue points be computed in time linear in their number? More generally, can we quickly compute the blue hull without looking at the whole polytope? This paper considers several instances of hereditary computation and provides new results for them. In particular, we resolve an eight-year-old open problem by showing how to split a convex polytope in linear expected time.

  • Nabil Mustafa and Saurabh Ray.
    PTAS for Geometric Hitting Set Problems via Local Search.

    We consider the problem of computing minimum-sized geometric hitting sets: given a set of geometric objects and a set of points, the goal is to compute the smallest subset of points that hits all the geometric objects. The problem is known to be strongly NP-hard even for simple geometric objects like unit disks in the plane. Therefore, unless P = NP, it is not possible to get a fully polynomial-time approximation scheme (FPTAS) for such problems. We give the first PTAS for this problem when the geometric objects are halfspaces in ℜ^3 and when they are r-admissible regions in the plane (this includes pseudodisks, since they are 2-admissible). When there are m objects and n points, the algorithm computes a (1 + ε)-approximation to the minimum hitting set in time O(mn^{O(ε^{-2})}). Quite surprisingly, our algorithm is a very simple local search algorithm which iterates over local improvements.
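
    To convey the flavor of such a local search, here is a generic swap-based sketch (our illustration, not the paper's algorithm or its analysis; the hits predicate and the example disks are placeholders):

        from itertools import combinations

        def hits(point, disk):
            """Placeholder predicate: objects here are disks (center, radius)."""
            (cx, cy), r = disk
            return (point[0] - cx) ** 2 + (point[1] - cy) ** 2 <= r * r

        def feasible(subset, objects):
            return all(any(hits(p, o) for p in subset) for o in objects)

        def improve(solution, points, objects, b):
            """Look for a swap of up to b points for strictly fewer new ones."""
            outside = [p for p in points if p not in solution]
            for k_out in range(1, b + 1):
                for out in combinations(sorted(solution), k_out):
                    for k_in in range(k_out):           # strictly fewer come in
                        for incoming in combinations(outside, k_in):
                            candidate = (solution - set(out)) | set(incoming)
                            if feasible(candidate, objects):
                                return candidate
            return None

        def local_search_hitting_set(points, objects, b=2):
            solution = set(points)                      # trivially feasible start
            while True:
                better = improve(solution, points, objects, b)
                if better is None:
                    return solution
                solution = better

        pts = [(0, 0), (1, 0), (2, 0)]
        disks = [((0.5, 0), 0.6), ((1.5, 0), 0.6)]
        print(local_search_hitting_set(pts, disks))     # shrinks to {(1, 0)}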

  • Jean-Daniel Boissonnat, Olivier Devillers and Samuel Hornus.
    An Efficient Implementation of the Delaunay Triangulation and its Graph in Medium Dimension.

    We describe a new implementation of the well-known incremental algorithm for constructing Delaunay triangulations in any dimension. Our implementation follows the exact computing paradigm and is fully robust. Extensive comparisons show that our implementation outperforms the best currently available codes for convex hulls and Delaunay triangulations, and that it can handle quite large input sets in spaces of dimension up to 6. To circumvent prohibitive memory usage, we also propose a modification of the algorithm that uses and stores only the Delaunay graph (the edges of the full triangulation). We show that a careful implementation of the modified algorithm is only 5 to 8 times slower than the original one while drastically reducing memory usage in dimension 4 or above.

  • Sang Won Bae and Kyung-Yong Chwa.
    The Geodesic Farthest-Site Voronoi Diagram in a Polygonal Domain with Holes.

    We investigate the farthest-site Voronoi diagram of k point sites with respect to the geodesic distance in a polygonal domain with n corners and h (≥ 0) holes. For the case h = 0, Aronov et al. proved in 1993 that the diagram has at most O(k) faces and complexity at most O(n + k). However, no nontrivial upper bound on the geodesic farthest-site Voronoi diagram in a polygonal domain with h > 0 has been known since. In this paper, we show that the diagram in a polygonal domain consists of Θ(hk) faces and that its total combinatorial complexity is Θ(nk) in the worst case for any h ≥ 1. Interestingly, the worst-case complexity of the diagram is independent of the number h of holes when h ≥ 1, while the maximum possible number of faces depends on h rather than on the complexity n of the polygonal domain. We also present an O(nk log^2 (n + k) log k)-time algorithm that constructs the diagram explicitly.

  • Chuanjiang Luo, Jian Sun and Yusu Wang.
    Integral Estimation from Point Cloud in ℜd: a Geometric View.

    Integration over a domain, such as a Euclidean space or a Riemannian manifold, is a fundamental problem across scientific fields. Often, the underlying domain is accessible only through a discrete approximation, such as a set of points sampled from it, and it is crucial to be able to estimate integrals in such discrete settings. In this paper, we study the problem of estimating the integral of a function over a k-submanifold embedded in ℜ^d from its function values over a set of sample points. Previously, such estimates were usually obtained in a statistical setting, where the input data is assumed to be drawn from a certain probability distribution. Our paper considers arbitrary point cloud data (PCD) and approaches the problem from a geometric point of view. Specifically, we model the integral as a weighted sum and propose two weighting schemes: the Voronoi and the principal eigenvector weighting schemes. The running time of both methods depends mostly on the intrinsic dimension of the underlying manifold, rather than on the ambient dimension. We show that the estimate based on the Voronoi scheme converges to the true integral (an explicit error bound is given) under the standard (ε, δ)-sampling condition. This is the first result of this sort for estimating integrals from general PCDs. For the principal eigenvector scheme, although no theoretical guarantee is established, we show its connection to the heat diffusion operator and give several justifications for its construction. Experiments show that both new methods consistently outperform common statistical methods under various sampling conditions.
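
    In symbols, the weighted-sum model has the following generic shape (a sketch; reading the Voronoi scheme's weight as the volume of each sample's Voronoi cell restricted to the manifold is our illustrative interpretation, not a statement from the paper):

        % Estimate of the integral of f over M from samples p_1, ..., p_n:
        \[
          \int_M f \,\mathrm{d}\mathrm{vol} \;\approx\; \sum_{i=1}^{n} w_i\, f(p_i),
          \qquad
          w_i \;\approx\; \mathrm{vol}\bigl(\mathrm{Vor}(p_i) \cap M\bigr).
        \]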

  • David Eppstein, Elena Mumford, Bettina Speckmann and Kevin Verbeek.
    Area-Universal Rectangular Layouts.

    A rectangular layout is a partition of a rectangle into a finite set of interior-disjoint rectangles. Rectangular layouts appear in various applications: as rectangular cartograms in cartography, as floorplans in building architecture and VLSI design, in treemap visualizations, and as graph drawings. Often areas are associated with the rectangles of a rectangular layout, and it is hence desirable that one rectangular layout be able to represent several area assignments. A layout is area-universal if any assignment of areas to rectangles can be realized by a combinatorially equivalent rectangular layout. We identify a simple necessary and sufficient condition for a rectangular layout to be area-universal: a rectangular layout is area-universal if and only if it is one-sided. More generally, given any rectangular layout L and any assignment of areas to its regions, we show that there can be at most one layout (up to horizontal and vertical scaling) that is combinatorially equivalent to L and achieves that area assignment. We also investigate similar questions for perimeter assignments. The adjacency requirements for the rectangles of a rectangular layout can be specified in various ways, most commonly via the dual graph of the layout. We show how to find an area-universal layout for a given set of adjacency requirements whenever such a layout exists.

  • Andrea Francke and Michael Hoffmann.
    Maximum Degree Four Euclidean Minimum Spanning Tree is NP-hard.

    We show that it is NP-hard to decide, for a given set P of n points in the Euclidean plane and a given parameter k ∈ ℜ, whether P admits a spanning tree of maximum vertex degree four whose sum of edge lengths does not exceed k.

  • Rom Pinchasi.
    Halving Lines and Measure Concentration in the Plane.

    Given a set P of n points in the plane and a collection l_1, ..., l_k of k halving lines of P, indexed according to the increasing order of their slopes, we denote by d(l_j, l_{j+1}) the number of points of P that lie above l_{j+1} and below l_j. We prove an upper bound of O(nk^{1/3}) for the sum ∑_{j=1}^{k-1} d(l_j, l_{j+1}). We show how this problem is related to the halving lines problem and provide several consequences about measure concentration in ℜ^2.

  • Tamal Dey and Kuiyu Li.
    Cut Locus and Topology from Surface Point Data.

    A cut locus of a point p in a compact Riemannian manifold M is defined as the set of points where minimizing geodesics issued from p stop being minimizing. It is known that a cut locus contains most of the topological information of M. Our goal is to utilize this property of cut loci to decipher the topology of M from a point sample. Recently it has been shown that Rips complexes can be built from a point sample P of M systematically to compute the Betti numbers, the ranks of the homology groups of M. Rips complexes can be computed easily and are therefore favored over others, such as the restricted Delaunay, alpha, Čech, and witness complexes. However, the sizes of Rips complexes tend to be large. Since the dimension of a cut locus is lower than that of the manifold M, a sub-sample of P approximating the cut locus is usually much smaller in size and hence admits a relatively smaller Rips complex.

    In this paper we explore the above approach for surfaces embedded in any high dimensional Euclidean space. We present an algorithm that computes a sub-sample P' of a sample P of a 2-manifold where P' approximates a cut locus. Empirical results show that the first Betti number of M can be computed from the Rips complexes built on these sub-samples. The sizes of these Rips complexes are much smaller than the one built on the original sample of M.

  • Marc van Kreveld and Rodrigo Silveira.
    Embedding Rivers in Polyhedral Terrains.

    Data conflation is a major issue in GIS: spatial data obtained from different sources, using different acquisition techniques, needs to be combined into a single consistent data set before the data can be analyzed. The most common occurrence for hydrological applications is the conflation of a digital elevation model and rivers. We assume that a polyhedral terrain is given and that a subset of its edges are designated as river edges, each with a flow direction. The goal is to obtain a terrain where the rivers flow along valley edges, in the specified direction, while preserving the original terrain as much as possible.

    We study the problem of changing the elevations of the vertices to ensure that all the river edges become valley edges, while minimizing the total elevation change. We show that this problem can be solved using linear programming. However, several types of artifacts can occur in an optimal solution. We analyze which other criteria, relevant for hydrological applications, can be captured by linear constraints as well, in order to reduce such artifacts. We implemented and tested the approach on real terrain and river data, and describe the results obtained with different variants of the algorithm. Moreover, we give a polynomial-time algorithm for river embedding for the special case where only the elevations of the river vertices can be modified.
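
    As a rough illustration of such an LP (a sketch only: the paper's valley-edge constraints involve the triangles incident to each river edge, while here a simple "descend along the flow direction by at least delta" constraint stands in for them, and all data are made up):

        import numpy as np
        from scipy.optimize import linprog

        z = np.array([10.0, 9.0, 9.5, 9.2])    # original vertex elevations z_v
        river_edges = [(0, 1), (1, 3)]          # (a, b) means water flows a -> b
        delta = 0.1                             # required descent per river edge
        n = len(z)

        # Variables x = (z'_0..z'_{n-1}, u_0..u_{n-1}); minimize sum(u),
        # where u_v >= |z'_v - z_v| models the elevation change at vertex v.
        c = np.concatenate([np.zeros(n), np.ones(n)])
        A_ub, b_ub = [], []
        for v in range(n):
            row = np.zeros(2 * n); row[v] = 1.0; row[n + v] = -1.0
            A_ub.append(row); b_ub.append(z[v])          #  z'_v - u_v <=  z_v
            row = np.zeros(2 * n); row[v] = -1.0; row[n + v] = -1.0
            A_ub.append(row); b_ub.append(-z[v])         # -z'_v - u_v <= -z_v
        for a, b in river_edges:
            row = np.zeros(2 * n); row[b] = 1.0; row[a] = -1.0
            A_ub.append(row); b_ub.append(-delta)        #  z'_b <= z'_a - delta

        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(None, None)] * n + [(0, None)] * n)
        print("new elevations:", res.x[:n], "total change:", res.fun)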

  • Mashhood Ishaque, Bettina Speckmann and Csaba Toth.
    Shooting Permanent Rays Among Disjoint Polygons in the Plane.

    We present a data structure for ray shooting and insertion in the free space among disjoint polygonal obstacles with a total of n vertices in the plane. The portion of each query ray between the starting point and the first obstacle hit is inserted permanently as a new obstacle. Our data structure uses O(n log n) space and preprocessing time, and it supports m successive ray shooting and insertion queries in O(n log2 n + m log m) total time. We present two applications for our data structure:

    (1) Our data structure supports an efficient implementation of auto-partitions in the plane, that is, binary space partitions where each partition is done along a supporting line of an input segment. If n input line segments are fragmented into m pieces by an auto-partition, then it can now be computed in O(n log^2 n + m log m) time. This improves the expected runtime of Paterson and Yao's classical randomized auto-partition algorithm for n disjoint line segments in the plane to O(n log^2 n).

    (2) If we are given disjoint polygonal obstacles with a total of n vertices in the plane, a permutation of the reflex vertices, and a half-line at each reflex vertex that partitions the reflex angle into two convex angles, then the convex partitioning algorithm draws a ray emanating from each reflex vertex in the prescribed order in the given direction until it hits another obstacle, a previous ray, or infinity. The previously best implementation (with a semi-dynamic ray shooting data structure) requires O(n^{3/2 - ε/2}) time using O(n^{1 + ε}) space. Our data structure improves the runtime to O(n log^2 n).

  • Chandra Chekuri, Ken Clarkson and Sariel Har-Peled.
    On the Multi-Cover Problem in Geometric Settings.

    We consider the set multi-cover problem in geometric settings. Given a set of points P and a collection of geometric shapes (or sets) F, we wish to find a minimum-cardinality subset of F such that each point p ∈ P is covered by (contained in) at least d(p) sets, where d(p) is an integer demand for p. When the demands d(p) = 1 for all p, this is the standard set cover problem. The set cover problem in geometric settings admits an approximation ratio better than that of the general version. In this paper, we show that similar improvements can be obtained for the multi-cover problem as well. In particular, we obtain an O(log opt) approximation for set systems of bounded VC-dimension and an O(1) approximation for covering points by half-spaces in three dimensions, as well as for some other classes of shapes.

  • Timothy Chan and Sariel Har-Peled.
    Approximation Algorithms for Maximum Independent Set of Pseudo-Disks.

    We present the first constant-factor approximation algorithm for maximum independent set of pseudo-disks in the plane, both in the weighted and unweighted cases. For the unweighted case, we suggest a local search algorithm that, in polynomial time, provides a 10-approximation to the optimal solution. For the weighted case, we suggest a novel rounding scheme based on an LP relaxation of the problem that leads to a constant-factor approximation.

    Most previous algorithms for maximum independent set (in geometric settings) relied on packing arguments that are not applicable in this case. As such, the analysis of both algorithms requires some new ideas that we believe to be of independent interest.

  • Bernd Gärtner and Martin Jaggi.
    Core-Sets for Polytope Distance.

    Following recent work of Clarkson, we translate the core-set framework to the problems of finding the point closest to the origin inside a polytope, finding the shortest distance between two polytopes, Perceptrons, and soft- as well as hard-margin Support Vector Machines (SVM).

    We prove asymptotically matching upper and lower bounds on the size of core-sets, showing that ε-core-sets of size ⌈(1 + o(1)) E / ε⌉ always exist, and that this is best possible. The crucial quantity E is what we call the excentricity of a polytope, or of a pair of polytopes.

    Additionally, we prove linear convergence speed of Gilbert's algorithm, one of the earliest known approximation algorithms for polytope distance, and generalize both the algorithm and the proof to the two-polytope case (a sketch of the iteration is given below).

    Interestingly, our core-set bounds also imply that we can, for the first time, prove matching upper and lower bounds for the sparsity of Perceptron and SVM solutions.
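
    For context, Gilbert's iteration can be read, in modern terms, as a Frank-Wolfe style descent for the point of a polytope closest to the origin; a minimal sketch under that reading (our illustration, not the paper's formulation; the vertex set V is a placeholder):

        import numpy as np

        def gilbert_min_norm(V, iterations=1000):
            """Approximate the point of conv(V) nearest the origin; V is (m, d)."""
            x = V[0].astype(float)                 # start at an arbitrary vertex
            for _ in range(iterations):
                s = V[np.argmin(V @ x)]            # vertex minimizing <x, v>
                d = s - x
                if x @ d >= 0:                     # optimality condition met
                    break
                t = min(1.0, -(x @ d) / (d @ d))   # exact line search on [x, s]
                x = x + t * d
            return x

        V = np.array([[2.0, 1.0], [3.0, -1.0], [2.0, 3.0]])
        print(gilbert_min_norm(V))                 # closest point of the triangle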

  • Kasturi Varadarajan.
    Epsilon Nets and Union Complexity.

    We consider the following combinatorial problem: given a set of n objects (for example, disks in the plane, or triangles) and an integer L ≥ 1, what is the size of the smallest subset of these n objects that covers all points that are in at least L of the objects? This is the classic question about the size of an L/n-net for these objects. It is well known that for fairly general classes of geometric objects the size of an L/n-net is O((n/L) log (n/L)). There are instances where this general bound can be improved, and the improvement is usually due to bounds on the combinatorial complexity (size) of the boundary of the union of these objects. Thus, the boundary of the union of m disks has size O(m), and this translates into an O(n/L) bound on the size of an L/n-net for disks. For m fat triangles, the size of the union boundary is O(m log log m), and this yields L/n-nets of size O((n/L) log log (n/L)).

    Improved nets directly translate into an upper bound on the ratio between the optimal integral solution and the optimal fractional solution of the corresponding geometric set cover problem. Thus, for covering points by disks, this ratio is O(1), and for covering points by fat triangles, it is O(log log n). This connection to approximation algorithms for geometric set cover is a major motivation for attempting to improve bounds on nets; the standard translation is sketched below.

    Our main result is an argument that in some cases yields nets that are smaller than those previously obtained from the size of the union boundary. Thus for fat triangles, for instance, we obtain nets of size O((n/L) log log log n). We use this to obtain a randomized polynomial time algorithm that gives an O(log log log k)-approximation for the problem of covering k points by the smallest subset of a given set of triangles.
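
    The translation mentioned above is, roughly, the following standard argument (our paraphrase, not text from the paper), where φ denotes the net-size factor, e.g. φ(x) = log log x for fat triangles:

        % Let x be an optimal fractional cover with weight W = \sum_S x_S,
        % and suppose every (1/W)-net with respect to the weights x_S / W
        % has size at most O(W \varphi(W)). Every point p is fractionally
        % covered, \sum_{S \ni p} x_S \ge 1, i.e., the objects containing p
        % carry a 1/W fraction of the total weight, so any (1/W)-net
        % contains one of them and is therefore itself a cover. Hence
        \[
          \mathrm{OPT} \;\le\; O\!\left(W\,\varphi(W)\right)
                       \;=\; O\!\left(\varphi(\mathrm{OPT}_f)\right)\cdot \mathrm{OPT}_f .
        \]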

  • Timothy M. Chan and Eric Y. Chen.
    Optimal In-Place Algorithms for 3-d Convex Hulls and 2-d Segment Intersection.

    We describe the first optimal randomized in-place algorithm for the basic 3-d convex hull problem (and consequently for 2-d Voronoi diagrams). The algorithm runs in O(n log n) expected time using only O(1) extra space; this improves the previous O(n log^3 n) bound by Brönnimann, Chan, and Chen [SoCG'04]. The same approach leads to an optimal randomized in-place algorithm for the 2-d line segment intersection problem, with O(n log n + k) expected running time for output size k, improving the previous O(n log^2 n + k) bound by Vahrenhold [WADS'05]. We also point out a simplification of a known optimal cache-oblivious (non-in-place) algorithm by Kumar and Ramos (2002) for 3-d convex hulls, and observe its applicability to 2-d segment intersection, extending a recent result for 2-d red/blue segment intersection by Arge, Mølhave, and Zeh [ESA'08]. Our results are all obtained by standard random sampling techniques, with some interesting twists.

  • Luc Habert and Michel Pocchiola.
    Arrangements of Double Pseudolines.

    We define an arrangement of double pseudolines as a finite family of at least two homotopically trivial simple closed curves embedded in a projective plane, with the property that any two meet in exactly four points, where they cross, and induce a cell structure on the projective plane. We show that any arrangement of double pseudolines is the dual family of a family of pairwise disjoint convex bodies of a projective plane endowed with a topological point-line incidence geometry, and we provide a simple axiomatic characterization of the class of isomorphism classes of indexed arrangements of oriented double pseudolines.

  • Friedrich Eisenbrand, Nicolai Haehnle and Thomas Rothvoss.
    Diameter of Polyhedra: Limits of Abstraction.

    We investigate the diameter of a family of connected graphs G = (V, E) which contains the 1-skeletons of non-degenerate polyhedra in dimension d having n facets. The vertices V of G are subsets of {1,...,n} of cardinality d and the edges E of G are such that the following connectivity condition holds:

    For each u, v in V there exists a path from u to v whose intermediate vertices all contain u ∩ v.

    The largest diameter of such a graph is denoted by Δ(d, n). In the setting of non-degenerate polyhedra, each vertex is uniquely determined by the d facets in which it is contained, and there exists a path from a vertex u to a vertex v which does not leave the facets in which both u and v are contained. Thus, if Δ_u(d, n) is the maximum diameter of a non-degenerate polyhedron with n facets in dimension d, then Δ_u(d, n) ≤ Δ(d, n) holds.

    This graph class furthermore contains a class introduced by Kalai who, in addition to the condition above, explicitly defines the edges E to be the pairs uv with |u ∩ v| = d - 1. Kalai calls this relaxation of 1-skeletons ultraconnected set systems and remarks that the asymptotically best upper bounds known for polyhedra can also be proved to hold in this setting. These bounds are the linear upper bound 2^{d-1} n in fixed dimension d by Larman and the quasipolynomial upper bound n^{log d + 2} by Kalai and Kleitman.

    We observe that similar bounds also hold for Δ(d, n). More precisely, we prove Δ(d, n) ≤ 2^{d-1} n and Δ(d, n) ≤ n^{log d + 1}.

    Our main result is a superlinear lower bound on Δ(d, n) which, for a suitable choice of d, is Ω(n^{3/2}). The non-trivial construction relies on disjoint covering designs and uses Hall's theorem, which characterizes the existence of perfect matchings in a bipartite graph.

    This result shows that, while this simple abstraction still offers enough features to allow the proofs of the best asymptotic upper bounds on the diameter of polyhedra to be adapted, a linear bound, as it is conjectured in the famous Hirsch conjecture, would require additional features stemming from geometry.

  • Chee Yap and Long Lin.
    Parametrizability and Nonlocal Isotopy: An Approach to Efficient Approximation of Nonsingular Curves.

    We consider domain subdivision algorithms for computing isotopic approximations of nonsingular curves represented implicitly by an equation f(X, Y) = 0. Two algorithms in this area are due to Snyder (1992) and Plantinga and Vegter (2004). We introduce a new algorithm that combines the advantages of these two: like Snyder, we use the parametrizability criterion for subdivision, and like Plantinga and Vegter we exploit non-local isotopy. We further extend our algorithm in two important and practical directions: first, we allow subdivision cells to be rectangles with arbitrary but bounded aspect ratios; second, we extend the input domains to regions R_0 with arbitrary geometry, which might not be simply connected. Our algorithm halts as long as the curve has no singularities in the region and intersects the boundary of R_0 transversally. We report very encouraging preliminary experimental results, showing that our algorithm can be much more efficient than both Plantinga and Vegter's and Snyder's algorithms.
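
    To give the flavor of a domain subdivision algorithm (a toy sketch, not either algorithm above: it uses a naive interval-style exclusion test hard-wired to the curve x^2 + y^2 - 1 = 0, with no parametrizability or isotopy machinery):

        def f_range(x0, x1, y0, y1):
            """Range of f(x, y) = x^2 + y^2 - 1 over the box [x0,x1] x [y0,y1]."""
            def sq_range(a, b):                    # range of t^2 for t in [a, b]
                lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
                return lo, max(a * a, b * b)
            xl, xh = sq_range(x0, x1)
            yl, yh = sq_range(y0, y1)
            return xl + yl - 1.0, xh + yh - 1.0

        def subdivide(x0, x1, y0, y1, tol=0.05, out=None):
            """Keep boxes of width <= tol that may intersect the curve f = 0."""
            if out is None:
                out = []
            lo, hi = f_range(x0, x1, y0, y1)
            if lo > 0.0 or hi < 0.0:               # exclusion: no zero in this box
                return out
            if x1 - x0 <= tol and y1 - y0 <= tol:
                out.append((x0, x1, y0, y1))
                return out
            xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
            for bx in ((x0, xm), (xm, x1)):        # otherwise split into quadrants
                for by in ((y0, ym), (ym, y1)):
                    subdivide(bx[0], bx[1], by[0], by[1], tol, out)
            return out

        print(len(subdivide(-2.0, 2.0, -2.0, 2.0)), "candidate boxes kept")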

  • Erin Chambers, Jeff Erickson and Amir Nayyeri.
    Minimum Cuts and Shortest Homologous Cycles.

    We describe the first algorithms to compute minimum cuts in surface-embedded graphs in near-linear time. Given an undirected graph embedded on an orientable surface of genus g, with two specified vertices s and t, our algorithm computes a minimum (s, t)-cut in g^{O(g)} n log n time. Except for the special case of planar graphs, for which O(n log n)-time algorithms have been known for more than 20 years, the best previous time bounds for finding minimum cuts in embedded graphs follow from algorithms for general sparse graphs. A slight generalization of our minimum-cut algorithm computes a minimum-cost subgraph in every Z_2-homology class. We also prove that finding a minimum-cost subgraph homologous to a single input cycle is NP-hard.

  • Don Sheehy and Gary Miller.
    Approximate Center Points with Proofs.

    This paper presents the first deterministic algorithm for computing an approximate center point of a set P ⊂ ℜ^d with running time subexponential in d. The algorithm is a derandomization of the Iterated-Radon algorithm of Clarkson et al. and is guaranteed to terminate with an O(1/d^2)-center. Moreover, it returns a polynomial-time checkable proof of the approximation guarantee, despite the coNP-completeness of testing center points in general. We also show how using iterated Tverberg points can improve the runtime of the deterministic algorithm and improve the approximation guarantee of the randomized algorithm. In particular, we show how to improve the O(1/d^2) approximation ratio of the Iterated-Radon algorithm to O(1/d^{r/(r-1)}) at a cost of O((rd)^d) in running time, for any integer r.
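
    For reference, the randomized Iterated-Radon scheme that the paper derandomizes can be sketched as follows (our illustration; the recursion depth and the sampling policy are placeholders):

        import random
        import numpy as np

        def radon_point(pts):
            """Radon point of d+2 points in R^d via a nontrivial affine dependence."""
            P = np.array(pts, dtype=float)          # shape (d + 2, d)
            A = np.vstack([P.T, np.ones(len(P))])   # solve A a = 0, a != 0
            a = np.linalg.svd(A)[2][-1]             # null vector from the SVD
            pos = a > 0                             # split by sign of coefficients
            return P[pos].T @ a[pos] / a[pos].sum() # common point of the two hulls

        def iterated_radon(points, depth):
            """Aggregate random points through recursive Radon points."""
            d = len(points[0])
            if depth == 0:
                return np.array(random.choice(points), dtype=float)
            return radon_point([iterated_radon(points, depth - 1)
                                for _ in range(d + 2)])

        pts = [(random.random(), random.random()) for _ in range(200)]
        print(iterated_radon(pts, depth=4))         # a rough center of the sample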

  • Csaba Toth.
    Binary Plane Partitions for Disjoint Line Segments.

    A binary space partition (BSP) for a set of disjoint objects in Euclidean space is a recursive decomposition of the space, where each step partitions the space (and some of the objects) along a hyperplane and recurses on the objects clipped in each of the two open halfspaces. The size of a BSP is defined as the number of resulting fragments of the input objects. It is shown that every set of n disjoint line segments in the plane admits a BSP of size O(n log n / log log n). This bound is best possible apart from the constant factor.

  • Peyman Afshani, Chris Hamilton and Norbert Zeh.
    Cache-Oblivious Range Reporting With Optimal Queries Requires Superlinear Space.

    We consider a number of range reporting problems in two and three dimensions and prove lower bounds on the amount of space used by any cache-oblivious data structure for these problems that achieves an optimal query bound of O(log_B N + K/B) memory transfers in the worst case, where K is the size of the query output.

    The problems we study are three-sided range reporting in the plane, dominance reporting in three dimensions, and halfspace range reporting in three dimensions. We prove that, in order to achieve the above query bound or even a bound of O((log_B N)^c (1 + K/B)) for any constant c > 0, the structure has to use Ω(N (log log N)^ε) space, where ε > 0 is a constant that depends on c and on the constants hidden in the big-Oh notation of the query bound.

    Our result has a number of interesting consequences. The first one is a new type of separation between the I/O-model and the cache-oblivious model, as I/O-efficient structures with the optimal query bound and using linear or O(N log N) space are known for the above problems. The second consequence is the non-existence of linear-space cache-oblivious persistent B-trees with worst-case optimal range queries.

  • Glencora Borradaile, James Lee and Anastasios Sidiropoulos.
    Randomly Removing g Handles at Once.

    It was shown in [Indyk and Sidiropoulos 07] that any orientable graph of genus g can be probabilistically embedded into a graph of genus g - 1 with constant distortion. In particular, such graphs embed into a distribution over planar graphs with distortion exp(O(g)). By removing all g handles at once, we present a probabilistic embedding with distortion poly(g), which also works in the non-orientable case. Our result is obtained by showing that the minimum-cut graph [Erickson and Har-Peled 2004] has low dilation, and then randomly cutting this graph out of the surface using the Peeling Lemma of [Lee and Sidiropoulos 08].

  • Peyman Afshani, Chris Hamilton and Norbert Zeh.
    A Unified Approach for Cache-Oblivious Range Reporting and Approximate Range Counting.

    In this paper we present cache-oblivious solutions to two important variants of range searching: range reporting and approximate range counting. The main contribution of our paper is a unified approach for constructing cache-oblivious data structures that provide relative (1 + ε)-approximations for a general class of range counting queries. This class includes three-sided range counting in the plane, dominance counting in three dimensions, and halfspace range counting in three dimensions. Our technique allows us to obtain data structures that use linear space and answer queries in the optimal query bound of O(log_B (N/K)) in the worst case, where K is the number of points in the query range.

    Except for orthogonal range counting in the plane, no such structures were known in the cache-oblivious model before. Furthermore, our approach is powerful enough to provide the first approximate halfspace range counting data structure with a worst-case query time of O(log (N / K)) in internal memory; previously, the same query bound was only known to hold in the expected case.

    An easy but important consequence of our main result is the existence of O(N log N)-space cache-oblivious data structures with an optimal query complexity of O(log_B N + K/B) for the reporting versions of the same problems. Through standard reductions, we also obtain the first cache-oblivious data structures that use near-linear space and achieve the optimal query bound for circular range reporting and k-nearest neighbor search in the plane, and for orthogonal range reporting in three dimensions.
