Approximate Local Search in Combinatorial Optimization

James B. Orlin, Abraham P. Punnen, Andreas S. Schulz

Abstract. Local search algorithms for combinatorial optimization problems are in general of pseudopolynomial running time, and polynomial-time algorithms are often not known for finding locally optimal solutions for NP-hard optimization problems. We introduce the concept of ε-local optimality and show that, for any ε > 0, an ε-local optimum can be identified in time polynomial in the problem size and 1/ε whenever the corresponding neighborhood can be searched in polynomial time. If the neighborhood can be searched in polynomial time for a δ-local optimum, a variation of our main algorithm produces a (δ + ε)-local optimum in time polynomial in the problem size and 1/ε. As a consequence, a combinatorial optimization problem has a fully polynomial-time approximation scheme if and only if the problem of determining a better solution -- the so-called augmentation problem -- has a fully polynomial-time approximation scheme.

1 Introduction. A combinatorial optimization problem Π consists of a collection of instances (F, c), where the set F of feasible solutions is a family of subsets of a finite ground set E = {1, . . . , n}. The objective function c : E → Q+ assigns a nonnegative cost to every feasible solution S ∈ F through c(S) := Σ_{e∈S} c_e. (Formally, we should point out that we focus on linear combinatorial optimization problems as opposed to "general" combinatorial optimization problems, in which the cost of a feasible solution is not necessarily the sum of the cost coefficients of its elements. In other words, the class of problems we are looking at here is equivalent to that of 0/1-integer linear programming problems.) We assume that Π is closed under component-wise scaling of objective function coefficients; i.e., if (F, c) ∈ Π, then (F, c′) ∈ Π for all c′ ∈ Q+. For technical reasons, let us also assume that c(S) ≠ 0 for S ∈ F.
(Author affiliations. James B. Orlin: Sloan School of Management and Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA, USA; Email: jorlin@mit.edu. Abraham P. Punnen: Department of Mathematical Sciences, University of New Brunswick, Saint John, New Brunswick, Canada; Email: punnen@unbsj.ca. Andreas S. Schulz: Sloan School of Management and Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA, USA; Email: schulz@mit.edu.)

The goal is to find a globally optimal solution, i.e., a feasible solution S* such that c(S*) ≤ c(S) for all S ∈ F. (Although we restrict the following discourse to minimization problems, all results extend in a natural way to the case of maximization problems.) The Traveling Salesperson Problem (Tsp) and the Minimum Spanning Tree Problem are typical examples of combinatorial optimization problems. Many combinatorial optimization problems are NP-hard, and one popular practical approach for attacking them is to use local search strategies, which presupposes the concept of a neighborhood. A neighborhood function for an instance (F, c) of a combinatorial optimization problem Π is a mapping N_F : F → 2^F. Note that we assume that N_F does not depend on the objective function c. For convenience, we usually drop the subscript and simply write N. For a feasible solution S, N(S) is called the neighborhood of S. We assume that S ∈ N(S). A feasible solution S̄ is said to be locally optimal with respect to N if c(S̄) ≤ c(S) for all S ∈ N(S̄). Typical neighborhood functions include the k-opt neighborhood for the Tsp, the flip neighborhood for Max Cut and Max 2Sat, and the swap neighborhood for Graph Partitioning. Roughly speaking, a local search algorithm sets out with an initial feasible solution and then repeatedly searches neighborhoods to find better and better solutions until it reaches a locally optimal solution. Figure 1 gives a generic description of the standard local search algorithm, which is sometimes also called iterative improvement.
Step 1: Compute a feasible starting solution S̄;
Step 2: while S̄ is not locally optimal do
    Choose S ∈ N(S̄) such that c(S) < c(S̄);
    S̄ := S;
Step 3: Output S̄.

Figure 1: Algorithm Standard Local Search

(Copyright © 2004 by the Association for Computing Machinery, Inc. and the Society for Industrial and Applied Mathematics. All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Association for Computing Machinery, 1515 Broadway, New York, NY 10036 and the Society for Industrial and Applied Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688.)

Computational studies of local search algorithms and their variations have been extensively reported in the literature for various combinatorial optimization problems. Empirically, local search heuristics appear to converge rather quickly, within low-order polynomial time. Compared to this wealth of information on empirical analysis, relatively little is known on theoretical properties of this class of algorithms. Of course, if all cost coefficients are integers, the standard local search algorithm terminates in a pseudopolynomial number of iterations, since it improves the objective function value by an integral amount at each iteration. However, polynomial-time algorithms for computing a local optimum are in general not known. This is especially true for the above-mentioned combinatorial optimization problems and neighborhoods. On the other hand, Lueker (1976) constructed instances of the metric Tsp such that the standard local search algorithm with the 2-opt neighborhood takes an exponential number of iterations under a particular pivoting rule. Chandra, Karloff, and Tovey (1999) extended this result to k-opt for all fixed k > 2.
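As an illustration, the generic procedure of Figure 1 can be sketched in a few lines of Python. The callables `initial_solution`, `neighbors`, and `cost` are assumed, problem-specific ingredients; they are not part of the paper's formal model.

```python
def standard_local_search(initial_solution, neighbors, cost):
    """Iterative improvement (Figure 1): pivot to any cheaper neighbor
    until the current solution is locally optimal."""
    current = initial_solution
    while True:
        # search the neighborhood for an improving solution
        better = next((s for s in neighbors(current) if cost(s) < cost(current)), None)
        if better is None:
            return current      # locally optimal with respect to `neighbors`
        current = better        # pivot and continue
```

For instance, minimizing (i − 7)^2 over {0, . . . , 10} with the ±1 neighborhood descends from a start at 3 to the local (here also global) optimum 7.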
This discontenting situation prompted Johnson, Papadimitriou, and Yannakakis (1988) to introduce the complexity class PLS. A combinatorial optimization problem together with a given neighborhood function N belongs to PLS if (a) instances are polynomial-time recognizable and a feasible solution is efficiently computable, (b) the feasibility of a proposed solution can be checked in polynomial time, and (c) neighborhoods can be searched efficiently. That is, there is a polynomial-time algorithm with input S ∈ F and c ∈ Q+ that decides whether S is locally optimal and, if not, computes S′ ∈ N(S) with c(S′) < c(S). We call this neighborhood search algorithm Improve_N(S, c). Note that all common local search problems are in PLS, in particular the problems mentioned earlier. The class PLS has its own type of reduction, which gives rise to the identification of complete problems in that class. For instance, Max 2Sat with the flip neighborhood, Max Cut with the swap neighborhood, and Tsp with the Lin-Kernighan neighborhood are PLS-complete (Krentel 1990; Schäffer and Yannakakis 1991; Papadimitriou 1992), and so is Tsp with the k-opt neighborhood for some constant k (Krentel 1989). In particular, if a local optimum can be found in polynomial time for one of these problems, then a local optimum can be computed in polynomial time for all problems in PLS. Unfortunately, it is unknown whether it is hard to find a local optimum for a PLS-complete problem (but Johnson et al. (1988) pointed out that this would imply NP = co-NP) or whether this can be done in polynomial time (which would subsume an efficient algorithm for linear programming, which belongs to PLS). (Note that the negative results for the Tsp mentioned at the end of the previous paragraph only apply to the standard local search algorithm, as described in Figure 1.
Actually, there exist instances for every PLS-complete problem for which the standard local search algorithm takes exponential time, regardless of the tie-breaking and pivoting rules used (Yannakakis 1997, Theorem 13).) In light of this somewhat elusive situation, it is interesting to explore the possibility of identifying approximately locally optimal solutions in polynomial time. We therefore introduce the notion of an ε-locally optimal solution, which is to some extent related to the worst-case relative performance guarantee of an approximation algorithm. We say that a feasible solution S to an instance of a combinatorial optimization problem with neighborhood function N is an ε-local optimum if

c(S) − c(S′) ≤ ε c(S′) for all S′ ∈ N(S),

for some ε > 0. Hence, while S is not necessarily a local optimum, it "almost" is. A family of algorithms (A_ε)_{ε>0} for Π is an ε-local optimization scheme if A_ε produces an ε-local optimum. If the running time of algorithm A_ε is polynomial in the input size and 1/ε, it is called a fully polynomial-time ε-local optimization scheme. In this paper, we show that every combinatorial optimization problem with an efficiently searchable neighborhood has a fully polynomial-time ε-local optimization scheme. In particular, an ε-locally optimal solution can be computed in polynomial time for every problem in PLS, including the PLS-complete problems mentioned above.

Related Work. Ausiello and Protasi (1995) introduced the class GLO (for guaranteed local optima) of optimization problems that have the property that the objective function value of each local optimum is guaranteed to be within a constant factor of that of a global optimum. Khanna, Motwani, Sudan, and Vazirani (1998) extended this notion to nonoblivious GLO problems, which allow for a modification of the objective function used to compute a local optimum.
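The definition above can be checked mechanically. The following helper (a hypothetical sketch, not from the paper) tests whether S is an ε-local optimum by enumerating its neighborhood:

```python
def is_eps_local_optimum(S, neighbors, cost, eps):
    """S is an eps-local optimum if c(S) - c(S') <= eps * c(S')
    for every S' in N(S)."""
    cS = cost(S)
    return all(cS - cost(Sp) <= eps * cost(Sp) for Sp in neighbors(S))
```

With c(S) = 101 and a single neighbor of cost 100, for example, S is a 0.01-local optimum but not a 0.005-local optimum.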
In either class, the underlying neighborhoods contain all solutions of bounded Hamming distance from the current solution; moreover, it is assumed that the number of distinct objective function values over the set of feasible solutions is polynomially bounded. Hence, the standard local search algorithm yields a locally optimal solution in polynomial time. In contrast, we do not make any assumption on the objective function values or neighborhoods considered; we show that an ε-local optimum can always be computed with a polynomial number of calls to the Improve subroutine. On the other hand, while an ε-local optimum has nearly the properties of a local optimum, its objective function value is not guaranteed to be close to that of a global optimum. However, this is in general true for local optima as well. For instance, Papadimitriou and Steiglitz (1977) showed that no local optimum of an efficiently searchable neighborhood for the Tsp can be within a constant factor of the optimal value, unless P = NP. Klauck (1996) studied the complexity of finding a solution whose objective function value is approximately as good as that of the worst local optimum, by using a restricted form of PLS-reductions. Completeness under this reduction implies that an approximation of a local optimum cannot be achieved efficiently unless P = PLS. For instance, 0/1-Programming with the k-flip neighborhood and Tsp with the k-opt neighborhood for constant k are complete under this reduction. A neighborhood function N of a combinatorial optimization problem is exact if every locally optimal solution with respect to N is also globally optimal. In this case, our fully polynomial-time ε-local optimization scheme actually is a fully polynomial-time approximation scheme (FPTAS).
In other words, for every combinatorial optimization problem, one can compute in polynomial time a solution with objective function value arbitrarily close to the optimal value if the corresponding augmentation problem can be solved efficiently. In fact, Grötschel and Lovász (1995) and Schulz, Weismantel, and Ziegler (1995) showed that in this particular situation, one can even find an exact optimal solution efficiently. Schulz and Weismantel (1999, 2002) discussed extensions of this result from 0/1-integer linear programming problems (i.e., combinatorial optimization problems) to arbitrary integer programs. However, none of the employed techniques can be extended to compute a local optimum in polynomial time, unless the neighborhood is exact. In fact, if this were possible, then P = PLS. Fischer (1995) examined a different question, whose answer implies that our main result is best possible if one considers a certain class of algorithms: Given a feasible solution S to an instance (F, c) of a combinatorial optimization problem and a number k in unary, does there exist a local optimum within k neighborhood steps of S? She showed that this question is NP-complete for Max Cut and Max 2Sat under the flip neighborhood and for Tsp under the 2-opt neighborhood, among others.

Our Results. Apart from our main result presented above and discussed in more detail in Section 2 below, we offer various evidence in Section 3 that this result is indeed "best possible": First, note that the fully polynomial-time ε-local optimization scheme produces an ε-locally optimal solution by proceeding from one feasible solution to a solution in its neighborhood, and so forth. In this sense, it is a typical local search algorithm. Fischer's result shows that one cannot hope to find a local optimum efficiently by proceeding in this manner. Yet, an ε-local optimum can be determined in polynomial time.
The key is to modify the original objective function so as to make sufficient progress in a relatively small number of steps. It is worth mentioning that our algorithm follows a path of feasible solutions that is not necessarily monotone with respect to the original objective function. In particular, it differs from a standard local search algorithm, which always replaces the current solution with a neighboring solution of lower cost. Secondly, we point out that any algorithm for computing a local optimum that treats the neighborhood search and the feasibility check of PLS problems as oracles must be exponential in the worst case in the input dimension. In other words, if there exists a polynomial-time algorithm to compute a local optimum for a PLS-complete problem, it must use problem-specific knowledge. In contrast, our algorithmic scheme works for any combinatorial optimization problem so long as a feasible solution can be computed efficiently; in particular, it treats the subroutine Improve as a black box. Thirdly, we show that the existence of a family (A_ε)_{ε>0} of algorithms that find ε-local optima in time polynomial in the input size and log 1/ε implies the existence of a polynomial-time algorithm for computing a local optimum, and we just argued why this is impossible in our framework. Hence, the dependence of the running time on 1/ε cannot be improved. Furthermore, we prove that replacing the relative error in the definition of an ε-local optimum with the absolute error would also yield the existence of a polynomial-time algorithm for computing a local optimum. Finally, in Section 4 we present various extensions of the main result. For instance, we replace the neighborhood search algorithm (part (c) in the definition of PLS) by a subroutine that merely answers "Yes" or "No" to the question whether a feasible solution is locally optimal. In contrast to our previous assumption, it does not return a better neighboring solution if the answer is "No".
Despite this weaker assumption, one can still design a fully polynomial-time ε-local optimization scheme. Moreover, we provide a new characterization of when a combinatorial optimization problem has a fully polynomial-time approximation scheme.

2 A Fully Polynomial-Time ε-Local Optimization Scheme.

In this section we develop a polynomial-time algorithm to compute an ε-local optimum for a given instance (F, c) with ground set E of a combinatorial optimization problem with neighborhood function N. The algorithm starts with a feasible solution S^0. We then alter the element costs c_e for e ∈ E according to a prescribed scaling rule to generate a modified instance. Using local search on this modified problem, we look for a solution with an objective function value (with respect to the original cost) that is at most half that of S^0. If no such solution is found, we are at a local optimum for the modified problem and output this solution. Otherwise we replace S^0 by the solution of cost at most half, call the latter one S^1, and the algorithm is repeated. A formal description of the algorithm is given in Figure 2.

Input: Objective function c : E → N; subroutine Improve_N; initial feasible solution S^0 ∈ F; accuracy ε > 0.
Output: Solution S ∈ F that is an ε-local optimum with respect to N and c.
Step 1: Let i := 0;
Step 2: Let K := c(S^i), q := εK/(2n(1 + ε)), and c′_e := ⌈c_e/q⌉ q for e ∈ E;
Step 3: Let k := 0 and S^{i,k} := S^i;
Step 4: repeat
    Call Improve_N(S^{i,k}, c′);
    if the answer is "No", then
        Let S^{i,k+1} ∈ N(S^{i,k}) such that c′(S^{i,k+1}) < c′(S^{i,k}); set k := k + 1;
    else S := S^{i,k}; stop.
  until c(S^{i,k}) ≤ K/2;
Step 5: Let S^{i+1} := S^{i,k}, set i := i + 1, and goto Step 2.

Figure 2: Algorithm ε-Local Search

Note that the modification of the cost coefficients in Step 2 merely amounts to rounding them up to the closest integer multiple of q. Let us establish correctness first.

Theorem 2.1. Algorithm ε-Local Search produces an ε-local optimum.

Proof. Let S be the solution produced by the algorithm, and let S′ ∈ N(S) be an arbitrary solution in its neighborhood. Let K and q denote the corresponding values from the last execution of Step 2 of the algorithm. Note that

c(S) = Σ_{e∈S} c_e ≤ Σ_{e∈S} ⌈c_e/q⌉ q ≤ Σ_{e∈S′} ⌈c_e/q⌉ q ≤ Σ_{e∈S′} (c_e/q + 1) q ≤ c(S′) + nq,

where n = |E|. Here, the second inequality follows from the fact that S is locally optimal with respect to c′. Together with c(S) ≥ K/2, this implies c(S′) ≥ c(S) − nq ≥ K/2 − nq, and hence

(c(S) − c(S′)) / c(S′) ≤ nq / (K/2 − nq) = 2nq / (K − 2nq) = ε.

Let us now analyze the running time of Algorithm ε-Local Search. In each improving move within the local search in Step 4 of the algorithm, the objective function value is decreased by at least q units. Thus the number of calls to Improve between two consecutive executions of Step 2 is O(n(1 + ε)/ε) = O(n/ε). Step 2 is executed at most log c(S^0) times, where S^0 is the starting solution. Thus the total number of neighborhoods searched is O(n ε^{-1} log c(S^0)). Therefore, if the neighborhood N can be searched in polynomial time for an improving solution, we have a fully polynomial-time ε-local optimization scheme. Note that the number of iterations includes the factor log c(S^0), and hence the bound is not strongly polynomial. However, it is possible to prove a strongly polynomial bound on the number of iterations. For this, we make use of the following lemma, which Radzik (1993) attributes to Goemans.

Lemma 2.1. Let d = (d_1, . . . , d_n) be a real vector and let y_1, . . . , y_p be vectors in {0, 1}^n. If, for all i = 1, . . . , p − 1, 0 ≤ d·y_{i+1} ≤ (1/2) d·y_i, then p = O(n log n).

Note that the value of K at each execution of Step 2 is reduced at least by half. Further, K is a linear combination of the c_e for e ∈ E, with coefficients from the set {0, 1}. Lemma 2.1 therefore implies that Step 2 of Algorithm ε-Local Search can be executed at most O(n log n) times. Thus the total number of calls of Improve in Step 4 throughout the algorithm is O(n ε^{-1} n log n) = O(n^2 ε^{-1} log n). If θ(n, log c_max) is the time needed to search the neighborhood N for an improving solution (i.e., the running time of Improve) and ψ(n) is the time needed to obtain a feasible starting solution, the complexity of Algorithm ε-Local Search is O(ψ(n) + θ(n, log c_max) n ε^{-1} min{n log n, log K^0}), where K^0 is the objective function value of the starting solution and c_max is the maximal value of an objective function coefficient; i.e., K^0 := c(S^0) and c_max := max_{e∈E} c_e. The following theorem summarizes the preceding discussion.

Theorem 2.2. Algorithm ε-Local Search correctly identifies an ε-locally optimal solution of an instance of a combinatorial optimization problem in O(ψ(n) + θ(n, log c_max) n ε^{-1} min{n log n, log K^0}) time.

Thus, if θ(n, log c_max) and ψ(n) are polynomial, then Algorithm ε-Local Search is a fully polynomial-time ε-local optimization scheme. Note that θ(n, log c_max) and ψ(n) are indeed polynomials of the input size for all problems in PLS.

Corollary 2.1. Every problem in PLS has a fully polynomial-time ε-local optimization scheme.

The running time of Algorithm ε-Local Search can sometimes be improved by exploiting the special structure of the underlying neighborhood. A neighborhood function N generates a so-called k-opt neighborhood if S_1, S_2 ∈ N(S) implies |(S_1 \ S_2) ∪ (S_2 \ S_1)| ≤ k, which is equivalent to bounding the Hamming distance between the incidence vectors of S_1 and S_2 by k. For the k-opt neighborhood, by choosing the parameter q := εK/(2k(1 + ε)) in Algorithm ε-Local Search, we still get an ε-local optimum. Moreover, the number of calls of Improve between two consecutive executions of Step 2 of this modified algorithm is O(ε^{-1}), for fixed k. This brings down the total number of such calls to O(ε^{-1} min{n log n, log K^0}) and implies a running time of O(ψ(n) + θ(n, log c_max) ε^{-1} min{n log n, log K^0}) for Algorithm ε-Local Search.
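To make the scheme concrete, here is a compact Python sketch of Algorithm ε-Local Search under the assumptions of Figure 2. The `improve` oracle (which returns a cheaper neighbor or None, mirroring Improve_N) and the representation of solutions as sets of ground-set elements are illustrative choices, not prescribed by the paper.

```python
import math

def eps_local_search(S0, ground_size, cost_coeffs, improve, eps):
    """Sketch of Algorithm eps-Local Search; solutions are sets of elements,
    cost_coeffs maps each ground-set element to its nonnegative cost."""
    n = ground_size
    def cost(T, c):
        return sum(c[e] for e in T)
    S = S0
    while True:
        K = cost(S, cost_coeffs)                      # Step 2: rescale around K
        q = eps * K / (2 * n * (1 + eps))
        scaled = {e: math.ceil(ce / q) * q for e, ce in cost_coeffs.items()}
        while True:                                   # Step 4: local search on scaled costs
            better = improve(S, lambda T: cost(T, scaled))
            if better is None:
                return S       # local optimum for scaled costs => eps-local optimum
            S = better
            if cost(S, cost_coeffs) <= K / 2:
                break                                 # Step 5: cost halved, goto Step 2
```

Each pass of the inner loop either halves the original cost, triggering a rescale, or certifies local optimality for the rounded costs.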
Similarly, if the considered combinatorial optimization problem possesses a 2-approximation algorithm, one can use this algorithm to compute the starting solution S^0. If such a solution is used as the starting solution, then Algorithm ε-Local Search executes Step 2 only once, and hence the total number of improving moves is O(n ε^{-1}). Consequently, the overall running time of Algorithm ε-Local Search is O(ψ(n) + θ(n, log c_max) n ε^{-1}), where ψ(n) now denotes the running time of the 2-approximation algorithm. In fact, whenever one has a feasible solution S^0 for an instance I such that c(S^0) ≤ p(|I|) c(S*) for some polynomial p of the input size |I|, then one can adapt the value of q to q := εK/(n p(|I|)(1 + ε)) and the stopping criterion in Step 4 accordingly, so that Algorithm ε-Local Search computes an ε-local optimum in O(n p(|I|) ε^{-1}) iterations. Arkin and Hassin (1998) applied a similar approach to that used in Algorithm ε-Local Search to compute a local optimum in polynomial time in the context of the weighted k-set packing problem. They considered a neighborhood for which the value of each local optimum is within a certain factor of the value of a global optimum; hence, this approach, which they attribute to Rubinstein, leads to a polynomial-time approximation algorithm. It has since been applied in related situations as well; see, e.g., Arkin, Hassin, Rubinstein, and Sviridenko 2002.

3 Negative Results. In this section we present a collection of results that underscore that neither the accuracy of the solutions produced by Algorithm ε-Local Search nor its running time can be significantly improved, unless additional, problem-specific knowledge is used. Let us first argue that any algorithm to compute a local optimum for a problem in PLS has to have exponential running time in the worst case if the algorithms for checking feasibility and neighborhood search are oracles completely hiding both the set of feasible solutions and the neighborhood structure.
Theorem 3.1. If the only available information on an instance (F, c) of a combinatorial optimization problem is the objective function vector c, a feasible solution S^0 ∈ F, a membership oracle, and a neighborhood search oracle Improve, then any algorithm for computing a local optimum takes exponential time in the worst case.

Proof. Let the ground set be E = {1, 2, . . . , n}. The objective function coefficients are c_i = 2^{i−1} for i = 1, 2, . . . , n. Let the nonempty subsets of E be labelled S^0, S^1, . . . , S^{2^n−2} such that c(S^0) > c(S^1) > · · · > c(S^{2^n−2}). An adversary decides on the (hidden) set of feasible solutions, which is of the form {S^0, S^1, . . . , S^k}, for some nonnegative integer k ∈ {2^ℓ, 2^ℓ + 1, . . . , 2^n − 2}. That is, the adversary determines k and ℓ ∈ {0, 1, . . . , n − 1}. Consider the neighborhood N(S^i) = {S^i, S^{i+1}} for i = 0, 1, . . . , k − 1, with N(S^k) = {S^k}. Note that this neighborhood structure is not listed explicitly. It is hidden by the improvement oracle, which tells whether a given solution is locally optimal with respect to this neighborhood and, if not, returns an improved solution. The initial feasible solution is S^0. It is not difficult to see that an algorithm that just has access to S^0 and Improve takes an exponential number of iterations to reach the unique local minimum (which is actually the global minimum). However, even with the additional access to a membership oracle, which checks whether a proposed solution is feasible, the algorithm would have to try exponentially many solutions, as it does not know the adversary's choices of k and ℓ.

The importance of Theorem 3.1 is based on the fact that Algorithm ε-Local Search only requires a subset of the information stated in the assumptions of this theorem; in particular, it does not make use of the membership oracle. The instance used in the proof also helps to point up the characteristics of an ε-local optimum.
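The adversary argument can be simulated: a hidden improvement chain of length k forces any oracle-only algorithm to make k + 1 oracle calls, the last one certifying local optimality. The representation below, with solutions as chain indices, is a deliberately stripped-down stand-in for the construction in the proof.

```python
def make_chain_oracle(k):
    """Improvement oracle hiding the chain S^0 -> S^1 -> ... -> S^k;
    S^k is the unique local optimum. Returns (oracle, call counter)."""
    calls = {"n": 0}
    def improve(i):
        calls["n"] += 1
        return i + 1 if i < k else None   # reveal exactly one step per call
    return improve, calls

def follow_chain(improve):
    """Oracle-only local search: all it can do is walk the chain."""
    i = 0
    while True:
        nxt = improve(i)
        if nxt is None:
            return i
        i = nxt
```

With k = 2^{n−1}, the walk takes exponentially many calls, matching the lower bound of Theorem 3.1.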
In fact, S^0 is ε-locally optimal for ε ≥ 1/(2^n − 2).

We next note that finding an ε-local optimum of additive error with respect to a given neighborhood structure is as hard as finding a local optimum with respect to the same neighborhood structure. While its proof relies on a standard argument, the result is still worth recording.

Theorem 3.2. If there is an algorithm that, for every instance (F, c) of a combinatorial optimization problem Π with neighborhood N, finds in polynomial time a feasible solution S such that c(S) ≤ c(S′) + δ for all S′ ∈ N(S), for some fixed δ > 0, then there is a polynomial-time algorithm to find a local optimum.

Proof. Let (F, c) be an instance of Π, where w.l.o.g. c is an integer-valued function. Create a new instance (F, c′) by setting c′_e := (1 + δ) c_e for all elements e of the ground set. Apply the given algorithm to the new instance and let S be the resulting solution. Then c′(S) − c′(S′) ≤ δ for all S′ ∈ N(S). Thus, c(S) − c(S′) ≤ δ/(δ + 1) < 1 for all S′ ∈ N(S). Since c is integer-valued, it follows that S is a local optimum for the original instance.

The next result is somewhat similar to (Garey and Johnson 1979, Theorem 6.8), except that we are discussing it in the context of local optimality.

Theorem 3.3. If a combinatorial optimization problem Π has a fully polynomial-time ε-local optimization scheme (A_ε)_{ε>0} such that the actual running time of A_ε is polynomial in the input size and log 1/ε, then there is a polynomial-time algorithm that computes a local optimum.

Proof. Let (F, c) be an instance of Π, where w.l.o.g. c is an integer-valued function. Choose ε := 1/(n c_max + 1) and apply A_ε. Note that its running time is polynomial in the input size of the instance. If S is the solution returned by this algorithm, then c(S) ≤ (1 + ε) c(S′) < c(S′) + 1 for all S′ ∈ N(S). Hence, S is a local optimum.

4 Extensions.
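A quick numerical check of the two proofs above, under the paper's integrality assumption: scaling by (1 + δ) turns an additive error of δ into a sub-unit error (Theorem 3.2), and ε := 1/(n·c_max + 1) makes the relative error sub-unit (Theorem 3.3). The concrete values of δ, n, and c_max are arbitrary.

```python
# Theorem 3.2: an additive error of delta under costs (1 + delta) * c
# corresponds to an error of delta / (1 + delta) < 1 under c;
# integrality then forces the error to vanish.
delta = 5.0
assert delta / (1 + delta) < 1

# Theorem 3.3: with eps = 1/(n * cmax + 1), eps * c(S') <= eps * n * cmax < 1
# for any feasible S', since c(S') <= n * cmax for integer costs.
n, cmax = 10, 100
eps = 1 / (n * cmax + 1)
assert eps * n * cmax < 1
```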
We now discuss some extensions of the results of Section 2 that range from approximation guarantees in the case of exact neighborhoods, over replacing Improve with weaker oracles, to bounded integer programming problems.

4.1 Exact Neighborhoods. Recall that a neighborhood function N for a combinatorial optimization problem is exact if every local optimum is already globally optimal. In view of this, one may be tempted to conjecture that the objective function value of an ε-local optimum with respect to an exact neighborhood is also within a factor of (1 + ε) of the value of a global optimum. However, this is not true, as shown by the following example. Let G = (V, E) be a connected graph with edge weights c_e for e ∈ E. Let F be the family of all spanning trees of G. Hence, we are considering the Minimum Spanning Tree Problem. For any tree T ∈ F, consider the neighborhood N(T) that consists of those spanning trees obtained from T by adding an edge e ∈ E \ T to T and removing an edge f ∈ T from the induced elementary cycle. This is the 2-opt neighborhood, which is known to be exact. Now choose G as a wheel with node set {0, 1, . . . , n} (see Figure 3). Assign a cost of 1 to each spoke (0, i), i = 1, 2, . . . , n, of this wheel, and a cost of zero to each rim edge (i, i + 1), i = 1, 2, . . . , n (where node n + 1 is identified with node 1).

Figure 3: A wheel on n + 1 nodes

The spanning tree T, which is a star rooted at node 0, is a 1/(n − 1)-local optimum for any n ≥ 3. However, the Hamiltonian path T′ = (0, 1, . . . , n) is a minimum spanning tree and c(T) − c(T′) = (n − 1) c(T′). Thus, T is not a (1 + ε)-approximation for any ε < n − 1. However, note that for exact neighborhoods our fully polynomial-time ε-local optimization scheme actually is a fully polynomial-time approximation scheme (FPTAS), as the following theorem shows. Theorem 4.1.
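The wheel example can be verified numerically; the following small sanity check (not part of the paper) spells out the costs involved.

```python
n = 6                      # wheel on nodes {0, 1, ..., n}, n >= 3
star_cost = n              # star at node 0 uses all n unit-cost spokes
swap_cost = n - 1          # 2-opt move: drop one spoke, add a zero-cost rim edge
optimal_cost = 1           # Hamiltonian path (0, 1, ..., n): one spoke plus the rim

# the star is a 1/(n-1)-local optimum ...
assert (star_cost - swap_cost) / swap_cost == 1 / (n - 1)
# ... yet it misses the optimum by a factor of n, since c(T) - c(T') = (n-1) c(T')
assert star_cost - optimal_cost == (n - 1) * optimal_cost
```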
If the neighborhood N of a combinatorial optimization problem Π is exact, then the objective function value of the solution produced by Algorithm ε-Local Search is within a factor of (1 + ε) of that of a global minimum.

Proof. Let (F, c) be a given instance of Π. Let S* be an optimal solution and S be the solution produced by the algorithm. Let K, q, and c′ denote the corresponding values from the last execution of Step 2 of Algorithm ε-Local Search. Since S is locally optimal with respect to c′ and the neighborhood is exact, S is an optimal solution for (F, c′). Thus,

c(S) ≤ Σ_{e∈S} ⌈c_e/q⌉ q = c′(S) ≤ c′(S*) = Σ_{e∈S*} ⌈c_e/q⌉ q ≤ Σ_{e∈S*} (c_e/q + 1) q ≤ c(S*) + nq ≤ c(S*) + (ε/(1 + ε)) c(S),

where the last inequality follows from the definition of q and the fact that c(S) ≥ K/2. Rearranging yields c(S)/(1 + ε) ≤ c(S*), i.e., c(S) ≤ (1 + ε) c(S*), and the result follows.

Theorem 4.1 implies that whenever a combinatorial optimization problem has an exact neighborhood and an initial feasible solution is readily available, then Algorithm ε-Local Search is a (1 + ε)-approximation algorithm that calls Improve a polynomial number of times (for fixed ε > 0). However, Grötschel and Lovász (1995) and Schulz et al. (1995) showed that under these circumstances, one can actually compute an optimal solution with a polynomial number of calls of Improve.

4.2 Weaker Version of Neighborhood Search. In the context of global optimization, Schulz et al. (1995) also pointed out that the requirements on the improvement oracle can be somewhat weakened. Interestingly, this extension also works in the context of local optimization, as we will show now. For that, we replace Improve with another subroutine, which we call Test. Test accepts the same input as Improve, namely a current feasible solution S together with an objective function vector c. It also answers "Yes" or "No" depending on whether S is locally optimal with respect to c or not, but in contrast to Improve it does not provide a solution S′ ∈ N(S) of lower cost if S is not locally optimal. It just answers "Yes" or "No". Lemma 4.1.
Test can emulate Improve in polynomial time; i.e., whenever the input solution S is not locally optimal, a polynomial number of calls to Test suffices to create a solution S′ ∈ N(S) of lower cost.

In contrast to all other results presented in this paper, here we need to assume that Test accepts arbitrary, not just nonnegative, cost coefficients. In particular, we set c_max := max_{e∈E} |c_e|.

Proof. Let (F, c) be an instance and S a feasible solution that is not locally optimal with respect to c under the given neighborhood N. W.l.o.g., we may assume that S = E. (Otherwise one can transform the given instance into an equivalent one for which this is the case.) Here, E = {1, 2, . . . , n} is the ground set. The algorithm that we are about to explain proceeds by considering one coordinate after the other. In particular, it will call Test n times. The first call Test_N(S, c^1) is made with the objective function

c^1_e := c_1 − M if e = 1, and c^1_e := c_e otherwise, for e ∈ E,

where M := n c_max + 1. If Test responds with "Yes", then we can infer that all solutions S′ in N(S) of lower cost than S with respect to c satisfy 1 ∉ S′. On the other hand, if the reply is "No", then there is at least one solution S′ ∈ N(S) such that c(S′) < c(S) and 1 ∈ S′. In general, assume that we already know a subset R ⊆ {1, 2, . . . , k}, for some k ∈ {1, 2, . . . , n − 1}, with the following two properties:

(i) there exists a solution S′ ∈ N(S) with c(S′) < c(S) such that R = S′ ∩ {1, 2, . . . , k};

(ii) if j ∉ R for 1 ≤ j ≤ k, then all solutions S′ ∈ N(S) with c(S′) < c(S) satisfy R ∩ {1, 2, . . . , j} = S′ ∩ {1, 2, . . . , j}.

We then call Test_N(S, c^{k+1}) with

c^{k+1}_e := c_e − M if e ∈ R, c^{k+1}_e := c_{k+1} − M if e = k + 1, and c^{k+1}_e := c_e otherwise, for e ∈ E.

If the result is "Yes", then we can infer that all solutions S′ ∈ N(S) with c(S′) < c(S) and S′ ∩ {1, 2, . . . , k} = R satisfy k + 1 ∉ S′. However, if the reply is "No", then there must be a solution S′ ∈ N(S) with c(S′) < c(S) and S′ ∩ {1, 2, . . . , k} = R such that k + 1 ∈ S′.
We leave R unchanged in the former case, and set R := R ∪ {k + 1} in the latter case. Consequently, after n steps we have identified a solution S' = R ∈ N(S) such that c(S') < c(S).

Lemma 4.1 and Theorem 2.2 imply the following result.

Corollary 4.1. An ε-local optimum of an instance (F, c) of a combinatorial optimization problem with neighborhood function N that is given via a Test oracle of running time φ(n, log cmax) can be computed in time O(ψ(n) + φ(n, log cmax) n² ε⁻¹ min{n log n, log K⁰}).

4.3 Approximate Version of Neighborhood Search. Very large-scale neighborhood (VLSN) search algorithms are local search algorithms using neighborhoods of very large size; see Ahuja, Ergun, Orlin, and Punnen (2002) for a survey. For many very large-scale neighborhoods of NP-hard combinatorial optimization problems, the problem of finding an improving solution is itself NP-hard. On the other hand, solutions produced by local search algorithms using such huge neighborhoods can be of very high quality. To keep the complexity of a VLSN algorithm manageable, approximation algorithms are often employed to search an underlying neighborhood such that if the approximation algorithm fails to find an improving move, the algorithm terminates with an "approximate" local optimum. More precisely, instead of Improve, we may only have at our command a subroutine δ-Improve, which solves the following problem:

(4.1) Given an objective function vector c and a solution S ∈ F, find S' ∈ N(S) such that c(S') < c(S), or assert that S is a δ-local optimum.

In view of the results of Grötschel and Lovász (1995) and Schulz et al. (1995) discussed in Section 4.1, it is particularly interesting to note that in case of an exact neighborhood, the existence of a polynomial-time algorithm δ-Improve for all δ > 0 implies that the combinatorial optimization problem possesses a fully polynomial-time approximation scheme.
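The running-time bounds in this section all rest on the same scaling step: every cost is rounded up to a multiple of a parameter q, so the total rounding error of any solution is less than nq. The sketch below is our own illustration of that step, with q chosen as in the proof of Theorem 4.1 (q = εK/(2n(1+ε)); the function name and the toy numbers are ours), using exact rational arithmetic so the bound can be checked exactly.

```python
# Illustrative sketch (not the paper's code) of the cost-scaling step:
# round each cost up to a multiple of q = eps*K / (2*n*(1+eps)).
from fractions import Fraction
from math import ceil

def scale_costs(c, eps, K):
    n = len(c)
    q = Fraction(eps) * K / (2 * n * (1 + Fraction(eps)))
    c_bar = [ceil(Fraction(ce) / q) * q for ce in c]   # rounded cost: ceil(c_e/q)*q
    return q, c_bar

c = [3, 7, 2]                  # toy cost vector
eps, K = Fraction(1, 2), 4     # K: bound with c(S*) >= K/2, as in the proof
q, c_bar = scale_costs(c, eps, K)

# Each element's rounding error lies in [0, q), so any solution's rounded
# cost exceeds its true cost by less than n*q = eps*K / (2*(1+eps)).
```

The per-element error interval [0, q) is what turns the choice of q into the additive term nq ≤ (ε/(1+ε)) c(S*) in the proof of Theorem 4.1.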
Indeed, if we call (4.1) the augmentation problem when N is exact (e.g., N(S) = F for all S ∈ F) and δ = 0, and call a family of algorithms δ-Improve (with running time polynomial in the input size and 1/δ, for all δ > 0) a fully polynomial-time approximation scheme for the augmentation problem, we can state the following result.

Theorem 4.2. A combinatorial optimization problem has a fully (strongly) polynomial-time approximation scheme if and only if its corresponding augmentation problem has a fully (strongly) polynomial-time approximation scheme.

The proof of Theorem 4.2 is similar to that of Theorem 4.1. It is worth mentioning that such a result cannot be obtained with the help of the techniques used in (Grötschel and Lovász 1995; Schulz et al. 1995).

4.4 Bounded Integer Linear Programming Problems. Let us finally discuss some generalizations to the case of integer linear programs with bounded variables. In our discussions so far, we considered combinatorial optimization problems, which in fact are 0/1-integer linear programs, i.e., problems of the form min{c·x : x ∈ F} with F ⊆ {0, 1}ⁿ. Interestingly, most of our results extend directly to the case of integer linear programs with bounded variables, which can be described as follows: min{c·x : x ∈ F} with F ⊆ {0, 1, . . . , u}ⁿ for some nonnegative integer u. In this context, a neighborhood assigns to each feasible solution x ∈ F a set of feasible points in {0, 1, . . . , u}ⁿ. If an initial feasible solution and an algorithm Improve are available, Algorithm ε-Local Search can easily be modified to compute an ε-local optimum in this setting as well. In fact, by choosing the scaling parameter q := εK/(2n(1+ε)u), its number of iterations (and therefore the number of calls of Improve) is O(n ε⁻¹ u log K⁰).
Thus, if an initial feasible solution can be identified in polynomial time, if Improve can be implemented in polynomial time, and if u is bounded by a polynomial in n and log cmax, Algorithm ε-Local Search runs in polynomial time.

Naturally, finding a δ-local optimum efficiently, given an algorithm δ-Improve, faces similar difficulties as the problem of finding a local optimum when an algorithm Improve is provided. However, one can easily modify Algorithm ε-Local Search so that it computes a (δ + ε)-local optimum in polynomial time. In fact, if one uses δ-Improve in lieu of Improve and selects the scaling parameter q to be q := εK/(2n(1+δ)(1+ε)), the resulting algorithm produces a (δ + ε)-local optimum in time O(ψ(n) + φ(n, log cmax) n ε⁻¹ min{n log n, log K⁰}), where φ(n, log cmax) is the running time of δ-Improve. Hence, a (strongly) polynomial-time algorithm for solving (4.1) implies a (strongly) polynomial-time algorithm to compute a (δ + ε)-local optimum, for every fixed ε > 0.

Acknowledgments. The authors are grateful to Rafi Hassin for pointing them to the papers by Arkin and Hassin (1998) and Arkin et al. (2002) after he had read a previous version of this paper (Orlin, Punnen, and Schulz 2003). Jim Orlin was partially supported through NSF contract DMI-0217123; Abraham Punnen was partially supported by NSERC grant OPG 0170381; Jim Orlin and Andreas Schulz were partially supported by ONR contract N00014-98-1-0317.

References

Ahuja, R., Ö. Ergun, J. Orlin, and A. Punnen (2002). A survey of very large-scale neighborhood search techniques. Discrete Applied Mathematics 123, 75–102.

Arkin, E. and R. Hassin (1998). On local search for weighted k-set packing. Mathematics of Operations Research 23, 640–648.

Arkin, E., R. Hassin, S. Rubinstein, and M. Sviridenko (2002). Approximations for maximum transportation problem with permutable supply vector and other capacitated star packing problems. In M. Penttonen and E.
Meineche Schmidt (Eds.), Algorithm Theory – SWAT 2002, Volume 2368 of Lecture Notes in Computer Science, pp. 280–287. Springer. Proceedings of the 8th Scandinavian Workshop on Algorithm Theory.

Ausiello, G. and M. Protasi (1995). Local search, reducibility and approximability of NP-optimization problems. Information Processing Letters 54, 73–79.

Chandra, B., H. Karloff, and C. Tovey (1999). New results on the old k-opt algorithm for the traveling salesman problem. SIAM Journal on Computing 28, 1998–2029.

Fischer, S. (1995). A note on the complexity of local search problems. Information Processing Letters 53, 69–75.

Garey, M. and D. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman.

Grötschel, M. and L. Lovász (1995). Combinatorial optimization. In R. Graham, M. Grötschel, and L. Lovász (Eds.), Handbook of Combinatorics, Volume II, Chapter 28, pp. 1541–1597. North-Holland.

Johnson, D., C. Papadimitriou, and M. Yannakakis (1988). How easy is local search? Journal of Computer and System Sciences 37, 79–100.

Khanna, S., R. Motwani, M. Sudan, and U. Vazirani (1998). On syntactic versus computational views of approximability. SIAM Journal on Computing 28, 164–191.

Klauck, H. (1996). On the hardness of global and local approximation. In R. Karlsson (Ed.), Algorithm Theory – SWAT '96, Volume 1097 of Lecture Notes in Computer Science, pp. 88–99. Springer. Proceedings of the 5th Scandinavian Workshop on Algorithm Theory.

Krentel, M. (1989). Structure in locally optimal solutions. In Proceedings of the 30th Annual Symposium on Foundations of Computer Science, Los Alamitos, CA, pp. 216–221. IEEE.

Krentel, M. (1990). On finding and verifying locally optimal solutions. SIAM Journal on Computing 19, 742–749.

Lueker, G. (1976). Manuscript, Department of Computer Science, Princeton University, Princeton, NJ.

Orlin, J., A. Punnen, and A. Schulz (2003, July). Approximate local search in combinatorial optimization.
Working Paper 4325-03, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA. http://ssrn.com/abstract=423560.

Papadimitriou, C. (1992). The complexity of the Lin-Kernighan heuristic for the traveling salesman problem. SIAM Journal on Computing 21, 450–465.

Papadimitriou, C. and K. Steiglitz (1977). On the complexity of local search for the traveling salesman problem. SIAM Journal on Computing 6, 76–83.

Radzik, T. (1993). Parametric flows, weighted means of cuts, and fractional combinatorial optimization. In P. Pardalos (Ed.), Complexity in Numerical Optimization, pp. 351–386. World Scientific.

Schäffer, A. and M. Yannakakis (1991). Simple local search problems that are hard to solve. SIAM Journal on Computing 20, 56–87.

Schulz, A. and R. Weismantel (1999). An oracle-polynomial time augmentation algorithm for integer programming. In Proceedings of the 10th Annual Symposium on Discrete Algorithms, Baltimore, MD, pp. 967–968. ACM/SIAM.

Schulz, A. and R. Weismantel (2002). The complexity of generic primal algorithms for solving general integer programs. Mathematics of Operations Research 27, 681–692.

Schulz, A., R. Weismantel, and G. Ziegler (1995). 0/1-integer programming: Optimization and augmentation are equivalent. In P. Spirakis (Ed.), Algorithms – ESA '95, Volume 979 of Lecture Notes in Computer Science, pp. 473–483. Springer. Proceedings of the 3rd Annual European Symposium on Algorithms.

Schulz, A., R. Weismantel, and G. Ziegler (1996). Unpublished manuscript.

Yannakakis, M. (1997). Computational complexity. In E. Aarts and J. Lenstra (Eds.), Local Search in Combinatorial Optimization, Wiley-Interscience Series in Discrete Mathematics and Optimization, Chapter 2, pp. 19–55.