The complexity of many optimization problems and algorithms seems well understood. However, theoretical results often contradict observations made in experiments and practice. Some NP-hard optimization problems can be solved efficiently in practice, and for many problems algorithms with exponential worst-case running time outperform polynomial-time algorithms. The reason for this discrepancy is the pessimistic worst-case perspective, in which the performance of an algorithm is measured solely by its behavior on the worst possible input. In order to provide a more realistic measure that can explain the practical performance of algorithms, various alternatives to worst-case analysis have been suggested. We study both probabilistic models, in which inputs are to some extent random, and deterministic models, in which inputs exhibit certain structural properties.
The framework of smoothed analysis is a hybrid of worst-case and average-case analysis. In this model, inputs are generated in two steps: first, an adversary chooses an arbitrary instance, and then this instance is slightly perturbed at random. The smoothed performance of an algorithm is defined as the worst expected performance the adversary can achieve. This model can be viewed as a less pessimistic worst-case analysis, in which the randomness rules out pathological worst-case instances that are rarely observed in practice but dominate the worst-case analysis. Smoothed analysis was introduced in 2001 by Daniel Spielman and Shang-Hua Teng, who showed that the smoothed running time of a certain version of the simplex algorithm is polynomial. Since then, smoothed analysis has been applied to a wide variety of problems and algorithms. We have applied smoothed analysis, among other topics, to multi-objective optimization problems and to local search heuristics such as 2-Opt for the TSP and Lloyd's algorithm for k-means clustering. Our analyses explain why these algorithms are efficient in practice despite their exponential worst-case running time.
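To make the two-step model concrete, the following is one common way to write down the smoothed running time of an algorithm; the notation here is an illustrative sketch and is not taken from the text above. For an algorithm A with running time T_A, instances of size n, and perturbations of magnitude sigma:

\[
T_A^{\mathrm{smoothed}}(n,\sigma) \;=\; \max_{I \in \mathcal{I}_n} \;\; \mathbb{E}_{g_\sigma}\!\left[\, T_A\!\left(I + g_\sigma\right) \,\right],
\]

where \(\mathcal{I}_n\) denotes the set of instances of size \(n\) and \(g_\sigma\) denotes the random perturbation (for example, independent Gaussian noise with standard deviation \(\sigma\)). The adversary chooses the instance \(I\), and the expectation is taken over the perturbation. In the limit \(\sigma \to 0\) this recovers worst-case analysis, while large \(\sigma\) drowns out the adversarial choice and the measure approaches an average-case analysis.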