The NFL Theorems, Still with No Mathematics

The mountainous landscape we discussed is a particular case of what is generally called a fitness landscape, and the heights of the peaks in our example are a particular case of a fitness function. Imagine two algorithms conducting a search on a given fitness landscape. They move from point to point over the search space (choosing the search points either at random or in a certain order). After having performed, say, m measurements, an algorithm produces what Wolpert and Macready call a sample, a table wherein the m measured values of the fitness function are listed in temporal order. Generally speaking, two arbitrarily chosen algorithms will not yield identical samples. The probability of algorithm a1 producing a specific table that is m rows long is different from the probability of algorithm a2 producing the same table after the same number of iterations.
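The notion of a sample can be illustrated with a short sketch (a toy construction of ours; the landscape, point names, and visit orders are invented for illustration, not taken from Wolpert and Macready): two deterministic algorithms visit the points of a small made-up landscape in different orders, and each records the fitness values it measures in temporal order.

```python
# A toy fitness landscape: search points mapped to fitness values.
# All names and values here are our own illustration.
landscape = {"A": 2.0, "B": 5.0, "C": 3.0, "D": 4.0}

def sample(visit_order, m):
    """Visit the first m points of visit_order and list the measured
    fitness values in temporal order -- Wolpert and Macready's 'sample'."""
    return [landscape[p] for p in visit_order[:m]]

# Two deterministic algorithms that simply visit the points in different orders.
a1 = ["A", "B", "C", "D"]
a2 = ["D", "C", "B", "A"]

print(sample(a1, 3))  # [2.0, 5.0, 3.0]
print(sample(a2, 3))  # [4.0, 3.0, 5.0]
```

On this one landscape the two samples differ, which is the general situation: identical samples from two arbitrary algorithms are the exception, not the rule.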

Enter the first NFL theorem: if the results of the two algorithms' searches are compared not for a specific fitness landscape but averaged over all possible landscapes, the probabilities of obtaining the same sample are equal for any pair of algorithms. The quantity that is averaged is the probability of generating a given sample by an algorithm. This is an exact translation of the first NFL theorem from its mathematically symbolic form into plain words.
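Because the statement concerns an average over all possible landscapes, it can be checked by brute force on a search space small enough to enumerate completely. The sketch below (our own illustration; the two algorithms are arbitrary inventions) enumerates every landscape on a four-point space with fitness values 0 or 1, runs a fixed-order algorithm and an adaptive one on each, and counts how often each possible sample is produced. The two distributions come out identical, as the first theorem predicts.

```python
from collections import Counter
from itertools import product

POINTS = range(4)   # a 4-point search space
M = 3               # sample length (number of measurements)

def fixed_order(history):
    """Visit points 0, 1, 2, ... in order, never revisiting."""
    visited = {p for p, _ in history}
    return min(p for p in POINTS if p not in visited)

def adaptive(history):
    """After a high value, jump to the farthest unvisited point;
    otherwise take the nearest one. An arbitrary adaptive rule."""
    visited = {p for p, _ in history}
    unvisited = [p for p in POINTS if p not in visited]
    if history and history[-1][1] == 1:
        return max(unvisited)
    return min(unvisited)

def run(next_point, landscape):
    """Perform M measurements and return the sample as a tuple of values."""
    history = []
    for _ in range(M):
        p = next_point(history)
        history.append((p, landscape[p]))
    return tuple(v for _, v in history)

# Enumerate ALL 2**4 = 16 landscapes with fitness values in {0, 1}.
landscapes = list(product((0, 1), repeat=4))
dist1 = Counter(run(fixed_order, f) for f in landscapes)
dist2 = Counter(run(adaptive, f) for f in landscapes)
assert dist1 == dist2  # identical sample distributions over all landscapes
```

Each of the eight possible three-row samples turns up exactly twice for either algorithm: averaged over all landscapes, the probability of generating any given sample is the same for both.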

The NFL theorems do not restrict the value of m, the number of iterations. There is no condition that the search stops when a certain preselected number of iterations has been completed or when a preselected value of the fitness function has been found. In other words, the concept of a target is absent from the theorems. On the other hand, they do not forbid the algorithms to be target-oriented. The theorems are indifferent to algorithms' having or not having a target.

The NFL theorems are often discussed in terms of algorithms' performance, although the concept of a performance measure is not part of the theorems as such. According to Wolpert and Macready, "the precise way that the sample is mapped to a performance measure is unimportant" (73). The NFL theorems allow for wide latitude in the choice of performance measures. In particular, whereas the theorems themselves do not refer to any target of a search, the algorithms in question may be either target-oriented or not.

Following are examples of targeted and nontargeted algorithms, both equally subject to the NFL theorems. Assume that the fitness function is simply the height of peaks in a specific mountainous region. If we choose a target-oriented algorithm, the target of the search can be defined as a specific peak P whose height is, say, 6000 meters above sea level. In this case the number n of iterations required to reach the predefined height of 6000 meters may be chosen as the performance measure. Then algorithm a1 performs better than algorithm a2 if a1 converges on the target in fewer steps than a2 does. If two algorithms generated the same sample after m iterations, then they would have found the target, peak P, after the same number n of iterations. The first NFL theorem tells us that the average probabilities of reaching peak P in m steps are the same for any two algorithms. Any two algorithms will have an equal average performance, provided that the averaging is over all possible fitness landscapes (not all of which must in fact exist materially). In the example discussed, the average number n of iterations required to locate the target is the same for any two algorithms, if the averaging is done over all possible mountainous landscapes.
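This target-oriented comparison can likewise be checked exhaustively on a toy space (again our own construction, not an example from the source). Taking "first observe a fitness value of 1" as the target, the number of iterations needed to reach it, summed over all sixteen 0/1 landscapes on four points, is the same for two quite different deterministic algorithms:

```python
from itertools import product

POINTS = range(4)  # a 4-point search space

def ascending(history):
    """Visit the lowest-numbered unvisited point."""
    visited = {p for p, _ in history}
    return min(p for p in POINTS if p not in visited)

def descending(history):
    """Visit the highest-numbered unvisited point."""
    visited = {p for p, _ in history}
    return max(p for p in POINTS if p not in visited)

def steps_to_target(next_point, landscape, target_value=1):
    """Iterations until the target fitness is first observed,
    or len(POINTS) + 1 if it never is."""
    history = []
    for step in range(1, len(POINTS) + 1):
        p = next_point(history)
        v = landscape[p]
        if v == target_value:
            return step
        history.append((p, v))
    return len(POINTS) + 1

# Sum the performance measure over ALL 2**4 = 16 landscapes.
landscapes = list(product((0, 1), repeat=4))
total1 = sum(steps_to_target(ascending, f) for f in landscapes)
total2 = sum(steps_to_target(descending, f) for f in landscapes)
assert total1 == total2  # equal average iterations-to-target
```

On any single landscape the two algorithms can differ sharply; only the average over all landscapes is guaranteed equal.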

Importantly, the NFL theorems do not say anything about the relative performance of algorithms a1 and a2 on a specific landscape. On a specific landscape, either a1 or a2 may happen to be much better than its competitor.

Algorithms can also be compared in a targetless context. For example, rather than defining a target as a certain peak P or even as a peak of a certain height, the algorithms may be compared by finding out which of them, a1 or a2, finds a higher peak after a certain number m of iterations. The performance measure in this case is the height of a peak reached after m iterations. No specific peak and no specific height is preselected as a target. An algorithm a1 that after m iterations finds a higher peak than algorithm a2 performs better. The first NFL theorem tells us that, if averaged over all possible mountainous reliefs (not all of them necessarily existing), the probabilities of both a1 and a2 generating the same sample after m iterations are equal. This also means that, in all likelihood, the height of a peak reached after m iterations, if averaged over all possible landscapes, will be the same for any two algorithms.
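The targetless comparison can be verified the same way (again a toy sketch of ours, not an example from the source): taking the highest value seen after m = 3 iterations as the performance measure and summing it over all possible landscapes on a four-point space with heights 0, 1, or 2, two different algorithms score identically.

```python
from itertools import product

POINTS = range(4)  # a 4-point search space
M = 3              # number of iterations

def descending(history):
    """Visit the highest-numbered unvisited point."""
    visited = {p for p, _ in history}
    return max(p for p in POINTS if p not in visited)

def cautious(history):
    """After a good value, search nearby; after a poor one, jump far.
    An arbitrary adaptive rule, for contrast."""
    visited = {p for p, _ in history}
    unvisited = sorted(p for p in POINTS if p not in visited)
    if history and history[-1][1] >= 1:
        return unvisited[0]
    return unvisited[-1]

def best_height(next_point, landscape):
    """Performance measure: highest fitness seen after M iterations."""
    history = []
    for _ in range(M):
        p = next_point(history)
        history.append((p, landscape[p]))
    return max(v for _, v in history)

# Sum the measure over ALL 3**4 = 81 landscapes with heights in {0, 1, 2}.
landscapes = list(product((0, 1, 2), repeat=4))
total1 = sum(best_height(descending, f) for f in landscapes)
total2 = sum(best_height(cautious, f) for f in landscapes)
assert total1 == total2  # equal average best-height-after-M
```

No target is defined anywhere in this comparison; the theorem applies all the same, because it constrains the distribution of samples, and the best height found is just a function of the sample.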

The NFL theorems are certainly valid for evolutionary algorithms. As Wolpert (2002) reports, Wolpert and Macready have recently proven that the NFL theorems do not hold for coevolutionary algorithms, but that is a different question.
