Predators that forage for aggregated prey appear to require a decision rule for determining the point at which to discontinue their search in a given prey patch and move on to another. Although the optimum rule depends heavily on features of the searching behavior of the predator and the distribution of the prey (Oaten 1977), most previous authors have assumed that the decision must involve an assessment of the capture rate within a patch and a comparison with the mean capture rate in the environment as a whole (Krebs 1978). When the perceived quality of the given patch becomes significantly less than the expected quality of the next one, the predator should leave. Because the time interval since the last prey capture is the most readily available measure of the instantaneous capture rate, it has been suggested that foraging animals may monitor this interval and leave the patch when it exceeds some critical value (Krebs 1978). The “giving-up time,” by this argument, should be uniform across patches within a habitat and inversely proportional, across habitats, to the mean prey availability. Although this inference has been supported by empirical studies, Cowie & Krebs (1979) have recently suggested that the correlation could be a sampling artifact. Even if departure from a patch were independent of the interval between prey encounters, the observed giving-up times would still be shorter, on average, in a rich environment than in a poor one. A reanalysis of several experiments on patch foraging by predatory insects, described in detail elsewhere (Bond 1980), can be used to test Cowie & Krebs’ independence hypothesis.
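The sampling-artifact argument can be illustrated with a small simulation (a sketch of the statistical point, not the authors' analysis; the rates and patch duration are arbitrary assumptions). Prey captures are modeled as a Poisson process, and the predator departs at a moment chosen independently of the captures. The recorded "giving-up time" — the interval since the last capture at departure — still comes out shorter when captures are frequent, simply because less time tends to have elapsed since the last event.

```python
import random

def mean_giving_up_time(capture_rate, n_trials=20000, patch_time=50.0, seed=1):
    """Mean observed giving-up time when departure is INDEPENDENT of captures.

    Captures occur as a Poisson process with the given rate (captures per
    unit time). The predator leaves at a uniformly random moment, ignoring
    captures entirely; we record the time since the last capture.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        leave = rng.uniform(0.0, patch_time)  # departure ignores capture history
        t, last_capture = 0.0, 0.0
        while True:
            t += rng.expovariate(capture_rate)  # waiting time to next capture
            if t > leave:
                break
            last_capture = t
        total += leave - last_capture  # interval since last capture at departure
    return total / n_trials

# Hypothetical rates: a "rich" habitat with frequent captures vs. a "poor" one.
rich = mean_giving_up_time(capture_rate=2.0)
poor = mean_giving_up_time(capture_rate=0.5)
print(rich, poor)  # rich habitat yields the shorter mean giving-up time
```

Even though departure here is entirely random, the mean giving-up time is roughly the reciprocal of the capture rate, mimicking the inverse relationship predicted by the giving-up-time rule.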