Sequential Hypothesis Testing under Stochastic Deadlines

Peter I. Frazier, ORFE, Princeton University, Princeton, NJ 08544, pfrazier@princeton.edu
Angela J. Yu, CSBMB, Princeton University, Princeton, NJ 08544, ajyu@princeton.edu

Abstract

Most models of decision-making in neuroscience assume an infinite horizon, which yields an optimal solution that integrates evidence up to a fixed decision threshold; however, under most experimental as well as naturalistic behavioral settings, the decision has to be made before some finite deadline, which is often experienced as a stochastic quantity, either due to variable external constraints or internal timing uncertainty. In this work, we formulate this problem as sequential hypothesis testing under a stochastic horizon. We use dynamic programming tools to show that, for a large class of deadline distributions, the Bayes-optimal solution requires integrating evidence up to a threshold that declines monotonically over time. We use numerical simulations to illustrate the optimal policy in the special cases of a fixed deadline and one that is drawn from a gamma distribution.

1 Introduction

Major strides have been made in understanding the detailed dynamics of decision making in simple two-alternative forced choice (2AFC) tasks, at both the behavioral and neural levels. Using a combination of probabilistic and dynamic programming tools, it has been shown that when the decision horizon is infinite (i.e. no deadline), the optimal policy is to accumulate sensory evidence for one alternative versus the other until a fixed threshold, and report the corresponding hypothesis [1]. Under similar experimental conditions, it appears that humans and animals accumulate information and make perceptual decisions in a manner close to this optimal strategy [2-4], and that neurons in the posterior parietal cortex exhibit response dynamics similar to that prescribed by the optimal algorithm [5]. However, in most 2AFC experiments, as well as in more natural behavior, the decision has to be made before some finite deadline. This corresponds to a finite-horizon sequential decision problem. Moreover, there is variability associated with that deadline, either due to external variability associated with the deadline imposition itself, or due to internal timing uncertainty about how much total time is allowed and how much time has already elapsed. In either case, with respect to the observer's internal timer, the deadline can be viewed as a stochastic quantity.

In this work, we seek to understand the optimal strategy and its dynamics for decision-making under the pressure of a stochastic deadline. We will show through analytical and numerical analysis that the optimal policy is a monotonically declining decision threshold over time. Declining decision thresholds have been used previously in [6] to model the speed vs. accuracy tradeoff, and also in the context of sequential hypothesis testing (see [7] for a survey). In the following, we first present a formal model of the problem, as well as the main theoretical results (Sec. 2). We then use numerical simulations to examine the optimal policy in some specific examples (Sec. 3).

2 Decision-making under a Stochastic Deadline

We assume that on each trial, a sequence of i.i.d. inputs is observed: x_1, x_2, x_3, .... With probability p_0, all the inputs for the trial are generated from a probability density f_1, and, with probability 1 - p_0, they are generated from an alternative probability density f_0.
Let θ denote the index of the generating distribution. The objective is to decide whether θ is 0 or 1 quickly and accurately, while also under the pressure of a stochastic decision deadline.

We define x_t ≜ (x_1, x_2, ..., x_t) to be the vector of observations made by time t. This vector of observations gives information about the generating density θ. Defining p_t ≜ P{θ = 1 | x_t}, we observe that p_{t+1} may be obtained iteratively from p_t via Bayes' rule,

p_{t+1} = P{θ = 1 | x_{t+1}} = p_t f_1(x_{t+1}) / [ p_t f_1(x_{t+1}) + (1 - p_t) f_0(x_{t+1}) ].   (1)

Let D be a deadline drawn from a known distribution that is independent of the observations x_t. We will assume that the deadline D is observed immediately and effectively terminates the trial. Let c > 0 be the cost associated with each unit time of decision delay, and d ≥ .5 be the cost associated with exceeding the deadline, where both c and d are normalized against the (unit) cost of making an incorrect decision. We choose d ≥ .5 so that d is never smaller than the expected penalty for guessing at θ; this avoids situations in which we prefer to exceed the deadline.

A decision policy π is a sequence of mappings, one for each time t, from the observations so far to the set of possible actions: stop and choose δ = 0; stop and choose δ = 1; or continue sampling. We define τ to be the time when the decision is made to stop sampling under decision policy π, and δ to be the hypothesis chosen at this time; both are random variables dependent on the sequence of observations. More formally, π ≜ (π_0, π_1, ...), where π_t(x_t) ∈ {0, 1, continue}, τ ≜ min(D, inf{t ∈ N : π_t(x_t) ∈ {0, 1}}), and δ ≜ π_τ(x_τ). We may also define τ_π ≜ inf{t ∈ N : π_t(x_t) ∈ {0, 1}} to be the time when the policy would choose to stop sampling if the deadline were to fail to occur. Then τ = min(D, τ_π). Our loss function is defined to be

l(τ, δ; θ, D) ≜ 1_{δ≠θ} 1_{τ<D} + cτ + d 1_{τ≥D}.

The value function V(t, p_t) is the expected loss attainable from time t onward, given that the deadline has not yet occurred and given the current belief p_t:

V(t, p_t) ≜ inf_{τ≥t, δ} E[ l(τ, δ; θ, D) | D > t, p_t ],

where the expectation is taken over θ, D, and the future observations. The cost associated with continuing at time t, known as the Q-factor for continuing and denoted by Q̄, takes the form

Q̄(t, p_t) ≜ inf_{τ≥t+1, δ} E[ l(τ, δ; θ, D) | D > t, p_t ].   (3)

[Figure 1: a plot of the stopping cost Q(t, p), the continuation cost Q̄(t, p), and Q̄(t + 1, p) - c as functions of p_t = p.]

Figure 1: Comparison of the cost Q(t, p) of stopping at time t (red); the cost Q̄(t, p) of continuing at time t (blue solid line); and Q̄(t + 1, p) - c (black solid line), which is the cost of continuing at time t + 1 minus an adjustment Q(t + 1, p) - Q(t, p) = c. The continuation region C^t is the interval between the intersections of the solid blue and red lines, marked by the blue dotted lines, and the continuation region C^{t+1} is the interval between the intersections of the solid black and red lines, marked by the black dotted lines. Note that Q̄(t + 1, p) - c ≥ Q̄(t, p), so C^t contains C^{t+1}.

Note that, in general, both V(t, p_t) and Q̄(t, p_t) may be difficult to compute due to the need to optimize over infinitely many decision policies. Conversely, the cost associated with stopping at time t, known as the Q-factor for stopping and denoted by Q, is easily computed as

Q(t, p_t) ≜ inf_{δ=0,1} E[ l(t, δ; θ, D) | D > t, p_t ] = min{p_t, 1 - p_t} + ct,   (4)

where the infimum is attained by choosing δ = 0 if p_t ≤ .5 and choosing δ = 1 otherwise. An optimal stopping rule is to stop the first time the expected cost of continuing exceeds that of stopping, and to choose δ = 0 or δ = 1 to minimize the probability of error given the accumulated evidence (see [9]). That is, τ_π = inf{t ≥ 0 : Q̄(t, p_t) ≥ Q(t, p_t)} and δ = 1_{p_τ > 1/2}. We define the continuation region at time t by C^t ≜ {p_t ∈ [0, 1] : Q(t, p_t) > Q̄(t, p_t)}, so that τ_π = inf{t ≥ 0 : p_t ∉ C^t}.
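To make the recursion in Eq. 1 concrete, below is a minimal sketch of the belief update that drives the sufficient statistic p_t, assuming the Bernoulli observation densities used later in the simulations of Section 3 (q_0 = .45, q_1 = .55) and a flat prior p_0 = .5; the function and variable names are illustrative choices, not part of the original formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_posterior(p_t, x, q0=0.45, q1=0.55):
    """One step of Eq. 1 for Bernoulli observation densities:
    f1(x) = q1 if x = 1 else (1 - q1), and similarly for f0."""
    f1 = q1 if x == 1 else 1.0 - q1
    f0 = q0 if x == 1 else 1.0 - q0
    return p_t * f1 / (p_t * f1 + (1.0 - p_t) * f0)

# Simulate one trial: draw theta with prior p0, then observe i.i.d. samples
# from f_theta and track the posterior p_t = P{theta = 1 | x_1, ..., x_t}.
p0 = 0.5
theta = int(rng.random() < p0)
q_true = 0.55 if theta == 1 else 0.45
p = p0
for t in range(1, 21):
    x = int(rng.random() < q_true)
    p = update_posterior(p, x)
    print(f"t={t:2d}  x={x}  p_t={p:.3f}")
print(f"true theta = {theta}")
```

Under the optimal policy developed below, such a trajectory of p_t would be compared at each step against the continuation region C^t.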
Although we have obtained an expression for the optimal policy in terms of Q̄(t, p) and Q(t, p), computing Q̄(t, p) is difficult in general.

Lemma 1. The function p_t ↦ Q̄(t, p_t) is concave with respect to p_t for each t ∈ N.

Proof. We may restrict the infimum in Eq. 3 to be over only those τ and δ depending on D and the future observations x_{t+1:∞} ≜ {x_{t+1}, x_{t+2}, ...}. This is due to two facts. First, the expectation is conditioned on p_t, which contains all the information about θ available in the past observations x_t, and makes it unnecessary for the optimal policy to depend on x_t except through p_t. Second, dependence on p_t in the optimal policy may be made implicit by allowing the infimum to be attained by different τ and δ for different values of p_t, while removing explicit dependence on p_t from the individual policies over which the infimum is taken. With τ and δ chosen from this restricted set of policies, we note that the distribution of the future observations x_{t+1:∞} is entirely determined by θ, and so we have E[ l(τ, δ; θ, D) | θ, p_t ] = E[ l(τ, δ; θ, D) | θ ]. Summing over the possible values of θ, we may then write:

E[ l(τ, δ; θ, D) | p_t ] = Σ_{k∈{0,1}} E[ l(τ, δ; θ, D) | θ = k ] P{θ = k | p_t}
= E[ l(τ, δ; θ, D) | θ = 0 ] (1 - p_t) + E[ l(τ, δ; θ, D) | θ = 1 ] p_t.

Eq. (3) can then be rewritten as:

Q̄(t, p_t) = inf_{τ≥t+1, δ} { E[ l(τ, δ; θ, D) | θ = 0 ] (1 - p_t) + E[ l(τ, δ; θ, D) | θ = 1 ] p_t },

where this infimum is again understood to be taken over the set of policies depending only upon observations after time t. Since neither E[ l(τ, δ; θ, D) | θ = 0 ] nor E[ l(τ, δ; θ, D) | θ = 1 ] depends on p_t, this is the infimum of a collection of linear functions of p_t, and hence is concave in p_t [8].

We now need a lemma describing how expected cost depends on the distribution of the deadline. Let D′ be a deadline whose distribution is different from that of D. Let π be the policy that is optimal given that the deadline has distribution D, and let τ′ ≜ min(τ_π, D′) denote the time at which sampling stops when this policy is applied under the deadline D′. Then define

V′(t, p_t) ≜ E[ min(p_{τ′}, 1 - p_{τ′}) 1_{τ′<D′} + cτ′ + d 1_{τ′≥D′} | D′ > t, p_t ],

so that V′ gives the expected cost of taking the stopping time which is optimal for deadline D and applying it to the situation with deadline D′. Similarly, let Q̄′(t, p_t) and Q′(t, p_t) denote the corresponding expected costs under π and D′ given that we continue or stop, respectively, at time t given p_t and D′ > t. Note that Q′(t, p_t) = Q(t, p_t) = min(p_t, 1 - p_t) + ct. These definitions are the basis for the following lemma, which essentially shows that replacing the deadline D with a less urgent deadline D′ lowers cost. This lemma is needed for Lemma 3 below.

Lemma 2. If D′ is such that P{D′ > t + 1 | D′ > t} ≥ P{D > t + 1 | D > t} for all t, then V′(t, p) ≤ V(t, p) and Q̄′(t, p) ≤ Q̄(t, p) for all t and p.

Proof. First let us show that if we have V′(t + 1, p′) ≤ V(t + 1, p′) for some fixed t and all p′, then we also have Q̄′(t, p) ≤ Q̄(t, p) for that same t and all p. This is the case because, if we fix t, then

Q̄(t, p_t) = (d + c(t + 1)) P{D = t + 1 | D > t} + E[ V(t + 1, p_{t+1}) | p_t ] P{D > t + 1 | D > t}
= d + c(t + 1) + E[ V(t + 1, p_{t+1}) - (d + c(t + 1)) | p_t ] P{D > t + 1 | D > t}
≥ d + c(t + 1) + E[ V(t + 1, p_{t+1}) - (d + c(t + 1)) | p_t ] P{D′ > t + 1 | D′ > t}
≥ d + c(t + 1) + E[ V′(t + 1, p_{t+1}) - (d + c(t + 1)) | p_t ] P{D′ > t + 1 | D′ > t}
= Q̄′(t, p).

In the first inequality we have used two facts: that V(t + 1, p_{t+1}) ≤ Q(t + 1, p_{t+1}) = min(p_{t+1}, 1 - p_{t+1}) + c(t + 1) ≤ d + c(t + 1) (which is true because d ≥ .5), so that the expectation being rescaled is non-positive; and that P{D′ > t + 1 | D′ > t} ≥ P{D > t + 1 | D > t}. In the second inequality we have used our assumption that V′(t + 1, p′) ≤ V(t + 1, p′) for all p′.
Now consider a finite-horizon version of the problem in which τ_π is only optimal among stopping times bounded above by a finite integer T. We will show the lemma for this case, and the lemma for the infinite-horizon version of the problem follows by taking the limit as T → ∞. We induct backwards on t. Since τ_π is required to stop at T, we have V(T, p_T) = Q(T, p_T) = Q′(T, p_T) = V′(T, p_T). Now for the induction step. Fix p and t < T. If π chooses to stop at t when p_t = p, then V(t, p) = Q(t, p) = Q′(t, p) = V′(t, p). If π continues instead, then V(t, p) = Q̄(t, p) ≥ Q̄′(t, p) = V′(t, p) by the induction hypothesis.

Note the requirement that d ≥ 1/2 in the previous lemma. If this requirement is not met, and p_t is such that d < min(p_t, 1 - p_t), then we may prefer to get timed out rather than choose δ = 0 or δ = 1 and suffer the expected penalty of min(p_t, 1 - p_t) for choosing incorrectly. In this situation, since the conditional probability P{D = t + 1 | D > t} that we will time out in the next time period grows as time moves forward, the continuation region may expand with time rather than contract. Under most circumstances, however, it seems reasonable to assume the deadline cost to be at least as large as that of making an error.

We now state Lemma 3, which shows that the cost of delaying by one time period is at least as large as the continuation cost c, but may be larger because the delay causes the deadline to approach more rapidly.

Lemma 3. For each t ∈ N and p ∈ (0, 1), Q̄(t - 1, p_{t-1} = p) ≤ Q̄(t, p_t = p) - c.

Proof. Fix t. Let τ′ ≜ inf{s ≥ t + 1 : p_s ∉ C^s}, so that min(τ′, D) attains the infimum defining Q̄(t, p_t). Also define τ″ ≜ inf{s ≥ t : p_s ∉ C^{s+1}} and τ ≜ min(D, τ″). Since τ is within the set over which the infimum defining Q̄(t - 1, p) is taken,

Q̄(t - 1, p) ≤ E[ min(p_τ, 1 - p_τ) 1_{τ<D} + cτ + d 1_{τ≥D} | D > t - 1, p_{t-1} = p ]
= E[ min(p_{τ̃}, 1 - p_{τ̃}) 1_{τ̃<D′} + c(τ̃ - 1) + d 1_{τ̃≥D′} | D′ > t, p_t = p ],

where D′ ≜ D + 1 and τ̃ ≜ min(τ′, D′), and where the last step is justified by the stationarity of the observation process, which implies that the joint distribution of (p_s)_{s≥t}, p_{τ′}, and τ′ conditioned on p_t = p is the same as the joint distribution of (p_{s-1})_{s≥t}, p_{τ″}, and τ″ + 1 conditioned on p_{t-1} = p. The right-hand side equals Q̄′(t, p) - c, the cost of continuing at time t under the deadline D′ reduced by c, so Q̄(t - 1, p) ≤ Q̄′(t, p) - c. Finally, as D′ satisfies the requirements of Lemma 2, Q̄′(t, p) ≤ Q̄(t, p).

Lemma 4. For t ∈ N, Q̄(t, 0) = Q̄(t, 1) = c(t + 1) + d P{D = t + 1 | D > t}.

Proof. On the event p_t = 0, we have that P{θ = 0} = 1 and the policy attaining the infimum in (3) is τ = t + 1, δ = 0. Thus, Q̄(t, 0) becomes

Q̄(t, 0) = E[ l(τ, δ; θ, D) | D > t, p_t = 0 ] = E[ l(τ, δ; θ, D) | D > t, θ = 0 ]
= E[ d 1_{t+1≥D} + c(t + 1) | D > t, θ = 0 ] = c(t + 1) + d P{D = t + 1 | D > t}.

Similarly, on the event p_t = 1, we have that P{θ = 1} = 1 and the policy attaining the infimum in (3) is τ = t + 1, δ = 1. Thus, Q̄(t, 1) = c(t + 1) + d P{D = t + 1 | D > t}.

We are now ready for the main theorem, which shows that C^t is either empty or an interval, and that C^{t+1} ⊆ C^t. To illustrate our proof technique, we plot Q(t, p), Q̄(t, p), and Q̄(t + 1, p) - c as functions of p in Figure 1. As noted, the continuation region C^t is the set of p such that Q(t, p) > Q̄(t, p). To show that C^t is either empty or an interval, we note that Q̄(t, p) is a concave function of p (Lemma 1) whose values at the endpoints p = 0, 1 are greater than the corresponding values of Q(t, p) (Lemma 4). Such a concave function may only intersect Q(t, p), which is a constant plus min(p, 1 - p), either twice or not at all.
When it intersects twice, we have the situation pictured in Figure 1, in which C^t is a non-empty interval, and when it does not intersect, C^t is empty. To show that C^{t+1} ⊆ C^t, we note that the difference between Q(t + 1, p) and Q(t, p) is the constant c. Thus, to show that C^t, the set where Q(t, p) exceeds Q̄(t, p), contains C^{t+1}, the set where Q(t + 1, p) exceeds Q̄(t + 1, p), it is enough to show that the difference between Q̄(t + 1, p) and Q̄(t, p) is at least as large as the adjustment c, which we have done in Lemma 3.

Theorem. At each time t ∈ N, the optimal continuation region C^t is either empty or a closed interval, and C^{t+1} ⊆ C^t.

Proof. Fix t ∈ N. We begin by showing that C^{t+1} ⊆ C^t. If C^{t+1} is empty then the statement follows trivially, so consider the case C^{t+1} ≠ ∅. Choose p ∈ C^{t+1}. Then

Q̄(t, p) ≤ Q̄(t + 1, p) - c < Q(t + 1, p) - c = min{p, 1 - p} + ct = Q(t, p).

Thus, p ∈ C^t, implying C^{t+1} ⊆ C^t.

Now suppose that C^t is non-empty, and we will show that it must be a closed interval. Let a^t ≜ inf C^t and b^t ≜ sup C^t. Since C^t is a non-empty subset of [0, 1], we have a^t, b^t ∈ [0, 1]. Furthermore, a^t > 0 because Q̄(t, p) ≥ c(t + 1) + d P{D = t + 1 | D > t} > ct = Q(t, 0) for all p, and the continuity of Q(t, ·) implies that Q̄(t, p) > Q(t, p) for p in some open interval around 0. Similarly, b^t < 1. Thus, a^t, b^t ∈ (0, 1).

We will show first that [a^t, 1/2] ⊆ C^t. If a^t > 1/2 then this is trivially true, so consider the case that a^t ≤ 1/2. Since Q̄(t, ·) is concave on the open interval (0, 1), it must also be continuous there. This and the continuity of Q imply that Q̄(t, a^t) = Q(t, a^t). Also, Q̄(t, 0) > Q(t, 0) by Lemma 4. Thus a^t > 0 and we may take a left-derivative at a^t. For any ε ∈ (0, a^t), a^t - ε ∉ C^t, so Q̄(t, a^t - ε) ≥ Q(t, a^t - ε). Together with Q̄(t, a^t) = Q(t, a^t), this implies that

(∂⁻/∂p) Q̄(t, a^t) = lim_{ε→0+} [Q̄(t, a^t) - Q̄(t, a^t - ε)]/ε ≤ lim_{ε→0+} [Q(t, a^t) - Q(t, a^t - ε)]/ε = (∂⁻/∂p) Q(t, a^t).

Since Q̄(t, ·) is concave by Lemma 1 and Q(t, ·) is linear on [0, 1/2], we have for any p′ ∈ [a^t, 1/2],

(∂⁻/∂p) Q̄(t, p′) ≤ (∂⁻/∂p) Q̄(t, a^t) ≤ (∂⁻/∂p) Q(t, a^t) = (∂⁻/∂p) Q(t, p′).

Since Q̄(t, ·) is concave, it is differentiable except at countably many points, so for any p ∈ [a^t, 1/2],

Q̄(t, p) = Q̄(t, a^t) + ∫_{a^t}^{p} (∂⁻/∂p′) Q̄(t, p′) dp′ ≤ Q(t, a^t) + ∫_{a^t}^{p} (∂⁻/∂p′) Q(t, p′) dp′ = Q(t, p).

Therefore p ∈ C^t, and, more generally, [a^t, 1/2] ⊆ C^t. By a similar argument, [1/2, b^t] ⊆ C^t. Finally, C^t ⊆ [a^t, b^t] = [a^t, 1/2] ∪ [1/2, b^t] ⊆ C^t, and we must have C^t = [a^t, b^t].

We also include the following proposition, which shows that if D is finite with probability 1, then the continuation region must eventually narrow to nothing.

Proposition. If P{D < ∞} = 1, then there exists a T < ∞ such that C^T = ∅.

Proof. First consider the case when D is bounded, so P{D ≤ T + 1} = 1 for some time T < ∞. Then Q̄(T, p_T) = d + c(T + 1), while Q(T, p_T) = cT + min(p_T, 1 - p_T) ≤ cT + 1/2. Thus Q̄(T, p_T) - Q(T, p_T) ≥ d + c - 1/2 > 0, and C^T = ∅.

Now consider the case when P{D > t} > 0 for every t. By neglecting the error probability and including only continuation and deadline costs, we obtain Q̄(t, p_t) ≥ d P{D = t + 1 | D > t} + c(t + 1). Bounding the error probability by 1/2, we obtain Q(t, p_t) ≤ ct + 1/2. Thus,

Q̄(t, p_t) - Q(t, p_t) ≥ c + d P{D = t + 1 | D > t} - 1/2.

Since P{D < ∞} = 1, lim_{t→∞} c + d P{D = t + 1 | D > t} - 1/2 = c + d - 1/2 > 0, and there exists a T such that c + d P{D = t + 1 | D > t} - 1/2 > 0 for every t ≥ T. This implies that, for t ≥ T and p_t ∈ [0, 1], Q̄(t, p_t) - Q(t, p_t) > 0 and C^t = ∅.
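Before turning to the simulations, it may help to check numerically the deadline property that the lemmas above rely on (the condition on P{D > t + 1 | D > t} in Lemma 2, which is applied to D′ = D + 1 in Lemma 3). The sketch below evaluates this conditional survival probability on a unit-time grid for a gamma-distributed deadline of the kind used in the next section; the discretization, the SciPy usage, and the parameter values are illustrative assumptions rather than details of the original implementation.

```python
import numpy as np
from scipy.stats import gamma

# Gamma deadline with mean 40 and standard deviation 1 (shape k, rate lam),
# matching the default simulation settings; the values are illustrative.
mean_D, std_D = 40.0, 1.0
k = (mean_D / std_D) ** 2          # shape
lam = mean_D / std_D ** 2          # rate

t = np.arange(0, 61)
surv = gamma.sf(t, a=k, scale=1.0 / lam)   # P{D > t} on a unit-time grid
cond = surv[1:] / surv[:-1]                # P{D > t+1 | D > t}

# The condition used in Lemmas 2-3: this conditional survival probability
# should be non-increasing in t (equivalently, the hazard is non-decreasing).
print(bool(np.all(np.diff(cond) <= 1e-12)))   # expect True
```

The same check fails for a strongly bimodal deadline distribution, in line with the discussion of the assumption in Section 4.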
3 Computational simulations

We conducted a series of simulations in which we computed the continuation region and the distributions of response time and accuracy for the optimal policy, for several choices of the parameters c and d and of the distribution of the deadline D. We chose the observation x_t to be a Bernoulli random variable under both f_0 and f_1 for every t = 1, 2, ..., with different values for q_θ ≜ P{x_i = 1 | θ}. In our simulations we chose q_0 = .45 and q_1 = .55.

We computed optimal policies for two different forms of deadline distribution: first for a deterministic deadline fixed to some known constant, and second for a gamma-distributed deadline. The gamma distribution with parameters k > 0 and λ > 0 has density (λ^k / Γ(k)) x^{k-1} e^{-λx} for x > 0, where Γ(·) is the gamma function. The parameters k and λ, called the shape and rate parameters respectively, are completely determined by choosing the mean and the standard deviation of the distribution, since the gamma distribution has mean k/λ and variance k/λ². A fixed deadline T may actually be seen as a limiting case of a gamma-distributed deadline, obtained by taking both k and λ to infinity such that k/λ = T is fixed.

We used the table-look-up form of the backward dynamic programming algorithm (see, e.g., [10]) to compute the optimal Q-factors. We obtained approximations of the value function and Q-factors at a finite set of equally spaced discrete points {0, 1/N, ..., (N - 1)/N, 1} in the interval [0, 1]. In our simulations we chose N = 999. We establish a final time T that is large enough that P{D ≤ T} is nearly 1, and thus P{τ ≤ T} is also nearly 1. In our simulations we chose T = 60. We approximated the value function V(T, p_T) at this final time by Q(T, p_T). Then we calculated value functions and Q-factors for previous times recursively according to Bellman's equation:

Q̄(t, p) = (c(t + 1) + d) P{D = t + 1 | D > t} + P{D > t + 1 | D > t} E[ V(t + 1, p_{t+1}) | p_t = p ];    V(t, p) = min(Q̄(t, p), Q(t, p)).

The expectation relating Q̄(t, ·) to V(t + 1, ·) may be written explicitly using our hypotheses and Eq. 1 to define a function g so that p_{t+1} = g(p_t, x_{t+1}). In our case this function is defined by g(p_t, 1) ≜ (p_t q_1)/(p_t q_1 + (1 - p_t) q_0) and g(p_t, 0) ≜ (p_t (1 - q_1))/(p_t (1 - q_1) + (1 - p_t)(1 - q_0)). Then we note that P{x_{t+1} = 1 | p_t} = P{x_{t+1} = 1 | θ = 1} p_t + P{x_{t+1} = 1 | θ = 0}(1 - p_t) = p_t q_1 + (1 - p_t) q_0, and similarly P{x_{t+1} = 0 | p_t} = p_t (1 - q_1) + (1 - p_t)(1 - q_0). Then

Q̄(t, p_t) = (c(t + 1) + d) P{D = t + 1 | D > t} + P{D > t + 1 | D > t} [ (p_t q_1 + (1 - p_t) q_0) V(t + 1, g(p_t, 1)) + (p_t (1 - q_1) + (1 - p_t)(1 - q_0)) V(t + 1, g(p_t, 0)) ].

We computed continuation regions C^t from these Q-factors, and then used Monte Carlo simulation with 10^6 samples for each problem setting to estimate P{δ = θ | τ = t} and P{τ = t} as functions of t.
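The following is a compact sketch of the table-look-up backward dynamic programming computation just described, using q_0 = .45, q_1 = .55, N = 999, and T = 60 from above together with the default settings of Figure 2 (c = .001, d = 2, gamma deadline with mean 40 and standard deviation 1). The linear interpolation of V(t + 1, ·) onto the belief grid and all function names are implementation choices assumed here, not details taken from the original code.

```python
import numpy as np
from scipy.stats import gamma

# Default simulation settings (illustrative).
q0, q1 = 0.45, 0.55
c, d = 0.001, 2.0
T, N = 60, 999
mean_D, std_D = 40.0, 1.0
k, lam = (mean_D / std_D) ** 2, mean_D / std_D ** 2

p = np.linspace(0.0, 1.0, N + 1)                         # belief grid {0, 1/N, ..., 1}
surv = gamma.sf(np.arange(T + 2), a=k, scale=1.0 / lam)  # P{D > t}, t = 0..T+1

def g(pt, x):
    """Posterior after observing x (Eq. 1 with Bernoulli densities)."""
    num = pt * (q1 if x == 1 else 1.0 - q1)
    return num / (num + (1.0 - pt) * (q0 if x == 1 else 1.0 - q0))

def Q_stop(t):
    """Q-factor for stopping (Eq. 4) on the belief grid."""
    return np.minimum(p, 1.0 - p) + c * t

# Backward recursion; V(T, .) is approximated by Q(T, .).
V = Q_stop(T)
C = [None] * T                                           # continuation regions C^t
for t in range(T - 1, -1, -1):
    p_dead = 1.0 - surv[t + 1] / surv[t]                 # P{D = t+1 | D > t}
    px1 = p * q1 + (1.0 - p) * q0                        # P{x_{t+1} = 1 | p_t}
    EV = px1 * np.interp(g(p, 1), p, V) + (1.0 - px1) * np.interp(g(p, 0), p, V)
    Q_cont = (c * (t + 1) + d) * p_dead + (1.0 - p_dead) * EV
    stop = Q_stop(t)
    C[t] = p[Q_cont < stop]                              # beliefs at which we continue
    V = np.minimum(Q_cont, stop)

for t in (0, 10, 20, 30, 35, 39):
    if C[t].size:
        print(f"t={t:2d}  continuation region ~ [{C[t].min():.3f}, {C[t].max():.3f}]")
    else:
        print(f"t={t:2d}  continuation region empty")
```

A Monte Carlo pass that simulates trials of the observation process against the resulting regions C^t can then be used to estimate P{δ = θ | τ = t} and P{τ = t}, as in the red curves of Figure 2.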
The results of these computational simulations are shown in Figure 2. We see in Fig. 2A that the decision boundaries for a fixed deadline (solid blue) narrow smoothly toward the midline.

[Figure 2: four panels plotting probability against time. (A) Varying std(D): std = 0, 1, 5. (B) Varying mean(D): mean = 40, 30, 25. (C) Varying c: c = 0.001, 0.002, 0.004. (D) Varying d: d = 0.5, 2, 1000.]

Figure 2: Plots of the continuation region C^t (blue) and the probability of a correct response P{δ = θ | τ = t} (red). The default settings were c = .001, d = 2, mean(D) = 40, std(D) = 1, and q_0 = 1 - q_1 = .45. In each plot we varied one of these parameters while keeping the others fixed. In (A) we varied the standard deviation of D, in (B) the mean of D, in (C) the value of c, and in (D) the value of d.

Clearly, at the last opportunity for responding before the deadline, the optimal policy would always generate a response (and therefore the thresholds merge), since we assumed that the deadline penalty is greater than the expected cost of making an error: d ≥ .5 (since the optimal policy is to choose the hypothesis with posterior probability at least .5, the expected probability of error is at most .5). At the time step before, the optimal policy only continues if one more data point is going to improve the belief state enough to offset the extra time cost c. Therefore, the optimal policy only continues for a small "window" around .5, even though it has the opportunity to observe one more data point. At earlier times, the window "widens" following similar logic. When uncertainty about the deadline increases (larger std(D); shown in dashed and dash-dotted blue lines), the optimal thresholds are squeezed toward each other and to the left, the intuition being that the threat of encountering the deadline spreads earlier and earlier into the trial. The red lines denote the average accuracy for different stopping times obtained from a million Monte Carlo simulations of the observation-decision process. They closely follow the decision thresholds (since the threshold is on the posterior probability p_τ), but are slightly larger, because p_τ must exceed the threshold, and p_t moves in discrete increments due to the discrete Bernoulli process.

The effect of decreasing the mean deadline is to shift the decision boundaries leftward, as shown in Fig. 2B. The effect of increasing the cost of time c is to squeeze the boundaries toward the midline (Fig. 2C); this result is analogous to that seen in the classical sequential probability ratio test for the infinite-horizon case. The effect of increasing d is to squeeze the thresholds to the left (Fig. 2D), and the rate of shifting is on the order of log(d) because the tail of the gamma distribution falls off nearly exponentially.

4 Discussion

In this work, we formalized the problem of sequential hypothesis testing (of two alternatives) under the pressure of a stochastically sampled deadline, and characterized the optimal policy. For a large class of deadline distributions (including gamma, normal, exponential, and delta), we showed that the optimal policy is to report a hypothesis as soon as the posterior belief hits one of a pair of thresholds that decline monotonically toward the midline. This generalizes the classical infinite-horizon case: in the limit when the deadline goes to infinity, the optimal policy reverts to a pair of fixed thresholds, as in the sequential probability ratio test [1].
We showed that the decision policy becomes more conservative (thresholds pushed outward and to the right) when there is less uncertainty about the deadline, when the mean of the deadline is larger, when the linear temporal cost is smaller, and when the deadline cost is smaller.

In the theoretical analysis, we assumed that D has the property that P{D > t + u | D > t} is non-increasing in t for each u ≥ 0, over the set of t such that P{D > t} > 0. This assumption implies that, if the deadline has not occurred already, then the likelihood that it will happen soon grows larger and larger as time passes. The assumption is violated by multi-modal distributions, for which there is a large probability that the deadline will occur at some early point in time, but if the deadline does not occur by that point in time then it will not occur until some much later time. The assumption is met by a fixed deadline (std(D) → 0), and also includes the classical infinite-horizon case (D → ∞) as a special case (in which the optimal policy reverts to the sequential probability ratio test). The assumption is also met by any distribution with a log-concave density, because

log P{D > t + u | D > t} = log P{D > t + u} - log P{D > t} = F(t + u) - F(t),

where F(t) ≜ log P{D > t}. If the density of D is log-concave, then F is concave [8], and the increment F(t + u) - F(t) is non-increasing in t. Many common distributions have log-concave densities, including the exponential distribution, the gamma distribution (with shape parameter k ≥ 1), the normal distribution, and the uniform distribution on an interval.

We used gamma distributions for the deadline in the numerical simulations. There are several empirical properties of timing uncertainty in humans and animals that make the gamma distribution particularly suitable. First, realizations from the gamma distribution are always non-negative, which is consistent with the assumption that a subject never thinks a deadline has passed before the experiment has started. Second, if we fix the rate parameter λ and vary the shape k, then we obtain a collection of deadline distributions with different means whose variance and mean are in a fixed ratio, which is consistent with experimental observations [11]. Third, for large values of k the gamma distribution is approximately normal, which is also consistent with experimental observations [11]. Finally, a gamma-distributed random variable with mean µ may be written as the sum of k = µλ independent exponential random variables with mean 1/λ, so if the brain were able to construct an exponentially distributed timer whose mean 1/λ were on the order of milliseconds, then it could construct a very accurate gamma-distributed timer for intervals of several seconds by resetting this exponential timer k times and responding after the k-th alarm (a brief simulation sketch of this construction appears at the end of this section). This has interesting ramifications for how sophisticated timers for relatively long intervals can be constructed from neurons that exhibit dynamics on the order of milliseconds.

This work makes several interesting empirical predictions. Subjects who have more internal uncertainty, and therefore larger variance in their perceived deadline stochasticity, should respond to stimuli earlier and with lower accuracy. Similarly, the model makes quantitative predictions about the subject's performance when the experimenter explicitly manipulates the mean deadline, and the relative costs of error, time, and deadline.
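As a small illustration of the timer construction mentioned above, the sketch below builds a long-interval timer by summing k exponential "alarms" and compares its empirical mean and spread with the gamma prediction; the millisecond-scale alarm mean and the one-second target interval are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values: an exponential unit timer with mean 1/lam = 25 ms,
# reset k = 40 times to target an interval of k/lam = 1 second.
lam = 40.0                     # rate (per second)
k = 40                         # number of resets

n_trials = 100_000
intervals = rng.exponential(scale=1.0 / lam, size=(n_trials, k)).sum(axis=1)

# The sum of k exponentials with rate lam is Gamma(k, lam):
# mean k/lam, standard deviation sqrt(k)/lam.
print(f"empirical mean {intervals.mean():.4f} s  (gamma prediction {k / lam:.4f})")
print(f"empirical std  {intervals.std():.4f} s  (gamma prediction {np.sqrt(k) / lam:.4f})")
```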
Acknowledgments

We thank Jonathan Cohen, Savas Dayanik, Philip Holmes, and Warren Powell for helpful discussions. The first author was supported in part by the Air Force Office of Scientific Research under grant AFOSR-FA9550-05-1-0121.

References

[1] Wald, A & Wolfowitz, J (1948). Ann. Math. Statist. 19: 326-39.
[2] Luce, R D (1986). Response Times: Their Role in Inferring Elementary Mental Organization. Oxford Univ. Press.
[3] Ratcliff, R & Rouder, J N (1998). Psychol. Sci. 9: 347-56.
[4] Bogacz, R et al (2006). Psychol. Rev. 113: 700-65.
[5] Gold, J I & Shadlen, M N (2002). Neuron 36: 299-308.
[6] Mozer et al (2004). Proc. Twenty-Sixth Annual Conference of the Cognitive Science Society. 981-86.
[7] Siegmund, D (1985). Sequential Analysis. Springer.
[8] Boyd, S & Vandenberghe, L (2004). Convex Optimization. Cambridge Univ. Press.
[9] Poor, H V (1994). An Introduction to Signal Detection and Estimation. Springer-Verlag.
[10] Powell, W B (2007). Approximate Dynamic Programming: Solving the Curses of Dimensionality. Wiley.
[11] Rakitin et al (1998). J. Exp. Psychol. Anim. Behav. Process. 24: 15-33.