Management sciences, decision sciences and quantitative methods
Sequential decision making, time pressure, information search, Bayesian inference
Arrow et al. (1949) introduced the first sequential search problem, “where at each stage the options available are to stop and take a definite action or to continue sampling for more information.” We study how time pressure in the form of task accumulation may affect this decision problem. To that end, we consider a search problem in which the decision maker (DM) faces a stream of random decision tasks that must be treated one at a time and that accumulate when left unattended. We formulate the problem of managing this form of pressure as a Partially Observable Markov Decision Process and characterize the corresponding optimal policy. We find that the DM needs to alleviate this pressure very differently depending on how the search on the current task has unfolded so far. As the search progresses, the DM becomes less and less willing to sustain a high workload at the beginning and end of the search, but actually raises the maximum workload she is willing to handle in the middle of the process. The DM manages this workload first by making a priori decisions to release some accumulated tasks, and later by aborting the current search and deciding based on her updated belief. This novel search strategy depends critically on the DM's prior belief about the tasks and stems, in part, from an effect related to decision ambivalence. These findings are robust to various extensions of our basic setup.
© 2019, INFORMS
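The stop-or-continue structure quoted from Arrow et al. can be illustrated with a minimal sketch: the DM holds a Bayesian belief about a binary task state, samples noisy signals one at a time, and stops to decide once the belief is decisive enough. This is an illustrative simplification, not the paper's model; in particular it omits the task-accumulation (workload) dynamics, and all parameter names and the signal model are assumptions for the example.

```python
import random


def sequential_search(prior, signal_accuracy, threshold, max_steps, true_state, seed=0):
    """Illustrative stop-or-continue search on a single binary task.

    prior           -- initial belief P(state = 1)
    signal_accuracy -- probability each signal matches the true state
    threshold       -- stop once the belief is at least this decisive
    max_steps       -- budget on the number of samples
    true_state      -- hidden state (0 or 1) used to generate signals
    """
    rng = random.Random(seed)
    belief = prior  # current P(state = 1)
    step = 0
    for step in range(max_steps):
        # Stop and take a definite action when the belief is decisive
        # in either direction; otherwise continue sampling.
        if belief >= threshold or belief <= 1 - threshold:
            break
        # Draw a signal that equals the true state with prob. signal_accuracy.
        signal = true_state if rng.random() < signal_accuracy else 1 - true_state
        # Bayes' rule for a binary state with a symmetric noisy signal.
        p_sig_given_1 = signal_accuracy if signal == 1 else 1 - signal_accuracy
        p_sig_given_0 = 1 - signal_accuracy if signal == 1 else signal_accuracy
        num = belief * p_sig_given_1
        belief = num / (num + (1 - belief) * p_sig_given_0)
    decision = 1 if belief >= 0.5 else 0
    return decision, belief, step


decision, belief, steps = sequential_search(
    prior=0.5, signal_accuracy=0.9, threshold=0.95, max_steps=50, true_state=1
)
```

With an accurate signal, a few samples push the belief past the threshold and the search stops early; aborting at the budget and deciding on the current belief loosely mirrors the abstract's "deciding based on her updated belief."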