Commonsense estimation

A few years before his collaboration with Andrew Gordon on formal theories of commonsense psychology, Jerry Hobbs co-authored a fascinating paper with Vladik Kreinovich on “Optimal Choice of Granularity in Commonsense Estimation”. The abstract:

It has been observed that when people make crude estimates, they feel comfortable choosing between alternatives which differ by a half-order of magnitude (e.g., were there 100, 300, or 1,000 people in the crowd), and less comfortable making a choice on a more detailed scale, with finer granules, or on a coarser scale (like 100 or 1,000). In this paper, we describe two models of choosing granularity in commonsense estimates, and we show that for both models, in the optimal granularity, the next estimate is 3-4 times larger than the previous one. Thus, these two optimization results explain the commonsense granularity.
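
The “next estimate is 3-4 times larger” result is just what a half-order-of-magnitude step gives you: √10 ≈ 3.16. As a small illustration of my own (not code from the paper), the sketch below builds such an estimation ladder by repeatedly multiplying by √10 and rounding to one significant digit, which produces the familiar 1, 3, 10, 30, 100, … scale behind the crowd-size example in the abstract.

```python
import math

def half_order_ladder(start=1, steps=8):
    """Build a crude-estimation scale in which each value is a half-order
    of magnitude (a factor of sqrt(10) ~= 3.16) above the previous one,
    rounded to a single significant digit."""
    factor = math.sqrt(10)
    ladder = []
    value = float(start)
    for _ in range(steps):
        # Round to one significant digit, e.g. 31.6 -> 30 and 316 -> 300.
        exponent = math.floor(math.log10(value))
        ladder.append(int(round(value / 10 ** exponent) * 10 ** exponent))
        value *= factor
    return ladder

print(half_order_ladder())  # [1, 3, 10, 30, 100, 300, 1000, 3000]
```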

This principle has been applied to software estimation in the “planning poker” consensus-based estimating technique. Participants estimate the relative effort of software development tasks using point values drawn from a predefined, roughly geometric scale (commonly a modified Fibonacci sequence such as 1, 2, 3, 5, 8, 13, 20, 40, 100), so adjacent values differ by a roughly constant ratio rather than a fixed increment. The process itself is a lightweight variant of the Delphi method developed at RAND in the 1950s: participants estimate privately, reveal their cards simultaneously, and the outliers explain their reasoning before the group estimates again. Steve McConnell’s Software Estimation: Demystifying the Black Art is the best overview of these and other software estimation techniques.
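
As a concrete, deliberately simplified sketch of how one planning poker round might be mechanized, the snippet below uses a common modified-Fibonacci deck and a Delphi-style convergence check. The DECK values, the snap_to_deck helper, and the “everyone within one card of each other” stopping rule are my own illustrative choices, not part of any standard definition of the technique.

```python
import statistics

# A common planning-poker deck: a roughly geometric ("modified Fibonacci")
# scale, so adjacent cards differ by a similar ratio rather than a fixed step.
DECK = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def snap_to_deck(value):
    """Snap an arbitrary estimate to the nearest card in the deck."""
    return min(DECK, key=lambda card: abs(card - value))

def poker_round(raw_estimates):
    """One Delphi-style round: everyone picks a card privately, the cards are
    revealed together, and the group has converged if all cards sit within
    one deck position of each other (a simplification, not a standard rule)."""
    cards = [snap_to_deck(estimate) for estimate in raw_estimates]
    positions = [DECK.index(card) for card in cards]
    converged = max(positions) - min(positions) <= 1
    consensus = snap_to_deck(statistics.median(cards)) if converged else None
    return cards, converged, consensus

# First round: the estimates are spread out, so the high and low estimators
# explain their reasoning and everyone estimates again.
print(poker_round([3, 5, 20]))  # ([3, 5, 20], False, None)
print(poker_round([5, 8, 8]))   # ([5, 8, 8], True, 8)
```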