Past research in new product development (NPD) has conceptualized prototyping as a "design-build-test-analyze" cycle to emphasize the importance of the analysis of test results in guiding the decisions made during the experimentation process. New product designs often involve complex architectures and incorporate numerous components, which makes the ex ante assessment of their performance difficult. Still, design teams often learn from test outcomes during iterative test cycles, enabling them to infer valuable information about the performances of (as yet) untested designs.

This study explicitly considers the design space structure and the resulting correlations among design performances, and examines their implications for learning. We conceptualize the extent of useful learning from the analysis of a test outcome as depending on two key structural characteristics of the design space: whether the designs are "close" to each other (i.e., similar at the attribute level) and whether the design attributes exhibit nontrivial interactions (i.e., the performance function is complex).

We derive the optimal dynamic testing policy and analyze its qualitative properties, presenting the sequential testing of two simple hypotheses for a large class of Lévy processes. As usual in this framework, the initial optimal stopping problem is reduced to a free-boundary problem, solved through the principles of smooth and/or continuous fit. Our results suggest that continuation is optimal only when the previous test outcomes lie between two thresholds. Outcomes below the lower threshold indicate an overall low-performing design space and, consequently, continued testing is suboptimal. Test outcomes above the upper threshold, on the other hand, merit termination because they signal to the design team that the likelihood of obtaining a design with still higher performance (given the experimentation cost) is low. We find that accounting for the design space structure splits the experimentation process into two phases: an initial exploration phase, in which the design team focuses on obtaining information about the design space, and a subsequent exploitation phase, in which the design team, given its understanding of the design space, focuses on obtaining a "good enough" configuration. Finally, we extend the optimal policy to account for design spaces that contain distinct design subclasses. Our analysis also provides useful contingency-based guidelines for managerial action as information is revealed through the testing cycle.
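For readers unfamiliar with the free-boundary reduction mentioned above, the following is a generic sketch of that kind of formulation, not the paper's own derivation; the symbols V, G, a, b, and c are illustrative assumptions rather than the authors' notation.

```latex
% Generic free-boundary sketch (illustrative symbols, not the paper's):
% V  = value function of the stopping problem
% G  = stopping payoff(s), c = per-test cost
% a < b = unknown boundaries, \mathcal{L} = generator of the outcome process
\[
\mathcal{L} V(x) = c, \qquad a < x < b \quad \text{(continuation region)}
\]
\[
V(a) = G(a), \qquad V(b) = G(b) \quad \text{(continuous fit)}
\]
\[
V'(a) = G'(a), \qquad V'(b) = G'(b) \quad \text{(smooth fit, when applicable)}
\]
```

The "and/or" in "smooth and/or continuous fit" reflects a standard subtlety: for Lévy processes with jumps, a boundary may be reached by a jump rather than continuously, in which case smooth fit can fail and only the continuous-fit condition is imposed there.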
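To make the two-threshold structure of the policy concrete, here is a minimal sketch of such a stopping rule; the threshold values, the test model, and the function names are hypothetical, not taken from the paper's solution.

```python
import random

def sequential_test(lower, upper, run_test, max_tests=100):
    """Two-threshold stopping rule (illustrative sketch only):
    continue testing while outcomes fall strictly between the
    thresholds; stop as soon as an outcome crosses either one."""
    for n in range(1, max_tests + 1):
        outcome = run_test()
        if outcome >= upper:
            # High outcome: a still-better design is unlikely to be
            # worth the experimentation cost -- accept this design.
            return ("accept", outcome, n)
        if outcome <= lower:
            # Low outcome: the design space looks poor overall --
            # abandon further testing.
            return ("abandon", outcome, n)
        # Between thresholds: learning is still valuable, so continue.
    return ("budget_exhausted", outcome, n)

# Hypothetical usage: test outcomes drawn uniformly on [0, 1].
decision, last_outcome, tests_used = sequential_test(
    lower=0.2, upper=0.8, run_test=lambda: random.random())
print(decision, round(last_outcome, 3), tests_used)
```

The loop mirrors the qualitative policy described above: termination from above corresponds to a "good enough" configuration, termination from below to an unpromising design space, and the interior region to optimal continuation.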