A key assumption of new product development is that user requirements and related preferences do not vary on time scales comparable to the length of the development process. However, prior work has identified cases in which user preferences for product attributes vary with time. This study proposes a method, Design for Dynamic User Preferences, which adapts reinforcement learning (RL) algorithms to the design of physical systems whose functionality changes with user feedback. An illustrative case, the design of a variable-stiffness prosthetic ankle, is presented to evaluate the potential usefulness of the framework. Lifetime user satisfaction for static and dynamic design strategies is compared over simulated user preferences under a number of conditions. Results suggest that RL-based strategies outperform static strategies for cases with dynamic user preferences, despite starting with significantly less initial information. Among RL methods, upper-confidence-bound policies led to higher user satisfaction on average. This study suggests that further investigation of RL-based design strategies is warranted for situations with potentially dynamic preferences.
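To illustrate the kind of comparison the abstract describes, the sketch below contrasts a static design strategy with an upper-confidence-bound (UCB1) bandit policy on a simulated user whose preference drifts over time. This is a minimal illustration, not the authors' implementation: the three discrete "stiffness settings" (arms), the drifting satisfaction function, the switch point, and the horizon are all hypothetical choices made for the example.

```python
import math
import random

def ucb1(n_arms, horizon, reward_fn, seed=0):
    """UCB1 bandit: at each step, pick the arm maximizing
    empirical mean + sqrt(2 ln t / pulls)."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # play each arm once to initialize
        else:
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        r = reward_fn(arm, t, rng)
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total

def drifting_satisfaction(arm, t, rng):
    # Hypothetical dynamic preference: the user's preferred
    # setting shifts from arm 0 to arm 2 partway through use.
    preferred = 0 if t < 300 else 2
    base = 1.0 if arm == preferred else 0.3
    return base + rng.gauss(0, 0.1)  # noisy satisfaction signal

horizon = 900
ucb_total = ucb1(3, horizon, drifting_satisfaction)

# Static strategy: commit forever to the setting that was best
# at design time (arm 0), ignoring later feedback.
rng = random.Random(0)
static_total = sum(
    drifting_satisfaction(0, t, rng) for t in range(1, horizon + 1)
)
print(ucb_total, static_total)
```

Because the static design keeps serving the initially preferred setting after the preference shifts, its cumulative satisfaction flattens, while the UCB policy's exploration bonus lets it rediscover the new preferred setting; this mirrors, in toy form, the advantage the study reports for RL-based strategies under dynamic preferences.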
