Elicitron’s architecture for requirements elicitation using LLMs: First, LLM agents are generated within a design context in either a serial or parallel fashion (incorporating diversity sampling to represent varied user perspectives). These agents then engage in simulated product experience scenarios, documenting each step (Action, Observation, Challenge) in detail. Following this, they undergo an agent interview process, where questions are asked and answered to surface latent user needs. In the final stage, an LLM identifies latent needs against provided criteria, and a report is generated from the identified latent needs.
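The staged pipeline in the caption above could be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: the `llm` stub stands in for any chat-completion call, and all function and class names (`generate_agents`, `simulate_experience`, `identify_latent_needs`, etc.) are hypothetical.

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    # Stub standing in for a real language-model call; a real
    # implementation would query an LLM API here.
    return f"response to: {prompt[:40]}"

@dataclass
class Step:
    # One documented step of a simulated product experience.
    action: str
    observation: str
    challenge: str

@dataclass
class Agent:
    persona: str
    steps: list = field(default_factory=list)
    answers: list = field(default_factory=list)

def generate_agents(design_context: str, n: int) -> list:
    """Parallel-style generation: each agent persona is sampled
    independently from the design context."""
    return [Agent(persona=llm(f"Persona {i} for: {design_context}"))
            for i in range(n)]

def simulate_experience(agent: Agent, product: str, n_steps: int = 3) -> None:
    """The agent walks through using the product, logging an
    Action/Observation/Challenge triple at every step."""
    for i in range(n_steps):
        agent.steps.append(Step(
            action=llm(f"{agent.persona}: step {i} action with {product}"),
            observation=llm(f"{agent.persona}: observation at step {i}"),
            challenge=llm(f"{agent.persona}: challenge at step {i}"),
        ))

def interview(agent: Agent, questions: list) -> None:
    # Agent interview stage: questions are asked and answered.
    for q in questions:
        agent.answers.append(llm(f"As {agent.persona}, answer: {q}"))

def identify_latent_needs(agents: list, criteria: str) -> list:
    """Final stage: an LLM screens each interview transcript
    against the provided criteria for latent needs."""
    return [llm(f"Given criteria '{criteria}', extract latent needs "
                f"from: {'; '.join(a.answers)}")
            for a in agents]

agents = generate_agents("camping tent", n=2)
for a in agents:
    simulate_experience(a, "camping tent")
    interview(a, ["What frustrated you most?"])
needs = identify_latent_needs(agents, "unstated but implied user needs")
```

In practice each stage would carry a carefully engineered prompt, and the serial variant would condition each new persona on those already generated to encourage diversity.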
Abstract
Requirement elicitation, a critical yet time-consuming and challenging step in product development, often fails to capture the full spectrum of user needs. This may lead to products that fall short of user expectations. This article introduces a novel framework that leverages large language models (LLMs) to automate and enhance the requirement elicitation process. LLMs are used to generate a vast array of simulated users (LLM agents), enabling the exploration of a much broader range of user needs and unforeseen use cases. These agents engage in product experience scenarios, explaining their actions, observations, and challenges. Subsequent agent interviews and analysis uncover valuable user needs, including latent ones. We validate our framework with three experiments. First, we explore different methodologies for the challenge of diverse agent generation, discussing their advantages and shortcomings. We measure the diversity of identified user needs and demonstrate that context-aware agent generation leads to greater diversity. Second, we show how our framework effectively mimics empathic lead-user interviews, identifying a greater number of latent needs than conventional human interviews. Third, we show that LLMs can be used to analyze interviews, capture needs, and classify them as latent or not. Our work highlights the potential of using LLMs to accelerate early-stage product development at minimal cost and increase innovation.