A Design Inquiry into Negative Algorithmic Experience: Prototyping Embodied LLM Hallucination
Advisors: Daragh Byrne, Sinan Goral
Abstract: The rapid development of large language models (LLMs) has sparked interest in integrating them into conversational embodied artifacts in our everyday lives. While these systems gain unprecedented capability from large-scale human-generated datasets and algorithms, the negative side of the algorithm becomes easier to overlook. In this context, hallucination, an intrinsic flaw in which output deviates from the user's input intention, could invite different interpretations and spark more interactive experiences in personal space. Our work expands the methodology of algorithmic experience prototyping, using a series of speculative prototypes to investigate what hallucination could look like, how different narrative media contribute to possible interaction scenarios, and what encounters emerge during prototyping sessions in the corresponding contexts. We reflect on the possibilities and design implications of LLM hallucination when embodied LLMs become enabling agents of our everyday multimodal input/output, contributing a creative interpretation of the dark side of algorithms and a new perspective on applying prototyping methods.