Generative Artificial Intelligence (GenAI) is establishing itself in academia and industry through accessible implementations like ChatGPT and Stable Diffusion. Consequently, GenAI is changing the interaction paradigms between users and computers. Currently, many individual approaches exist to leverage these tools, creating various strategies and agendas for using them in HCI research. Thus, we expect this will change how researchers design and create interactive interfaces. This workshop facilitates a common understanding of GenAI in relation to HCI: participants will share and discuss their experiences using GenAI and Large Language Models (LLMs) in HCI research. In this workshop, the potential of GenAI for Human-Computer Interaction will be explored and discussed with the research community through position papers, research statements, and opinions, thereby synthesizing collective experience and strategies into comprehensive practical insights with real-world examples.
Generative AI (genAI) tools, like ChatGPT, have become popular not only with everyday users but also with Human-Computer Interaction (HCI) researchers and practitioners. Despite their rapid adoption, there is a lack of studies examining their design, particularly regarding prompt handling, organization, and management. Our empirical survey study, involving 61 genAI tool users, addresses this gap by investigating the usability and user experience of the current features of these tools. We illustrate that advanced search and labeling functionalities and innovative interface designs can significantly enhance user experience as well as aid in reflecting on sustainability when using this technology. As genAI approaches the so-called “Trough of Disillusionment” (in Gartner’s Hype Cycle terms), our research aims to guide the design of genAI tools toward a more pragmatic and practical fit to end-user practices, ensuring that technology adoption comes with a deeper understanding of its capabilities and offerings.
To ensure that academic researchers comply with institutional and disciplinary standards such as data protection legislation and ethical research guidelines, institutions require researchers to submit forms for processing. This task can be unnecessarily time-consuming and complex from the researchers' perspective. Generative Artificial Intelligence (GenAI) implemented through conversational means (i.e., generative conversational AI systems such as ChatGPT) could be an ideal technological solution to support researchers in this process. The technology’s capacity as a cognitive service (i.e., being able to simulate human thought processes in domains of language, knowledge, and search) enables it to support cognitively complex tasks – such as filling out academic compliance forms – in an intelligent way (e.g., providing synthesised and contextualised data insights, metacognitive support strategies, cognitive scaffolding). However, it is currently unclear what challenges and pain points researchers experience when filling in these forms and whether a conversational AI tool would be appropriate to support this process. We used semi-structured interviews to explore researchers’ experiences when filling in academic compliance forms and how they could be supported in this process. We found that participants struggled to 1) navigate the bureaucratic nature of the institutional pathways involved, 2) gain access to relevant support resources and past application examples, and 3) apply their general knowledge about data protection and ethical principles to their research projects. Further, our participants identified a need for more interactive support to address these pain points, which suggests that a conversational AI tool could be an appropriate technological solution.
We, therefore, propose the development of ComplianceBot, a ChatGPT-based cognitive-support tool that helps researchers learn the conceptual and procedural skills necessary for filling in academic compliance forms. This position paper contributes a user case study which demonstrates how recent developments in conversational AI technologies can now afford the rapid development of low-cost, high-fidelity prototypes to address user needs in unprecedented ways and produce deeper insights that contribute to the field of Human-Computer Interaction.
This paper discusses the pressures faced by academics, particularly women and non-binary academics, due to the increasing demands of teaching, service, and administrative tasks that leave them with insufficient time for research-related activities. It draws parallels between the invisible labour carried out by academics and parents, and suggests that models of mental workload from the field of family work could be applied to the academic sphere. The paper also explores the potential of Large Language Models (LLMs) to alleviate some of these burdens, particularly those of invisible labour. We conclude by proposing a research agenda to further investigate this topic.
Generative Artificial Intelligence (GenAI) provides new capabilities to generate figures and other visual elements for papers. This can help researchers enhance the visual quality of their papers. However, there are also challenges when generating images or visual elements, such as the complexity of describing visual content with a text prompt. This paper describes how GenAI can be effectively used in researchers’ workflows. It further discusses solutions for existing challenges in generating images from textual descriptions, such as using tools like ControlNet and IP Adapter or specifically trained image generation model checkpoints. Lastly, the paper shows how GenAI can be used to generate LaTeX code for visual elements like tables or diagrams.
This paper investigates the integration of generative AI tools within the User-Centered Design (UCD) framework, focusing on a case study involving design students from the University of Applied Sciences in Mainz. The study evaluates the benefits and limitations of AI tools in supporting the design process. The students developed an interactive solution using various AI tools throughout the UCD stages, including problem definition, ideation, prototyping, and user feedback. Findings indicate that while AI tools can aid in data analysis and idea generation, they often explore problems at a superficial level, making it challenging to achieve a deep understanding of the problem-solution interplay. Furthermore, the linear interaction mode of the AI tools used in this course and their limited collaborative capabilities were major drawbacks. The study highlights the importance of balancing AI use with human intuition and creativity to achieve more nuanced and effective design outcomes. Future research should focus on enhancing AI's collaborative features and context awareness to better integrate into the design process.
Today, interaction with LLM-based agents is mainly based on text or voice interaction. Currently, we explore how nonverbal cues and affective information can augment this interaction in order to create empathic, context-aware agents. For that, we extend user prompts with input from different modalities and varying levels of abstraction. In detail, we investigate the potential of extending the input into LLMs beyond text or voice, similar to human-human interaction, in which humans rely not only on the words uttered by a conversation partner but also on nonverbal cues. As a result, we envision that cameras can pick up facial expressions from the user, which can then be fed into the LLM communication as an additional input channel fostering context awareness. In this work, we introduce our application ideas and implementations, present preliminary findings, and discuss arising challenges.
In this work, we describe our initial method of utilizing the text-to-image AI Midjourney to generate cute stimulus material based on Lorenz's Kindchenschema.
Nowadays, Generative Artificial Intelligence (GenAI) can outperform humans in creative professions, such as design. As a result, GenAI has attracted a lot of attention from researchers and industry. However, GenAI could also be used to augment humans through a multimodal user interface, as proposed by Ben Shneiderman in his recent work on Human-Centred Artificial Intelligence (HCAI). Most studies of HCAI have mainly focused on greenfield projects. In contrast to existing research, we describe a brownfield software architecture approach with a loosely coupled GenAI-driven multimodal user interface that combines human interaction with third-party systems. A domain-specific language for user interaction connects natural language and signals of the existing system through GenAI. Our proposed architecture enables research and industry to provide user interfaces for existing software systems that allow hands-free interaction.