Positive human-agent relationships can effectively improve human experience and performance in human-machine systems and environments. The agent characteristics that strengthen this relationship have attracted attention in human-agent and human-robot interaction research. In this study, building on the persona effect, we examine how an agent's social cues affect human-agent relationships and human performance. We constructed a tedious task in an immersive virtual environment and designed virtual partners with varying levels of human likeness and responsiveness. Human likeness encompassed appearance, sound, and behavior, while responsiveness referred to the way agents responded to humans. Using this environment, we present two studies exploring the effects of an agent's human likeness and responsiveness on participants' performance and their perception of the human-agent relationship during the task. The results indicate that when participants work with an agent, its responsiveness attracts attention and induces positive feelings. Agents that are responsive and use appropriate social response strategies have a significant positive effect on human-agent relationships. These results shed light on how to design virtual agents that improve user experience and performance in human-agent interactions.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9944929 | PMC
http://dx.doi.org/10.1038/s41598-023-29874-5 | DOI Listing
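The abstract above does not include the study's implementation; as a minimal, hypothetical sketch, the snippet below shows one way the two manipulated factors (human likeness and responsiveness) and a simple social response strategy could be represented. The condition names and the `respond()` rule are illustrative assumptions, not the authors' design.

```python
# Illustrative sketch only: condition names and the respond() rule are hypothetical.
from dataclasses import dataclass

@dataclass
class AgentCondition:
    human_likeness: str   # e.g. "low" (abstract avatar) vs. "high" (humanlike look, voice, behavior)
    responsive: bool      # whether the agent reacts to the participant at all

def respond(condition: AgentCondition, participant_event: str) -> str | None:
    """Toy social-response strategy: only responsive agents acknowledge the user."""
    if not condition.responsive:
        return None
    if participant_event == "task_completed":
        return "Nice work, let's keep going."
    if participant_event == "idle":
        return "I'm still here if you need me."
    return "I noticed that."

# A 2 x 2 grid of hypothetical study conditions.
conditions = [
    AgentCondition(hl, resp)
    for hl in ("low", "high")
    for resp in (False, True)
]
```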
Front Psychol
August 2025
Department of Clinical Psychology and Psychotherapy, Institute of Psychology, University of Würzburg, Würzburg, Germany.
Starting long before modern generative artificial intelligence (AI) tools became available, technological advances in constructing artificial agents spawned investigations into the extent to which interactions with such non-human counterparts resemble human-human interactions. Although artificial agents are typically not ascribed a mind of their own in the same sense as humans, several researchers have concluded that social presence with, or social influence from, artificial agents can resemble that seen in interactions with humans in important ways. Here we critically review claims about the comparability of human-agent and human-human interactions, outlining methodological approaches and challenges that predate the AI era but continue to influence work in the field.
Front Robot AI
August 2025
Social Cognitive Systems Group, Faculty of Technology/CITEC, Bielefeld University, Bielefeld, Germany.
Collaboration in real-life situations rarely follows predefined roles or plans; it is established on the fly and flexibly coordinated by the interacting agents. We introduce the notion of fluid collaboration (FC), marked by frequent changes in the tasks partners assume or the resources they consume, in response to varying requirements or affordances of the environment, the tasks, or other agents. FC thus necessitates dynamic, action-oriented Theory of Mind reasoning that enables agents to continuously infer and adapt to others' intentions and beliefs in real time.
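The abstract describes action-oriented Theory of Mind reasoning without giving an implementation; the following is a minimal, hypothetical sketch of one way such reasoning could look, where an agent maintains a belief over its partner's current subtask and takes the complementary one. The task names, likelihood table, and update rule are illustrative assumptions, not the authors' model.

```python
# Illustrative sketch, not the authors' model: a toy "action-oriented Theory of Mind"
# loop in which an agent tracks which subtask its partner is likely pursuing and
# fluidly takes the other one. Task names and likelihoods are hypothetical.
TASKS = ("fetch_parts", "assemble")

# P(observed action | partner's intended task) -- hypothetical likelihoods.
LIKELIHOOD = {
    "moves_to_shelf": {"fetch_parts": 0.8, "assemble": 0.2},
    "picks_up_tool":  {"fetch_parts": 0.2, "assemble": 0.8},
}

def update_belief(belief: dict, observation: str) -> dict:
    """Bayesian update of the belief about the partner's current task."""
    posterior = {t: belief[t] * LIKELIHOOD[observation][t] for t in TASKS}
    norm = sum(posterior.values())
    return {t: p / norm for t, p in posterior.items()}

def choose_own_task(belief: dict) -> str:
    """Fluid role allocation: take whichever task the partner is less likely doing."""
    partner_task = max(belief, key=belief.get)
    return next(t for t in TASKS if t != partner_task)

belief = {t: 1.0 / len(TASKS) for t in TASKS}        # uniform prior
for obs in ("moves_to_shelf", "moves_to_shelf"):     # stream of observed partner actions
    belief = update_belief(belief, obs)
print(choose_own_task(belief))                       # -> "assemble"
```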
Front Comput Neurosci
July 2025
Department of Mechanical Engineering, North Carolina A&T State University, Greensboro, NC, United States.
Despite state-of-the-art technologies such as artificial intelligence, human judgment remains essential in cooperative systems such as multi-agent systems (MAS), which gather information across agents on the basis of multiple-cue judgment. Human agents can prevent impaired situational awareness in automated agents by confirming situations under environmental uncertainty. System errors caused by uncertainty can create an unreliable system environment, which in turn affects the human agent and leads to non-optimal decision-making in the MAS.
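The paper's system is not detailed in the abstract; as an illustration of multiple-cue judgment with a human-in-the-loop fallback under uncertainty, the sketch below combines weighted cues and defers to the human agent when confidence is low. The cue names, weights, and threshold are hypothetical assumptions.

```python
# Illustrative sketch, not the paper's system: weighted multiple-cue judgment in
# which an automated agent defers to a human agent when its confidence is low.
def cue_judgment(cues: dict, weights: dict) -> float:
    """Combine cue values in [0, 1] into a single estimate via weighted averaging."""
    total = sum(weights.values())
    return sum(weights[c] * cues[c] for c in cues) / total

def decide(cues: dict, weights: dict, confirm_with_human, threshold: float = 0.75) -> str:
    estimate = cue_judgment(cues, weights)
    confidence = abs(estimate - 0.5) * 2          # distance from maximal ambiguity
    if confidence < threshold:
        # Uncertain situational awareness: ask the human agent to confirm.
        return confirm_with_human(cues, estimate)
    return "threat" if estimate > 0.5 else "no_threat"

cues = {"radar": 0.6, "camera": 0.4, "acoustic": 0.55}   # noisy, conflicting cues
weights = {"radar": 0.5, "camera": 0.3, "acoustic": 0.2}
print(decide(cues, weights, lambda c, e: "escalate_to_human"))
```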
PLoS One
July 2025
National Institute of Informatics, Tokyo, Japan.
Cooperative relationships between humans and agents are becoming more important for the social coexistence of anthropomorphic agents, including virtual agents and robots. One way to improve the relationship between humans and agents is for humans to empathize with agents, since empathy can increase human acceptance of them.
Front Artif Intell
April 2025
Computer Science, Vrije Universiteit Amsterdam, Amsterdam, Netherlands.
In Open Multi-Agent Systems (OMAS), the open nature of the system precludes hardwiring all communication protocols in advance. It is therefore essential that agents can incrementally learn to understand each other. Ideally, this is done with a minimal number of a priori assumptions, so as not to compromise the open nature of the system.
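The abstract does not specify the learning mechanism; the sketch below illustrates one generic way agents might incrementally align signal-meaning mappings with minimal a priori assumptions, by reinforcing associations that lead to successful interactions. The signals, meanings, and scoring rule are hypothetical and not the article's method.

```python
# Illustrative sketch: a listener with no hardwired protocol incrementally learns
# another agent's signal-meaning mapping from interaction feedback.
import random
from collections import defaultdict

class ProtocolLearner:
    def __init__(self, signals, meanings):
        self.signals, self.meanings = signals, meanings
        # No a priori protocol: all signal-meaning associations start at zero.
        self.score = defaultdict(float)

    def interpret(self, signal):
        """Pick the meaning currently believed most likely for this signal."""
        return max(self.meanings, key=lambda m: self.score[(signal, m)])

    def learn(self, signal, meaning, success):
        """Reinforce associations that led to successful interactions."""
        self.score[(signal, meaning)] += 1.0 if success else -0.1

signals, meanings = ["sig_a", "sig_b"], ["request_help", "offer_resource"]
speaker_protocol = {"sig_a": "request_help", "sig_b": "offer_resource"}  # unknown to the listener
listener = ProtocolLearner(signals, meanings)

for _ in range(50):                          # repeated interactions with feedback
    sig = random.choice(signals)
    guess = listener.interpret(sig)
    listener.learn(sig, guess, success=(guess == speaker_protocol[sig]))

print(listener.interpret("sig_a"))           # converges to "request_help"
```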