Enterprise AI Analysis
Actions, Speech, and Looks: What Shapes How We Feel About In-Vehicle AI Assistants?
What should an intelligent in-vehicle assistant (IVA) look like, and how should it behave to truly enhance the in-car experience? We present a large-scale video-based online experiment (n = 1238) exploring how IVA design factors influence user perceptions. Participants evaluated two scenarios (adjusting temperature, adjusting seat position) across 32 conditions varying in autonomy (user-initiated, system-initiated, autonomous with explanation, autonomous without explanation), embodiment (abstract virtual agent, humanlike virtual agent, abstract robot, humanoid robot), and conversational style (formal, informal). Contrary to prevailing academic trends, our findings reveal a clear aversion to robotic embodiments and to high levels of autonomy, sometimes even when the system's actions were explained. Instead, participants favored proactivity with lower system autonomy and less anthropomorphic designs. We discuss how these insights challenge current design assumptions and offer concrete guidelines for shaping IVAs that align with driver expectations and comfort. This work contributes an empirically grounded understanding of IVA appearance, behavior, and communication style to inform future human-centered automotive interaction design.
Deep Analysis & Enterprise Applications
Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.
• Human-centered computing → Empirical studies in HCI; Natural language interfaces.
in-vehicle assistant, embodiment, persona, trust
Astrid Marieke Rosenthal-von der Pütten, Nikolai Bock, Dimitra Theofanou-Fülbier, and Sebastian Zepf. 2026. Actions, Speech, and Looks: What Shapes How We Feel About In-Vehicle AI Assistants?. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI '26), April 13-17, 2026, Barcelona, Spain. ACM, New York, NY, USA, 29 pages. https://doi.org/10.1145/3772318.3790435
IVA Design Factors and Preferred Outcomes
| Factor | Preferred Outcome |
|---|---|
| Autonomy Level | Proactive but low autonomy: system-initiated suggestions rather than fully autonomous actions |
| Embodiment | Abstract virtual agent; minimal, non-robotic, less anthropomorphic designs |
| Conversational Style | Informal |
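The study's 4 × 4 × 2 factorial design, which yields the 32 experimental conditions, can be sketched as a simple enumeration. This is an illustrative reconstruction: the labels mirror the factor levels reported in the paper, not the authors' actual experiment code.

```python
from itertools import product

# Factor levels as reported in the study (4 autonomy x 4 embodiment x 2 style)
AUTONOMY = [
    "user-initiated", "system-initiated",
    "autonomous with explanation", "autonomous without explanation",
]
EMBODIMENT = [
    "abstract virtual agent", "humanlike virtual agent",
    "abstract robot", "humanoid robot",
]
STYLE = ["formal", "informal"]

# Every combination of the three factors is one experimental condition
conditions = [
    {"autonomy": a, "embodiment": e, "style": s}
    for a, e, s in product(AUTONOMY, EMBODIMENT, STYLE)
]

print(len(conditions))  # 4 * 4 * 2 = 32
```

Crossing the factors fully is what makes it possible to attribute perception differences to a single design dimension while the others are held balanced.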
Impact of Robotic Embodiment in Automotive Context
The study found a clear preference against robotic embodiments, especially humanoid robots. The humanoid robot, despite being perceived as most corporeal and socially present, scored lowest on warmth, competence, trust, and pragmatic quality. It also evoked discomfort (with a strong effect size) and had the lowest usage intention. Only 5% of participants chose it, compared to 65% for the abstract virtual agent. This is attributed to spatial intrusion, potential distraction, and general perception of robots as 'creepy' or 'not aesthetic' in an automotive context. This highlights the importance of context-dependent design for IVAs, favoring minimal and non-intrusive embodiments for verbal interactions.
Key Takeaways:
- Robotic embodiments, especially humanoid, elicited high discomfort and low usage intention.
- Spatial intrusion and perceived 'creepiness' were key negative factors.
- Abstract virtual agents were highly preferred, suggesting context-specific design is crucial.
Calculate Your AI ROI
Estimate the potential efficiency gains and cost savings by implementing AI solutions tailored for your enterprise.
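As a minimal sketch of how such an ROI estimate might be computed: the formula and parameter names below (`hours_saved_per_user_per_month`, `monthly_ai_cost`, and the example figures) are illustrative assumptions, not values from the study.

```python
def estimate_ai_roi(num_users: int,
                    hours_saved_per_user_per_month: float,
                    hourly_cost: float,
                    monthly_ai_cost: float) -> dict:
    """Illustrative ROI estimate: labor savings vs. monthly running cost."""
    monthly_savings = num_users * hours_saved_per_user_per_month * hourly_cost
    net_benefit = monthly_savings - monthly_ai_cost
    roi_percent = 100.0 * net_benefit / monthly_ai_cost
    return {
        "monthly_savings": monthly_savings,
        "net_benefit": net_benefit,
        "roi_percent": roi_percent,
    }

# Hypothetical example: 500 users saving 2 hours/month at $40/hour,
# against a $20,000/month AI operating cost
result = estimate_ai_roi(500, 2.0, 40.0, 20_000)
print(result)  # monthly_savings=40000.0, net_benefit=20000.0, roi_percent=100.0
```

A real assessment would replace these placeholder inputs with measured baselines from your own deployment.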
Your AI Implementation Roadmap
A structured approach to integrate intelligent in-vehicle assistants, ensuring a smooth transition and maximum impact.
Phase 1: Discovery & Strategy
Conduct a comprehensive audit of existing systems and define core AI objectives, identifying high-impact areas for IVA integration.
Phase 2: MVP Development & Testing
Develop a Minimum Viable Product (MVP) focusing on low-autonomy, system-initiated virtual agents with informal conversational styles. Conduct user testing in controlled environments.
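The MVP configuration described in this phase could be captured as a small, typed settings object. This is a hypothetical sketch: the class and field names are our own, chosen to mirror the study's factors, and the defaults follow its findings (proactive but low-autonomy, non-robotic embodiment, informal speech).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IvaMvpConfig:
    """Hypothetical IVA settings; defaults reflect the study's preferences."""
    autonomy: str = "system-initiated"          # suggest, don't act autonomously
    embodiment: str = "abstract virtual agent"  # minimal, non-robotic
    style: str = "informal"
    require_user_confirmation: bool = True      # keep the driver in control

config = IvaMvpConfig()
print(config.autonomy, config.require_user_confirmation)
```

Making the object frozen keeps the tested MVP baseline immutable; later phases would introduce new configurations rather than mutate this one.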
Phase 3: Iterative Refinement & Expansion
Based on user feedback, iteratively refine IVA behavior and expand functionality. Gradually introduce explainable autonomous actions for non-critical tasks, continuously monitoring user perception and trust.
Phase 4: Full Deployment & Continuous Optimization
Deploy across the enterprise with ongoing monitoring and AI model optimization. Explore advanced human-centered designs for specific contexts, such as passenger-focused interactions in autonomous vehicles.
Ready to Transform Your Enterprise with AI?
Schedule a personalized consultation with our AI specialists to design and implement intelligent in-vehicle assistants that drive comfort, trust, and efficiency.