
Enterprise AI Analysis

Balancing Automation and Discretion: How Decision Stakes and Human-AI Collaboration Affect Citizen Perceptions in Public Administration

The growing use of AI in public administration improves efficiency, yet its use in discretionary decisions raises concerns about fairness and legitimacy. This mixed-method study explores how decision stakes and Human-AI collaboration configurations affect citizens' perceptions of fairness and adoption. While quantitative analysis showed no significant effects, qualitative interviews revealed that citizens value meaningful human involvement, interactive dialogue, and appeal mechanisms, highlighting a tension between efficiency and human control.

Authors: Saja Aljuneidi, Wilko Heuten, Zhamilya Bilyalova, Maria K Wolters, Susanne Boll

Bridging the Gap: Human-AI Collaboration for Trust in Public Services

This research reveals critical insights for deploying AI in public administration. Despite expectations, quantitative metrics showed no significant difference in fairness or adoption across varying decision stakes and AI-human collaboration models. However, deep qualitative analysis uncovered that citizens prioritize meaningful human interaction, interactive dialogue, and the right to appeal, especially in high-stakes scenarios. The study highlights that perceived human oversight is often superficial without tangible engagement, emphasizing the need for AI systems that actively support human judgment and democratic values rather than merely automating processes.

43 Participants Surveyed
5 Qualitative Themes Identified
3 AI Configurations Tested
2 Decision Stakes Explored

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Governments globally are adopting AI for efficiency, but its use in discretionary public services raises significant concerns regarding fairness, accountability, and legitimacy. Unlike commercial contexts, citizens often cannot opt out of public services, necessitating a careful balance between efficiency and democratic values. This research investigates how citizens perceive AI-driven decisions in public administration, considering the impact of decision stakes and human-AI collaboration configurations.

Previous studies have separately examined decision stakes (e.g., how outcomes affect lives) and decision-maker configurations (e.g., AI alone vs. human-AI hybrid). However, their combined effect, particularly how they interact to shape perceptions of fairness and adoption, remains underexplored. This paper addresses this gap, focusing on two contextual factors: decision stakes and decision-maker configuration, including AI alone, AI with human supervision, and human with AI advice.

To investigate citizens' perceptions, a mixed-method Wizard-of-Oz study (n=43) was conducted using a simulated Intelligent Self-Service Kiosk (ISSK). Participants completed two tasks with varying decision stakes: a low-stakes ID card renewal (regulatory offense, small fine) and a high-stakes social housing application (requesting additional living space, significant personal impact).

For each task, participants experienced one of three decision-making configurations:

  • AI Alone: The ISSK independently made and communicated the decision.
  • Supervisory Hybrid: AI made the decision, which was then reviewed and confirmed (or rejected) by a civil servant.
  • Advisory Hybrid: AI recommended a decision, while a civil servant reviewed the case and made the final decision.

After each interaction, participants rated fairness (procedural, informational, distributive) and adoption intent, followed by a qualitative interview to gather deeper insights into their experiences and expectations.

Study Procedure Flow

Demographic Survey & Video Intro
Task 1 (ID Renewal or Social Housing)
Dialogue & AI Decision
Quantitative Feedback (Task 1)
Task 2 (Remaining Task)
Dialogue & AI Decision
Quantitative Feedback (Task 2)
Semi-structured Interview
Debriefing & Compensation

Quantitative Analysis: Surprisingly, neither decision stakes nor decision-making configuration had a significant effect on perceived fairness (procedural, informational, distributive) or inclination to adopt. This divergence from prior expectations prompted further qualitative investigation, which identified a key reason: participants often misperceived the actual AI configuration, frequently assuming the AI had primary control even in hybrid conditions. This suggests that the *perceived* interaction, not just the stated configuration, drives the citizen experience.

Qualitative Analysis: In contrast to the quantitative results, qualitative interviews revealed a rich and nuanced picture, yielding five overarching themes that highlight citizens' expectations and concerns:

  1. Interactive Dialogue and Explanation: Citizens felt the ISSK's one-sided dialogue was rigid and inflexible, desiring opportunities to clarify, negotiate, and receive more in-depth, comparative, and counterfactual explanations.
  2. Value of Human Touch with Rising Stakes: For low-stakes tasks, AI alone was accepted for efficiency. However, for high-stakes, emotionally charged decisions, human involvement was demanded for empathy, contextual sensitivity, ethical judgment, and negotiation.
  3. Human Involvement: Symbolic vs. Functional: Oversight was often perceived as superficial and symbolic, lacking visible and tangible human engagement. Citizens suggested measures like showing the civil servant's name/picture or enabling video calls to make human presence meaningful.
  4. Appeals: Where AI Ends and Humans Step In: The right to appeal was crucial, even if unused, especially for high-stakes decisions, seen as a safeguard for empathy and reconsideration. A 'pre-appeal' conversational stage with AI was also desired for clarification.
  5. Balancing Efficiency with Human Needs: Citizens acknowledged the trade-off between AI's efficiency and the need for human control, noting concerns about fairness and the risk of manipulation when human oversight is minimal.
How perceptions shifted between the two decision stakes, aspect by aspect:

  • Citizen Preference
    Low-Stakes (ID Renewal): Readily accepted AI alone; valued efficiency and speed.
    High-Stakes (Social Housing): Demanded human involvement for empathy and context; opposed AI discretion.
  • Perceived Oversight
    Low-Stakes: Human oversight seen as unnecessary or counterproductive.
    High-Stakes: Human oversight crucial, but often perceived as symbolic or superficial via the ISSK.
  • Dialogue Needs
    Low-Stakes: Less demand for complex interactive dialogue.
    High-Stakes: Strong need for interactive dialogue (before and after the decision) for clarification and negotiation.
  • Appeal Importance
    Low-Stakes: Little need for a formal appeal if the decision was fair.
    High-Stakes: Appeal to a human caseworker seen as an essential safeguard for reconsideration.

Case Study: Max Mustermann Scenario

The Human Element in Discretionary Decisions

The study used a fictional applicant, Max Mustermann, facing two administrative tasks: renewing an expired ID card (low-stakes, potential fine) and applying for additional social housing space (high-stakes, discretionary decision for his son's visits). Max met standard housing criteria but the additional space required civil servant discretion.

This scenario highlighted the tension between strict rules and individual circumstances. The qualitative findings showed that participants identified much more strongly with Max's situation in the high-stakes social housing application. They felt the AI lacked the capacity to understand the nuances of his personal situation, such as his son's need for a separate room during visits, and strongly desired human empathy and judgment in such discretionary decisions.

For the ID card renewal, however, the majority were less concerned with human involvement, prioritizing the efficiency of the AI system for the straightforward, rule-based task.

The findings underscore that effective AI deployment in public administration necessitates a citizen-centered approach that prioritizes democratic values alongside efficiency. The non-significant quantitative results, set against the rich qualitative data, highlight the limits of standard metrics in capturing the nuanced human experience with AI.

Dialogue as a Cornerstone of Perceived Fairness

1. Foster Interactive Dialogue: Public-sector AI systems must evolve beyond rigid 'submit and wait' workflows. Implement features for iterative clarification, user input confirmation, expansion on personal circumstances, and post-decision dialogue. This makes human-AI interaction more transparent, engaging, and builds trust by ensuring citizens feel heard and understood.
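The iterative-clarification idea can be sketched as a small dialogue loop: echo input back, let the citizen add circumstances, and only proceed once they confirm. The loop below is a toy sketch of that pattern under assumed turn keywords ('add', 'confirm'), not an actual ISSK interface.

```python
def clarification_loop(turns: list[str]) -> dict:
    """Run a toy 'clarify before deciding' dialogue over scripted citizen turns.

    'add' opens a free-text slot for extra personal circumstances;
    'confirm' closes the dialogue and allows the decision to proceed.
    """
    state = {"circumstances": [], "confirmed": False}
    awaiting_details = False
    for turn in turns:
        t = turn.strip().lower()
        if awaiting_details:
            # The previous turn asked for details; record them verbatim.
            state["circumstances"].append(turn.strip())
            awaiting_details = False
        elif t == "add":
            awaiting_details = True
        elif t == "confirm":
            state["confirmed"] = True  # only now may the decision proceed
            break
    return state
```

The design point is that the decision step is gated on explicit confirmation, so the citizen always has a chance to expand on their situation before the case is assessed.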

2. Ensure Tangible Human Involvement: For high-stakes or emotionally significant decisions, human oversight must be visible and meaningful, not merely symbolic. Design systems that support active collaboration and empathetic engagement between AI and civil servants. This could include explicit handovers, optional video calls, or even avatars signaling human presence, reinforcing trust in human expertise.

3. Human-Led Appeals: The right to contest AI-based decisions is a crucial safeguard. Appeals processes should remain human-led to ensure legitimacy and empathy. AI can support a 'pre-appeal' stage, offering explanations and clarifications, but the final appeal mechanism should involve human caseworkers who can exercise discretion and provide personalized attention.
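The two-stage contestation flow described above (an AI 'pre-appeal' clarification stage followed by a human-led formal appeal) can be sketched as follows; the data model and stage functions are illustrative assumptions, not a prescribed process.

```python
from dataclasses import dataclass, field


@dataclass
class AppealCase:
    decision: str
    log: list[str] = field(default_factory=list)
    resolved: bool = False


def pre_appeal(case: AppealCase, satisfied_after_explanation: bool) -> AppealCase:
    """Optional AI clarification stage before any formal appeal."""
    case.log.append(f"AI explained why the decision was '{case.decision}'.")
    if satisfied_after_explanation:
        case.resolved = True  # citizen accepts after clarification
    return case


def formal_appeal(case: AppealCase) -> AppealCase:
    """The formal appeal itself is always handled by a human caseworker."""
    if not case.resolved:
        case.log.append("Case handed to a human caseworker for discretionary review.")
        case.resolved = True
    return case
```

Note that the pre-appeal stage can end the contestation early when an explanation suffices, but it never replaces the human caseworker as the final authority.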

4. Balance Efficiency with Democratic Values: Resist the tendency to sacrifice discretion for pure efficiency. AI systems should augment civil servants' judgment, enhancing empathy, contestability, and citizens' rights. Implement AI incrementally, starting with low-stakes tasks, to build citizen familiarity and confidence before extending to more complex, discretionary domains.

Calculate Your Potential ROI with Our AI Solutions

Estimate the efficiency gains and cost savings your enterprise could achieve by strategically implementing human-centered AI.

Estimated Annual Savings
Annual Hours Reclaimed
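A back-of-envelope version of such an estimate might look like the following. The formula, parameter names, and the 50% automation share are assumptions for illustration; they are not the calculator's actual internals.

```python
def estimate_roi(cases_per_year: int, minutes_saved_per_case: float,
                 hourly_cost_eur: float, automation_share: float = 0.5) -> dict:
    """Rough ROI estimate: hours reclaimed and annual savings from
    partially automating a caseload (illustrative formula only)."""
    hours_reclaimed = cases_per_year * automation_share * minutes_saved_per_case / 60
    return {
        "annual_hours_reclaimed": round(hours_reclaimed),
        "estimated_annual_savings_eur": round(hours_reclaimed * hourly_cost_eur),
    }
```

For example, automating half of 10,000 cases a year at 12 minutes saved per case and a fully loaded staff cost of 40 EUR/hour reclaims about 1,000 hours annually.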

Your Path to Trustworthy AI in Public Services

Implementing AI in public administration requires a strategic, citizen-centric approach. Our phased roadmap ensures fairness, transparency, and adoption.

Phase 1: Needs Assessment & Ethical Framing

Comprehensive analysis of existing administrative processes, identification of discretionary touchpoints, and alignment with ethical AI principles and democratic values. Define clear objectives for AI integration that balance efficiency with fairness and accountability.

Phase 2: Pilot Design & Citizen-Centered Prototyping

Develop pilot AI systems for low-stakes tasks, focusing on interactive dialogue, clear explanations, and user control. Engage citizens in co-creation processes to ensure interfaces are intuitive and address their needs and concerns, building familiarity and trust.

Phase 3: Hybrid Implementation & Training

Gradually introduce hybrid human-AI collaboration models, especially for medium-to-high stakes decisions. Train civil servants on new AI tools, emphasizing their augmented role in exercising discretion and providing empathetic judgment. Establish clear human handover points and feedback loops.
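One way to make the handover points explicit is a simple escalation policy: discretionary or high-stakes cases always reach a civil servant, while low-stakes cases escalate only when the AI is unsure. The stakes labels and confidence threshold below are illustrative assumptions, not values from the study.

```python
def needs_human_handover(stakes: str, ai_confidence: float,
                         threshold: float = 0.9) -> bool:
    """Decide whether a case should be handed to a civil servant.

    Illustrative policy: high-stakes or discretionary cases always go to
    a human; low-stakes cases escalate when AI confidence is below the
    threshold (0.9 here is an arbitrary example value).
    """
    if stakes in {"high", "discretionary"}:
        return True
    return ai_confidence < threshold
```

A policy like this keeps the incremental rollout honest: efficiency gains come from routine low-stakes cases, while discretionary judgment stays with people by construction.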

Phase 4: Monitoring, Iteration & Appeal Integration

Continuously monitor AI system performance and citizen perceptions. Establish robust, human-led appeal mechanisms and 'pre-appeal' dialogue features. Iterate on designs based on feedback, adapting to nuanced citizen expectations and evolving administrative needs.

Ready to Transform Your Public Services with Human-Centered AI?

Our experts are ready to help you navigate the complexities of AI adoption, ensuring your solutions are efficient, fair, and build citizen trust. Schedule a tailored strategy session today.

Ready to Get Started?

Book Your Free Consultation.

Let's Discuss Your AI Strategy!


