Amplifying transformative potential while designing augmented deliberative systems

Shu Yang Lin

Executive Summary

What principles can guide us in designing deliberative processes that incorporate AI to help scale deliberation? How should we navigate the risks and opportunities that come with integrating AI into deliberative processes? What kind of future do we envision for human-AI collective intelligence?

To be effective, augmented deliberation must preserve people's ability to genuinely engage with and be transformed by the deliberative process. AI should support but not replace the human part of deliberation: the collective reasoning and thinking together.

This article proposes the Goldilocks Framework for Augmented Group Intelligence, a new framework for evaluating deliberative processes. It argues that the optimal use of AI in deliberation lies in balancing two critical elements: participants’ agency in steering and explaining deliberative outcomes, and participants’ commitment to those outcomes. A key concept in the framework is opinion spread: the idea that opinions exist on a continuum from surface-level preferences to deeply held views developed through meaningful engagement with others’ perspectives. The framework suggests that deliberative outputs which garner higher commitment require correspondingly higher human agency. The article applies this framework to four real-world deliberative settings, analysing improvements each can make to increase participants’ agency and their ability to commit to outputs.

Finally, when deliberative participants are disconnected from the decision-makers who implement their recommendations, a loss can take place in the transition from deliberative outcome to policy. If augmented deliberation can increase the explainability of deliberative outcomes, this legitimacy gap can be narrowed.

From philosophical exploration to empirical experimentation, deliberative democracy serves as a mirror reflecting the ideals we aim to achieve in democratic governance. Centuries of thought experiments and field studies have gradually signposted a path, challenging us to design processes grounded in principles such as reasoned argumentation, preference transformation through dialogue, inclusive participation, equal distribution of power, and legitimacy in public decision-making. However, despite these abundant endeavors, the core ideals of deliberative democracy remain difficult to fully realise, both institutionally and socially.


With advances in technology, particularly AI and digital platforms, we find new tools offering interesting potential for closing the distance to some of these ideals. Technology has increasingly been deployed to expand diversity in participation (Cortico¹), enable large-scale deliberation (Frankly²; Stanford Deliberative Polling® Platform³), and enhance mass data summarization and interpretation (Polis⁴; Remesh⁵). However, not everything can or should be outsourced to technology without risking a departure from the defining commitments of deliberative democracy. For instance, delegating the generation of opinions to AI models risks undermining the authenticity and transformative nature of human deliberation, relocating the experimental endeavor from augmented deliberation that generates group intelligence to mere aggregated synthetic representation. The distinction matters because what one can agree to or stand by is not necessarily equivalent to what one genuinely believes or can articulate, and this is a critical line to maintain in deliberative practice. Therefore, while technological advancements can support deliberative systems and help us approach deliberative ideals, they must be incorporated with care to preserve the authenticity, agency, and transformative potential of human participants.

Similar concerns have been raised over the past decades of research into augmented deliberation. Mediated deliberation refers to deliberation facilitated through forms of mediated communication, such as media and digital platforms. Socio-technical, methodological and empirical initiatives have sought to improve deliberation in various ways, including increasing opportunities for participant interaction for knowledge acquisition and reflection (Bani, 2012⁶; Habermas, 1984⁷), enhancing the quality of opinion formation through interactive topic onboarding and preference elicitation (Fung, 2006⁸), supporting opinion aggregation (Gastil & Wright, 2019⁹), demonstrating scalability (Mansbridge et al., 2012¹⁰; Luskin et al., 2002¹¹), and helping find common ground (DeepMind’s “Habermas Machine”¹²). Despite these advances, early criticisms of mediated communication remain relevant.

Critics have warned that such systems may lead to passive participation, fragmented dialogues, and weakened reciprocal discussion. These concerns persist around recent AI-mediated deliberative work. When deliberative systems outsource the articulation of opinions and reduce human interaction to discrete input (agreement, disagreement or abstention) or selection from pre-generated preferences, the vision deviates from pursuing deliberative ideals toward mere aggregation of discrete preferences. The result is a strong, AI-powered decision-making capacity that risks removing one of the most important dimensions of deliberation: transformation through participation, which requires humans’ active participation as their experience evolves over time (Dewey, 1938¹³) and their active articulation and pursuit of consensus through discussion (Mansbridge, 1983¹⁴). If the reasons behind argumentation were generated by machines and participation were delegated to generative agents, we would risk building systems that simulate deliberation rather than enabling it.

While AI can support deliberation by enhancing access, moderation and summarization, the unique human capacity for belief formation, reflection and change cannot be outsourced. As advances in AI present powerful potential to transform the way we deliberate, it is important that human participants retain a place at the heart of the process, to actively exercise their civic muscle (Tang, 2025¹⁵) and experience the transformative benefits that define deliberation itself. These include the irreplaceable opportunities to empathise with others, to integrate others’ considerations and perspectives, and to refine one’s articulation as one’s opinions update. As we navigate the new AI-powered deliberative horizon, it is more important than ever to develop principles and frameworks for designing deliberative processes that integrate AI in ways that mediate and support (rather than displace) the transformative dimension of deliberation for human participants.

The Goldilocks of Augmented Group Intelligence

As an attempted framework, the Goldilocks of Augmented Group Intelligence (see Diagram 1) offers a conceptual guide for designing augmented deliberative processes and systems that retain human participants’ transformative potential and thereby enhance collective intelligence through augmentation, referred to here as augmented group intelligence. The framework illustrates how achieving this form of intelligence requires a careful balance between levels of human agency (defined as human participants’ capacity to understand the deliberative output and correct deviations in its implementation) and the degree of participants’ commitment to the outcomes of deliberation. As participants become more committed to executing and actualising their collective deliberative results, higher levels of human agency are required to understand and explain those results and to make adjustments and corrections when implementation goes wrong. In the following sections, I unpack the framework’s two key dimensions: human agency and commitment to deliberative outcomes.

Diagram 1: The Goldilocks of Augmented Group Intelligence

Human agency in augmented deliberative processes

When integrating technologies such as AI into deliberative processes, it is important that the resulting AI-augmented outputs are both explainable and steerable. Explainability means participants can collectively understand, interpret and articulate the rationale behind the outputs, even when they are AI-generated. Steerability means participants are able to intervene, correct, or override AI outputs, especially when they detect errors, bias or misalignment. It also encompasses the human capacity to guide, intervene and redirect when implementation deviates from intended goals.
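As an illustrative sketch (not drawn from any existing platform; every name below is hypothetical), the two properties can be read as interface requirements on AI-generated outputs: each output carries inspectable provenance (explainability) and exposes operations that let participants correct or redirect it (steerability).

```python
from dataclasses import dataclass, field

@dataclass
class AugmentedOutput:
    """A hypothetical AI-generated deliberative output that is explainable
    (carries inspectable provenance) and steerable (revisable by participants)."""
    text: str
    source_opinions: list[str]   # raw participant opinions the AI drew on
    model_rationale: str         # the model's stated reasoning
    revisions: list[str] = field(default_factory=list)

    def explain(self) -> str:
        """Explainability: surface the rationale and the raw opinions
        behind the output so participants can interrogate it."""
        sources = "\n".join(f"- {o}" for o in self.source_opinions)
        return f"{self.model_rationale}\nBased on:\n{sources}"

    def revise(self, participant_edit: str) -> None:
        """Steerability: participants can correct or override the output."""
        self.revisions.append(participant_edit)
        self.text = participant_edit

# Example: a participant inspects, then corrects, an AI-drafted summary.
summary = AugmentedOutput(
    text="The group broadly supports congestion pricing.",
    source_opinions=["Pricing should fund transit", "Exempt night workers"],
    model_rationale="Most statements favoured pricing with conditions.",
)
print(summary.explain())
summary.revise("The group supports congestion pricing only if night workers are exempt.")
```

On this reading, an output with no `source_opinions` and no `revise` path would score low on both properties, whatever its fluency.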

Currently, the degree of explainability and steerability is often inversely related to the extent to which AI is embedded in the decision space and influences human decision-making: the more AI we use to support decision-making, the less agency we seem to have. Tools such as Polis, which use AI to produce ‘group-informed decisions’ while missing opportunities for participants to understand, explain or interrogate the resulting group intelligence, can result in low human agency. However, it is worth recognising that this relationship may evolve as participants’ capacity to understand and guide AI improves. This growing sense of human agency is not without precedent. Throughout history, major technological transformations have prompted shifts in democratic practice: when technology arrives, democracy evolves.

As we navigate this transitional phase of technological advancement, it is important to integrate new technology with care. So long as human agency is retained, augmented group intelligence that iteratively exercises our civic muscle can bring about a new horizon for amplifying the human transformative potential in deliberation: to empathise, to care and to integrate others’ perspectives into one’s own decisions.

Diagram 2: Opinion spread (yellow) and ideal iterative deliberation (purple)

Opinion spread on a continuum of commitments

Opinions generated through deliberative processes vary in the level of commitment they garner and elicit from participants. When these opinions are mapped along a continuum of commitment, they range from low-commitment opinions (e.g. initial impressions, personal preferences, or unelaborated views) to high-commitment opinions (e.g. empathic insights, refined perspectives, and group-informed decisions). In this context, commitment refers to how deeply participants are invested in or attached to the opinions. This continuum is illustrated in Diagram 2, where increasing saturation of yellow indicates higher levels of participant commitment. 

An effective deliberative process should ideally facilitate the generation and development of high-commitment opinions, while simultaneously creating space for low-commitment opinions to inspire further engagement. This dynamic interplay is also illustrated in the diagram, where small and large purple shapes suggest iterations of engagement, each pushing toward a set of opinions that garner more and more commitment. It is important to note that, while higher-commitment opinions often emerge in later iterations of engagement, the commitment axis in this diagram is not intended to represent a temporal sequence, but rather a time-independent spectrum of opinions varying in the level of commitment they garner.

When considering the role of AI in deliberative processes, it is important to recognise that outputs which garner higher levels of commitment require a correspondingly higher level of human agency to secure legitimacy and accountability in implementation. This is because meaningful implementation depends on a strong understanding of the rationales behind the decisions. In common configurations of public consultation, deliberative outputs are often synthesised into reports intended to inform policymaking. However, the space between such outputs and institutional implementation presents a sharp drop in legitimacy. This is due to a common separation between the actuators who implement decisions and the participants in deliberation who contribute ideas and concerns. The inability to comprehend and track from one end to the other poses a risk of misaligned implementation.

One solution to this challenge could be finding ways of capturing and communicating the commitments embedded within deliberative outputs, such as reports that not only identify aligned or conflicting positions, but also render the commitments and rationales behind decisions so they are readily actionable. Another would be to ensure the participation of actuators in the deliberative process itself, such that they become proxies who strengthen continuity, backing deliberative outputs with rationales when it comes to implementation. Until such systems are established, we remain reliant on human agency to interpret and implement the augmented group intelligence, upholding the collective intent through implementation.
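To make the first solution concrete, here is a minimal sketch of what a “commitment-aware” report entry might look like; the record shape, field names, and threshold are assumptions for illustration, not a specification from any existing process.

```python
from dataclasses import dataclass

@dataclass
class DeliberativeOutcome:
    """One entry of a hypothetical commitment-aware report: it records not
    just the position, but the rationale and the level of commitment behind
    it, so actuators can implement it without losing the intent."""
    position: str      # the agreed (or contested) statement
    rationale: str     # the reasoning participants articulated
    commitment: float  # 0.0 (surface preference) to 1.0 (deeply held)
    contested: bool    # whether positions remained in conflict

report = [
    DeliberativeOutcome(
        position="Pilot participatory budgeting in two districts",
        rationale="Participants converged after weighing equity concerns",
        commitment=0.9,
        contested=False,
    ),
    DeliberativeOutcome(
        position="Expand the pilot city-wide within a year",
        rationale="Raised late; few participants engaged with trade-offs",
        commitment=0.3,
        contested=True,
    ),
]

# Actuators can prioritise what the group is genuinely committed to and
# route low-commitment or contested items back into deliberation.
for item in sorted(report, key=lambda r: r.commitment, reverse=True):
    ready = item.commitment >= 0.7 and not item.contested
    action = "implement" if ready else "return to deliberation"
    print(f"{item.position!r} -> {action}")
```

The point of the sketch is the routing: rendering commitment and rationale explicitly lets implementation distinguish what the group stands behind from what it merely mentioned.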

Diagnosing and designing augmented deliberative systems 

When designing augmented deliberative systems, it can be useful to consider where specific process settings fall within the Goldilocks of Augmented Group Intelligence diagram, and to find ways to raise human agency as the outputs of deliberative activities gradually approach high-commitment opinions. For practical application, Diagram 1 identifies four distinct existing deliberative settings, each positioned at a different corner of the human agency–commitment space. The following brief diagnoses unpack these settings, not as critiques but as suggestions of strategic next activities for approaching deliberative ideals by finding the Goldilocks of Augmented Group Intelligence.
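To make the mapping concrete before the diagnoses, the sketch below places the four settings in the agency–commitment space and flags the imbalance the framework warns against: commitment outrunning agency. The scores and the 0.2 tolerance are purely illustrative assumptions, not measurements.

```python
# Hypothetical scores on the two Goldilocks axes (0.0 = low, 1.0 = high).
settings = {
    "Sentiment Analysis":         {"agency": 0.1, "commitment": 0.2},
    "Voluntary Polling":          {"agency": 0.8, "commitment": 0.2},
    "Habermas Machine":           {"agency": 0.2, "commitment": 0.8},
    "Augmented Citizen Assembly": {"agency": 0.9, "commitment": 0.9},
}

def diagnose(agency: float, commitment: float) -> str:
    """The Goldilocks condition: commitment should not outrun agency.
    High-commitment outputs paired with low agency are the risk zone."""
    if commitment > agency + 0.2:
        return "risk: commitment outruns agency; raise explainability and steerability"
    if agency > commitment + 0.2:
        return "underused agency: deepen engagement to build commitment"
    return "balanced: within the Goldilocks zone"

for name, s in settings.items():
    print(f"{name}: {diagnose(s['agency'], s['commitment'])}")
```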


Low human agency, low commitment: Sentiment Analysis

Diagram 3: Conduct active deliberation after sentiment analysis

Sentiment analysis tools gather opinions passively, without engaging participants. This setting offers little opportunity for explanation, intervention, or deep commitment to the resulting interpretations. 

Improvement: After generating a sentiment analysis, feed the output into a more active form of deliberation where participants can share their understandings and find ways to land commitments.

High human agency, low commitment: Voluntary Polling

Diagram 4: Invite reflection about reasoning after voluntary polling

In this setting, individuals retain full control over their input, yielding high human agency. However, the resulting collective output from polls often garners low commitment, as polls typically capture surface-level preferences rather than well-reasoned or deliberated opinions.

Improvement: After running voluntary polling, either enhance the setting or find another that creates opportunities for reflection and dialogue, helping draw out the rationales behind participants' input.

Low human agency, high commitment: Habermas Machine

Diagram 5: Increase explainability and steerability of AI-generated output after running “Habermas Machine”

In the setting of the Habermas Machine, developed in Google DeepMind's research, the AI system generates consensus statements that participants are invited to approve or rank. While it may produce outputs that garner high individual commitment through mechanisms like voting and prioritization, the underlying reasoning may remain opaque to participants, especially the raw opinions of others that they have had no opportunity to review or understand. Participants also have limited capacity to interrogate, steer, or explain the AI-generated outputs, leading to low human agency despite the system's high yield.

Improvement: To address the above issues, I suggest enhancing human agency in two key ways when running the Habermas Machine. First, improve the explainability of the AI-generated output, for example by giving participants opportunities to deeply understand the views of others. Second, re-architect the system for better steerability, for example by enabling participants to refine or revise AI outputs before final consensus is reached.

High human agency, high commitment: Augmented Citizen Assembly

This is an ideal setting where deliberation is deeply participatory. Technologies such as AI assist in tasks that can be automated (such as transcription, elicitation, and claim and theme extraction), while the core decision-making remains with the human participants. Making the core group decisions requires participants to think deeply about others’ perspectives and arrive at solutions that make sense for all. Technology supports this work without replacing it in ways that would decrease participants’ agency. The setting exemplifies high explainability, steerability, and collective commitment. To further this model, two design suggestions can be made. First, efforts should be made to scale the size, reach and frequency of such assemblies. Second, the civic muscle should be exercised by moving between large and small assemblies along the Goldilocks zone.

Conclusion

This article reminds us of the importance of retaining the transformative potential of deliberation when designing augmented deliberative processes, while arguing that augmented deliberation might even amplify that transformative potential through the growing human agency that results from collectively exercising our civic muscle.

The second part of this article presents a framework, titled The Goldilocks of Augmented Group Intelligence, intended to guide the design and configuration of augmented deliberation processes and systems. 

I hope this framework provides material for reflection and sparks inspiration for process designers, civic technologists and democratic theorists worldwide as we navigate the risks and opportunities posed by emerging technologies. And when we ponder the question of how far we should go in using AI to scale deliberation, perhaps this article can offer a light whisper: just enough.

Acknowledgements 

This article grew out of multiple meaningful and thoughtful discussions with Colleen McKenzie, Martin King, and Audrey Tang. I am deeply grateful for working with them on various ongoing projects, all of which shaped the ideas and reflections in this write-up.

Notes

  1. Cortico: https://cortico.ai  ↩︎

  2. Frankly: https://frankly.org/  ↩︎

  3. Stanford Deliberative Polling® Platform: https://deliberation.stanford.edu/what-deliberative-pollingr  ↩︎

  4. Pol.is: https://pol.is/  ↩︎

  5. Remesh: https://www.remesh.ai/  ↩︎

  6. Bani, M. (2012). Crowdsourcing democracy: The case of Icelandic social constitutionalism. In Politics and Policy in the Information Age. Springer. Available at SSRN: https://ssrn.com/abstract=2128531  ↩︎

  7. Habermas, J. (1984). The Theory of Communicative Action: Reason and the Rationalization of Society. Beacon Press.  ↩︎

  8. Fung, A. (2006), Varieties of Participation in Complex Governance. Public Administration Review, 66: 66-75. https://doi.org/10.1111/j.1540-6210.2006.00667.x  ↩︎

  9. Gastil, J., & Wright, E. O. (2019). Legislature by Lot: Transformative Designs for Deliberative Governance. New York: Verso Books.  ↩︎

  10. Mansbridge, J., Bohman, J., Chambers, S., et al. (2012). A systemic approach to deliberative democracy. In J. Parkinson & J. Mansbridge (Eds.), Deliberative systems: Deliberative democracy at the large scale (pp. 1–26). Cambridge University Press.  ↩︎

  11. Luskin, R. C., Fishkin, J. S., & Jowell, R. (2002). Considered opinions: deliberative polling in Britain. British Journal of Political Science, 32(3), 455–487.  ↩︎

  12. Tessler, M. H., Bakker, M. A., Jarrett, D., et al. (2024). AI can help humans find common ground in democratic deliberation. Science, 386(6719), eadq2852. https://doi.org/10.1126/science.adq2852  ↩︎

  13. Dewey, J. (1938). Experience and Education. New York: Macmillan.  ↩︎

  14. Mansbridge, J. (1983). Beyond adversary democracy. University of Chicago Press.  ↩︎

  15. Tang, A. (2025, April 3). Interview with Polly Curtis [Transcript]. https://sayit.archive.tw/2025-04-03-interview-with-polly-curtis#s621671  ↩︎
