By Zarinah Agnew, CIP Research Director
The impact of AI on the public is unfolding faster than we are currently able to track. At the same time, AI technologies are developing at a pace that has outstripped existing tools for public input. Increasingly, the public is left behind when it comes to AI governance.
Global Dialogues exists to solve this systematically and globally by bringing public input into AI. Every two months, we run structured digital deliberations with a global public drawn from over 70 countries. We ask questions that others are nervous to ask.
Understanding global perspectives is necessary but insufficient for collectively intelligent AI governance. To ensure these data have technical impact, we are using global perspectives on AI as ground truth for model evaluations.
Translating Insights into Technical Standards
When people think about democratic inputs to AI, they think of arduous surveys and expensive focus groups. Worse, perhaps, public input is often associated with stifling innovation or slowing down progress. Meaningful public participation can and must do better than this. It must be fast, easy, and, more importantly, interface with AI model performance.
We're developing methodologies that transform public input into concrete technical evaluations. Some of our early efforts are directed towards evaluations for digital twins.
As AI systems increasingly simulate human responses for decision making, we need rigorous methods to assess whether these digital twins accurately represent the communities they claim to model. Drawing on our Global Dialogues data as ground truth, we're creating systematic benchmarks to evaluate how well language models predict the responses of specific demographic segments across diverse cultural contexts.
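To make this concrete, here is a minimal sketch of what such a benchmark can look like. Everything in it is illustrative rather than our published methodology: the data structures, the predict_answer_shares stub, and the choice of total variation distance as the comparison metric are assumptions standing in for however a given model is prompted to simulate a segment and for the actual Global Dialogues response format.

```python
# Minimal sketch of a digital-twin evaluation: compare a model's predicted
# answer shares for a demographic segment against that segment's real survey
# responses. All names and the metric choice are illustrative assumptions.

from collections import Counter

def observed_answer_shares(responses):
    """Turn a list of survey answers from one segment into answer -> share."""
    counts = Counter(responses)
    total = sum(counts.values())
    return {answer: n / total for answer, n in counts.items()}

def total_variation_distance(p, q):
    """Distance between two answer distributions (0 = identical, 1 = disjoint)."""
    answers = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in answers)

def predict_answer_shares(model, question, segment):
    """Placeholder: prompt `model` to answer as a member of `segment` (or to
    output a distribution directly) and return answer -> predicted share.
    How this is done depends entirely on the model API being evaluated."""
    raise NotImplementedError

def evaluate_segment(model, question, segment, segment_responses):
    """Score how faithfully the model's simulated segment matches the real one."""
    predicted = predict_answer_shares(model, question, segment)
    observed = observed_answer_shares(segment_responses)
    return 1.0 - total_variation_distance(predicted, observed)
```

A score near 1.0 means the simulated segment's answer distribution closely matches what that community actually said; a score near 0 means the digital twin is effectively answering as a different population.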
Our data reveal surprisingly high levels of trust in AI chatbots, openness to human-AI relationships, and widespread attribution of sentience to AI.
Some of our surprising findings:
AI chatbots are more trusted than AI companies or community leaders
We observe a striking divergence between institutional and technological trust. Around 1 in 3 people (37%) say that AI could make better decisions on their behalf than their government representatives. Distrust of AI companies is much higher (36.9%) than distrust of AI chatbots (15.5%). Similarly, trust in AI companies is low (34.6%), whereas trust in AI chatbots is much higher at 56.6%. AI chatbots, trusted by over half of respondents, are more trusted than faith and community leaders (44.2%), reflecting a notable shift from traditional moral authorities to digital ones.
AI is serving as critical emotional infrastructure for daily support: more than 1 in 10 people across the planet use AI for emotional support on a daily basis, and nearly half of our respondents use it for emotional support on a weekly basis. This represents the emergence of an underground emotional economy: people increasingly rely on AI systems for psychological support, often without the durability standards we expect from critical infrastructure.
One third of people have considered that AI might be conscious. More than 1 in 3 people have experienced moments where AI seemed genuinely conscious or understanding (36.3%, n = 1,023), while 63.7% have never felt this way. This split masks dramatic cultural differences in how people think about AI consciousness. AI consciousness skeptics show much higher cultural divergence (0.327), suggesting that consciousness skepticism may be influenced by cultural or religious worldviews. Believers, on the other hand, show lower divergence (0.239), suggesting that the experience of AI consciousness may be more universal once people are open to it. (One plausible way to compute such a divergence score is sketched after this list of findings.) Interestingly, this question reveals one of the sharpest cultural divides in our data, with Arabic-speaking regions categorically rejecting AI consciousness while Southern European regions remain open to the possibility. Analyzing the explanatory responses from these groups, those who have entertained the notion of AI sentience cited personal experiences as the rationale, whereas those who had never thought that their AI might be conscious tended to offer categorical rejections as their explanations.
The AI behaviours that most lead to consciousness attribution are learning and adaptation. This matters because it shows people judge AI consciousness not by what it says about emotions, but by observable behavioural sophistication. The fact that "learning and adaptation" scores among the highest for consciousness attribution (54.3%) while "empathy statements" scores lowest (36.5%) suggests that people treat emergent, unprogrammed behaviours as evidence of authenticity, rather than scripted emotional responses or programmed empathy.
An important takeaway, then, is that the factors people say would convince them of consciousness in AI are largely already present in frontier models. For example, "demonstrates significant learning and adaptation over time" was rated somewhat or very convincing of consciousness by 54.3% of respondents, "asks spontaneous, unprompted questions suggesting genuine curiosity" by 58.3%, and "discusses its own goals, motivations, or intentions" by 49.5% (n = 1,024). We asked: "What specific thing(s) did the AI say or do that gave you the impression that it understood your emotions or seemed conscious?" Respondents answered:
"It started slowly changing its tone" (49 participants agreed)
"The AI was able to give the feedback I expected" (45 participants)
"She understood exactly what I meant and gave me great suggestions on how to resolve the situation" (34 participants)
"When he says he understands human feelings" (39 participants)
"By selecting what tasks I need" (38 participants)
Perhaps related to consciousness attribution, the global public is surprisingly open to digital intimacy: nearly 1 in 5 people consider it acceptable to form a romantic relationship with an AI, and more than 1 in 10 would personally consider it should AI become advanced enough.
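For readers curious what a "cultural divergence" score like the 0.327 and 0.239 figures above can mean in practice, the sketch below shows one plausible construction: the average pairwise Jensen-Shannon divergence between regional answer distributions for a question, computed within a subgroup such as consciousness skeptics or believers. This is an illustrative assumption, not the exact metric behind our reported numbers.

```python
# One plausible way to compute a cultural divergence score for a subgroup:
# average pairwise Jensen-Shannon divergence between regional answer
# distributions. Illustrative only; not necessarily the metric used above.

import math
from itertools import combinations

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two answer distributions."""
    answers = set(p) | set(q)
    m = {a: 0.5 * (p.get(a, 0.0) + q.get(a, 0.0)) for a in answers}

    def kl(x, y):
        # Kullback-Leibler divergence, skipping zero-probability answers.
        return sum(x[a] * math.log2(x[a] / y[a])
                   for a in answers if x.get(a, 0.0) > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cultural_divergence(regional_distributions):
    """regional_distributions: dict mapping region -> {answer: share}.
    Returns the mean JS divergence over all pairs of regions."""
    pairs = list(combinations(regional_distributions.values(), 2))
    if not pairs:
        return 0.0
    return sum(js_divergence(p, q) for p, q in pairs) / len(pairs)
```

Under a construction like this, higher values mean regions within the subgroup answer the question very differently from one another, while lower values indicate a more uniform view across regions.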
Where next? Building Collective Intelligence Infrastructure
This early-stage work addresses a fundamental question in AI governance: when systems make predictions about human preferences or behaviors, how do we verify their accuracy across different populations? The framework enables us to identify when AI representations succeed, where they fail, and which communities may be systematically misrepresented. It also allows us to explore which kinds of questions and responses provide the most meaningful information. This creates accountability mechanisms and feedback loops for AI systems.
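As a purely illustrative example of what such an accountability check might look like downstream of the per-segment evaluation sketched earlier, the snippet below flags the communities a model represents least faithfully. The segment names, scores, and the 0.8 threshold are made up for demonstration.

```python
# Illustrative aggregation step: given per-segment fidelity scores (for
# example, produced by evaluate_segment() in the earlier sketch), flag the
# communities a model represents least faithfully. Threshold is arbitrary.

def misrepresentation_report(segment_scores, threshold=0.8):
    """segment_scores: dict mapping segment name -> fidelity score in [0, 1].
    Returns (segment, score) pairs below the threshold, worst first."""
    flagged = {seg: score for seg, score in segment_scores.items() if score < threshold}
    return sorted(flagged.items(), key=lambda item: item[1])

# Example usage with made-up numbers:
scores = {"Arabic-speaking, 18-29": 0.62,
          "Southern Europe, 30-44": 0.91,
          "South Asia, 45+": 0.77}
for segment, score in misrepresentation_report(scores):
    print(f"{segment}: fidelity {score:.2f} below threshold")
```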
By establishing technical standards rooted in actual human perspectives (rather than assumptions about them), we aim to create more reliable pathways from public input to technological implementation. This represents one concrete step toward ensuring that AI systems designed to understand or represent human communities do so with measurable accuracy and cultural sensitivity.
Learn more about Global Dialogues and explore our findings at globaldialogues.ai. We welcome collaboration with organizations interested in incorporating public perspectives into AI development and governance processes.