Frontier AI agents with collective input
What does the world want from future AI agents? Announcing our partnership with the Industry-Wide Deliberative Forum, and our latest Global Dialogues round.
When autonomous systems begin to act on behalf of people, hundreds of governance questions follow. As we debate the relative merits of granting autonomy to agents, deployment is happening right now, and decisions are de facto being made. What leeway and limits should agents have when they act on behalf of users? How can we trade off between competing values: convenience, privacy, speed, efficacy, oversight?
At CIP, we work to bring diverse, plural inputs to bear on these questions, quickly.
That’s why we’re excited to share that we’ve joined Stanford’s Deliberative Democracy Lab, as well as Meta, Microsoft, Cohere, DoorDash, Oracle, and PayPal, in convening people around the world to better understand public perspectives on how autonomous AI agents should evolve and be governed.
Our latest Global Dialogues round was developed in close partnership with the Industry-Wide Deliberative Forum to explore how much trust people are willing to place in agents, and what kinds of tradeoffs they are willing to make. The line between ‘acting for you’ and ‘acting as you’ will blur, so we need to better understand the boundaries of acceptable delegation.
In our work with Global Dialogues and Weval, we emphasize two things when it comes to agents. First, trust. For instance, Asia’s comparatively high trust in AI companies (31.9%) coincides with lower trust in government, while Europe shows the inverse pattern. We’ve found that the level of oversight people ask for often correlates with whether their society leans toward institutional or technological trust.
Second, tradeoffs. We probe tradeoffs in our work to surface preference hierarchies. Some tradeoffs are unnecessary; others are inherent. For example, scalable oversight methods can increase accountability with less time spent, but the level of oversight still trades off against effort. With the Forum, we’re bringing our understanding of these preference dimensions to bear on the design and deployment of agents.
The Forum goes live this fall, with findings released publicly and discussed in open webinars. We invite our community to follow along, share the call for participants when it launches, and keep pushing for AI that enhances our collective capacity to reason and decide together.