InclusiveAI: Equitable AI Governance via DAOs and Plural QV

$38.77 crowdfunded from 0 people

Implementing and testing Democratic AI Governance tools using DAOs, plural quadratic voting, and empirical validation with diverse user groups to enhance equitable AI decision-making and address power imbalances.

tl;dr

We are actively pursuing funding to implement and empirically validate AI governance tools built on DAO mechanisms and decision-making methods such as plural quadratic voting. Our approach involves rigorous testing with end users to assess the effectiveness of Democratic AI Governance tools and the practicality of applying DAO/decentralized mechanisms in the context of AI governance.

Description

We are introducing InclusiveAI, a platform initially tested with 235 users from underserved groups (the Global South, people with disabilities) and built to assess decentralized governance methods, particularly Decentralized Autonomous Organization (DAO) mechanisms coupled with various voting methods (e.g., quadratic and plural quadratic voting), voting power distributions (equal or differential), and coordination mechanisms. We have already achieved novel results in Phase 1 of this project with OpenAI as part of the Democratic Inputs to AI grant. Because Phase 1 was originally funded by OpenAI, much of the research infrastructure and methodology has already been developed, meaning that we can put the requested funding to immediate use producing high-quality, peer-reviewed governance research. We will now be working on the following research advances:

  1. Evaluation of governance mechanisms, particularly plural QV

  2. Addressing the lack of understanding of how prosocial behavior affects governance outcomes in general, not just in AI governance

  3. Running mechanisms in the wild to provide more realistic outcomes

  4. Improved sample representation with proof of personhood to avoid distortion of outcomes

Why is this important?

A concrete example: recent public outcry over the integration of GPT-powered AI in Be My Eyes resulted in the removal of the image-description feature when human bodies or faces are detected in images or videos. This decision was made by AI actors without involving blind users, who have long relied on this tool to manage the privacy of their visual content when sharing it on social media or with friends and family. This highlights the practical relevance of decentralized AI governance for addressing existing power imbalances in technology design.

More Details of the Project Outcome

  1. First, implement a new voting method called Plural Quadratic Voting within the context of AI governance. Plural Quadratic Voting has shown promise as a socio-technical tool for fostering agreement among diverse groups of people. Testing plural QV in the context of AI governance will allow us to answer the following questions:

-Does AI governance benefit when proposals with diverse bases of support are prioritized, and if so, how can we implement algorithms that effectively prioritize such proposals? If not, what other insights can we gain to improve AI governance?

-Output: An empirically validated set of insights into the use cases for plural QV and the strength of the improvements it brings to governance processes and outcomes (a sketch of the underlying aggregation rule appears after this list).

  2. Prosocial behavior (that is, the tendency of individuals to make decisions that benefit their social networks as well as themselves) has been shown to alter the outcomes of decision-making mechanisms. If ignored, prosocial effects can negatively impact outcomes by distorting a mechanism's view of the world; however, plural QV is uniquely suited to account for prosocial effects.

-How do prosocial utilities and behaviors manifest in the context of AI governance, and how can AI governance tools like plural QV use prosocial trends to improve the quality of governance outcomes?

-Output: Experimenting with (and getting feedback from) different groups of underserved populations with preexisting social connections will allow us to rigorously explore how prosocial behavior manifests in practice and how AI governance tools can best support decision-making in this context. Prosocial behavior is a relatively under-studied area of economics, but understanding it is vital to calibrating effective governance mechanisms. Plural QF already shows promise as a mechanism well-suited for governance in the presence of prosocial behavior.

-Output: Quality of governance: quantification of users' satisfaction with this new voting method in the AI governance context, using measures from political science.

  3. Run the InclusiveAI Iteration 2 experiment in the wild, taking a grassroots approach to facilitate crowdsourced reporting of unexpected AI model behaviors from real-world user experiences, and make proposed solutions publicly available for broader deliberation via a community-driven red-teaming approach.

-Output: A taxonomy for AI governance, built from real-life reports of the AI model behavior issues end users face and the solutions they look for.

  4. To avoid issues related to representation, InclusiveAI will integrate primitives such as proof of personhood based on zero-knowledge proofs (a sketch of how such a gate could work appears after this list).

-Output: An assessment of its applicability in real-world scenarios for verifying legitimate participation, particularly with respect to the decision-making process and the voting power involved.
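To make the voting mechanisms in item 1 concrete, the sketch below contrasts standard quadratic voting/funding with a basic cluster-match aggregation of the kind developed in the plural QF paper linked under Additional Information. This is a minimal illustration, not the project's implementation: the function names are ours, the social group structure is assumed to be known, and the deployed mechanism may use a more refined plural variant.

```python
from math import sqrt
from collections import defaultdict

def qv_cost(votes: int) -> int:
    """Standard quadratic voting: casting v votes on a proposal costs v^2 credits."""
    return votes ** 2

def quadratic_funding(contributions: list[float]) -> float:
    """Vanilla QF: a proposal's total is (sum of square roots of contributions)^2,
    so many small independent contributions beat one large one."""
    return sum(sqrt(c) for c in contributions) ** 2

def cluster_match(contributions: dict[str, float], group_of: dict[str, str]) -> float:
    """Basic cluster match, one plural QF variant: contributions are pooled within
    each social group before the square root is taken, so support concentrated in a
    single tightly connected group earns less matching than the same total spread
    across independent groups."""
    per_group: dict[str, float] = defaultdict(float)
    for voter, amount in contributions.items():
        per_group[group_of[voter]] += amount
    return sum(sqrt(total) for total in per_group.values()) ** 2

# Example: four contributions of 1 unit each. Vanilla QF gives 16 in both cases,
# but cluster match rewards the proposal whose support crosses group lines.
same_group = cluster_match({"a": 1, "b": 1, "c": 1, "d": 1},
                           {"a": "g1", "b": "g1", "c": "g1", "d": "g1"})   # 4.0
cross_group = cluster_match({"a": 1, "b": 1, "c": 1, "d": 1},
                            {"a": "g1", "b": "g2", "c": "g3", "d": "g4"})  # 16.0
```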
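Item 4 mentions proof of personhood based on zero-knowledge proofs but does not specify a protocol. Under that assumption, the sketch below shows how such a primitive could gate participation: each voter submits a proof plus a nullifier (a one-way identifier derived inside the proof), and duplicate nullifiers are rejected so one person cannot accumulate extra voting power through multiple accounts. The verifier interface and names here are hypothetical placeholders, not any specific library's API.

```python
from typing import Callable

class PersonhoodGate:
    """Hypothetical gate that admits one voting account per verified person."""

    def __init__(self, verify_proof: Callable[[bytes, str], bool]):
        # verify_proof stands in for an external zero-knowledge verifier;
        # it is an assumed interface, not a real library call.
        self.verify_proof = verify_proof
        self.seen_nullifiers: set[str] = set()

    def admit_voter(self, proof: bytes, nullifier: str) -> bool:
        if not self.verify_proof(proof, nullifier):
            return False                      # invalid or forged proof
        if nullifier in self.seen_nullifiers:
            return False                      # same person, second account
        self.seen_nullifiers.add(nullifier)   # record so the proof cannot be reused
        return True
```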

Overall Output

  1. User satisfaction metrics are key to assessing the effectiveness of AI governance tools because, ultimately, users must be able to understand and interact with such tools for them to make any meaningful difference.

  2. Critical insights from underserved communities can significantly expand the domain of AI governance and push it in a more equitable direction; these considerations are often overlooked in both academia and industry.

  3. A public repository of user-reported AI issues and potential solutions can serve as a valuable resource for academic researchers looking into various challenges and for industry professionals seeking innovative solutions.

Broader Impact

  1. In the context of both AI and DAOs, discussions and public discourse often overlook underserved groups, particularly in the Global South, where our team has several years of research outreach and has built the trust relationships needed to embed and run these experiments. In addition, certain groups, such as individuals with disabilities, are often early adopters of AI technology. This project can make a meaningful impact on these communities and build a bridge between AI governance and the DAO community.

  2. The open-source nature of this project can foster a collaborative environment that encourages innovation and an ongoing, human-centered research agenda.

  3. Our team uniquely combines expertise in DAO research, HCI, economics, and political science, enabling us to approach this project with the broader landscape in view.

Additional Information

  1. Read the technical report from InclusiveAI Phase 1 here: https://socialcomputing.web.illinois.edu/images/Report-InclusiveAI.pdf
  2. Plural QF: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4311507

Past Funding Sources

Previous funding for Phase 1 from OpenAI.

Team Size

4

Team Members

Tanusree Sharma: Ph.D. Candidate at the University of Illinois at Urbana-Champaign and an expert in DAO governance, responsible AI, and usable security. She led the development of InclusiveAI V1, funded by OpenAI's Democratic Inputs to AI program, which evaluated the applicability of decentralized governance in a centralized AI context. Her work involves designing and evaluating interventions to address ethical concerns in data-driven and AI systems using a human-in-the-loop approach.

Joel Miller: Joel is a PhD student at the University of Illinois Chicago and an expert in plural quadratic voting and funding; he has done research on the theory of plural QV with Glen Weyl at Microsoft Research and has architected the practical implementation of plural QF for use at Gitcoin. Joel's work also touches on mechanism design, algorithmic fairness, and science and technology studies, with a focus on understanding the material impacts of technology.

Chris Kanich: Chris Kanich's research interests include algorithmic fairness, socio-technical cybersecurity, and web security. He also serves as the Director of Undergraduate Studies for the UIC CS Department.

Yang Wang: Yang Wang's interests focus on privacy, security, and public policy issues, especially those regarding privacy. His current projects include teaching high school students about cybersecurity and AI ethics.
