Have you ever been in a situation where your DAO or online community struggled to make decisions because coordinating across time zones was a nightmare? Imagine trying to finalize an important proposal, but every time a meeting is scheduled, half the team can’t attend, and the discussions get stuck in endless back-and-forth messages. Ultimately, crucial voices are missed, and reaching alignment feels impossible.
Harmonica was created for precisely this challenge. It runs async workshops that use GenAI to gather input from participants and synthesize their responses, helping communities move forward faster and more efficiently. No more missed voices or delays: just smooth, transparent coordination.
About Harmonica
Harmonica is open-source software (delivered as SaaS) designed to enhance sensemaking within teams and communities by facilitating async workshops. Using GenAI and conversational UX, Harmonica engages participants in 1:1 dialogues to gather and synthesize responses, speeding up the co-creation of artifacts like proposals, OKRs, RFPs, roadmaps, and other strategy documents without the need for sync workshops, surveys, or forum discussions. This reduces conflict and improves engagement by allowing participants to express candid opinions, identify tensions, and reach alignment more efficiently.
Who's the targeted user group and what's the need/problem being addressed?
The project primarily targets OSS communities and DAOs 🏛️, where efficient sensemaking is crucial for self-governance. By providing async facilitation tools, Harmonica enables teams and communities to collaborate more effectively, ensuring that all participants can contribute meaningfully to discussions, proposals, and strategic planning.
Ensuring that all stakeholders can participate effectively is a major challenge in online collaboration and governance. Making sense of reality, and identifying tensions and outliers, is a critical prerequisite for making good decisions with high legitimacy.
Our research for the RnDAO CoLab Fellowship funded by Arbitrum (conducted between January and March 2024) showed that core teams, delegates, and governance leads all desperately seek alignment but often struggle to produce the strategic artifacts that articulate their intentions, such as proposals, RFPs, OKRs, and roadmaps. These artifacts are essential because they serve as scaffolding for community management and governance. Today they are usually created by small groups of people with high context, often based on surveys or workshops. This makes the sensemaking (1) heavily dependent on facilitation skills and (2) extremely time-consuming: engaging members and processing everyone's inputs before and after submitting a proposal typically takes 1-2 months of hard work.
What will we use the funds for?
The funding from this round will be used to enhance our facilitation experience with AI agents, which would allow us to get much closer to our original vision:
- Reasoning Capabilities: Incorporates advanced reasoning algorithms to analyze inputs, derive insights, and propose action items based on the context of discussions.
- Sentiment Analysis: Leverages AI to identify points of agreement and disagreement among team members, enabling deeper sensemaking.
- Real-Time Listening: Detects when users have finished speaking or need additional information, prompting non-linear follow-ups or clarifications if necessary.
Currently, Harmonica's UX is very basic and similar to ChatGPT: each participant chats with it individually, and responses from each dialogue are added to the final report/summary. By adding AI agents, we want to make it much more flexible and non-linear, e.g. updating the facilitation prompt on-the-fly and asking participants to comment on ideas submitted by others (within one session and across multiple sessions), as sketched below.
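To illustrate the intended flow (not Harmonica's actual codebase), here is a minimal Python sketch in which a facilitation agent keeps per-participant dialogues, can revise the facilitation prompt mid-session, and can ask other participants to comment on a submitted idea. All names (`Session`, `Dialogue`, `cross_pollinate`) are hypothetical.

```python
# Hypothetical sketch of the non-linear facilitation flow described above;
# these names are illustrative, not Harmonica's actual API.
from dataclasses import dataclass, field

@dataclass
class Dialogue:
    participant: str
    messages: list = field(default_factory=list)    # 1:1 chat history with the facilitator

@dataclass
class Session:
    facilitation_prompt: str                        # can be revised on the fly by the agent
    dialogues: dict = field(default_factory=dict)   # participant name -> Dialogue

def update_prompt(session: Session, new_prompt: str) -> None:
    """Let the facilitation agent revise the guiding prompt mid-session."""
    session.facilitation_prompt = new_prompt

def cross_pollinate(session: Session, idea: str, author: str) -> None:
    """Ask every other participant to comment on an idea submitted by one participant."""
    for name, dialogue in session.dialogues.items():
        if name != author:
            dialogue.messages.append({
                "role": "facilitator",
                "content": f"Another participant suggested: '{idea}'. What is your view?",
            })
```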
Meet the Team!
- Artem Zhiganov (Product Lead): Artem is a governance expert with a background in product marketing. He was previously the governance lead at Protein Community (SC02 cohort), researched DAOs and co-ops for his MSc thesis, consulted for tech companies, and led product marketing for a music streaming app. LinkedIn
- Anton Mikhailov (Backend + LLM Ops): With 15 years of experience, Anton has a strong background in full-stack development and team leadership. He is the founder and CTO of Codee Studio and holds an MSc in Computer Science. He recently developed a passion for enhancing LLM performance using RAG and multi-agent systems. LinkedIn
- Jonas Kuhn (Frontend): Developer with a strong interest in deliberation, governance, and voting, because these are what can enable us to align our incentives with our values, and that is what our civilization needs to thrive and not destroy itself.
- Andrea Gallagher (Advisor): Andrea brings 25 years of expertise in information architecture, interaction design, and UX strategy, making her a vital resource for guiding Harmonica's design and user experience strategies. LinkedIn
Milestones: Harmonica 2.0 with AI agents
Month 1: Develop Core Reasoning Capabilities
Reasoning Algorithms (Week 1-2):
- Design and implement basic reasoning algorithms to analyze inputs, derive insights, and propose action items.
- Integrate LlamaIndex for contextual data retrieval.
Context Retrieval with RAG (Week 3-4):
- Implement RAG to fetch real-time, relevant data during discussions, enriching the AI's contextual understanding.
- Build out the ability for the AI to suggest action items based on this retrieved information and the reasoning algorithms (a minimal sketch follows below).
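As a rough illustration of this Month 1 deliverable, the following sketch uses LlamaIndex to index prior session notes and query them for action items. The directory name, retrieval settings, and prompt are assumptions for illustration, not the final design.

```python
# Illustrative only: index prior session notes with LlamaIndex and ask the LLM
# to propose action items grounded in the retrieved context.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Assumed corpus: exported session transcripts and community documents.
documents = SimpleDirectoryReader("./session_notes").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query(
    "Given the retrieved discussion context, list concrete action items "
    "the group could commit to, with a one-line rationale for each."
)
print(response)
```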
Month 2: Implement Sentiment Analysis
Sentiment Detection (Week 5-6):
- Integrate sentiment analysis to classify team inputs (positive, neutral, negative) using conversational data.
- Use LlamaIndex to pull in relevant previous discussions, documents, or notes to enrich sentiment understanding.
Consensus & Conflict Detection (Week 7-8):
- Develop logic for identifying moments of agreement and disagreement, helping teams navigate points of friction or alignment.
- Combine real-time sentiment insights with RAG to provide deeper contextual understanding from integrated platforms (see the sketch below).
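One possible shape for the sentiment and consensus/conflict detection step is sketched below: each participant input is classified by an LLM (here via LlamaIndex's OpenAI wrapper; the model name and prompt wording are assumptions), and participants are grouped by stance so points of friction or alignment become visible.

```python
# Illustrative sketch: classify each participant input and group participants by
# stance so agreement and disagreement become visible. Model and prompts are assumptions.
from collections import defaultdict
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")  # assumed model; any chat-capable LLM would work

def classify_sentiment(statement: str, topic: str) -> str:
    """Return 'positive', 'neutral', or 'negative' toward the topic."""
    prompt = (
        f"Topic: {topic}\nStatement: {statement}\n"
        "Answer with exactly one word: positive, neutral, or negative."
    )
    return llm.complete(prompt).text.strip().lower()

def detect_consensus(inputs: dict[str, str], topic: str) -> dict[str, list[str]]:
    """Map each sentiment label to the participants who expressed it."""
    groups: dict[str, list[str]] = defaultdict(list)
    for participant, statement in inputs.items():
        groups[classify_sentiment(statement, topic)].append(participant)
    return dict(groups)
```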
Month 3: Build Real-Time Listening & Testing
Real-Time Listening & Follow-Ups (Week 9-10):
- Implement algorithms for detecting when users have finished speaking or need additional information.
- Use RAG to provide dynamic, non-linear follow-up questions or clarifications based on gaps in the conversation and real-time data retrieval (see the sketch below).
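The real-time listening step could look something like the sketch below: the agent checks whether a reply fully addresses the question and, if not, generates a follow-up grounded in retrieved context. The prompts, model, and function names are illustrative assumptions, not the final implementation.

```python
# Illustrative sketch of the follow-up step: check whether an answer leaves a gap
# and, if so, generate a clarifying question grounded in retrieved context.
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")  # assumed model

def needs_follow_up(question: str, answer: str) -> bool:
    """Ask the LLM whether the answer fully addresses the question."""
    verdict = llm.complete(
        f"Question: {question}\nAnswer: {answer}\n"
        "Does the answer fully address the question? Reply 'yes' or 'no'."
    ).text.strip().lower()
    return verdict.startswith("no")

def follow_up_question(question: str, answer: str, retrieved_context: str) -> str:
    """Generate one non-linear follow-up question using context retrieved via RAG."""
    return llm.complete(
        f"Original question: {question}\nParticipant answer: {answer}\n"
        f"Relevant context from past sessions:\n{retrieved_context}\n"
        "Write one concise follow-up question that fills the biggest remaining gap."
    ).text.strip()
```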
Testing & Refinement (Week 11-12):
- Run internal tests on reasoning, sentiment analysis, and real-time listening.
- Optimize response times for real-time conversation tracking, ensuring smooth LlamaIndex integration.
- Collect feedback and fine-tune the system for performance and accuracy.
How did we use the funds from the past Gitcoin Round?
We participated in the GG21 CollabTech round organised by RnDAO, where Harmonica was the second-highest-funded project with $2,894 in matching funding and $858 in direct donations. That grant surpassed our expectations and enabled us to move beyond manual prompt engineering (which only experienced facilitators could use) and build an AI-powered template generator that allows anyone to create a new session with a customized facilitation template in just 2 minutes: https://app.harmonica.chat/create
Harmonica History
- Accepted into GG22 OSS - dApps and Apps 3 weeks ago.
- Accepted into the CollabTech Round and Thresholds Experiment 3 months ago.
- Applied to the OpenCivics Consortium Round 02 6 months ago, which was rejected.