We.Lived.It
An experimental tool combining decentralised AI and Web3 to combat toxic behaviour in online gaming, empowering marginalised users to fine-tune hate speech detection through community-driven decision-making.

This is an experimental tool in decentralised AI and Web3. We are using the context of online gaming, but we plan to scale this experiment to other online communities.

What's the context?

The impact of gaming on mental health is not commonly talked about. During the pandemic in particular, there was a spike in the number of people turning to gaming to avoid social isolation, with 2.7 billion players gaming in 2020. However, these spaces can also be fertile ground for toxic behaviour. We.Lived.It is a tool for people with lived experience of toxicity or marginalisation to reclaim these spaces.

Who is this for?

A general rule of thumb is that the more marginalised groups you belong to, the more hate speech you receive. In the current increasingly polarised society, people belonging to very different groups in terms of privilege and marginalisation often move through life in isolation from each other, not knowing much about each other’s realities.

At the same time, marginalised groups have less access to technology and less chance to shape technological solutions. For technology to properly serve people from various communities, it needs to be able to reflect the lived experience of people from those communities. This applies to AI and tools for hate speech recognition.

What does the MVP do?

Players or users can report their experiences of online hate speech through an LLM. When the LLM classifies a report as hate speech, the person completing the report earns a 'lived experience point'. These points confer decision-making power: those with more lived experience points carry greater weight in votes on fine-tuning the LLM.
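The report-and-earn loop above can be sketched in a few lines. This is a minimal illustration only: the class and function names are invented for the example, and the LLM classifier is stubbed out with a plain callable rather than a real model call.

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    """Tracks lived experience points per reporter (illustrative sketch)."""
    points: dict = field(default_factory=dict)

    def report(self, reporter: str, text: str, classify) -> bool:
        """Run a report through the classifier; award a point if it is hate speech."""
        is_hate = classify(text)
        if is_hate:
            self.points[reporter] = self.points.get(reporter, 0) + 1
        return is_hate

ledger = Ledger()
# Stub classifiers stand in for the LLM call in this sketch.
ledger.report("alice", "targeted slur", classify=lambda t: True)
ledger.report("alice", "gg well played", classify=lambda t: False)
print(ledger.points)  # {'alice': 1}
```

In the real system the classification step would be an on-chain or API call to the model, but the ledger logic is the same: one confirmed report, one point.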

What do we mean by fine-tuning the LLM?

If the model produces a classification that the player disagrees with, they can trigger a vote for the community to fine-tune the model.

For example, imagine the text "You'd be cute if you lost some weight." was returned as 'NOT HATE SPEECH'. Perhaps this community feels its players or contributors should be protected from fatphobia. An individual could trigger a community vote to add 'fatphobia' to the model as a hate speech category.

What's the significance of lived experience points?

Players or users with greater lived experience points have greater weight in the vote to fine-tune the model. This is so that those directly impacted by hate speech are given decision-making power to govern the space and make it safer.
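One simple way to realise this weighting is to tally each vote scaled by the voter's points. The sketch below is an assumption about how such a tally could work, not the project's actual mechanism; the baseline weight of 1 (so voters with no points still count) is also an illustrative choice.

```python
def tally(votes: dict, points: dict) -> dict:
    """Tally a vote where each voter's choice is weighted by their
    lived experience points. Voters with no recorded points still
    get a baseline weight of 1 (illustrative design choice)."""
    totals = {}
    for voter, choice in votes.items():
        weight = 1 + points.get(voter, 0)
        totals[choice] = totals.get(choice, 0) + weight
    return totals

votes = {"alice": "add_category", "bob": "add_category", "carol": "reject"}
points = {"alice": 5, "bob": 0, "carol": 2}
print(tally(votes, points))  # {'add_category': 7, 'reject': 3}
```

Here alice's five points give her vote six times the weight of bob's, so those most exposed to hate speech steer the outcome.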

What tech are we using?

LLM - GPT-3.5

The starting point is GPT-3.5 Turbo, a popular LLM and a common choice in hate speech recognition research.
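A classification request to a chat model is typically framed as a system prompt plus the reported text. The sketch below only builds that message payload; the category list, function name, and prompt wording are assumptions for illustration, not the project's actual prompt, and the final call to the gpt-3.5-turbo endpoint is left out.

```python
def build_messages(report_text: str, categories: list) -> list:
    """Build a chat-completions style message list asking the model to
    classify a report against the community's current categories."""
    system = (
        "You are a hate speech classifier. Current hate speech categories: "
        + ", ".join(categories)
        + ". Answer exactly HATE SPEECH or NOT HATE SPEECH."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": report_text},
    ]

# After a community vote adds 'fatphobia', the category list grows and
# the same report can be re-classified under the updated prompt.
messages = build_messages(
    "You'd be cute if you lost some weight.",
    ["racism", "sexism", "homophobia", "fatphobia"],
)
```

Because the category list lives in the prompt, a successful community vote changes model behaviour without retraining: the next classification simply runs with the updated list.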

Blockchain Network - Galadriel

Galadriel is an L1 that enables developers to build decentralized AI apps natively with Solidity - like ETH for AI.

Off-chain Voting Mechanism - Snapshot

Snapshot is an off-chain voting platform that allows DAOs, DeFi protocols, and NFT communities to participate in decentralized governance easily.

What are the benefits of this tool?

Players are empowered to govern their spaces proactively as well as reactively by:

  • using tech to communicate with gaming companies to improve the inclusivity of their designs and avoid outcomes like country-level gaming bans based on inappropriate content: https://www.arabnews.com/node/1522256

  • creating guidelines for how to interact with their communities online to share with the gaming developers.

  • submitting trigger warning requests to gaming companies on behalf of communities. Trigger warnings were developed by people from marginalised communities who often blogged about distressing content. The tool could be used to classify in-game content as ‘distressing for X communities or people who’ve experienced X trauma’. These classifications could then be submitted to gaming companies to add trigger warnings.
