Round 3 Voting Rationale

Badgeholders are not required to disclose how they’ve voted but may share their voting rationale here if they choose. You can see examples from Round 2 here.

8 Likes

I’m still working on my voting framework, but I believe sharing it here will be valuable for others and will also help me get feedback from everyone!
Here’s my whole document, organized by sections:

My Voting framework (WIP)

Key points

  • Impact = Profit: RPGF is here to uphold this axiom and foster a valuable ecosystem.
  • This is retroactive public goods funding, not proactive funding.
  • My main duty is to determine which public goods receive funding by voting on my RPGF allocation.
  • 30M OP tokens | 195 badgeholders | 600 projects | 4 categories

Goals

  1. RPGF: To implement impact=profit in practice
  2. Voting framework: Evaluate projects with a system that gets us closer to impact = profit

Defining Impact

  • The value a contributor has created for the Optimism Collective
  • Outcomes created for the Collective, not the inputs that produced them.
  • Only past impact is considered in RPGF | We should not consider anything about the future

Defining Profit

  • The economic value extracted from the Collective.
  • Funding can come from: Missions grants, the Grants Council, the Partner Fund, past RPGF rounds, and Foundation payments.

Allocation

  • 80% allocated to projects evaluated with our scoring system | 24M OP
  • 10% allocated to curated lists endorsed by trusted badgeholders (TBD) | 3M OP
  • 10% allocated to innovative/alternative projects that don’t fit our scoring system | 3M OP

*If we were to allocate all of our votes equally between all RPGF projects, each project would receive ~47K OP tokens (wow, that is a lot; we need to think this through).

My Scoring system

Objective

Evaluate projects with an objective, quantitative method

How it works

Projects receive a score from 1 (low) to 3 (high) across multiple criteria; the adjusted multiplication of these scores (explained in the footnotes) produces the overall project score, which determines OP allocation. Highest score: 256 | Average: 81 | Lowest: 16

Evaluation criteria and scores

*This is the area where we still need to define the ranges for each criterion (i.e., what counts as a “high extraction” or a “low alignment”?). This is the hardest and most subjective part, but it’s also great because it’s where we express our opinions and values.

  • Money extracted from the collective
    • 1: High Extraction
    • 2: Medium Extraction
    • 3: Low Extraction
  • Metric strength/quality | Based on Metrics Garden for each category
    • 1: Low
    • 2: Medium
    • 3: High: Verifiable, robust
  • Metric quantity | Subjectively assessed by us
    • 1: Low
    • 2: Medium
    • 3: High
  • Alignment with Collective’s Values | Open access, long-term, innovation | Subjectively assessed by us
    • 1: Low: Not clearly aligned with values.
    • 2: Moderate: Somewhat aligned but may prioritize other values.
    • 3: High: Fully aligned with values, demonstrating them in outcomes.

Our rules

  • Projects must be related to Ethereum, Optimism, L2s, or blockchain technology.
  • Projects must fit within a category of the current round.
  • Filter question: What important problem is this project solving?

Open questions we still need to solve

  • How do we handle the bias of voting only for well-known projects? | Blind evaluation? Random sampling? (see the sketch after this list)
  • How deeply are we going to evaluate each project? How much due diligence?
  • Are we going to evaluate all or just some projects? Why?
  • Are we going to prioritize a category? Which and why?
  • Possible idea: Divide into subcategories and allocate a percentage to each subcategory
  • How are we going to use lists?
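As a quick sketch of the random-sampling idea above (Python; the project names, seed, and sample size are placeholders I made up, and the ~600 count is just the round’s approximate project total):

```python
# Hypothetical blind sampling of projects to reduce name-recognition bias.
import random

projects = [f"project_{i}" for i in range(600)]  # stand-in for the round's projects

rng = random.Random(42)              # fixed seed keeps the sample reproducible
sample = rng.sample(projects, k=25)  # evaluate a random subset, names hidden
print(sample[:5])
```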

*Footnotes

Multiplying scores ensures a complete evaluation, making sure all projects meet a minimum standard across all areas. Plus, we use an Adjusted Multiplication Factor: we add a constant of +1 to each score before multiplying them together. This reduces the extreme penalty for a low score in any single category.
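A minimal sketch of this in Python (the function name is mine; the four criteria are the ones listed above):

```python
# Adjusted Multiplication Factor: multiply (score + 1) across all criteria.
from math import prod

def project_score(scores: list[int]) -> int:
    """Overall project score; each criterion is scored 1 (low) to 3 (high)."""
    assert all(1 <= s <= 3 for s in scores), "criterion scores must be 1-3"
    return prod(s + 1 for s in scores)

# With four criteria, this reproduces the ranges quoted above:
print(project_score([3, 3, 3, 3]))  # 256 -> highest
print(project_score([2, 2, 2, 2]))  # 81  -> all-average
print(project_score([1, 1, 1, 1]))  # 16  -> lowest
```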

If we believe a project should receive funding but won’t evaluate it, our vote still matters for helping it pass the quorum threshold.
Voting 0 OP = the project has made no impact | Abstaining from voting = I don’t know / I am indifferent


Throughout my document, you’ll notice the use of “we” — this is because I’ve teamed up with @pilar, a friend whose judgment and perspective I value immensely. Together, we’re committed to achieving the best possible quality for our vote allocation in this RPGF round.
I’d love to hear all of your thoughts, especially on the open questions we still need to solve, as this is what it’s all about: iterating to improve!
I encourage you to also post any thoughts/ideas/drafts regarding this topic, as it might help other badgeholders:)

Diego,

18 Likes

For RPGF #3, I’ll be focusing on Governance - tools, research & education. There are 104 projects in the Collective Governance category, which I feel is the most I can realistically evaluate anyway, and is close to the size of the entirety of RPGF #2. Like governance itself, this is a rather subjective category, so I’m going to rely on my knowledge and experience from being a delegate here since day 1, and from writing about the topic extensively here and on my blog, to gauge how valuable a project’s/individual’s contributions are to the field of governance.

11 Likes

link to your blog, please.

2 Likes

After talking with various people and thinking it through further, I’ve made significant improvements to my rationale. I believe it is almost ready, and I’d love your feedback on it.

Purpose

  1. RPGF: To implement impact=profit in practice
  2. Evaluate projects rigorously, using a balanced scoring system and lists from reputable badgeholders

Impact = value contributed to the Optimism Collective
Profit = economic value extracted (grants, RPGF, the Partner Fund, payments)

Vote Allocation Strategy

  • 40% (12M OP) for projects assessed through our scoring system.
    • We’re planning to quantitatively evaluate 378 projects across 6 sub-categories. Each sub-category has a specific budget, and projects are compared within their sub-category.
  • 40% (12M OP) allocated to curated lists endorsed by trusted badgeholders.
    • We’ll use lists to vote for projects across the other 10 categories we won’t evaluate ourselves. Regardless, we will keep an eye on projects in each list to ensure quality.
  • 15% allocated to TOP projects | 4.5M OP
    • We’ll use this portion of our allocation to reward the most impactful and aligned projects in the ecosystem. We will be very selective, and generous with these projects, aiming to overcompensate for their amazing work.
  • 5% allocated to alternative projects | 1.5M OP
    • This part of our allocation will go to projects that don’t fit within our voting framework or other lists but that we believe should receive an allocation.

Scoring system: 12M OP

The 6 sub-categories we are considering evaluating with this scoring system: applications, governance contributions, developer education, evangelism, wallets, events.

Objective

Evaluate projects with an objective, quantitative method

How it works

Projects are divided into sub-categories and evaluated within them. Each project receives a score from 1 (low) to 3 (high) across 4 criteria; the adjusted multiplication of these scores produces the overall project score, which determines its OP allocation.

Highest score: 256 | Average: 81 | Lowest: 16

Evaluation criteria and scores

  • Metric strength | Based on Metrics garden for each sub-category
    • 1: Low
    • 2: Medium
    • 3: High: Verifiable, robust
  • Alignment with Collective’s Values | Open access, long-term, innovation | This criterion stays the same across all sub-categories.
    • 1: Low: Not clearly aligned with values.
    • 2: Moderate: Somewhat aligned but may prioritize other values.
    • 3: High: Fully aligned with values, demonstrating them in outcomes.
  • Money extracted from the Collective | This criterion stays the same across all sub-categories. (See the sketch after this list.)
    • 1: High Extraction: >125k OP
    • 2: Medium Extraction: 10-125k OP
    • 3: Low Extraction: 0-10k OP
  • Metric quantity | Benchmarked against projects in the same sub-category. We will evaluate a maximum of 2 metrics per project.
    • 1: Low
    • 2: Medium
    • 3: High
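
As a hedged sketch of how this could play out (Python): the extraction bands come from the criterion above, but the proportional split of a sub-category budget is my assumption; the framework says the score determines the OP allocation without fixing the exact formula, and every project name and budget below is made up.

```python
# Hedged sketch: turning criterion scores into OP within a sub-category budget.
# The proportional split is an assumed rule, not the author's stated formula.
from math import prod

def extraction_score(op_extracted: float) -> int:
    """Map OP already extracted from the Collective to a 1-3 score."""
    if op_extracted > 125_000:
        return 1  # High Extraction
    if op_extracted > 10_000:
        return 2  # Medium Extraction
    return 3      # Low Extraction

def overall_score(scores: list[int]) -> int:
    """Adjusted Multiplication Factor: multiply (score + 1) over the 4 criteria."""
    return prod(s + 1 for s in scores)

def allocate(budget_op: float, by_project: dict[str, list[int]]) -> dict[str, float]:
    """Split a sub-category budget proportionally to overall scores (assumption)."""
    totals = {name: overall_score(s) for name, s in by_project.items()}
    pool = sum(totals.values())
    return {name: budget_op * t / pool for name, t in totals.items()}

# Hypothetical sub-category with a 2M OP budget; score order:
# [metric strength, alignment, extraction, metric quantity]
print(allocate(2_000_000, {
    "project_a": [3, 3, extraction_score(5_000), 2],    # low extraction -> 3
    "project_b": [2, 1, extraction_score(200_000), 2],  # high extraction -> 1
}))
```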

Additional considerations

  • We know this is not the perfect scoring system, and we know there is lots of room for improvement. We are doing our best and will improve as we go; please reach out with feedback, it is greatly appreciated.
  • Projects must be related to Ethereum, Optimism, L2s, or blockchain technology.
  • Filter question: What important problem is this project solving?
  • We should vote 0 OP for a project that has made no impact. If we abstain from voting, we are expressing our indifference and that doesn’t penalize the project.
  • If we believe a project should receive funding but won’t evaluate it, our vote still matters for helping it pass the quorum threshold.
  • Our scoring system uses an Adjusted Multiplication Factor: each score gets a +1 before multiplication, reducing the penalty for low scores in one category.

Sub-categories where we will use other Badgeholder lists: dev services, dev tooling, research, gov research, gov tooling, ethereum development, op stack tooling, op stack research, discovery tooling, portfolio tracker.

Special thanks to:

@LauNaMu: For her amazing work on the Metrics Garden and the re-categorization of projects; these 2 resources have been key to developing my rationale.
@Jonas: For list-pilling me (I’ve decided to allocate 12M OP to valuable lists from trusted badgeholders!)
@ccerv1: For creating and sharing insights from OSO, and for his help with some spreadsheets :)
@Michael and @ethernaut: For our brief but insightful conversations about their ideas/thoughts

Final comment

I am very excited about this; it’s been very interesting. I’d love to hear more insightful comments/ideas/questions.

10 Likes

Not sure where to put this, and happy to move the conversation to another thread, but for the time being I want to gauge collective feedback on a thought ( https://x.com/wmitsuda/status/1728415708969644483?s=20 ) shared by Willian. Initially, I could not bring myself to purposefully vote 0 on an application I deemed to be acting in bad faith.
However, Willian has a point. What if the logic behind median fund allocation is to let badgeholders prevent bad actors from gaming the system, by bringing the median close to 0? Rationally, it’s a positive tool, and @dmars300 also shares a similar thought above. We already have a plethora of research articles providing evidence that rewarding good behavior and punishing bad behavior is a go-to approach to achieving a desired outcome. Thoughts?
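
A minimal sketch of the distinction (Python; it assumes, per the discussion above, that a project’s reward tracks the median of submitted ballots and that abstentions simply don’t enter the calculation; all vote figures are made up):

```python
# Why an explicit 0-vote differs from abstaining under median-based allocation.
from statistics import median

ballots = [50_000, 55_000, 60_000]  # hypothetical OP votes from three badgeholders

print(median(ballots))                 # 55000: abstainers leave the median untouched
print(median(ballots + [0, 0, 0, 0]))  # 0: enough explicit 0-votes drag it to zero
```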

2 Likes

gm

Hey, after many hours of thinking, iterating, and having fun, I’ve done it: my voting rationale for RPGF 3 is finalized, and I’ve started evaluating projects. I will create a list for each of the sub-categories in my scoring system to make them easy to vote on; my intention is that this will be useful and help other badgeholders:)

You can find my completed voting rationale HERE
In case you’re interested, you can also find my allocations and scoring (WIP) HERE
And a short Loom video walkthrough of both documents HERE

4 Likes

Crossposting my RPGF3 list methodology: Applied Governance Efficacy Allocation Framework (Methodology and NOFM Matrix); thanks for highlighting it, @lavande!!

This methodology focuses on characterizing the relative constituent size of the communities providing and receiving benefit with respect to the Collective, allocating more governance tokens to larger and more interoperative existing governance communities that will participate in Optimism governance. I welcome feedback and challenge!

3 Likes