Conditional Funding Markets

Or how a futarchy can allocate treasury.

Retro PGF is one of the most successfully implemented mechanisms for rationally allocating treasury to grow ecosystem value.

However, it comes with challenges: deciding which projects receive funding typically relies on a vote, so the allocation depends on individual jurors’ preferences rather than on eliciting actual information about which projects contributed most to ecosystem growth.

To overcome this, we need measurable objectives (metrics) that can be assessed for each project after the fact. Examples include the number of smart contract calls, fees generated (for an L2), or order flow (for a DEX).

Optimism has already taken steps in this direction with Retro Funding Round 4, where Citizen House badge holders vote on the weighting of metrics, which are then used to evaluate projects objectively.

In this post, we define a mechanism that extends retro-funding using measurable objectives and prediction markets, enabling proactive project funding.

Motivation for Prediction Markets

Let’s consider a retro-funding round that relies solely on metrics, similar to Optimism Retro-funding Round 4.

Such a mechanism can only efficiently fund projects with sufficient runway to reach a retro-funding round.

Additionally, relying on a deterministic retro-funding rule only incentivizes projects to increase metrics; it doesn’t guarantee that funds are allocated where they are most effective.

Instead, we propose a funding mechanism for future efforts and introduce prediction markets that forecast how each project will impact future metrics.

These forecasts will allow:

  • Unlocking project funding beforehand, which enables funding a larger number of projects, including those that lack the necessary runway.
  • Avoiding funding projects where the funding provided does not directly produce a change in metrics, e.g., projects funded via other means, projects farming for rewards for work already completed.
  • Continuously eliciting information through forecasts, thereby learning about the efficacy of the allocation mechanism. For example, if forecasters assign the same value to both conditional estimates (funding vs. not funding), this strongly hints that funding is not a predictor of project success in terms of this metric, and the DAO would update the set of metrics used to evaluate projects.

Model

A DAO organizes a round of funding with an overall budget b and a set P of projects that apply for funding.

We require a curation mechanism with crypto-economic guarantees to ensure that the set P doesn’t contain spam.

Each project p comes with a single investment ask i_p > 0 that it expects to receive if selected. The mechanism can easily be extended to multiple, variable-sized asks per project, but we keep a single ask here for simplicity.

The DAO defines a set of measurable objectives M, or “metrics,” such as:

  • “Active verified users in 6 months” for an app.
  • “Attributed order flow in 1 year” for a DEX front-end.
  • “Gas fees generated in 2 years” for contracts running on an L2.

We denote by m(p) the value of metric m for a given project p, aggregated from today until the specified date.

Each metric has an associated positive weight w_m. The value the DAO assigns to each project is defined by the weighted sum of metrics \sum_{m \in M} w_m m(p). Each w_m accounts for both:

  • The significance of the metric as a proxy for projects’ success, expressing the DAO’s preferences over which metric is more or less important.
  • Normalizing the metric depending on its type (e.g., normalize a TVL per month metric by 2x the highest protocol’s TVL per month, etc.).
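
To make the scoring concrete, here is a minimal Python sketch of computing s(p); the metric names, weights, and values below are hypothetical illustrations, not part of the mechanism’s specification.

```python
# Minimal sketch of the weighted score s(p) = sum_m w_m * m(p).
# Metric names, weights, and values are hypothetical examples.

weights = {
    "active_verified_users_6m": 1 / 200_000,   # normalization chosen by the DAO
    "attributed_order_flow_1y": 1 / 5_000_000,
    "gas_fees_generated_2y": 1 / 1_000,
}

def weighted_score(metric_values: dict) -> float:
    """Return s(p) for one project, given oracle-reported metric values m(p)."""
    return sum(w * metric_values.get(name, 0.0) for name, w in weights.items())

# Example: one project's oracle-reported metrics.
project_metrics = {
    "active_verified_users_6m": 40_000,
    "attributed_order_flow_1y": 1_200_000,
    "gas_fees_generated_2y": 350,
}
print(weighted_score(project_metrics))  # 0.2 + 0.24 + 0.35 = 0.79
```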

We assume a crypto-economic oracle (such as UMA or reality.eth) gathers metrics on-chain and makes them available to smart contracts.

The mechanism will output a set of actual investments \hat{i}_p with \hat{i}_p \in \{0, i_p\}.

Objective

The DAO wants to allocate funding in a way that:

  • Maximizes its overall ROI on Metrics defined by \frac{\sum_{p \in P} \sum_{m \in M} w_m m(p)}{\sum_{p \in P} \hat{i}_p} or \frac{\sum_{p \in P} s(p)}{\sum_{p \in P} \hat{i}_p} with s(p) the weighted sum of metrics.
  • Respects the budget \sum_{p \in P} \hat{i}_p \leq b.

Since funding is allocated proactively, we expect this mechanism to elicit accurate forecasts via prediction markets for the weighted sum of metrics of each candidate project.

Conditional Funding Market (CFM) Mechanism

The mechanism is essentially a Decision Market:

Decision markets both predict and decide the future. They allow experts to predict the effects of each of a set of possible actions, and after reviewing these predictions a decision maker selects an action to perform.

(Chen et al., 2011)

The mechanism operates as follows:

  1. It runs prediction markets to forecast the expected value of conditional outcome tokens, which reflect the metrics-based results of funding or not funding a project.
  2. It then applies a decision rule based on these forecasts.

Prediction Markets for Project Metrics

For each project, we create a prediction market with a single outcome: the weighted sum of metrics s(p) = \sum_{m \in M} w_m m(p).

A caveat of this design is not having per-metric forecasts, which would enable more granular feedback: if a metric appears hard for market participants to predict, the DAO will have difficulty learning from it and adjusting. Another approach, enabled by the Logarithmic Market Scoring Rule (LMSR) (Hanson, 2002), would be to create prediction markets with multiple base events (one per metric). But for simplicity’s sake, we will not develop it in this post.

For each project, create a contract that takes (e.g.) sDAI deposits and, for 1 sDAI deposited, returns a pair of outcome tokens (\textsf{Short}, \textsf{Long}). Additionally, we assume there is substantial certainty that the weighted sum will be in the value range [v^{\text{min}}, v^{\text{max}}].

Note the usage of sDAI, which is interest-bearing. This is key to mitigating traders’ opportunity costs. Any yield-bearing stablecoin could replace it.

Short and Long tokens follow a typical scalar token design. At resolution time:

  • If s(p) \leq v^{\text{min}}, only \textsf{Short} tokens redeem for 1 sDAI.
  • If s(p) \geq v^{\text{max}}, only \textsf{Long} tokens redeem for 1 sDAI.
  • If v^{\text{min}} < s(p) < v^{\text{max}}:
    • \textsf{Long} tokens redeem for \frac{s(p) - v^{\text{min}}}{v^{\text{max}} - v^{\text{min}}} sDAI.
    • \textsf{Short} tokens redeem for \frac{v^{\text{max}} - s(p)}{v^{\text{max}} - v^{\text{min}}} sDAI.
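
As a quick illustration, a minimal Python sketch of this payout rule (the clamping at the bounds simply restates the first two cases):

```python
def scalar_payouts(s_p: float, v_min: float, v_max: float) -> tuple:
    """Return (short_payout, long_payout) in sDAI per token at resolution.
    Long pays the normalized position of s(p) within [v_min, v_max],
    Short pays the complement; clamping covers the two boundary cases."""
    x = min(max(s_p, v_min), v_max)
    long_payout = (x - v_min) / (v_max - v_min)
    short_payout = (v_max - x) / (v_max - v_min)
    return short_payout, long_payout

# Example with hypothetical bounds v_min = 0, v_max = 1 and s(p) = 0.79.
print(scalar_payouts(0.79, 0.0, 1.0))  # ≈ (0.21, 0.79); a full pair always redeems for 1 sDAI
```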

A market scoring rule can guarantee truthful reports and incentive compatibility. The Logarithmic Market Scoring Rule (LMSR) is the most widely studied and is a strong choice.

Current prices represent the market’s prediction of s(p): the implied forecast is \hat{s}(p) = v^{\text{min}} + \text{price}(\textsf{Long}) \cdot (v^{\text{max}} - v^{\text{min}}).
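
For readers unfamiliar with LMSR, here is a minimal sketch of its cost function and instantaneous prices for a two-outcome (Short/Long) market; the liquidity parameter b below is an illustrative value the DAO would have to tune.

```python
import math

def lmsr_cost(q: list, b: float) -> float:
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b)), q = outstanding quantities."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q: list, b: float, i: int) -> float:
    """Instantaneous price of outcome i: exp(q_i / b) / sum_j exp(q_j / b)."""
    return math.exp(q[i] / b) / sum(math.exp(qj / b) for qj in q)

def trade_cost(q: list, delta: list, b: float) -> float:
    """What a trader pays to buy `delta` of each outcome: C(q + delta) - C(q)."""
    return lmsr_cost([qi + di for qi, di in zip(q, delta)], b) - lmsr_cost(q, b)

# Example: b = 100 sDAI of liquidity; a trader buys 50 Long tokens (index 1).
q = [0.0, 0.0]                              # outstanding (Short, Long)
print(lmsr_price(q, 100, 1))                # 0.5 before the trade
print(trade_cost(q, [0.0, 50.0], 100))      # cost of the purchase (≈ 28.1 sDAI)
print(lmsr_price([0.0, 50.0], 100, 1))      # price moves above 0.5 after the trade
```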

Conditional Tokens

We also need to make the forecast dependent on whether funding occurs for a given project. For this, we rely on conditional tokens (as introduced by Gnosis):

  • A pair (\textsf{Short}^{\text{yes}}, \textsf{Long}^{\text{yes}}) which redeems for 1 sDAI if funding happens.
  • A pair (\textsf{Short}^{\text{no}}, \textsf{Long}^{\text{no}}) which redeems for 1 sDAI if no funding is provided.

Two corresponding Yes and No prediction markets are created (per project).
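
A minimal sketch of how the conditioning composes with the scalar payout; it assumes (consistently with the pair-level description above) that tokens on the branch that does not occur redeem for nothing, and it reuses scalar_payouts from the earlier sketch.

```python
def conditional_payout(side: str, branch: str, funded: bool,
                       s_p: float, v_min: float, v_max: float) -> float:
    """Per-token redemption value for the four conditional scalar tokens.
    side: "long" or "short"; branch: "yes" (conditional on funding) or "no".
    Tokens whose branch is not realized redeem for 0 in this sketch."""
    realized = (branch == "yes") == funded
    if not realized:
        return 0.0
    short_pay, long_pay = scalar_payouts(s_p, v_min, v_max)  # from the earlier sketch
    return long_pay if side == "long" else short_pay

# Example: the project is funded and finishes with s(p) = 0.79 on bounds [0, 1].
print(conditional_payout("long", "yes", True, 0.79, 0.0, 1.0))  # ≈ 0.79
print(conditional_payout("long", "no", True, 0.79, 0.0, 1.0))   # 0.0, branch not realized
```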

Decision Rule

At any point, both markets’ prices will represent the aggregate forecast about the weighted sum of metrics in their respective Yes or No worlds.

The decision rule can be applied once markets have converged on a forecast. Convergence depends heavily on when new information that can influence bets (especially information released by projects themselves) becomes available.

Assuming projects release all relevant information at the start of the funding round, we expect markets to converge quickly. Also, the longer markets run, the more bettors must account for future information (see Hanson, 2013).

Hence, the decision rule is applied around one week after creating the markets. To prevent manipulation, the precise time the decision rule is applied can be randomized over several days.

The most straightforward decision rule is the max decision rule: if \text{price}(\textsf{Yes}) > \text{price}(\textsf{No}), fund the project; otherwise, don’t fund it.

It has been shown that this rule isn’t incentive-compatible with truthful reporting and creates manipulation opportunities for traders (Othman, Sandholm, 2010). Namely, this could result in Yes odds being greater than No odds, thus funding the project, when truthful reporting would have recommended the contrary.

However, we expect to experimentally observe the effects of manipulation and adjust accordingly. Multiple possible mitigations have already been researched:

  • Picking the last trader from a reputable set, as indicated by (Othman, Sandholm, 2010).
  • Using a mixed-strategy rule to pick the decision (Chen et al., 2011). This would leave room for making sub-optimal decisions but might be an acceptable trade-off in aggregate.

In any case, a practical mitigation is to run enough small rounds to limit the potential downside of any such sub-optimal decision.
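
A minimal sketch of the max decision rule with a randomized decision time; the mixed-strategy variant shown is deliberately naive (funding with probability proportional to the Yes price) and only gestures at the constructions in Chen et al. (2011).

```python
import random

def decide_funding(price_yes: float, price_no: float,
                   mixed: bool = False, rng: random.Random = None) -> bool:
    """Max decision rule: fund iff price(Yes) > price(No).
    With mixed=True, use a naive mixed-strategy rule that funds with probability
    proportional to the Yes price (illustrative only; see Chen et al., 2011)."""
    if mixed:
        rng = rng or random.Random()
        return rng.random() < price_yes / (price_yes + price_no)
    return price_yes > price_no

def decision_block(start_block: int, blocks_per_day: int,
                   window_days: int, rng: random.Random) -> int:
    """Pick the block at which the rule is applied: roughly one week after market
    creation, randomized over a several-day window to blunt last-minute manipulation."""
    one_week = 7 * blocks_per_day
    return start_block + one_week + rng.randrange(window_days * blocks_per_day)
```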

Curation Mechanism

We require a mechanism to elicit curated projects to prevent spam and instantiate the CFM mechanism only for relevant projects. Specifically, we would like a mechanism that favors projects with a higher chance of achieving funding.

For this, we use a repeated auction, where projects bid for their inclusion in a slot, as this has some interesting properties for bootstrapping prediction markets. Other approaches include stake-based curation or curation from a reputable jury.

A slot’s duration, e.g. a week, can be modulated by the DAO. Whenever a slot starts, a new auction is launched. Projects compete by posting bids together with some metadata.

This auction can be a first-price auction, a second-price auction with commit-reveal, or a Dutch auction.

[EDIT 2024-09-19: add multiple auction winners] The first k auction winners earn the right to submit their projects to the CFM mechanism for the given slot. The auction revenue is used to bootstrap prediction market liquidity.

Each auction winner must then define the initial price of the Yes market (see below) by defining the ratio by which 1 sDAI is split in \textsf{Long}^{\text{Yes}} and \textsf{Short}^{\text{Yes}}. The project owner is incentivized to input the most accurate ratio, which will limit her impermanent loss as a liquidity provider. Prediction markets will benefit from the project owners revealing private information through these starting prices.
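
A rough sketch of how the winner’s estimate could translate into a starting price and an initial pool composition; the constant-product (FPMM-style) pricing assumption and the function names are ours, and the exact mechanics depend on the AMM actually used.

```python
def implied_long_price(s_estimate: float, v_min: float, v_max: float) -> float:
    """Map the owner's estimate of s(p) conditional on funding to the initial
    price of Long^Yes, using the same normalization as the scalar payout rule."""
    x = min(max(s_estimate, v_min), v_max)
    return (x - v_min) / (v_max - v_min)

def initial_reserves(deposit_sdai: float, long_price: float) -> tuple:
    """Sketch of seeding an FPMM-style two-outcome pool so that it quotes `long_price`.
    Assumes price(Long) = short_reserve / (long_reserve + short_reserve); the full sets
    minted from the deposit are split between the pool and the funder accordingly."""
    long_reserve = (1 - long_price) * deposit_sdai   # fewer Long tokens when Long is likelier
    short_reserve = long_price * deposit_sdai
    return long_reserve, short_reserve

# Example: the owner expects s(p) ≈ 0.7 on bounds [0, 1] and commits 1_000 sDAI.
p = implied_long_price(0.7, 0.0, 1.0)   # 0.7
print(initial_reserves(1_000, p))       # ≈ (300.0, 700.0)
```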

A limitation of this mechanism is that only projects with enough financial backing might compete in the auction and get funded. A solution is to allow all projects to issue project tokens, funding themselves by selling these tokens and then rewarding investors with a cut of the retroactive funding if it materializes (see the Retroactive Public Goods Funding post on the Optimism PBC Blog).

Liquidity Subsidies

Sufficient liquidity must be available to ensure the proper functioning of prediction markets, especially until the decision rule is applied. However, LPs face impermanent loss whenever the Short/Long price moves away from the initial price when the AMM pool was launched.

A key element of this mechanism is the expectation that the DAO will subsidize this liquidity by rewarding liquidity providers (akin to liquidity mining). These rewards must be sufficient so that, when added to AMM fees, they compensate for impermanent loss.

This subsidy is justified as a payment for the information the DAO gains from operating the prediction markets.

Additionally, a key means for the DAO to limit the total cost of this subsidy is to define an initial market price (e.g., based on past project data) and incentivize liquidity provision at that price. The curation mechanism described above already achieves this.

Funding Algorithm

For each project selected by the CFM mechanism, the funding algorithm then:

  • Computes the ROI.
  • Distributes the budget b in a manner that maximizes aggregate ROI.

As long as all project funding requirements are relatively small compared to the budget, we expect a simple greedy algorithm to work: distribute funding to selected projects with the highest ROI first.
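
A minimal sketch of this greedy allocation; the ROI proxy used here (forecasted counterfactual uplift \hat{s}^{\text{yes}}(p) - \hat{s}^{\text{no}}(p) per sDAI asked) is our assumption, since the exact ROI formula is left open above.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ask: float      # i_p, in sDAI
    s_yes: float    # market forecast of s(p) conditional on funding
    s_no: float     # market forecast of s(p) conditional on no funding

def greedy_allocation(candidates: list, budget: float) -> dict:
    """Fund projects in decreasing order of a forecasted-ROI proxy
    (counterfactual uplift per sDAI asked) while the budget allows.
    Output respects i_hat_p in {0, i_p} and the budget constraint."""
    roi = lambda c: (c.s_yes - c.s_no) / c.ask
    allocation = {c.name: 0.0 for c in candidates}
    remaining = budget
    for c in sorted(candidates, key=roi, reverse=True):
        if c.s_yes > c.s_no and c.ask <= remaining:   # skip negative-uplift projects
            allocation[c.name] = c.ask
            remaining -= c.ask
    return allocation

# Example with hypothetical forecasts and asks.
cands = [
    Candidate("dex-frontend", ask=50_000, s_yes=0.60, s_no=0.40),
    Candidate("wallet-app", ask=20_000, s_yes=0.35, s_no=0.30),
    Candidate("bridge", ask=80_000, s_yes=0.50, s_no=0.52),
]
print(greedy_allocation(cands, budget=100_000))
# funds dex-frontend and wallet-app; bridge is skipped (negative forecasted uplift)
```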

Additionally, since projects are funded proactively, we assume that the DAO wants to retain some control over project funding to prevent misuse of funds. This can be implemented through the gradual delivery of funding, along with a backstop mechanism that can halt financing at any time if a project is observed to be non-compliant with predefined guidelines. However, this backstop mechanism must cancel the winning prediction market, as its forecasts become irrelevant. Hence, it must be used sparingly; otherwise, it threatens forecasters’ long-term participation in the mechanism.

3 Likes

I was thinking a potential risk of allowing the auction winner to set the DM start price is that it could allow them to intentionally cause high IL, resulting in low liquidity, hence reducing the cost for them to manipulate their market’s price, unless a liquidity floor is enforced by some other means.

The causal link between low liquidity and low manipulation cost is that low liquidity means other speculators have a reduced incentive to notice and correct mispricings.

2 Likes

Agreed!

2 factors are at play against this:

  1. As long as there is enough competition in the curation auction, the auction winner will have to commit a sizeable amount of liquidity. This means they will themselves suffer the IL if the attack doesn’t succeed.
  2. Even with relatively low liquidity, rational bettors will still show up and (imprecisely, as liquidity is low) start adjusting the price. The closer the price gets to bettors’ beliefs, the more confident liquidity providers will be to show up and start depositing, nullifying the issue after some time.

On point 1, the slot duration can be increased whenever bids are few, to ensure enough competition. Inversely, whenever the DAO commits more funds to distribute, more projects should show up, increasing bidding competition and thus permitting the slot duration to be reduced.

On point 2, additionally, setting LP fees high enough can make it rational for LPs to start depositing even if they suffer some IL, as long as there is enough volume. In the manipulation scenario, the manipulator acts like a consistent noise trader, inducing counteracting trades and, thus, volume.

2 Likes

This makes sense. The higher the ratio of required committed liquidity to proposal funding value, the less manipulation risk/incentive exists, as the cost of failing is higher. However, the higher this ratio is, the more retro-pgf-esque this is (in the sense that more initial capital is required), and hence the less benefit is being derived from using decision markets. So it is important that the mechanism is secure even if this ratio is quite far from 1, in order to maximally benefit from decision markets.

Perhaps a simple solution is to apply some bounds to the initial price so that it can’t be so extreme as to e.g. cause 99% IL, as a sanity check, while still mostly leaving the initial price up to the proposal creator.

On point 2, additionally, setting LP fees high enough can make it rational for LPs to start depositing even if they suffer some IL, as long as there is enough volume. In the manipulation scenario, the manipulator acts like a consistent noise trader, inducing counteracting trades and, thus, volume.

I agree with this in principle. I imagine though that if the ratio between initial liquidity and proposal funding ask value is sufficiently low, that this could still lead to issues. But this is likely only an issue if the IL is absurdly high, due to no sanity check (referred to above) being in place.

My current mental model for this is that the speed and efficiency with which incorrect/manipulated prices are corrected is a function of the liquidity (among other things ofc). Hence if the initial liquidity is sufficiently low, the manipulation effort may largely go through unnoticed (within the relevant time frame) by rational informed traders, due to the low incentive they have.

I agree that the participation of informed traders will attract LPs to provide additional liquidity; however, I do not think this fundamentally alters the dynamic, given that in order for the informed traders to initially show up, they need an incentive.

You’re mostly right about the retro-pgf-esque part, but there are still some differences:

  1. this mechanism improves the metrics-ROI for the DAO (which is not the case for retro-pgf, at least not in a myopic way)
  2. this mechanism requires liquidity provision, which has a different structure from regular VC funding of retro-pgf projects: the risk/reward profile, lock-up duration, and amounts are very different.

To elaborate a bit on point 2, this mechanism basically requires project owners to find liquidity, and the more predictable the project is, the easier it will be. As a project is more predictable, IL can be assumed to be lower, and thus, initial liquidity provisioning will appear as a more valuable, short-to-medium-term investment.

How can we figure out whether the IL will be large or not? We could say: the initial price can’t be set outside some n standard deviations around the historical metrics measurement, together with making the market bounds wide enough. This could work with projects which have such past measurements.

This deserves further modeling. My current mental model is that there will always be a slight value to trade even with very low liquidity, so some random walks will happen away from the price the manipulator is trying to fix. Whenever the price gets closer to bettors’ beliefs, some liquidity should jump in the market, even if a tiny bit. Repeat this, and a form of mirror effect to why puddles evaporate happens.

Makes sense, good points. thx

I suppose though liquidity provision will generally be unprofitable, so as to make speculation +EV? Hence it kind of has to be a bad investment, in order to work?

Perhaps a simpler solution than this which doesn’t require knowledge of the historical metrics is to just enforce that the results of the market are only accepted if during some observation window, the average $liquidity was > X% of the initial required $liquidity. This has the effect of leaving exactly how to achieve this up to the market creator, hence avoiding having to micromanage them.

This is interesting. Regarding the amount of liquidity, a fixed value could be a start but seems a bit dangerous in the longer term (a manipulator might have the incentive to add liquidity to ensure the decision is taken and this depends on the extractible value).
Also, from the point of view of bettors, the larger the liquidity, the more incentive there is to participate. And we prefer that all information is revealed before the measurement is performed to avoid a race during the measurement window. This suggests that participation should be incentivized through higher liquidity before the measurement window and that the measurement window should rather have less liquidity.

Yes but it might be profitable from the pov of liquidity providers (first of which is the curator/auction winner/market creator) to participate as long as:

  • LP fees are high enough, which can only be justified if there are enough noise traders
  • subsidies are high enough.

Basically, subsidies are there to compensate for LP negative EV.

1 Like

Ah yeah I completely forgot about the liquidity subsidy section of your article.

yeah I think it makes sense for liquidity requirements to be a function of the magnitude of the ask amount, so that manipulation costs increase as manipulation incentives increase.

1 Like

Yeah I think it is worthwhile for the liquidity observation window to start before the price/outcome observation window, so the market has time to equilibrate and factor in all information before measurement begins. However I also think that it is worthwhile to continue to monitor liquidity during the price observation window, so that liquidity doesn’t dry up to such a degree that manipulation becomes relatively easy while the price is being measured.

So interesting, thanks for this! It seems like a good way to make funding/grant allocation more efficient, and this is the Holy Grail in many industries.

DAOs seem to be the perfect targets to start with, as some of them have large treasuries and need to allocate them for grants to the ecosystem. I am no expert in prediction markets, but just want to share some thoughts and questions I have:

Comparative Analysis between Grant Allocation Methods:

It would be great to compare the efficiency of Futarchy vs. RPGF vs. Milestone-based grants. My understanding is that this can’t be done with the same projects… I guess a DAO could run two different programs in parallel and see which one yields a higher ROI, for example. Even this could vary a lot based on the projects; the goal would be to prove empirically and practically that one methodology is superior to the others.

Subsidies from the DAO:

My understanding is that subsidizing LP will be a cost for the DAO. Can it be measured upfront with a max amount, so the DAO can know what to expect?

Is there a risk of having too few bettors for a PM? I guess if this is the case, it could make sense for the DAO to not only subsidize LP but also to incentivize bettors by dedicating a specific budget for the best bettors with a final Leader Board.

Curation Method:

I really like the auction mechanism by slot you described here. However, I feel like there is a risk of “plutocratic behaviors” that some actors with huge budgets are willing to take some costs to win the auction, in order just to state they have a “partnership” with a specific DAO. Do you see this as a risk? Can it be mitigated?

Also imo this system could make sense not only for DAOs but also for regular companies, incubators, and VCs at some point. Eager to see what’s next !

1 Like

Thanks for the comments.

There is a key difference between CFM and retro-funding (aka RPGF): CFM favors projects with a high ROI (metrics-denominated returns depending on the amount funded), whereas retro-funding only produces rewards based on metrics. This difference comes from conditional markets enabling a (counterfactual) A/B test comparing “funded” and “unfunded,” whereas a retroactive mechanism has no means to achieve such a comparison.

I agree with your remark. Experimentally, we would like to observe that metrics-based results are higher for the same overall spending. To achieve this, we can run the two programs in parallel, and as long as the sample size (number of projects) is large enough and picked at random, we can observe which group is doing better.

Correct. The max amount depends on liquidity. For instance, using LMSR, we define a liquidity parameter to which a maximum loss corresponds: the larger the liquidity parameter, the larger the max loss. The DAO would need to allocate the max loss in each market to ensure its proper functioning. Then, there are mitigations to help prevent the DAO from losing the whole max-loss amount (e.g., by modulating the liquidity parameter through time).
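
Concretely, for an LMSR market maker the worst-case subsidy is bounded by b \ln(n) for n outcomes, so the DAO can budget the maximum loss upfront; a quick illustration (the numbers are hypothetical):

```python
import math

def lmsr_max_loss(liquidity_b: float, num_outcomes: int) -> float:
    """Worst-case subsidy of an LMSR market maker: b * ln(n)."""
    return liquidity_b * math.log(num_outcomes)

# Example: each project has two two-outcome scalar markets (Yes and No branches).
b = 1_000   # hypothetical liquidity parameter, in sDAI
print(2 * lmsr_max_loss(b, 2))   # ≈ 1386 sDAI maximum subsidy per project
```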

As long as there are bettors with relevant private information and liquidity is large enough, such bettors are naturally incentivized to participate through the prediction market mechanism itself (that’s what MSRs are good at). We don’t want to blindly incentivize participation; otherwise, we risk favoring wash traders of sorts.

But you’re right that convincing bettors to participate takes time and effort. A relevant approach can be to start with play money markets and complementary rewards (like leaderboards).

If the “partnership” is bad for the DAO, the conditional markets should decide not to fund the project. The opposite is equally true, so I don’t see this as a potential attack.

I agree, though, that richer actors can make the auction competitive, ruling out less-funded ones. However, doing this repeatedly would end up costing rich actors a lot.

1 Like

Here are some additional remarks received independently (thanks to the commenters!) and my responses.

This means picking dominated solutions with positive probability: fund a bad project or not fund a good project.
The tradeoff might be unacceptable in some situations. Still, we believe it is reasonable in project funding when (i) curation is delegated to another mechanism as suggested in CFM, (ii) individual funding amounts are small enough, and (iii) the probability of picking a wrong decision is guaranteed to be small.

A possible mitigation is Decision Scoring Rules (Oesterheld, Conitzer 2020). Instead of relying on MSRs, this mechanism aligns an expert by granting them assets whose payoff depends on the realized value of the metrics in question.

1 Like

I had some thoughts re: the curation mechanism.

  • notes
    • this mechanism
      • maximises number of proposals able to be assessed by our mechanism, within the given liquidity incentive budget
        • I think this is the only way it improves upon the original design
      • ensures a fixed liquidity incentive per dollar of proposal ask value
      • prioritises proposals according to proposer’s confidence that they will be accepted
        • and hence how much they are willing to bid, relative to their proposal’s askValue
      • guarantees that if a proposal is accepted in the auction, speculators do not need to worry about it later being rejected due to insufficient liquidity, as would be the case if we enforced that proposals must meet liquidity thresholds to be eligible for consideration by the curation algorithm
    • we can customise allocation of our funds between rewardBudget and liqBudget based on trial and error
      • i.e. a greater % of total budget allocated to $liquidityIncentiveBudget increases the diversity of proposals, by lowering the cost to be selected via the auction (see below for details), but at the expense of less total incentive to participate due to correspondingly lower $rewardBudget. so $rewardBudget vs $liquidityIncentiveBudget is a diversity vs quality tradeoff
  • params
    • $rewardBudget
      • budget to allocate to funding proposals based on output of funding algorithm + decision market prices
    • $liquidityIncentiveBudget
      • budget to allocate to liquidity provision incentives for the decision markets
    • liquidityIncentivePerAskValue
      • liquidity incentives required to be allocated to each market as a fraction of the “ask value” of the market’s associated grant proposal
    • $minimumBidValue
      • to prevent spam/wasting of speculator’s attention/time on trivial low value proposals
  • algorithm (a rough code sketch follows this list)
    • remove bids with bidValue below $minimumBidValue
    • sort bids by ($bidValue/$proposalAskValue) (highest to lowest)
    • accept each successive bid while ($totalBidValue + $liquidityIncentiveBudget) / $totalProposalAskValue > liquidityIncentivePerAskValue
    • then distribute liquidityIncentivePerAskValue * $askValue to each market, over the course of its lifetime, according to the incentive distribution schedule (which is not necessarily constant, as discussed previously)
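
A rough Python sketch of the algorithm above (parameter names follow the bullets; the stopping-rule interpretation and the per-market incentive payout are written as I understand them):

```python
from dataclasses import dataclass

@dataclass
class Bid:
    proposal: str
    bid_value: float    # $bidValue committed by the proposer
    ask_value: float    # $proposalAskValue of the associated grant proposal

def curate(bids: list, liq_incentive_budget: float,
           liq_incentive_per_ask: float, min_bid_value: float) -> dict:
    """Filter spam bids, sort by bidValue/askValue, then accept successive bids
    while pooled bids plus the liquidity budget cover the required incentive per
    dollar of accepted ask value. Returns the liquidity incentive per accepted market."""
    eligible = [b for b in bids if b.bid_value >= min_bid_value]
    eligible.sort(key=lambda b: b.bid_value / b.ask_value, reverse=True)

    accepted, total_bid, total_ask = [], 0.0, 0.0
    for b in eligible:
        new_bid, new_ask = total_bid + b.bid_value, total_ask + b.ask_value
        if (new_bid + liq_incentive_budget) / new_ask > liq_incentive_per_ask:
            accepted.append(b)
            total_bid, total_ask = new_bid, new_ask
        else:
            break   # stop at the first bid breaking the constraint, per the "while" above

    return {b.proposal: liq_incentive_per_ask * b.ask_value for b in accepted}
```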

Interested in your thoughts @lajarre. I do not think I had a super clear goal in mind tbh when coming up with this, it was kind of to make an elegant curation mechanism that is efficient in a certain sense, but perhaps it is not suitable due to not allowing for the number of proposals per epoch to be capped. If so, feel free to ignore.

1 Like

@lajarre probably the most potentially relevant idea from this is preferring auction submissions not with the highest total bid, but with the highest ratio of bid value to “proposal ask value”.

The effect of this is to not bias in favour of large proposals, but rather in favour of proposals the author of which expect to have the highest probability of being accepted for funding, and hence is willing to bid the most per “dollar of proposal ask value”.

An implication of employing such a mechanism, though, is that more than one proposal needs to be selectable within each epoch, so as to ensure a reasonable utilisation rate of the allocated funding budget.

I think it also will require a floor on the ask value of a proposal, as without this, it would be much more at risk of low value spam proposals than the original auction design which does not “control for” the proposal’s ask amount.

I think this relates to a comment you made on the original CFM post as well @noturhandle.

As far as the curation mechanism is concerned, yes.

The curation mechanism increases the chances of liquidity being sufficient so that the mechanism is not prone to issues like the one you mentioned, but it doesn’t guarantee it.

The subsidy mechanism, though, plays that role (as a necessary complement to the auction).

You make some really good points, but I don’t think this is the right way to define the trade-off.

The overall objective is to maximize ROI on metrics. If the DAO had infinite liquidity, the ideal mechanism would be pure CFM, where conditional markets run for each project that might be funded so the funding algorithm can make the most educated allocation possible to optimize for ROI. Limitations on liquidity create the need for a curation mechanism.

The overall mechanism can be seen as \text{funded projects} = \text{cfm} (\text{curation}(\text{projects firehose})), with:

  • \text{curation} being revenue-making (produces liquidity) but not maximizing for the objective, rather maximizing for a mix of well-fundedness and project owner’s confidence in being able to make an accurate initial prediction
  • \text{cfm} being costly (requires liquidity) but maximizing for the objective.

For now, I am not sure if we can expect a closed-form expression for the incentives-versus-funding (rewards) ratio that maximizes the objective. This ratio could be controlled either through governance or a PID-style feedback mechanism that adapts the ratio across slots by observing the measured ROI; a rough sketch of the latter follows.
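
To illustrate, a very rough sketch of such a controller; the gains, bounds, and the sign convention of the error (target ROI minus measured ROI) are all placeholder assumptions.

```python
class RatioController:
    """Illustrative PID-style controller nudging the liquidity-incentive vs.
    reward-budget ratio from slot to slot, based on the measured metrics-ROI."""

    def __init__(self, ratio: float, target_roi: float,
                 kp: float = 0.1, ki: float = 0.01, kd: float = 0.05):
        self.ratio = ratio              # fraction of the budget going to liquidity incentives
        self.target_roi = target_roi
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_roi: float) -> float:
        """Call once per slot with the ROI measured for the previous slot."""
        error = self.target_roi - measured_roi
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        self.ratio += self.kp * error + self.ki * self.integral + self.kd * derivative
        self.ratio = min(max(self.ratio, 0.0), 1.0)   # keep the ratio in [0, 1]
        return self.ratio
```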

Agreed that \text{bid}/\text{funding ask} looks like a good proxy for the probability of funding success from the author’s point of view. Using this to pick auction winners should align the curation mechanism better to the objectives (your remarks below notwithstanding). But let’s also not forget that bids also have a component of confidence (or “predictability”), as the bidder must define the initial prediction market price.

I think some kind of liquidity incentive is immensely valuable and necessary ofc, be it a subsidy from the dao or sourced from the bid amount. All I am basically arguing here is against my original idea of enforcing minimum liquidity thresholds and liquidity observation windows, as I think doing so has more costs (confounding risk + trader uncertainty) than benefits.

Yeah this makes sense. The way I see curation is

  • primarily serving the purpose of allocating the scarce resource of “liquidity incentives” and by extension “decision market speculator attention”
  • secondarily, the function of reducing IL risks for LPs by giving them greater confidence in the market price due to the market creator having skin in the game

Very much agreed.

Yeah I was thinking about this as well… There is certainly a bit of a trade-off between merely forcing the proposal creator to “invest” their bid in LP’ing, and it accurately reflecting their expectation of their proposal being accepted. I.e. forcing them to LP kind of adds a bit of noise to the signal of how much they are willing to bet, as you point out. But some amount of noise in this regard might be worth the cost.

Some notes on a comment from Sebastien Zany (thanks to him!):

This mechanism allocates funding not only to projects but to 3rd parties with a financial stake in the project hitting metrics or not. What is the risk of this creating a project assassination market?

A typical assassination market setting is one where the prediction market incentive results in the assassination being carried out. This remark also holds for milder actions, like hacks and threats carried out against a project leader, which have consequences for the project’s health and its metrics.

It looks like this would boil down to a “money vote”: if “assassination” is cheap enough for most traders (weighted by capital) (“cheap” contains the moral cost together with the cost of performing it), there might be market conditions where enough traders will make the bet and perform the “assassination”.

But this seems non-trivial: it requires coordination of capital (the more market liquidity, the harder), and the opportunity is probably not sustainable, as, if such a risk exists, the cost of assassination would shoot up (more competition in dark markets, more protection around project owners’ identities, etc.). So it seems that as long as “Assassination Extractible Value” is forecasted to be low enough by most traders (weighted by capital), the system will hold.

Thinking about whale attacks (like Humpy), this could be a problem though… We probably want to ensure that AEV appears low enough, too risky, and not worth pursuing.

1 Like

A more concrete example of an assassination-market-like situation is sabotage from a project contributor having privileged access to, e.g., the project’s technical infrastructure: the actor would make a short bet against the project, harm it, wait for markets to adjust, then take home the profits.

Counter-measures against such attacks could take the form of both:

  • aligning contributors with Long tokens (akin to a bonus)
  • disallowing them from buying Short tokens (which is harder to achieve).
1 Like