From: Zack M. Davis
Date: Sun, 10 Nov 2024 04:57:11 +0000 (-0800)
Subject: drafting "Prediction Markets Are Not ..."
X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=be40b537326ae50cd42587d5d27345d4322473e1;p=Ultimately_Untrue_Thought.git

drafting "Prediction Markets Are Not ..."
---

diff --git a/content/drafts/prediction-markets-are-not-a-drop-in-replacement-for-concepts.md b/content/drafts/prediction-markets-are-not-a-drop-in-replacement-for-concepts.md
index 3e146cc..1488ca0 100644
--- a/content/drafts/prediction-markets-are-not-a-drop-in-replacement-for-concepts.md
+++ b/content/drafts/prediction-markets-are-not-a-drop-in-replacement-for-concepts.md
@@ -4,7 +4,9 @@ Category: commentary
Tags: Eliezer Yudkowsky, literary criticism, worldbuilding, prediction markets
Status: draft

-Eliezer Yudkowsky [comments on "Comment on a Scene from _Planecrash_: 'Crisis of Faith'"](http://unremediatedgender.space/2022/Jun/comment-on-a-scene-from-planecrash-crisis-of-faith/#isso-583):
+In ["Comment on a Scene from _Planecrash_: 'Crisis of Faith'"](http://unremediatedgender.space/2022/Jun/comment-on-a-scene-from-planecrash-crisis-of-faith/), I critiqued a scene in which Keltham (a [magical universe-teleportation victim](https://en.wikipedia.org/wiki/Isekai) from an alternate Earth called dath ilan) plays dumb about why a pre-industrial Society would only choose males for military conscription.
+
+[Eliezer Yudkowsky comments](http://unremediatedgender.space/2022/Jun/comment-on-a-scene-from-planecrash-crisis-of-faith/#isso-583):

> Keltham wouldn't be averse to suggesting that there be prediction markets about military performance, not just predictions based on height. If there's more information to be gained from measuring biological sex than by just measuring height, it'll show up in the military prediction markets. But Keltham knows the uneducated soul to whom he is speaking, does not know what a prediction market is.
>
@@ -12,27 +14,27 @@ Eliezer Yudkowsky [comments on "Comment on a Scene from _Planecrash_: 'Crisis of
Good question. I'd like to generalize it: in the absence of a reason why "creating a law" and "sapient beings" would change the answer, we can ask: when making a decision about some entities Xi, after you've added in prediction markets to get all of the information that anybody has, what remains to be gained by using a decision procedure P that asymmetrically treats the Xi in a way based not on predicted outcomes?

-The answer is: nothing—albeit with two caveats having to do with how the power of prediction markets is precisely that they're agnostic about how traders make decisions: we assume that whatever the winning decision is, greedy traders have an incentive to figure it out.
+The answer is: nothing—with two caveats having to do with how the power of prediction markets is precisely that they're agnostic about how traders make decisions: we assume that whatever the winning decision is, greedy traders have an incentive to figure it out.
+
+Nothing is gained—_if_ you already happen to have sufficiently liquid prediction markets covering all observables relevant to the decisions you need to make. This is logistically nontrivial, and almost certainly much more computationally intensive. (If there are a hundred traders in your market, each of them using their own decision procedure which is on average as expensive as P, then delegating the decision to the market costs Society a hundred times as much as just using P once yourself.)

-Nothing is gained—_if_ you already happen to have sufficiently liquid prediction markets covering all the decisions you need to make. This is logistically nontrivial, and almost certainly much more computationally intensive.
(If there are a hundred traders in your market, each of them using their own decision procedure which is on average as expensive as P, then delegating the decision to the market costs Society a hundred times as much as just using P once yourself.)

+Nothing is gained—_but_ this can't be an argument against P being a good decision procedure, as the reason we can assume that the market will never underperform P is _because_ traders are free to use P themselves if it happens to be a good procedure. (It would be misleading to say that Society doesn't need to compute P because it has prediction markets, if, "under the hood", the traders in the market are in fact computing P all the time.)

-Nothing is gained—_but_ this can't be an argument against P being a good decision procedure, as the reason we can assume that the market will never underperform P is _because_ traders are free to use P themselves if it happens to be the best procedure. (It would be misleading to say that Society doesn't need to compute P because it has prediction markets, if, "under the hood", the traders in the market are in fact computing P all the time.)
-
To figure out whether these caveats matter, we can imagine some concrete scenarios.

-----

(Okay, this one is a little bit silly, but it's illustrative.)

-Imagine being a programmer needing to implement a sorting algorithm for some application: code that takes a list of numbers, and rearranges it to be ordered smallest to largest. You're thinking about using [quicksort](https://en.wikipedia.org/wiki/Quicksort), which involves recursively selecting a special "pivot" element and then partitioning the list into two sublists that are less than and greater than (or equal to) the pivot, respectively.
+Imagine being a programmer needing to implement a sorting algorithm: code that takes a list of numbers, and rearranges the list to be ordered smallest to largest.
You're thinking about using [quicksort](https://en.wikipedia.org/wiki/Quicksort), which involves recursively designating an arbitrary "pivot" element and then partitioning the list into two sublists that are less than and greater than (or equal to) the pivot, respectively.

-Your teammate objects to the idea of moving elements based on whether they're greater or less than the pivot, which isn't obviously related to the ultimate goal of the list being sorted. "Why are you writing code that asymmetrically treats different numbers differently in a way not based on predicted outcomes?" he asks.
+Your teammate Albert objects to the idea of moving elements based on whether they're greater or less than the arbitrary pivot, which isn't obviously related to the ultimate goal of the list being sorted. "Why are you writing code that asymmetrically treats different numbers differently in a way not based on predicted outcomes?" he asks.

"What would you suggest?" you ask, regretting the question almost as soon as you've finished saying it.

"Well, we have a library that interacts with the [Manifold Markets API](https://docs.manifold.markets/api) ..."
-````
```python
from math import log2

import prediction_markets

@@ -50,12 +52,12 @@ def prediction_market_sort(my_list):

    while is_sorted_market.probability < 0.95:
        next_comparison_markets = {
            (i, j): prediction_markets.create(
-                f"Will the list be sorted with no more than {op_budget} comparisons, if the"
+                f"Will this list be sorted with no more than {op_budget} comparisons, if the "
                f"next comparison is between indices {i} and {j}?",
                static_data=my_list,
            )
            for i in range(n)
-            for j in range(i, n)
+            for j in range(i+1, n)
            if i != j
        }

@@ -72,8 +74,8 @@ def prediction_market_sort(my_list):
        i, j = next_comparison
        should_swap_market = prediction_markets.create(
-            f"Will the list be sorted with no more than {op_budget} comparisons, if we swap"
-            f"the elements at indices {i} and {j}?",
+            f"Will this list be sorted with no more than {op_budget} comparisons if "
+            f"the next operation is to swap the elements at indices {i} and {j}?",
            static_data=my_list,
        )
        if should_swap_market.probability > 0.5:
@@ -86,6 +88,7 @@ def prediction_market_sort(my_list):
            should_swap_market.cancel()
        op_count += 1
+        op_budget -= 1

    is_sorted_market.resolve(True)

@@ -97,5 +100,53 @@ def prediction_market_sort(my_list):
"No," you say.

-"What do you mean, No?"
+"What do you mean, No? Is there a bug in the code? If not, then it should work, right?"
+
+"In _principle_, I suppose, but ..." You're at a loss for words.
+
+"Then what's the problem?" says Albert. "Surely you don't think you're smarter than a prediction market?" He scoffs at the notion.
+
+You open a prediction market asking about the company's profits next quarter conditional on Albert being fired.
+
+-----
+
+Or suppose you've been looking forward to going out to dinner with your friend Barbara at your favorite restaurant, Vinnie's. Unfortunately, Vinnie's is closed. You pull out your phone to look for alternatives. "OK Google, Italian restaurants near me."
+
+Barbara objects. "Stop! What are you doing?"
+
+"Well," you explain, "if we can't go to the exact restaurant we were expecting to, we can probably find something similar."
+
+"Then start a Manifold market asking which restaurants we'll enjoy! If there's more information to be gained by classifying cuisine by national origin than by just looking at the menu, it'll show up in the culinary prediction markets. And after you've added in prediction markets to actually get all of the information that anybody has, what remains to be gained by choosing where to eat in a way that asymmetrically treats different restaurants in a way based not on predicted outcomes?"
+
+I'm writing this scenario as an illustrative example in a blog post, but I want you to imagine how you would react if someone _actually said that to you in real life_. That would be pretty weird, right?
+
+It's not that there's anything particularly wrong with the idea of using prediction markets to get restaurant suggestions. I can easily believe that you might get some good suggestions that way, even in a sparsely-traded play-money market on Earth. (In a similar vein, Yudkowsky has [a "What books will I enjoy reading?" market](https://manifold.markets/EliezerYudkowsky/what-book-will-i-enjoy-reading).)
+
+The weird part is the suggestion that the form of reasoning you would use to make a decision in the absence of a prediction market can be dismissed as "a way based not on predicted outcomes" and regarded as obviated by the existence of the market.
+
+I don't think anyone actually believes this, as contrasted to [believing they believe](https://www.lesswrong.com/posts/CqyJzDZWvGhhFJ7dY/belief-in-belief) it in order to ease the cognitive dissonance of trying to simultaneously adhere to the religious commitments of both _Less Wrong_ "rationalism" and American progressivism.
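+To make the contrast concrete: the non-market alternative in the sorting scenario is just the textbook algorithm. Here is a minimal quicksort sketch (illustrative code of my own, not part of Albert's `prediction_markets` library), recursively designating a pivot and partitioning, which is exactly the "asymmetric treatment" Albert objected to:

```python
def quicksort(xs):
    """Textbook quicksort: an illustrative sketch, not production code."""
    # Base case: a list of zero or one elements is already sorted.
    if len(xs) <= 1:
        return list(xs)
    # Designate an arbitrary pivot (here, the first element) ...
    pivot, rest = xs[0], xs[1:]
    # ... then partition into sublists less than, and greater than or
    # equal to, the pivot, and recurse on each.
    less = [x for x in rest if x < pivot]
    greater_eq = [x for x in rest if x >= pivot]
    return quicksort(less) + [pivot] + quicksort(greater_eq)
```

+No markets, no API calls, no liquidity requirements: the comparisons that the traders would be betting about are simply performed.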
+
+The `prediction_market_sort` code doesn't obviate standard sorting algorithms like quicksort, because if you run `prediction_market_sort`, the first thing the traders in the market are going to do is run a standard sorting algorithm like quicksort to decide which comparisons to bet on.
+
+The restaurant market doesn't obviate the concept of Italian food, because if you post a market for "Where should we go for dinner given that Vinnie's is closed?", the first thing traders are going to do is search for "Italian restaurants near [market author's location]"—not because they're fools who think that "Italian food" is somehow eternal and ontologically fundamental, but because there contingently do happen to be [approximate conditional independence relationships](https://www.readthesequences.com/Conditional-Independence-And-Naive-Bayes) between the properties of meals served by different restaurants.
+
+[TODO: it's not "a way based not on predicted outcomes", because a compression of the statistical properties is relevant to outcomes]
+
+To be sure, there are all sorts of nuances and caveats that one could go into here about exactly when and why categorization works as a cognitive algorithm—whether categories are [used for coordination and not just predictions](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), how categories should [change when the distribution of data in the world changes](https://www.lesswrong.com/posts/WikzbCsFjpLTRQmXn/declustering-reclustering-and-filling-in-thingspace), whether categories distort the territory to fit the map via self-fulfilling prophecies (although this is _also_ a potential problem for prediction markets [and other cognitive systems](https://www.lesswrong.com/posts/SwcyMEgLyd4C3Dern/the-parable-of-predict-o-matic)) ...
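+To spell out what those approximate conditional independence relationships buy you, consider a toy naive Bayes model (all numbers hypothetical, invented for illustration): if menu features are roughly independent given the cuisine category, then a single observed feature pins down the category, which in turn predicts the features you haven't observed yet.

```python
# Toy model of why "Italian restaurant" is a useful category: menu
# features are treated as conditionally independent given the category
# (naive Bayes). All probabilities are hypothetical, for illustration.
P_CATEGORY = {"italian": 0.5, "mexican": 0.5}
P_FEATURE = {  # P(feature present | category)
    "italian": {"pasta": 0.95, "tacos": 0.02, "red_sauce": 0.80},
    "mexican": {"pasta": 0.05, "tacos": 0.90, "red_sauce": 0.40},
}

def posterior(observed):
    """P(category | observed feature values), by Bayes' rule."""
    scores = {}
    for category, prior in P_CATEGORY.items():
        score = prior
        for feature, present in observed.items():
            p = P_FEATURE[category][feature]
            score *= p if present else 1 - p
        scores[category] = score
    total = sum(scores.values())
    return {category: score / total for category, score in scores.items()}

def predict_feature(feature, observed):
    """P(unobserved feature | observations), summing over categories."""
    post = posterior(observed)
    return sum(post[c] * P_FEATURE[c][feature] for c in post)
```

+In this toy model, seeing pasta on the menu makes "Italian" overwhelmingly probable, which raises the predicted probability of red sauce from 0.6 to 0.78: the category is a compression of the statistical regularities, which is exactly what traders in a restaurant market would be exploiting.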
+
+But having spent [several](http://unremediatedgender.space/2023/Jul/a-hill-of-validity-in-defense-of-meaning/) [years](http://unremediatedgender.space/2023/Dec/if-clarity-seems-like-death-to-them/) of [my life](http://unremediatedgender.space/2024/Mar/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles/) writing about the [nuances and caveats](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [in excruciating detail](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), I've come to the sad conclusion that no one really cares about the nuances and caveats.
+
+[TODO: people bring up the nuances adversarially to avoid inferences that they don't like]
+
+[TODO: "creating a law that asymmetrically treats different sapient beings in a way based not on predicted outcomes" is not a good faith question]
+
+[TODO: you might say that the market is superior because it can account for exceptions—which can indeed exist—
some restaurants or books or soldiers defy classification
(self-fulfilling prophecies are also a concern with prediction markets, not just categories)
but the original context was Keltham proposing that the law _cannot_ take categories into account, which is different from having an exception-handling procedure. Real-world conscription systems don't just take everyone with a penis, they also have a draft board which issues nuanced classifications on an individual basis; replacing the draft board with a prediction market doesn't seem like it would satisfy the thing Keltham was trying to do
https://en.wikipedia.org/wiki/Selective_Service_System#Classifications
]
+
+[TODO: The original context was
If there's more information to be gained from measuring biological sex than by just measuring height, it'll show up in the military prediction markets]