>
> And after you've added in prediction markets to actually get all of the information that anybody has, what remains to be gained by creating a law that asymmetrically treats different sapient beings in a way based not on predicted outcomes?
Good question.[^good-question] I'd like to generalize it: in the absence of a reason why "creating a law" and "sapient beings" would change the answer, we can ask: when making a decision about some entities X<sub>i</sub>, after you've added in prediction markets to get all of the information that anybody has, what remains to be gained by using a decision procedure P that asymmetrically treats the X<sub>i</sub> in a way based not on predicted outcomes?

[^good-question]: Not really. I'm being polite.
The answer is: nothing—with two caveats, both having to do with the fact that the power of prediction markets lies precisely in being agnostic about how traders make decisions: we assume that whatever the winning decision is, greedy traders have an incentive to figure it out.
Nothing is gained—_if_ you already happen to have sufficiently liquid prediction markets covering all observables relevant to the decisions you need to make. This is logistically nontrivial, and almost certainly much more computationally intensive. (If there are a hundred traders in your market, each of them using their own decision procedure which is on average as expensive as P, then delegating the decision to the market costs Society a hundred times as much as just using P once yourself.)
Nothing is gained—_but_ this can't be an argument against P being a good decision procedure, as the reason we can assume that the market will never underperform P is _because_ traders are free to use P themselves if it happens to be a good procedure. (It would be lying to claim that Society doesn't need to compute P because it has prediction markets, if, "under the hood", the traders in the market are in fact computing P all the time.)
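
To make the caveats concrete, here is a minimal toy sketch in Python (the procedure `P`, the trader count, and the vote-counting "market" below are made-up stand-ins for illustration, not a model of how any real market mechanism works): if the best thing each greedy trader can do is run P themselves, the market's verdict just is P's verdict, paid for once per trader.

```python
# Toy sketch: a "market" whose traders each independently run the same
# decision procedure P and bet accordingly. (Hypothetical stand-ins; not
# a model of Manifold or any real market mechanism.)

def P(entity):
    """Stand-in decision procedure: score an entity from its features."""
    return sum(entity)

def decide_yourself(entities):
    scores = [P(e) for e in entities]
    best = max(range(len(entities)), key=lambda i: scores[i])
    return best, len(entities)  # (choice, number of P-evaluations paid for)

def decide_by_market(entities, num_traders=100):
    evaluations = 0
    votes = []
    for _ in range(num_traders):
        # Under the hood, each trader just computes P for themselves.
        choice, cost = decide_yourself(entities)
        evaluations += cost
        votes.append(choice)
    consensus = max(set(votes), key=votes.count)
    return consensus, evaluations

entities = [(3, 1), (2, 2), (5, 0)]
print(decide_yourself(entities))   # (2, 3): same choice, 3 evaluations
print(decide_by_market(entities))  # (2, 300): same choice, 100x the evaluations
```

The market can't underperform P, but it also doesn't come for free.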
To figure out whether these caveats matter, we can imagine some concrete scenarios.
"Well," you explain, "If we can't go to the exact restaurant we were expecting to, we can probably find something similar."
-"Then start a Manifold market asking which restaurants we'll enjoy! If there's more information to be gained by classifying cuisine by national origin than by just looking at the menu, it'll show up in the culinary prediction markets. And after you've added in prediction markets to actually get all of the information that anybody has, what remains to be gained by choosing where to eat in a way that asymmetrically treats different restaurants in a way based not on predicted outcomes?"
+"Then start a Manifold market asking which restaurants we'll enjoy! If there's more information to be gained by classifying cuisine by national origin than by just looking at the specific dishes on the menu, it'll show up in the culinary prediction markets. And after you've added in prediction markets to actually get all of the information that anybody has, what remains to be gained by choosing where to eat in a way that asymmetrically treats different restaurants in a way based not on predicted outcomes?"

-----

I wrote the restaurant-choice scenario as an illustrative example for this blog post, but I want you to imagine how you would react if someone _actually behaved that way in real life_: stopped you when you searched for Italian restaurants and insisted you start a prediction market instead. That would be pretty weird, right?

It's not that there's anything particularly wrong with the idea of using prediction markets to get restaurant suggestions. I can easily believe that you might get some good suggestions that way, even in a sparsely-traded play-money market on Earth. (In a similar vein, Yudkowsky has [a "What books will I enjoy reading?" market](https://manifold.markets/EliezerYudkowsky/what-book-will-i-enjoy-reading).)

The weird part is the implication that the form of reasoning you would use to make a decision in the absence of a prediction market can be dismissed as "a way based not on predicted outcomes" and regarded as obviated by the existence of the market. I don't think anyone really believes this, as contrasted to [believing they believe](https://www.lesswrong.com/posts/CqyJzDZWvGhhFJ7dY/belief-in-belief) it in order to ease the cognitive dissonance of trying to simultaneously adhere to the religious commitments of both _Less Wrong_ "rationalism" and American progressivism.
The `prediction_market_sort` code doesn't obviate standard sorting algorithms like quicksort, because if you run `prediction_market_sort`, the first thing the traders in the market are going to do is run a standard sorting algorithm like quicksort to decide which comparisons to bet on.
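
As a sketch of what that looks like from a trader's seat (hypothetical trader-side code; `markets` and `bet` are made-up stand-ins for whatever interface the comparison markets would expose, not a real API), the profitable strategy is to sort the list locally with an ordinary algorithm and then bet on what the sort already computed:

```python
# Hypothetical trader-side sketch; `markets` and `bet` are stand-ins,
# not a real prediction-market API.

def trade_on_comparison_markets(items, markets, bet):
    # The cheapest way to know how every "does x sort before y?" question
    # resolves is to just run a standard O(n log n) sort yourself...
    rank = {item: k for k, item in enumerate(sorted(items))}
    # ...and then bet each pairwise market according to the computed order.
    for market in markets:
        x, y = market.pair
        bet(market, outcome=(rank[x] < rank[y]))
```

The market as a whole "knows" the sorted order only because each trader computed it the ordinary way.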
The restaurant-enjoyment market doesn't obviate the concept of Italian food, because if you post a market for "Where should we go for dinner given that Vinnie's is closed?", the first thing traders are going to do is search for "Italian restaurants near [market author's location]"—not because they're fools who think that "Italian food" is somehow ontologically fundamental and eternal, but because there contingently do happen to be [approximate conditional independence relationships](https://www.readthesequences.com/Conditional-Independence-And-Naive-Bayes) between the features of meals served by different restaurants. A decision made on the basis of a [statistical compression](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length) of meal features is based on predicted outcomes insofar as meal features predict outcomes.
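
Here's a minimal sketch of that point with made-up numbers (hypothetical features and probabilities, not real restaurant data): conditional on the category, the menu features that actually drive the outcome are sampled independently, and the one-word label ends up carrying most of the predictive information that the features themselves do.

```python
# Toy model of a category label as a statistical compression of features.
# (Made-up feature probabilities; not real restaurant data.)
import random

FEATURE_PROBS = {  # P(feature | category), features conditionally independent
    "italian":  {"pasta": 0.9, "garlic_bread": 0.8, "soy_sauce": 0.05},
    "japanese": {"pasta": 0.1, "garlic_bread": 0.05, "soy_sauce": 0.9},
}

def sample_menu(category):
    return {f: random.random() < p for f, p in FEATURE_PROBS[category].items()}

def enjoyment(menu, craving="pasta"):
    # The outcome depends only on the concrete features of the meal...
    return 1.0 if menu[craving] else 0.2

random.seed(0)
for category in FEATURE_PROBS:
    avg = sum(enjoyment(sample_menu(category)) for _ in range(10_000)) / 10_000
    print(category, round(avg, 2))
# ...but the category label alone predicts the outcome well anyway
# (roughly 0.92 for "italian" vs. 0.28 for "japanese"), precisely because
# the label compresses the features that cause the outcome.
```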
To be sure, there are all sorts of nuances and caveats that one could go into here about exactly when and why categorization works or fails as a cognitive algorithm—how categories are sometimes [used for coordination](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests) [and not](http://unremediatedgender.space/2019/Oct/self-identity-is-a-schelling-point/) [just predictions](http://unremediatedgender.space/2019/Dec/more-schelling/), how categories should [change when the distribution of data in the world changes](https://www.lesswrong.com/posts/WikzbCsFjpLTRQmXn/declustering-reclustering-and-filling-in-thingspace) (_e.g._, [fusion cuisines](https://en.wikipedia.org/wiki/Fusion_cuisine) becoming popular), whether categories might perversely distort the territory to fit the map via self-fulfilling prophecies[^self-fulfilling] (_e.g._, entrepreneurs only opening restaurants in established ethnic categories because that's what customers are used to, thereby stifling culinary innovation) ...

[^self-fulfilling]: Although this is _also_ a potential problem for prediction markets [and other cognitive systems](https://www.lesswrong.com/posts/SwcyMEgLyd4C3Dern/the-parable-of-predict-o-matic).
But, bluntly? The kind of person who asks what use there is in "creating a law that asymmetrically treats different sapient beings in a way based not on predicted outcomes" is not interested in the nuances and caveats. (I know because [I used to be this kind of person](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism).) It's not an honest question that expects an answer; it's a rhetorical question asked in the hope that the respondent doesn't have one.
"If there's more information to be gained from measuring biological sex than by just measuring height, it'll show up in the military prediction markets," Yudkowsky writes. I agree, of course, that that sentence [is literally true](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly), but the conditional mood implies such a bizarre prior. "If"? "Just measuring _height_"? Are we pretending to be uncertain about whether a troop of 5'6" males (15th percentile) or 5'6" females (80th percentile) would prevail in high medieval warfare? Does Yudkowsky want to bet on this?

-----
Perhaps at this point the advocate of prediction markets will complain that I'm the one performatively missing the point.

[TODO: The claim isn't that ethnic or sex categories are useless, but that prediction markets are better, because you can scoop up the exceptions. I've already agreed that if you ignore efficiency, prediction markets can do anything you can do by any other procedure—by construction! So my reply here is that efficiency actually does matter. (If efficiency didn't matter, there would be no grounds to object to the prediction market sort—in principle, it should work.) Anti-discrimination law prohibits not just stupid forms of discrimination, but also subtler systems that use categories but also have exception handling. Actually existing militaries don't literally go "anyone with a penis, that's our only criterion"! But they also don't bother drafting women, which was the point under dispute. The choice isn't just "category only" versus prediction market: you can also search for Italian restaurants with good reviews, and that's _doing most of the work of the market_]

[TODO: "just go by the presence of penises, don't use anything else" would be stupid, but it's also _not what real-world militaries do_; replacing the draft board with a prediction market doesn't seem like it would satisfy the principled antisexist thing Keltham was trying to do]
[TODO: you might say that the market is superior because it can account for exceptions—which can indeed exist—some restaurants or books or soldiers defy classification

but the original context was Keltham proposing that the law _cannot_ take categories into account, which is different from having an exception-handling procedure. Real-world conscription systems don't just take everyone with a penis; they also have a draft board which issues nuanced classifications on an individual basis:
https://en.wikipedia.org/wiki/Selective_Service_System#Classifications
]

-----

[TODO: the original context was about literary flaws in dath ilan: fictional aliens who "just happen" to share the author's prejudices break suspension of disbelief. (And there should be a careful way to word this principle that's agnostic about whether the author's pet beliefs are true.) Anti-discrimination makes sense as game-theoretic Earth-craziness, but that's not supposed to be dath ilan's M.O.! (It sounds like Keltham actually believes in anti-discrimination as a principle; it's not just a pragmatic bias canceler like NGDP targeting, which dath ilan isn't supposed to have; "cheap hacks to route around other people's irrationality" is not supposed to be what they're going for); Keltham should be able to say "women own property in dath ilan and it works fine _on the empirical merits_" and concede re: conscription; the fact that he can't marks the text as being from Berkeley, not another world]