-> If we can't even get a public consensus from our _de facto_ leadership on something _so basic_ as "concepts need to carve reality at the joints in order to make probabilistic predictions about reality", then, in my view, there's _no point in pretending to have a rationalist community_, and I need to leave and go find something else to do (perhaps whatever Michael's newest scheme turns out to be). I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?
+> If we can't even get a public consensus from our _de facto_ leadership on something _so basic_ as "concepts need to carve reality at the joints in order to make probabilistic predictions about reality", then, in my view, there's _no point in pretending to have a rationalist community_, and I need to leave and go find something else to do (perhaps whatever Michael's newest scheme turns out to be). I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?[^my-price-for-joining]
+
+[^my-price-for-joining]: The Sequences post referenced here, ["Your Price for Joining"](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining), argues that the sort of people who become "rationalists" are too prone to "take their ball and go home" rather than tolerating imperfections in a collective endeavor. To combat this, Yudkowsky proposes a norm:
+
+     > If the issue isn't worth your personally fixing by however much effort it takes, and it doesn't arise from outright bad faith, it's not worth refusing to contribute your efforts to a cause you deem worthwhile.
+
+     I claim that I was meeting this standard: I _was_ willing to personally fix the philosophy-of-categorization issue no matter how long it took, and the issue _did_ arise from outright bad faith.