-So, I've spent basically my entire adult life in this insular little intellectual subculture that was founded in the late 'aughts on an ideal of _systematically correct reasoning_. Sure, anyone will _say_ that their beliefs are true, but you can tell most people aren't being very serious about it. _We_ were going to be serious: starting with the shared canon of knowledge of cognitive biases, reflectivity, and Bayesian probability theory bequeathed to us by our founder, _we_ were going to make serious [collective](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) [intellectual progress](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible) in a way that had [never been done before](https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/).
-
-[TODO: find a better way to summarize the core minds-as-engines that construct maps that reflect the territory]
+So, I've spent basically my entire adult life in this insular little intellectual subculture that was founded in the late 'aughts on an ideal of _systematically correct reasoning_. Starting with the shared canon of knowledge of cognitive biases, reflectivity, and Bayesian probability theory bequeathed to us by our founder, _we_ were going to make serious [collective](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) [intellectual progress](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible) in a way that had [never been done before](https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/)—and not just out of duty or passion for some philosophical ideal of Truth, but as a result of _understanding how intelligence works_, the _reduction_ of "thought" to _cognitive algorithms_. Intelligent systems that construct predictive models of the world around them—that have "true" "beliefs"—can _use_ those models to compute which actions will best achieve their goals.