X-Git-Url: http://unremediatedgender.space/source?p=Ultimately_Untrue_Thought.git;a=blobdiff_plain;f=notes%2Fmemoir-sections.md;h=a46f77685f2c6a3632bba4633841c96228ddf4ce;hp=cfeeb40599d029a8068f2b2352e8dc0aab1153eb;hb=b0c5f366404ab4e1d2f99a2c4056e6f07f6f64ec;hpb=a913d9743d768cc21725146fce27075e47587260

diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md
index cfeeb40..a46f776 100644
--- a/notes/memoir-sections.md
+++ b/notes/memoir-sections.md
@@ -2815,3 +2815,6 @@ In particular, I think the conspiracy theory "Yudkowsky sometimes avoids nuanced
 
 https://twitter.com/ESYudkowsky/status/1708587781424046242
 > Zack, you missed this point presumably because you're losing your grasp of basic theory in favor of conspiracy theory.
+
+https://www.lesswrong.com/posts/qbcuk8WwFnTZcXTd6/thomas-kwa-s-miri-research-experience
+> The model was something like: Nate and Eliezer have a mindset that's good for both capabilities and alignment, and so if we talk to other alignment researchers about our work, the mindset will diffuse into the alignment community, and thence to OpenAI, where it would speed up capabilities.