
Inadequate Equilibria
Saved by khushi Mittal
There’s a toolbox of reusable concepts for analyzing systems I would call “inadequate”—the causes of civilizational failure, some of which correspond to local opportunities to do better yourself. I shall, somewhat arbitrarily, sort these concepts into three larger categories: Decisionmakers who are not beneficiaries; Asymmetric information; and, above all, Nash equilibria that aren’t even the best Nash equilibrium.
Most of the time systems end up dumber than the people in them due to multiple layers of terrible incentives.
The frustrating parts of civilization are the times when you’re stuck in a Nash equilibrium that’s Pareto-inferior to other Nash equilibria. I mean, it’s not surprising that humans have trouble getting to non-Nash optima like “both sides cooperate in the Prisoner’s Dilemma without any other means of enforcement or verification.” What makes an equilibrium…
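The distinction can be made concrete with a toy game (my illustration, not the book’s): the Stag Hunt, unlike the Prisoner’s Dilemma, has two pure-strategy Nash equilibria, and one is Pareto-inferior to the other — exactly the “stuck in a worse equilibrium” situation described above. A minimal Python sketch, with assumed payoff numbers:

```python
from itertools import product

# Stag Hunt payoffs (illustrative numbers, chosen here for the example):
# (row player's payoff, column player's payoff) for each pure-strategy pair.
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}
moves = ["stag", "hare"]

def is_nash(a, b):
    """A pure-strategy pair is a Nash equilibrium when neither player
    gains by unilaterally deviating to another move."""
    ua, ub = payoffs[(a, b)]
    return (all(payoffs[(a2, b)][0] <= ua for a2 in moves)
            and all(payoffs[(a, b2)][1] <= ub for b2 in moves))

nash = [p for p in product(moves, moves) if is_nash(*p)]
print(nash)  # → [('stag', 'stag'), ('hare', 'hare')]
```

Both (stag, stag) and (hare, hare) are equilibria — no one gains by deviating alone — yet (hare, hare) gives both players strictly less. In the Prisoner’s Dilemma, by contrast, mutual cooperation is not an equilibrium at all, which is why the book treats getting stuck in a Pareto-inferior *equilibrium* as the more telling failure.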
This brings me to the single most obvious notion that correct contrarians grasp, and that people who have vastly overestimated their own competence don’t realize: It takes far less work to identify the correct expert in a pre-existing dispute between experts, than to make an original contribution to any field that is remotely healthy.
ELIEZER: The concept of a “minimum viable product” isn’t the minimum product that compiles. It’s the least product that is the best tool in the world for some particular task or workflow. If you don’t have an MVP in that sense, of course the users won’t switch. So you don’t have a testable hypothesis. So you’re not really learning anything when the…
Stereotypically, the startup world is supposed to consist of heroes producing an excess return by pursuing ideas that nobody else believes in. In reality, the multi-stage nature of venture capital makes it very easy for the field to end up pinned to traditions about whether entrepreneurs ought to have red hair—not because everyone believes it, but…
This fits into a very common pattern of advice I’ve found myself giving, along the lines of, “Don’t assume you can’t do something when it’s very cheap to try testing your ability to do it,” or, “Don’t assume other people will evaluate you lowly when it’s cheap to test that belief.”
So a realistic lifetime of trying to adapt yourself to a broken civilization looks like: 0-2 lifetime instances of answering “Yes” to “Can I substantially improve on my civilization’s current knowledge if I put years into the attempt?” A few people, but not many, will answer “Yes” to enough instances of this question to count on the fingers of both hands.
One data point is a hell of a lot better than zero data points. Worrying about how one data point is “just an anecdote” can make sense if you’ve already collected thirty data points. On the other hand, when you previously just had a lot of prior reasoning, or you were previously trying to generalize from other people’s not-quite-similar experiences…