Discussion about this post

Adam Shumays:

Thanks for the straightforward explanation and the connection to current AGI discourse. You suggest that one should "avoid self-coercion"; any suggestions on how one is to do that?

Nicolás Samprón:

Great. Very useful ideas here: "But utility theory assumes that an agent's utility function is fixed or dependent only on existing options." I had used the same idea in discussing clinical decisions.

2 more comments...
