4 Comments

Thanks for the straightforward explanation and connection to current AGI discourse. You suggest that one should "avoid self-coercion," any suggestions on how one is to do that?

Author:

Firstly, it may be that it is impossible to perfectly avoid self-coercion and that we can at most avoid it arbitrarily well.

The most difficult part of solving self-coercion tends to be discovering that self-coercion is happening in the first place, and being mindful enough to try to solve that problem.

Also, it should help that self-coercion isn't efficient: it's motivating to know that, if we manage to find a solution to self-coercion, we're bound to get rid of a lot of suffering and be a lot more productive as a result.


Great. Very useful ideas here: “But utility theory assumes that an agent's utility function is fixed or dependent only on existing options.” I had used the same idea in discussing clinical decisions.


“For I realized that so much memory and desire swirl about in the hearts of men on this planet that, just as we can look at Neptune and say it is covered with liquid nitrogen, or Venus and see a mantle of hydrochloric acid, so it seemed to me that were one to look at earth from afar one would say it is covered completely in Ignorance.” (Andrew Holleran in ‘Nights in Aruba’, 1983)
