Enjoyed this analysis, Sam! I also think it’s interesting to dive into the validity of the problem itself. Why would an AGI, capable of universal understanding, want to hurt humans? And if it did, it would have reasons we could have a conversation about, right?