You have to believe that a malevolent AI will give enough of a damn about you to bother simulating anything at all, let alone infinite torture, which is useless for it to do once it already exists. Everyone on LessWrong has a well-fed ego, so I get why they were in a tizzy for a while.