First off, if there’s a better place to ask this, I’d appreciate a nudge in that direction.

I’ve seen a lot of chatter on YouTube about Newcomb’s paradox lately (MinutePhysics, Veritasium, Wikipedia), and I’ve been dwelling on it more than I probably should.

To explain the problem briefly for the uninitiated: there is a superintelligent being that knows you to the core and can predict your actions/decisions with very high (99.99+%) accuracy. It has two boxes, and you have the option to take either just the first box or both boxes. In the first it puts either $1 million, if it predicts you’ll take just that box, or $0, if it predicts you’ll take both. In the second it always puts $1,000.
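The payoff rules above can be sketched as a small function (a sketch only; the names `payoff`, `take_both`, and `predicted_both` are mine, not from any formal statement of the problem):

```python
def payoff(take_both: bool, predicted_both: bool) -> int:
    """Total winnings given your choice and the being's prediction."""
    opaque = 0 if predicted_both else 1_000_000  # filled only if it predicts one-boxing
    fixed = 1_000                                # always present in the other box
    return (opaque + fixed) if take_both else opaque

print(payoff(False, False))  # predicted one-box, took one box:  1000000
print(payoff(True, True))    # predicted two-box, took both:     1000
```

With a near-perfect predictor you essentially only ever see the two "predicted correctly" rows, which is where the apparent contradiction comes from.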

The apparent contradiction is explained in the videos.

So the solution I’ve come to is that you should remove your own ability to decide from your “decision” of whether to take the second box.

That is, you walk into the room, flip a coin (or use some similar random chooser), and on heads take both boxes; on tails take just the first.

I think I’m failing to imagine all the consequences of this, but I can’t decide what it would imply about the superintelligence’s choice of whether to put the $1 million in the box.

Any thoughts on this?

  • ji59@hilariouschaos.com · 10 days ago

    As another comment said, coin flips aren’t truly random, and it is really hard to find something that is. Computer random-number generators, for example, are seeded from sources like the current time and temperature readings, which are chaotic enough to give us seemingly random numbers.

    If you did have a truly random generator, I think it would be a really bad idea to use it. Let’s assume you actually listen to it (otherwise the AI could predict that you wouldn’t). Since the AI is now correct only 50% of the time, you would on average get 0.5 × 1,000,000 + 0.5 × 0 = 500,000 when the coin says one box, and 0.5 × 1,001,000 + 0.5 × 1,000 = 501,000 when it says both. Each case occurs with 50% probability, giving 500,500 on average. For one-boxers (and their main argument, the average case), that is just over half of their expected outcome. And for two-boxers, choosing both is always better than choosing at random, which in turn is better than choosing one.

    This random-generator case is similar to the one MinutePhysics made about someone else choosing for you.

    So, overall, it is an interesting strategy, but I think it is worse than choosing deterministically.

    PS: I think this strategy would make the situation worse for everyone, because it undermines trust in the AI making correct predictions. Its measured 99.99% accuracy would fall toward 50%, depending on how many people did this. Trusting an AI with a 99.99% prediction rate is reasonable for one-boxers, but the lower the measured accuracy falls, the more people would switch to two boxes, worsening their outcomes, even though the AI could be almost flawless for people choosing on their own. So imagine going in as the first person, doing this, choosing two boxes on the coin flip, and winning big. The next person would probably choose two boxes as well, and lose.
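    The erosion effect can be put in toy-model form (my own assumption: a fraction f of players randomize and are predicted at chance, while the rest are predicted at the advertised rate):

    ```python
    # Measured accuracy when a fraction f of players use a truly random coin
    # (predicted at 50%) and the rest choose deterministically (predicted at 99.99%).
    def measured_accuracy(f: float) -> float:
        return (1 - f) * 0.9999 + f * 0.5

    print(measured_accuracy(0.0))  # nobody randomizes:   0.9999
    print(measured_accuracy(1.0))  # everybody randomizes: 0.5
    ```

    The AI itself never gets worse at reading deterministic choosers; only the headline number that future players base their trust on does.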

    • DahGangalang@infosec.pub (OP) · 9 days ago

      That’s an interesting take. I think I’ll need to chew over the full implications of my choices affecting everyone else’s choices.