• 11 Posts
  • 69 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • First, your post seems to be missing a link, so we have no context for what you’re asking (even if we can guess some of it from the post text).

    Second, you call the website sketchy/a honeypot without giving any technical reason to believe so.

    Third, this has nothing to do with computer security (it’s more of a privacy issue, if anything), and it doesn’t look like a news piece, so this is definitely the wrong community.



  • You’ve probably read about large language models being essentially uncontrollable black boxes, even to the very people who built them.

    When OpenAI wants to restrict what ChatGPT says, they can fine-tune the model to reduce the likelihood that it outputs forbidden words or sentences, but this offers no guarantee that the model will actually stop saying forbidden things.

    The only way to actually prevent such an agent from saying something is to check the output after it is generated, and withhold it from the user if it triggers a content filter.
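    The post-generation check described above can be sketched roughly like this (the `generate` stand-in and the blocklist are made up for illustration; real deployments use trained moderation classifiers rather than a substring blocklist):

    ```python
    # Sketch of post-generation output filtering: the check runs on the
    # model's *output*, because fine-tuning alone cannot guarantee the
    # model never emits forbidden content.
    BLOCKLIST = {"forbidden_word", "secret_token"}  # hypothetical terms

    def generate(prompt: str) -> str:
        # Stand-in for the actual language model call.
        return "a reply that happens to contain forbidden_word"

    def safe_reply(prompt: str) -> str:
        reply = generate(prompt)
        if any(term in reply.lower() for term in BLOCKLIST):
            # Never show the raw output to the user.
            return "[response withheld by content filter]"
        return reply
    ```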

    My point is that AI researchers have found a way to simulate a kind of artificial brain, from which some “intelligence” emerges in a way that those same researchers are far from deeply understanding.

    If we live in a simulation, my guess is that life was not manually designed by the simulation’s creators, but rather emerged from the simulation’s rules (what we Sims call physics), just as researchers studying the origins of life mostly hypothesize. If that’s the case, the creators are probably as clueless about the inner workings of our consciousness as we are about the inner workings of LLMs.