Okay, your output is different given the same input… So what? It's a well-known fact that these LLMs are nondeterministic. There's a guy on YouTube who asks ChatGPT every day to count to 200 until it doesn't fuck up. Your output doesn't prove or disprove the authenticity of the original post.
Tbh, them being nondeterministic is a big part of why they're so unreliable. Maybe it'll work fine for 9 out of 10 people, but then there will be that one person whose home directory gets wiped for whatever reason. Or maybe it'll do math right for those nine people, but for that one person it'll say 1 + 1 = 11.