• 0 Posts
  • 10 Comments
Joined 2 years ago
Cake day: July 11th, 2023

  • This may be controversial, but trying to collect data from your free users to offset the cost of the infrastructure and resources needed to support them is not a bad thing, especially when you give those users an option to opt out.

    You make it sound like their goal is to do bad things. That’s not true. Corporations are not good or evil; they are amoral. They don’t care whether what they are doing is good or bad. All that matters is whether it makes money.

    they’re free to just do the right thing completely

    What exactly would that entail?




  • Language parsing is a routine process that doesn’t require AI; it’s something we have been doing for decades, so that phrase in no way plays into the AI hype. Also, while the weights may be random initially (though not uniformly random), the way they are connected and relate to each other is not random, and after training the weights are no longer random at all, so I don’t see the point in bringing that up.

    Finally, machine learning models are not brute-force calculators. If they were, they would take billions of years to respond to even the simplest prompt, because they would have to evaluate every possible response (even the nonsensical ones) before returning the best one. They’re better described as greedy algorithms than brute-force ones.
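    The greedy-versus-brute-force distinction can be sketched with a toy next-token model. The vocabulary and scoring rule below are invented purely for illustration; the point is only the cost difference: greedy decoding evaluates one score vector per step, while brute force must enumerate every possible sequence.

```python
from itertools import product

# Hypothetical toy vocabulary and scorer, invented for illustration only.
VOCAB = ["the", "cat", "sat", "mat", "."]

def scores(context):
    # Deterministic toy scores: strongly prefer the token that cyclically
    # follows the last one in VOCAB; every other token gets a low score.
    last = VOCAB.index(context[-1]) if context else -1
    return [1.0 if i == (last + 1) % len(VOCAB) else 0.1
            for i in range(len(VOCAB))]

def greedy_decode(steps):
    # Greedy: at each step, pick the single highest-scoring next token.
    # Cost: steps * len(VOCAB) score evaluations (here 4 * 5 = 20).
    out = []
    for _ in range(steps):
        s = scores(out)
        out.append(VOCAB[s.index(max(s))])
    return out

def brute_force_decode(steps):
    # Brute force: score all len(VOCAB)**steps sequences (here 5**4 = 625)
    # and keep the one with the highest total score.
    best, best_score = None, float("-inf")
    for seq in product(VOCAB, repeat=steps):
        total = sum(scores(list(seq[:t]))[VOCAB.index(seq[t])]
                    for t in range(steps))
        if total > best_score:
            best, best_score = list(seq), total
    return best

print(greedy_decode(4))       # 20 score evaluations
print(brute_force_decode(4))  # 625 candidate sequences examined
```

    On this toy scorer both methods return the same sequence, but the brute-force cost grows as vocabulary size to the power of sequence length, which is the "billions of years" problem; greedy decoding grows only linearly in the number of steps.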

    I’m not going to get into an argument about whether these AIs understand anything, largely because I don’t have a strong opinion on the matter, but also because that would require a definition of understanding, which is an unsolved problem in philosophy. You can wax poetic about how humans are the only ones with true understanding and how LLMs are encoded in binary (which is somehow related to your point in some unspecified way); however, your comment reveals how little you know about LLMs, machine learning, computer science, and the relevant philosophy in general. Your understanding of these AIs is just as shallow as that of people who claim LLMs are intelligent agents with free will, complete with conscious experience; you just happen to land closer to the mark.