• 76 Posts
  • 740 Comments
Joined 3 years ago
Cake day: June 24th, 2023

  • The big deal is that it’s on the heels of age verification bullshit that fascists are pushing through with the help of tech bros, so that they can eventually push all of us into a scenario where we have zero privacy.

    That’s a bit difficult to argue in a world where the most prominent of such laws was passed in California, where Democrats control the entire legislative process.

    I have not looked up the voting record for it, but I would suspect that, like most of the worst laws in the US, it was enthusiastically supported by both parties. Am I wrong about that?

  • So when the entire style of government and bureaucracy is dissolved you think the country continues?

    Yes, normally, as long as there is legal continuity, countries retain their identity through changes in their system of government.

    For example, today’s Germany is generally considered to be the same country as the North German Confederation founded in the 19th century.

    Likewise the end of communism in Eastern Europe didn’t cause Bulgaria, Poland, or Hungary to cease existing, just change their form of government.

  • whoever employs LLM

    incumbent upon the handler to assume liability

    I agree. If you make any kind of real-world decision based on the output of AI, you should be liable for it as if you’d made that decision yourself.

    But I remember reading news stories about cases where people (often minors) chatted with chatbots and steered those chatbots into states where they encouraged the users to harm themselves (in some cases, reportedly, even to commit suicide). As tragic as that is, I don’t see how it’s morally right to hold the AI companies responsible unless it can be shown they did this on purpose. All the AI did in such cases was what it was advertised and understood to do: generate plausible-sounding text based on user input. Those are the cases I’m talking about.


  • I don’t, not in general.

    There are good and bad uses of AI. For example, I used AI to generate my profile picture here on Lemmy (would you have noticed?). In general, the creation of art is one of the best uses of AI I can think of: it doesn’t have serious consequences if it goes wrong, and a human can easily review whether the result looks as it should.

    But using AI to make actually meaningful business decisions without any human review at all? Using AI for customer service? Any company that does that deserves VERY negative consequences.

    I don’t agree with talking points like “AI companies should be required to pay copyright holders of their training data” or “AI is bad because of the environmental impact” or “AI is bad because of RAM prices” or “AI companies should be legally responsible for any mistakes the AI makes (such as libel or encouraging users’ suicide)” or such things; I think all of these are nonsense.

    I believe in general that AI gets too much attention in the media. It’s really not that impactful.