  • “They” is the copyright industry. The same people who are suing AI companies for money want the Internet Archive gone for even more money.

    I share the fear that the copyrightists will reach a happy compromise with the bigger AI companies and monopolize knowledge. But for now, AI companies are fighting for Fair Use. The Internet Archive is already benefiting from those precedents.







  • It’s a bit of a split among libertarians. Some very notable figures like Ayn Rand were strong believers in IP. In fact, Ayn Rand’s dogmas very much align with what is falsely represented as left-wing thought in the context of AI.

    It’s really irritating to me how much conservative capitalist ideals are passed off as left-wing. Like, attitudes toward corporations channel Adam Smith. I think of myself as pragmatic and find that Smith or even Hayek had some good points (not Rand, though). But it’s absolutely grating how uneducated it all is. Worst of all, it makes me realize that, for all the anti-capitalist rhetoric, the favored policies are all about making everything worse.




  • For fastest inference, you want to fit the entire model in VRAM. Plus, you need a few GB extra for context.

    Context means the text (+images, etc) it works on. That’s the chat log, in the case of a chatbot, plus any texts you might want summarized/translated/ask questions about.

    Models can be quantized, which is a kind of lossy compression. They get smaller but also dumber. As with JPGs, the quality loss is insignificant at first and absolutely worth it.

    Inference can be split between GPU and CPU, substituting normal RAM for VRAM. That makes it slower, but it will probably still feel smooth.

    Basically, it’s all trade-offs between quality, context size, and speed.
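    The trade-off between quantization and memory can be sketched with some back-of-the-envelope math. This is an illustrative estimate only (the 8B parameter count and the flat few-GB context budget are assumptions, not measurements of any particular model):

    ```python
    def model_size_gb(n_params_billions: float, bits_per_weight: float) -> float:
        """Approximate in-memory size of the weights alone."""
        return n_params_billions * 1e9 * bits_per_weight / 8 / 1e9

    # A hypothetical 8B-parameter model at different quantization levels.
    # On top of the weights, budget a few extra GB for the context (KV cache),
    # which grows with how much text the model is working on.
    for bits in (16, 8, 4):
        print(f"{bits:>2}-bit: ~{model_size_gb(8, bits):.1f} GB of weights")
    # → 16-bit: ~16.0 GB, 8-bit: ~8.0 GB, 4-bit: ~4.0 GB
    ```

    So a 4-bit quant of an 8B model plus context fits comfortably in 8 GB of VRAM, while the unquantized version would already need to spill into normal RAM.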


  • It’s also funny how Lemmy is buying into this narrative.

    The entire US economy is currently being propped up by growth in the AI/tech sector.

    What’s happening is that Dementia Don is curb-stomping the US economy. AI investments, mainly in data centers, are the only thing that still seems promising. When you are on a trek and someone leads you through Death Valley, while pouring out all the water, you shouldn’t blame the last horse that still keeps going.

    Putting the blame in the right place would certainly help, with a view toward the mid-terms.

    Financially: diversify. Make sure that you are not completely dependent on what happens in the US. But mind that Europe comes with its own imponderable risks (i.e. Putin). Same with China: maybe some old leader dies and the new crew runs everything into the ground, or they go to war with Taiwan, that sort of thing.


  • Ethical meaning: “private”, “anonymous”, “not training with your data”, “not censored”, “open source”…

    Yes. You have to be careful with the meaning of “ethical”. Most often, people write about “ethical AI” to demand money for copyright owners.

    Case in point: Some people say that AI is only open source if the training data can also be shared freely. That means the training data has to be in the public domain, or that permission from the copyright owners was obtained. If that’s what you mean by “open source”, then your options are extremely limited, e.g. some offerings from AllenAI.

    Uncensored is also tricky. Many say that ethical AI does not output bad content. Of course, what counts as bad content depends very much on who you ask. The EU and China both have strict legal requirements, though not the same ones, of course. In any case, when you train an AI, you steer it to generate a certain kind of output. Respectable businesses don’t want NSFW stuff. Some horny individuals out there want exactly that. So it depends on what you want.

    Check out the SillyTavernAI subreddit (and also LocalLlama). There you find people who value private, uncensored LLMs, though not necessarily copyright. It’s also where the above-mentioned horny individuals hang out for related reasons.

    DuckDuckGo offers free, anonymous access to major chatbots. Maybe worth checking out.


  • Only if the medication doesn’t work. The evidence is that placebos don’t work. Mostly, the placebo effect is a statistical illusion.

    It is plausible that the body will expend more energy to combat a disease if you are (sub-)consciously convinced that you are cared for and don’t need to stress. Stress hormones down-regulate the immune response. Cortisol, used for treatment of autoimmune disorders like asthma and allergies, is a stress hormone.

    But a sham treatment could also have the opposite effect. If your subconscious understands that as a signal that you must get back into action, you may end up releasing stress hormones. These psychological effects are just too idiosyncratic and fickle to be used reliably.

    Stuff like broken bones or cancer doesn’t respond to psychology at all. The body is already doing all it can.

    ETA: https://pmc.ncbi.nlm.nih.gov/articles/PMC7156905/