

Right because that totally worked the last time, and the time before that
Don’t DM me without permission please


Oh, well I mean, arson isn’t normally cute is it?


I’m not sure I understand the question


I used to watch Lunduke’s Linux Sucks speech every year a new one came out. What happened to that guy? Was he always the worst and I just didn’t notice until 2020?


Well that’s embarrassing


This is encouraging!


I suppose you make a point; I’m not sure how my school would feel about me open-sourcing my project code though 😅
Once I have more time for personal projects I plan to open source everything.


You know we don’t like corn syrup right?


From what I understand, Yugioh suffers from an extreme lack of keywording, meaning every mechanic is spelled out in full on every card that uses it, because there are no keywords.
I’m not sure ‘That’s A Lot of Words’ could win a game of Magic, but it could certainly win a Yugioh comp.


Cleaning isn’t the goal; appearing clean and absorbing excess oil is the goal, and it achieves that easily.


That’s why they call it cornhole 😉


I prefer offpunk.


You don’t need to be rude.
My original comment was in reply to someone looking for this type of information; the conversation then continued from there.
Disengage: I don’t want to deal with this today, frankly. I don’t have time for rude people.


I like when it insists I’m using escape characters in my text when I absolutely am not, and I have to convince a machine that I didn’t type a certain string of characters, because on its end those are absolutely the characters it received.
The other day I argued with a bot for 10 minutes that I used a right caret and not the HTML escape sequence that results in a right caret. Then I realized I was arguing with a bot, went outside for a bit, and finished my project without the slot machine.
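For anyone who hasn’t hit this: the literal character and its HTML escape sequence really are two different strings until something decodes them. A minimal illustration using Python’s built-in html module (just a sketch, obviously not whatever the bot was running):
```python
import html

raw = ">"         # the literal character I actually typed
escaped = "&gt;"  # the HTML escape sequence the bot insisted it received

print(raw == escaped)          # False: two different strings
print(html.unescape(escaped))  # ">"   only after something explicitly decodes it
print(html.escape(raw))        # "&gt;" if something encodes it along the way
```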


Yes, precisely.
If you’re trying to use large models, you need more VRAM than consumer-grade Nvidia cards can supply. Without system RAM sharing, the models error out and start repeating themselves, or just crash and need to be restarted.
This can be worked around with CPU inferencing, but that’s much slower.
An 8B model will run fine on an RTX 30-series card; a 70B model absolutely will not. BUT you can do CPU inferencing with the 70B model if you don’t mind the wait.
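Back-of-the-napkin math, as a rough sketch (assuming the weights dominate, quantization sets the bytes per parameter, and a ballpark overhead factor; exact numbers vary by runtime):
```python
def estimate_vram_gb(params_billions, bits_per_weight=8, overhead=1.2):
    """Very rough estimate: weights only, padded ~20% for KV cache and activations."""
    weights_gb = params_billions * bits_per_weight / 8  # 1 GB per billion params at 8-bit
    return weights_gb * overhead

print(estimate_vram_gb(8, bits_per_weight=4))   # ~4.8 GB -> fits an 8 GB RTX 30-series card
print(estimate_vram_gb(8, bits_per_weight=8))   # ~9.6 GB -> wants a 12 GB+ card
print(estimate_vram_gb(70, bits_per_weight=4))  # ~42 GB  -> no single consumer card has that
```
That last number is why the 70B model ends up in system RAM and CPU inferencing territory on consumer hardware.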


Today I made vegetarian Salisbury steaks using Impossible patties, store-bought broth, and fresh veggies and herbs (and some stuff I had lying around). I spent less than $15 total on the ingredients (Costco, price per unit). It took 2 hours of cooking.
Assuming a wage of $25/hr, lower than adequate but relatively high for service fields in the US (the people who work enough that delivery is super tempting), my meal cost me $65 including my labor. That’s less than delivery of a similar meal would cost, it’s higher quality than anything I could get delivered, and I’ve got leftovers for tomorrow, which I wouldn’t get with delivery.
Delivery is a scam. Gig-economy-based delivery doubly so.


8B parameter models are relatively fast on 3rd-gen RTX hardware with at least 8 GB of VRAM; CPU inferencing is slower and requires boatloads of RAM, but it’s doable on older hardware. These really aren’t designed to run on consumer hardware, but the 8B model should do fine on relatively powerful consumer hardware.
If you have something that would’ve been a high-end gaming rig 4 years ago, you’re good.
If you wanna be more specific, check Hugging Face, they have charts. If you’re using Linux with Nvidia hardware you’ll be better off doing CPU inferencing.
Edit: Omg y’all, I didn’t think I needed to include my sources, but this is quite literally a huge issue on Nvidia. Nvidia works fine on Linux, but you’re limited to whatever VRAM is on your video card, no system RAM sharing. Y’all can disagree all you want but those are the facts. That’s why AMD and CPU inferencing are more reliable and allow for higher context limits. They are not faster though.
Sources for nvidia stuff https://github.com/NVIDIA/open-gpu-kernel-modules/discussions/618
https://forums.developer.nvidia.com/t/shared-vram-on-linux-super-huge-problem/336867/
https://github.com/NVIDIA/open-gpu-kernel-modules/issues/758
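If anyone wants to try the CPU route, here’s a minimal sketch with the llama-cpp-python package; the model filename, thread count, and context size are placeholders you’d adjust for your own hardware:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# n_gpu_layers=0 keeps every layer on the CPU, so the card's VRAM limit is irrelevant;
# you just need enough system RAM to hold the quantized GGUF model.
llm = Llama(
    model_path="./llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder model file
    n_gpu_layers=0,  # pure CPU inferencing
    n_ctx=4096,      # context window; larger needs more RAM
    n_threads=8,     # roughly your physical core count
)

out = llm("Explain the difference between VRAM and system RAM in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```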


I’ve been following the sm64-psx project.
Yesterday I even got the game to compile, AND show the SM64 splash screen on real PS1 hardware.