Instead of destroying the universe, can we destroy prior, failed shuffle/check iterations to retain o(1)? Then we wouldn’t have to reload all of creation into RAM.
It’s hard to explain. A lot of it is about vibes and focus over the last several years.
Bernie’s being a bit harsh in saying Dems didn’t try. Republicans blocked their efforts. But there’s also a feeling that they didn’t care all that much. At the end of the day, they’re career politicians, padding their pockets with corporate donations while demanding starving citizens vote for them because the other guy would be somewhat less palatable. And I guess Trump’s honesty about being apathetic and money-grubbing is more appealing than Dems’ feigned innocence and solidarity.
Reminds me of the Reboot hotel offices.
For LLMs, I’ve had really good results running Llama 3 in the Open WebUI docker container on an Nvidia Titan X (12GB VRAM).
For image generation, though, I agree more VRAM is better, but the algorithms still struggle with large image dimensions, so you wind up needing to start small and iteratively upscale, which AFAIK works OK on weaker GPUs but can cause problems. (I’ve been using the Automatic1111 mode of the Stable Diffusion Web UI docker project.)
I’m on thumbs so I don’t have the links to the git repos atm, but you basically clone them and run the docker compose files. The readmes are pretty good!
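For a rough idea of what those compose files tend to look like, here’s a minimal sketch of a GPU-enabled service definition. The image tag, port mapping, volume name, and GPU reservation block are all assumptions from memory, not the projects’ actual files — defer to the repos’ READMEs:

```yaml
# Hypothetical compose sketch — names and values are assumptions;
# use the compose file shipped in the actual repo.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:cuda  # assumed CUDA-enabled tag
    ports:
      - "3000:8080"            # host:container — UI on localhost:3000
    volumes:
      - open-webui:/app/backend/data   # persist chats/models across restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia   # pass the GPU through to the container
              count: all
              capabilities: [gpu]
volumes:
  open-webui:
```

With a file like this in place, it really is just `docker compose up -d` and then opening the mapped port in a browser.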
As a highly sensitive person, what I’ve learned for myself is:
Don’t be that way!
Some servers have a c/NoStupidQuestions
For people like me who need visual aids.
Can we call communities “lemlets?”
There’s also the alternative “grills” vs. “bouys” pair.