Got one more for you: https://gossip.ink/
I use it via a docker/podman container I’ve made for it: https://hub.docker.com/repository/docker/vluz/node-umi-gossip-run/general
I got cancelled too and chose Hetzner instead. Will not do business with a company that can’t get their filters working decently.
That’s wonderful to know! Thank you again.
I’ll follow your instructions, this implementation is exactly what I was looking for.
Absolutely stellar write up. Thank you!
I have a couple of questions.
Imagine I have a powerful consumer GPU to throw at this solution, a 4090 Ti for the sake of example.
- How many containers can share one physical card, assuming total VRAM is not exceeded?
- What does a virtual GPU look like inside the container? Can I run standard stuff like PyTorch, TensorFlow, and CUDA code in general? (See the sketch below for the kind of thing I mean.)
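
To make the second question concrete, here's a minimal sketch of the check I'd expect to run inside one of the containers, assuming the image ships a CUDA-enabled PyTorch build and the GPU is passed through via the NVIDIA container toolkit (those are assumptions on my part, not a confirmed setup):

```python
# Minimal sketch: what a GPU passed through to a container (e.g. via
# `--gpus all` and the NVIDIA container toolkit) typically looks like
# from the inside. Assumes a CUDA-enabled PyTorch build in the image.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        # As I understand it, each container sees the whole physical card
        # and its full VRAM; nothing enforces a per-container memory cap,
        # so all containers allocate from the same pool.
        print(f"cuda:{i}: {props.name}, {props.total_memory / 2**30:.1f} GiB total")
else:
    print("No CUDA device visible inside this container")
```

My working assumption is that the card shows up as an ordinary `cuda:0` device, so standard PyTorch/TensorFlow/CUDA code would run unmodified, and the practical limit on the number of containers is just keeping combined allocations under the card's VRAM, but I'd appreciate confirmation.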
DS1 to DS3, I lost count of the hours.
Just found out there are 10 places called Lisbon dotted around the US, according to the search.