That I agree with. Microsoft drafted the recommendation to use it for local networks, and Apple ignored it or co-opted it for mDNS.
Macs aren’t the only things that use mDNS, either. I have a host-monitoring solution I wrote that uses it.
Yeah, that’s why I started using .lan.
I was using .local, but it conflicted too often with an mDNS service I host (and vice versa). I switched to .lan, but I’m certainly not going to switch to .internal unless another conflict surfaces.
I’ve also developed a host-monitoring solution that uses mDNS, so I’m not about to break my own software. 😅
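For anyone curious, here’s roughly what browsing mDNS services looks like with the python-zeroconf library. This is just a minimal sketch, not my actual monitoring code, and `_http._tcp.local.` is only an example service type:

```python
# Minimal mDNS browsing sketch using python-zeroconf (pip install zeroconf).
# Not my actual monitoring code; "_http._tcp.local." is just an example service type.
from time import sleep
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf


class HostListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        print(f"Found {name} -> {info}")

    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        print(f"Lost {name}")

    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass  # required by the listener interface


zc = Zeroconf()
browser = ServiceBrowser(zc, "_http._tcp.local.", HostListener())
try:
    sleep(10)  # browse for a few seconds, printing hosts as they appear
finally:
    zc.close()
```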
Coincidentally, I just found this other thread that mentions EasyEffects: https://programming.dev/post/17612973
You might be able to use a virtual device to get it working for your use case.
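If it helps, a virtual sink can also be created from a script. Here’s a minimal sketch using the pulsectl library; the sink name is just an example, and it assumes PulseAudio or PipeWire’s Pulse compatibility layer:

```python
# Sketch: load a virtual (null) sink that EasyEffects can process.
# Assumes PulseAudio or PipeWire's Pulse compatibility layer, and: pip install pulsectl
import pulsectl

with pulsectl.Pulse("virtual-sink-setup") as pulse:
    # "virtual_out" is just an example name; point your application at this sink,
    # then route or monitor it however your use case needs.
    index = pulse.module_load("module-null-sink", "sink_name=virtual_out")
    print("Loaded module-null-sink, module index:", index)
```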
It depends on the model you run. Mistral, Gemma, or Phi are great for a majority of devices, even with CPU or integrated graphics inference.
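For reference, here’s roughly what CPU-only inference looks like with llama-cpp-python. The model path is just an example; any small quantized GGUF of Mistral, Gemma, or Phi should behave similarly:

```python
# CPU-only inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path is an example; point it at whatever quantized GGUF you actually have.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct-v0.2.Q4_K_M.gguf",  # example file name
    n_ctx=2048,      # context window
    n_gpu_layers=0,  # 0 = pure CPU / integrated graphics
)

out = llm(
    "Explain in one sentence why small local models are useful.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```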
They added a video player with version 3, I think.
Now the question is: are they open-sourcing the original Winamp, or the awful replacement?
We all mess up! I hope that helps - let me know if you see improvements!
I think there was a special process to get Nvidia working in WSL. Let me check… (I’m running natively on Linux, so my experience doing it with WSL is limited.)
https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I’m sure you’ve followed this already, but according to that guide, you don’t want to install the Linux Nvidia drivers inside WSL; you only want to install the cuda-toolkit metapackage (the Windows driver already provides GPU support to WSL). I’d follow the instructions from that link closely.
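Once the toolkit is installed, a quick sanity check from inside WSL is worth running. This assumes you have a CUDA build of PyTorch available; any CUDA-aware library would do:

```python
# Quick CUDA sanity check from inside WSL (assumes a CUDA-enabled PyTorch install).
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # Tiny computation on the GPU to confirm things work end to end.
    x = torch.rand(1024, 1024, device="cuda")
    print("Matmul OK, mean:", (x @ x).mean().item())
```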
You may also run into performance issues within WSL due to the virtual machine overhead.
Good luck! I’m definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)
The model should be split between VRAM and regular RAM, at least if it’s a GGUF model. Maybe it’s not being split, and that’s what’s wrong?
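With llama.cpp-based runners, that split is controlled by how many layers you offload to the GPU; whatever doesn’t fit stays in system RAM. A rough sketch with llama-cpp-python (the file name and layer count are just examples):

```python
# Sketch: n_gpu_layers controls the VRAM/RAM split for a GGUF model in llama-cpp-python.
# Offloaded layers live in VRAM; the rest stay in system RAM (slower, but it still runs).
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3-70b-instruct.Q4_K_M.gguf",  # example file name
    n_gpu_layers=40,  # offload as many layers as fit in VRAM; -1 tries to offload them all
    n_ctx=4096,
)
```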
Ok, so using my “older” 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)
I’m fairly certain that you’re using your CPU or having another issue. Would you like to try and debug your configuration together?
Unfortunately, I don’t expect it to remain free forever.
No offense intended, but are you sure it’s using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.
On my RTX 3060, I generally get responses in seconds.
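An easy way to check is to watch GPU utilization and VRAM while a response is generating. Here’s a small sketch using the nvidia-ml-py bindings (assumes `pip install nvidia-ml-py`):

```python
# Watch GPU utilization and memory while the model generates.
# If utilization stays near 0% and VRAM barely moves, inference is running on the CPU.
import time
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetUtilizationRates, nvmlDeviceGetMemoryInfo,
)

nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)  # first GPU
try:
    for _ in range(10):
        util = nvmlDeviceGetUtilizationRates(handle)
        mem = nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {util.gpu}% | VRAM {mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB")
        time.sleep(1)
finally:
    nvmlShutdown()
```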
My go-to solution for this is the Android FolderSync app with an SFTP connection.
I’m not familiar with creating fonts specifically, but you’ll want to commit any resources necessary to recreate the font file, including any build scripts that ease the process (see the sketch at the end of this comment) and instructions specifying compatible versions of the tooling (FontForge, in this case). Don’t include FontForge itself in the repository, of course.
The compiled font files should go under the repository’s Releases on GitHub.
Git isn’t generally meant for binary resources, but as long as they’re not too large, they’ll be fine. You just won’t have a meaningful way to diff changes between versions.
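For the build script itself, FontForge’s Python bindings usually suffice. Here’s a minimal sketch; the file names are placeholders, and it has to run under FontForge’s interpreter (e.g. `fontforge -script build.py`):

```python
# Minimal font build sketch using FontForge's Python bindings.
# File names are placeholders; run with FontForge's interpreter:
#   fontforge -script build.py
import fontforge

font = fontforge.open("MyFont.sfd")  # the source file committed to the repo
font.generate("MyFont.ttf")          # compiled artifact to attach to a GitHub release
font.generate("MyFont.otf")          # generate() infers the format from the extension
font.close()
```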
I’ve been disappointed in general with the XPS line in recent years. Dell has made some keyboard changes that I am not a fan of:
I’d been purchasing the XPS line of laptops since 2013, but I stopped as soon as those changes landed and the Developer Edition models started shipping with inferior hardware compared to the Windows ones.
Of course!
It would be extremely barebones, but you can do something like this with Pandoc.