Hi,

I want to run some large language models locally on my Apple Silicon machine, something like PrivateGPT or the setup from that Medium article, both to enhance my privacy and to get some additional help.

Does anyone have recommendations or guides I could follow?

Thank you very much.

  • moonpiedumplings@programming.dev · 10 months ago (edited)

    The tl;dr as I understand it: Apple M1/M2 devices are unusual in that the VRAM (GPU memory) is the same pool as normal system RAM (unified memory). Because the GPU can address all of that shared memory, LLMs can run on the GPU with far more "VRAM" than a typical discrete card provides, letting you run bigger models on smaller devices.

    llama.cpp was the software users originally did this with. I can’t find the original guide/article I looked at, but here is a GitHub gist where the commenters have posted benchmarks:

    https://gist.github.com/cedrickchee/e8d4cb0c4b1df6cc47ce8b18457ebde0
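
    If you’d rather poke at it from Python than the llama.cpp CLI, the llama-cpp-python bindings wrap llama.cpp (as far as I know the Metal backend is enabled by default for Apple Silicon builds). A minimal sketch, not from the gist; the model path and prompt are just placeholders:

        # pip install llama-cpp-python
        from llama_cpp import Llama

        # n_gpu_layers=-1 offloads every layer to the GPU; on an M1/M2
        # that GPU memory is the same unified RAM described above.
        llm = Llama(
            model_path="./models/llama-2-7b.Q4_K_M.gguf",  # placeholder: any GGUF model file
            n_gpu_layers=-1,
            n_ctx=2048,
        )

        out = llm("Q: Why can Apple Silicon run large models? A:",
                  max_tokens=64, stop=["Q:"])
        print(out["choices"][0]["text"])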

    • Guenther_Amanita@feddit.de · 10 months ago

      Alright, interesting… As I said, I’m no expert or anything; that was just my noob opinion.

      Thank you for the correction and further resources!

      • plsnotracking@lemmy.world (OP) · 10 months ago

        Thank you for the discussion, folks; I’ll try out llama.cpp and report back.

        I also saw that the Neural Engine support hasn’t been merged into the mainline kernel yet, but is available as a separate out-of-tree patch. Hopefully merging that will help with broader model support? (Pure guesswork.)

        I also saw that the PCL stuff isn’t ready yet; u/marcan42 said it’s a WIP. That might also help with getting better model support, because I read somewhere that Metal is never(?) going to be part of the Asahi kernel.

        I’m no expert at any of this, but hopefully we’ll be able to run some sort of GPT locally someday.

        Good luck.