n7gifmdn@lemmy.ca to Memes@lemmy.ml · English · 7 months ago
Meta AI (image.nostr.build)
m-p{3}@lemmy.ca · edited · 7 months ago
The quantized model you can run locally works decently, and Meta can't read any of it, which is nice.
I use this one specifically: https://huggingface.co/lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf
If you're looking for relatively user-friendly software to run it, check out GPT4All (open source) or LM Studio.
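If you'd rather skip the GUI apps, here's a rough sketch (my own example, not from the post) of driving that same GGUF file from Python with llama-cpp-python. It assumes `pip install llama-cpp-python` and that the file from the Hugging Face link above has been downloaded next to the script:

```python
# Sketch: running the linked GGUF locally with llama-cpp-python.
# Assumes the quantized model file has already been downloaded from
# the Hugging Face link above; nothing leaves your machine either way.
import os

MODEL = "Meta-Llama-3-8B-Instruct-Q4_K_M.gguf"

if not os.path.exists(MODEL):
    print(f"Download {MODEL} from the Hugging Face link first.")
else:
    from llama_cpp import Llama

    # n_ctx sets the context window; verbose=False silences load-time logs.
    llm = Llama(model_path=MODEL, n_ctx=4096, verbose=False)
    reply = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}]
    )
    print(reply["choices"][0]["message"]["content"])
```

Same deal as GPT4All/LM Studio under the hood (they both wrap llama.cpp), just scriptable.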