Jesus@lemmy.world to Political Memes@lemmy.world · 2 months ago
What could possibly go wrong
Even_Adder@lemmy.dbzer0.com · 2 months ago
The answer I got out of DeepSeek-R1-Distill-Llama-8B-abliterate.i1-Q4_K_S

taiyang@lemmy.world · 2 months ago
So a real answer, basically. Too bad your average person isn’t going to bother with that. Still nice it’s open source.

felixwhynot@lemmy.world · 2 months ago
Seems like the model you mentioned is more like a fine-tuned Llama? Specifically, these are fine-tuned versions of Qwen and Llama, on a dataset of 800k samples generated by DeepSeek R1. https://github.com/Emericen/deepseek-r1-distilled

Even_Adder@lemmy.dbzer0.com · 2 months ago (edited)
Yeah, it’s distilled from DeepSeek and abliterated. The non-abliterated ones give you the same responses as DeepSeek R1.
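For anyone wondering about the Q4_K_S suffix in the model name above: it marks a 4-bit block-quantized GGUF build for llama.cpp. As a rough illustration only (a toy scheme, not llama.cpp's actual Q4_K_S layout, which uses super-blocks, per-block minimums, and quantized scales), block-wise 4-bit quantization works like this:

```python
import numpy as np

def quantize_4bit_blockwise(weights, block_size=32):
    """Toy block-wise 4-bit quantization: each block of weights is
    mapped onto 16 integer levels (-8..7) with one float scale per
    block. Illustrative sketch only, not the real Q4_K_S format."""
    w = weights.reshape(-1, block_size)
    # One scale per block: map the block's max magnitude onto the 4-bit range.
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scales[scales == 0] = 1.0  # avoid divide-by-zero on all-zero blocks
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    """Reconstruct approximate float weights from 4-bit codes + scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, s = quantize_4bit_blockwise(w)
w_hat = dequantize(q, s)
print(f"mean abs reconstruction error: {np.abs(w - w_hat).mean():.4f}")
```

The trade-off the thread is implicitly discussing: storing one scale per 32-weight block plus 4-bit codes shrinks an 8B-parameter model to roughly a quarter of its fp16 size, at the cost of a small per-weight reconstruction error.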