Big day for people who run AI locally. According to benchmarks, this is a big step forward for free, small LLMs.

  • just another dev@lemmy.my-box.dev
    3 months ago

    I haven’t given it a very thorough test, and I’m by no means an expert, but from the few prompts I’ve run so far, I’d have to hand it to Nemo on quality.

    Using openrouter.ai, I’ve also given Llama 3.1 405B a shot, and it seems to be at least on par with (if not better than) Claude 3.5 Sonnet, while being a bit cheaper as well.
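    For anyone who wants to try the same thing: a minimal sketch of hitting OpenRouter’s OpenAI-compatible chat endpoint with Python’s standard library. The model slug and prompt here are assumptions — check openrouter.ai for the current model list, and bring your own API key.

    ```python
    import json
    import os
    import urllib.request

    # OpenRouter exposes an OpenAI-compatible chat completions endpoint.
    # The model slug below is an assumption -- verify it on openrouter.ai.
    API_URL = "https://openrouter.ai/api/v1/chat/completions"

    payload = {
        "model": "meta-llama/llama-3.1-405b-instruct",
        "messages": [
            {"role": "user", "content": "Summarize local LLM inference in one sentence."}
        ],
    }

    def build_request(api_key: str) -> urllib.request.Request:
        """Build the POST request; the caller supplies their own API key."""
        return urllib.request.Request(
            API_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )

    if __name__ == "__main__":
        key = os.environ.get("OPENROUTER_API_KEY")
        if key:  # only send the request if a key is actually configured
            with urllib.request.urlopen(build_request(key)) as resp:
                print(json.load(resp)["choices"][0]["message"]["content"])
    ```

    Swapping the model slug is all it takes to compare 405B against Claude 3.5 Sonnet or the 70B distillation on the same prompt.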

    • brucethemoose@lemmy.world
      3 months ago

      Llama 70B is probably where it’s at if you go the API route. It’s distilled from 405B, and its benchmarks are pretty close.