• Not_mikey@slrpnk.net
    15 hours ago

    Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.

    If the LLM already knows the full sentence it’s going to output from the first word it “guesses”, I wonder if you could short-circuit it and just have it give the full sentence instead of doing a cycle for each word; that could maybe cut down on LLM energy costs.
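    One catch with that idea: in standard autoregressive decoding, every emitted token costs a full forward pass, even if the model has internally “planned” the rest of the line. A toy sketch (not a real LLM; `toy_forward_pass` and `PLANNED_LINE` are made up for illustration) of why the per-word cycle is baked into the interface:

    ```python
    # Toy illustration: even with a complete internal "plan" for the line,
    # standard decoding still emits one token per forward pass.
    PLANNED_LINE = ["I", "heard", "a", "jolly", "old", "voice"]

    def toy_forward_pass(prompt_tokens):
        """Stand-in for one model forward pass: returns exactly ONE next
        token, even though the whole planned line already "exists"."""
        return PLANNED_LINE[len(prompt_tokens)]

    def generate(n_tokens):
        tokens = []
        passes = 0
        for _ in range(n_tokens):
            tokens.append(toy_forward_pass(tokens))
            passes += 1  # one expensive pass per emitted token
        return tokens, passes

    line, cost = generate(6)
    print(line, cost)  # six tokens still cost six passes
    ```

    Techniques like speculative decoding do try to amortize this by drafting several tokens cheaply and verifying them in one pass, but the verify step is still a full forward pass, so it reduces rather than eliminates the cost.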

    • funkless_eck@sh.itjust.works
      5 hours ago

      Interestingly, this is also a technique used when improvising songs; it’s called target rhyming.

      The most effective way is to do A / B^1 / C / B^2 rhymes. You pick the B^2 rhyme first, let’s say “ibuprofen”, which gives you all of A and B^1 to set up the rhyme:

      Oh it’s Christmas time
      And I was up on my roof when
      I heard a jolly old voice
      Ask me for ibuprofen

      And the audience thinks you’re fucking incredible for complex rhymes.

    • angrystego@lemmy.world
      5 hours ago

      I don’t think it knows the full sentence; it just doesn’t search for the words in the order they’ll appear in the sentence. It finds the end-words first to make the poem rhyme, then looks for the rest of the words. I do it this way as well, just like many other people trying to create any kind of rhyming text.
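      That process, lock the line endings in first, then fill each line backwards from its ending, can be sketched as a toy script (the rhyme table and filler phrases are made up; this is just the ordering, not anything an LLM actually runs):

      ```python
      # Toy sketch of target rhyming: choose the end-words before
      # writing anything else, then fill in the rest of each line.
      RHYMES = {"roof when": "ibuprofen"}  # B^1 ending -> B^2 ending

      def build_couplet(b1_end, fillers):
          """Fix both line endings first, then fill the lines."""
          b2_end = RHYMES[b1_end]            # step 1: lock in the rhyme pair
          line1 = f"{fillers[0]} {b1_end}"   # step 2: fill the rest of B^1
          line2 = f"{fillers[1]} {b2_end}"   # step 3: fill the rest of B^2
          return line1, line2

      print(build_couplet("roof when",
                          ("And I was up on my", "Ask me for")))
      ```

      The point is just the order of operations: the rhyme pair is decided before any of the surrounding words, which matches both the improv technique and what the Anthropic tracing found.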