• abruptly8951@lemmy.world · 16 hours ago

    Can you go into a bit more detail on why you think these papers are such a home run for your point?

    1. Where do you get 95% from? These papers don’t really go into much detail on human performance, and 95% isn’t mentioned in either of them.

    2. These papers cover transformer architectures trained with a next-token prediction loss. There are other architectures (spiking, Tsetlin, graph, etc.) and other training objectives (contrastive, RL, flow matching) to which these particular curves do not apply.

    3. These papers assume early stopping. Have you heard of the grokking phenomenon, where generalization can emerge long after the training loss has converged? (Not to be confused with the Twitter bot.)

    4. These papers only consider finite-size datasets, and relatively small ones at that. I.e., how many “tokens” would a 4-year-old have processed? I imagine that question should be somewhat quantifiable (see the rough estimate sketched after this list).

    5. These papers do not consider multimodal systems.

    6. You talked about permanence; does a RAG solution not overcome this problem? (See the minimal RAG sketch after this list.)
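
    On point 4, here is a rough back-of-the-envelope version of that estimate. The per-day word count and the tokens-per-word ratio below are my own illustrative assumptions, not figures from the papers:

    ```python
    # Rough estimate of how many text-equivalent "tokens" a 4-year-old
    # might have been exposed to. All figures are assumptions chosen
    # for illustration, not values taken from the papers.
    words_heard_per_day = 15_000   # assumed spoken words heard per day
    tokens_per_word = 1.3          # assumed tokenizer ratio (tokens per word)
    days = 4 * 365

    total_tokens = words_heard_per_day * tokens_per_word * days
    print(f"~{total_tokens / 1e6:.0f} million tokens")  # ~28 million tokens
    ```

    Even with generous assumptions that lands in the tens of millions of tokens, several orders of magnitude below the trillions of tokens that recent large training runs reportedly consume, which is exactly why the comparison seems worth making.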
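
    On point 6, a minimal sketch of what RAG buys you, with a toy hashing embedder and cosine similarity standing in for a real embedding model and vector store (the embed/retrieve functions here are hypothetical illustrations, not any particular library’s API). New facts go into the external store at any time and reach the model only as retrieved context, so nothing has to be re-baked into the weights:

    ```python
    import numpy as np

    def embed(text: str, dim: int = 64) -> np.ndarray:
        """Toy bag-of-words hashing embedding (stand-in for a real embedding model)."""
        vec = np.zeros(dim)
        for word in text.lower().split():
            vec[hash(word) % dim] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm > 0 else vec

    # External store: entries can be added or updated at any time, no retraining needed.
    documents = [
        "The company moved its headquarters to Oslo in 2024.",
        "Grokking describes delayed generalization long after training loss converges.",
        "Flow matching is an alternative training objective to next-token prediction.",
    ]
    doc_vectors = np.stack([embed(d) for d in documents])

    def retrieve(query: str, k: int = 1) -> list[str]:
        """Return the k stored documents most similar to the query (cosine similarity)."""
        sims = doc_vectors @ embed(query)
        return [documents[i] for i in np.argsort(sims)[::-1][:k]]

    query = "Where is the company headquartered now?"
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    print(prompt)  # this prompt would then go to a frozen language model
    ```

    The point is that anything written to the store after training is immediately retrievable, which at least partially addresses the permanence concern even though the model’s weights never change.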

    I think there is a lot more we don’t know about these things than we do know. To say we solved it all 2-5 years ago is, perhaps, optimistic.