Large language model AIs might seem smart on a surface level, but they struggle to actually understand the real world and model it accurately, a new study finds.

  • simon574@feddit.org
    1 month ago

    The headline is misleading. By “real-world use” they mean using ChatGPT and Claude for street navigation in New York. Which is one very specific use-case.

    • 14th_cylon@lemm.eeOP
      1 month ago

      there is nothing misleading about the headline. street navigation is quite a primitive use case compared to what some others were suggesting (like firing the staff of suicide hotlines and replacing them with chatbots).

      while machine learning can no doubt be a useful tool for many narrowly specified tasks, where all you need to do is evaluate a lot of data and find a pattern in it, the business behind it acts as if it had already invented AGI, and it will unfortunately keep pretending that and probably cause a lot of damage in the hunt for money.

      • simon574@feddit.org
        1 month ago

        I agree there is a lot of marketing BS around LLMs right now. But I would argue that they are quite useful for, e.g., basic language and coding tasks, and at least for me those are real-world use cases too.