• wholookshere@lemmy.blahaj.zone · 3 months ago

    Literally anything that requires knowing facts to inform writing. This is something LLMs are incapable of doing right now.

    Just ask how many R’s are in “strawberry” and see how ChatGPT gets it wrong.

    • boonhet@lemm.ee · 3 months ago

      Okay, what the hell is wrong with it?

      It took me three tries to convince it that there are 3 R’s in strawberry…

      • wholookshere@lemmy.blahaj.zone · 3 months ago

        Because that’s not how LLMs work.

        When you form a sentence, you start with an intent.

        An LLM starts with the meaning you gave it and tries to express something similar back to you.

        Notice that intent and meaning aren’t the same thing. Fact checking has nothing to do with what a word means, so how could it understand what’s true?

        All it did was take the meaning of “looking for a number” and “strawberries” and run its best guess from that.
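
        You can see part of the problem one level down, at the tokenizer. Here’s a rough sketch using OpenAI’s tiktoken library; the cl100k_base encoding is my assumption, but ChatGPT-era models use similar byte-pair encodings:

            # pip install tiktoken
            import tiktoken

            enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding
            tokens = enc.encode("strawberry")

            # The model never sees letters, only integer token IDs, so
            # "how many r's" isn't about anything it directly observes.
            print(tokens)
            print([enc.decode([t]) for t in tokens])  # the chunks those IDs stand for

            # Ordinary string code, by contrast, sees every character:
            print("strawberry".count("r"))  # 3

        In encodings like this, “strawberry” tends to come out as a couple of multi-letter chunks rather than ten separate letters, so counting characters is guesswork for the model.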