• Alphane Moon@lemmy.world (OP) · 3 months ago

    Thanks for the reply.

    I guess we’ll see what happens.

    I still find it difficult to get my head around how a decrease in novel training data won’t eventually cause problems (even with techniques to work around it in the short term, which I’m sure perform well on a relative basis).

    A bit of an aside: I also have zero trust in the people behind current LLMs, whether the leadership (e.g. Altman) or the rank and file. If it’s in their interest to downplay the scope and impact of model degeneracy, they will not hesitate to lie about it.

    • Warl0k3@lemmy.world · 3 months ago

      Yikes. Well. I’ll be over here, conspiring with the other NASA lizard people on how best to deceive you by politely answering questions on a site where maaaaybe 20 total people will actually read it. Good luck getting your head around it; there are lots of papers out there that might help (well, assuming I’m not lying to you about those, too).

      • Alphane Moon@lemmy.world (OP) · 3 months ago

        This was a general comment, not aimed at you. Honestly, it wasn’t my intention to accuse you specifically. Apologies for that.