• DoPeopleLookHere@sh.itjust.works
    1 day ago

    As opposed to the nothing you’ve cited that context tokens actually improve reasoning?

    I love how you keep going further and further away from the education topic at hand, and now bringing in police surveillance, which everyone knows is 100% accurate.

    • pinkapple@lemmy.ml
      1 day ago

      You’re less coherent than a broken LLM lol. You made the claim that transformer-based AIs are fundamentally incapable of reasoning, or something vague like that, using gimmicky af “I tricked the chatbot into getting confused, therefore it can’t think” unpublished preprints (while demanding peer review). Why would I need to prove something? LLMs can write code; that’s an undeniable demonstration that they understand abstract logic fairly well, which can’t be faked using probability, and it would be a complete waste of time to explain it to anyone who is either having issues with cognitive dissonance or, less often, intentionally trying to spread misinformation.

      Are the AIs developed by Palantir “fundamentally incapable” of their demonstrated effectiveness or not? It’s a pretty valid question when we’re already surveilled by them but some people like you indirectly suggest that this can’t be happening. Should people not care about predictive policing?

      How about the industrial control AIs that you “critics” never mention: do power grid controllers fake it? You may need to tell Siemens, since they’re not aware their deployed systems work. And while we’re on that, we shouldn’t be concerned about monopolies controlling public infrastructure with closed source AI models because they’re “fundamentally incapable” of operating?

      I don’t know, maybe this “AI skepticism” thing is lowkey intentional industry misdirection and most of you fell for it?

      • DoPeopleLookHere@sh.itjust.works
        1 day ago

        My larger point: AI replacing teachers is at least a decade away.

        You’ve given no evidence that it’s closer. You’ve just said you hate my sources, without actually making a single argument that it is.

        You said, well, it stores context, but who cares? I showed that it doesn’t translate to what you think, and you said you didn’t like that, without providing any evidence that it means anything beyond looking good on a graph.

        I’ve said several times, SHOW ME IT’S CLOSE. I don’t care what law enforcement buys, because that has nothing to do with education.

        • pinkapple@lemmy.ml
          6 hours ago

          I never said it’s going to replace teachers or that it “stores context”, but your sloppily googled preprints supporting your “fundamentally can’t reason” statement were demonstrably garbage. You didn’t say “show me it’s close” even once, but you think you said it several times. Either your reading comprehension is worse than an LLM’s and you wildly confabulate, which means an LLM could replace you, or you’re a bot. Anyway, so far you’ve proved nothing, and I already said they can write code; that’s a non-trivial cognitive task that you can’t perform without several higher-order abilities, so cope and seethe I guess.

          So, what about Palantir AI? Is that also “not close”? Why are you avoiding surveillance AI? They’re both neural networks. Some are LLMs.

          • DoPeopleLookHere@sh.itjust.works
            13 minutes ago

            I said AI isn’t close in education. That was my entire claim.

            I never said anything about any other company. I said AI in education isn’t happening soon. You keep pulling in other sectors.

            I’ve also had several comments in this thread before you came in saying that.

            EDIT: Give me a citation that LLMs can reason for code. Because in my experience as someone who professionally codes with AI (Copilot), it’s not capable of that. It guesses what it thinks I want to write, in small segments.
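
            To be concrete about what “guessing what it thinks I want to write” means: a toy sketch of pure next-token prediction (my own illustration, nothing to do with Copilot’s actual internals) — the model just emits the statistically most common continuation from its training text, with no notion of what the code means:

            ```python
            from collections import Counter, defaultdict

            # Toy bigram "autocomplete": learn which token most often follows
            # each token in a tiny training corpus, then predict that.
            corpus = 'api_key = load_secret ( ) api_key = "sk-test" api_key = os.environ'.split()

            nxt = defaultdict(Counter)
            for a, b in zip(corpus, corpus[1:]):
                nxt[a][b] += 1  # count each observed continuation

            def complete(token):
                # Return the most frequent continuation seen in training, if any.
                choices = nxt.get(token)
                return choices.most_common(1)[0][0] if choices else None

            print(complete("api_key"))  # → "=" (the only continuation ever seen)
            ```

            Real LLMs do this over far richer contexts, but the mechanism is still continuation prediction — which is also why memorized strings from training data (like credentials) can surface in completions.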

            https://x.com/leojr94_/status/1901560276488511759

            Especially when it has a nasty habit of leaking secrets.

            EDIT2: Forgot to say why I’m ignoring other fields. Because we’re not talking about AI in those fields. We’re talking education, and search engines at best. My original comment was that AI-generated educational papers still serve their original purpose.

            What the fuck does any of that have to do with Palantir?