This specific GPU is… kind of a mixed bag. It’s supposedly built on a 6nm process, and the G100 is, according to Lisuan, the first domestic chip to genuinely rival the NVIDIA RTX 4060 in raw performance, delivering 24 TFLOPS of FP32 compute. It even introduced support for Windows on ARM, something major Western competitors had not fully prioritized.

It appears to fall short of its marketing promises, though. An alleged Geekbench OpenCL listing revealed the G100 achieving a score of only 15,524, a performance tier that effectively ties it with the GeForce GTX 660 Ti, a card released in 2012. This places the “next-gen” Chinese GPU on par with 13-year-old hardware, making it one of the lowest-scoring entries in the modern database. The leaked specifications further muddied the waters, showing the device operating with only 32 Compute Units, a bafflingly low 300 MHz clock speed, and a virtually unusable 256 MB of video memory. We’ll likely see more benchmarks as the GPU makes its way into the hands of customers.

These “anemic” figures might represent an engineering sample failing to report correctly due to immature drivers, a theory supported by the test bed’s configuration: a Ryzen 7 8700G on Windows 10. Still, if the listing is genuine, the underlying silicon may be fundamentally incapable of reaching the promised RTX 4060 performance targets, regardless of which specifications are actually being reported.
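As a quick sanity check on those numbers, here is a back-of-the-envelope sketch in Python. The 64-FP32-lanes-per-Compute-Unit figure is an assumption (a typical value; Lisuan has not published the G100’s shader layout), and peak FP32 throughput is estimated as 2 ops per cycle (FMA) × shader count × clock:

    # Back-of-the-envelope FP32 throughput from the leaked figures.
    # ASSUMPTION: 64 FP32 lanes per Compute Unit is a typical value,
    # not a published Lisuan spec.
    compute_units = 32
    lanes_per_cu = 64                        # assumed
    clock_hz = 300e6                         # leaked 300 MHz

    shaders = compute_units * lanes_per_cu   # 2,048
    tflops = 2 * shaders * clock_hz / 1e12   # FMA counts as 2 ops per cycle
    print(f"implied peak: {tflops:.2f} TFLOPS")     # ~1.23, vs. the claimed 24

    # Clock required to reach 24 TFLOPS with only 2,048 shaders:
    needed_ghz = 24e12 / (2 * shaders) / 1e9
    print(f"required clock: {needed_ghz:.1f} GHz")  # ~5.9 GHz, implausible

If the leaked configuration were real, the 24 TFLOPS claim would require an absurd clock speed, which is consistent with the theory that the sample is simply misreporting its specifications.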

  • I Cast Fist@programming.dev · 4 hours ago

    device operating with only 32 Compute Units, a bafflingly low 300 MHz clock speed, and a virtually unusable 256 MB of video memory

    Oh, so they’re going to sell like all those 2TB pendrives and 32GB RAM octa-core tablets

  • ben@lemmy.zip · 1 day ago

    I honestly feel like our best route to competition at this point is the big players being forced to license technology to each other and to smaller companies.

    The reason CPUs don’t suffer from these issues nearly as badly as graphics cards is that Intel and AMD are effectively stuck having to share technology with each other.

    • Appoxo@lemmy.dbzer0.com · 6 hours ago

      And China is unable to sell in the West due to patents on the x86 architecture.

      Would be interesting to see how their CPUs fare against the Intel/AMD offerings.

    • Earthman_Jim@lemmy.zip · edited · 1 day ago

      How funny would it be if those dummies accidentally trained an LLM that could explain to everyone else how to do it. lol

  • bobalot@lemmy.world · 1 day ago

    The more competition for AMD and NVIDIA, the better.

    I wouldn’t expect the first domestic Chinese GPU to be great but hopefully they keep iterating and get better and better.

    • SaveTheTuaHawk@lemmy.ca · 1 day ago

      Name one industry the Chinese haven’t beaten sooner or later. When they apply themselves to a problem, they typically lead the world.

    • orclev@lemmy.world · 1 day ago

      Sounds like it’s about equivalent to Intel’s latest GPU. Both are running a little over a generation behind AMD and Nvidia. Meanwhile, Nvidia is busy trying to kill their consumer GPU division to free up more fab space for data center GPUs, chasing that AI bubble. AMD, for their part, has indicated they’re not even trying to compete with Nvidia on the high end, but rather are aiming to land solidly in the middle of Nvidia’s lineup. More competition is good, but it seems like the two big players are currently doing their best not to compete, with everyone else fighting for their scraps. The next year or two in the PC market are shaping up to be a real shit show.

      • glimse@lemmy.world · 1 day ago

        Sounds like it’s about equivalent to Intel’s latest GPU. Both are running a little over a generation behind AMD and Nvidia.

        Sounds like it’s more than “a little over a generation behind” if it benchmarks near an Nvidia card released 13 years ago??

          • glimse@lemmy.world · 1 day ago

            According to the article, the actual performance is on par with a GTX 660 Ti

            • orclev@lemmy.world · 1 day ago

              Eh, maybe. The actual performance seems to be unknown. They’re assuming the Geekbench score is legitimate, but there’s no way to really know how well it will do until it actually ships. It’s probably safe to assume somewhere between the two (the leaked 660 Ti-level score and the claimed 4060-level performance), but either way it’s not competing with current-gen AMD or Nvidia cards, and might not even be competing with current Intel GPUs.

          • Anivia@feddit.org · 1 day ago

            Maybe you should read more than 1 paragraph before commenting. And in general.

            • orclev@lemmy.world · 1 day ago

              Maybe you should stop assuming things before commenting. And in general. You might also want to reread the article; you seem to have skipped some important details.

      • Bobby Turkalino@sh.itjust.works · 1 day ago

        Nvidia is busy trying to kill their consumer GPU division to free up more fab space for data center GPUs, chasing that AI bubble

        Which seems wildly shortsighted, like surely the AI space is going to find some kind of more specialized hardware soon, sort of like how crypto moved to ASICs. But I guess bubbles are shortsighted…

        • CheeseNoodle@lemmy.world · edited · 1 day ago

          The crazy part is that outside LLMs, the other (actually useful) AI doesn’t need that much processing power. More than you or I use, sure, but nothing that would have justified gigantic data centers. The current hardware situation is as if the automobile had just been invented and a group of companies decided to invest in huge Mortal Engines-style mega-vehicles.

          • DacoTaco@lemmy.world · edited · 1 day ago

            Debatable. The basics of an LLM might not need much, but the actual models do need it to be anywhere near decent or useful. I’m talking minutes for a simple reply.
            Source: ran a few <=5b models on my system with Ollama yesterday and gave them access to an MCP server to do stuff with (a rough sketch of that kind of test is below).

            Derp, misread. Sorry!
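            (For context, a minimal sketch in Python of that kind of local test, assuming the official ollama client library and a locally running Ollama server; the model name is illustrative, and the MCP wiring is left out.)

                import time

                import ollama  # client for a locally running Ollama server

                start = time.time()
                resp = ollama.chat(
                    model="llama3.2:3b",  # illustrative small (<=5B) model, already pulled
                    messages=[{"role": "user", "content": "Reply in one short sentence."}],
                )
                print(resp["message"]["content"])
                # On modest hardware this can take minutes, per the comment above.
                print(f"elapsed: {time.time() - start:.1f}s")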

              • DacoTaco@lemmy.world · 1 day ago

                Oh derp, misread, sorry! Now I’m curious though: what AI alternatives are there that are decent at processing/using a neural network?

                • CheeseNoodle@lemmy.world · edited · 1 day ago

                  So the two biggest examples I’m currently aware of are Google’s protein-folding AI and a startup using one to optimize rocket engine geometry, but AI models in general can be highly efficient when focused on niche tasks. As far as I understand it, they’re still very similar in underlying function to LLMs, but the approach is far less scattershot, which makes them exponentially more efficient.

                  A good way to think of it is that even the earliest versions of ChatGPT, or the simplest local models, are all equally good at actually talking, but language has a ton of secondary requirements, like understanding context, remembering things, and the fact that not every grammatically valid banana is always a useful one. So an LLM has to actually be a TON of things at once, while an AI designed for a specific technical task only has to be good at that one thing.

                  Extension: the problem is that our models are not good at talking to each other, because they don’t “think”; they just optimize an output from an input and a set of rules, so they have no common rules or internal framework. We can’t take an efficient rocket-engine-designing AI, plug it into an efficient basic chatbot, and have that chatbot talk knowledgeably about rockets. Instead, we have to make the chatbot memorise a ton about rockets (and everything else), which it was never designed to do, and that leads to immense bloat.

  • kadu@scribe.disroot.org · 1 day ago

    Given how much Intel struggled even though they had been working with GPUs for decades, I’m actually impressed with how fast and competent China’s attempts have been so far. Though I have to say, from the article:

    The leaked specifications further muddied the waters, showing the device operating with only 32 Compute Units, a bafflingly low 300 MHz clock speed, and a virtually unusable 256 MB of video memory. We’ll likely see more benchmarks as the GPU makes its way into the hands of customers.

    If none of the specs match what they’re supposed to be, and are weirdly out of date, are they sure this is the same GPU? It could very well be an early prototype being tested for stability. There are some engineering sample CPUs that run at 1/100th the intended speed of the final product, for instance.

  • Annoyed_🦀 @lemmy.zip · 1 day ago

    While it was a historic milestone as the first domestic gaming card with PCIe 5.0, it struggled with immature drivers and inconsistent performance, and it failed to run modern titles smoothly.

    An alleged Geekbench OpenCL listing revealed the G100 achieving a score of only 15,524, a performance tier that effectively ties it with the GeForce GTX 660 Ti, a card released in 2012.

    This is the issue I have with new Chinese brands (and also a lot of existing ones): they always have great specs on paper but really fall short in real-world use. From phones to cars to bike parts to computer hardware, they love to hype up the specs to drive sales, but fumble on the long-term user experience.

    On the other hand, as long as you’re expecting Mushu when they sell you a dragon, it’s a good alternative to the expensive stuff; just know what you’re getting into.

  • eleijeep@piefed.social · 1 day ago

    This article is pretty light on details, and the only number they have comes from a “leaked Geekbench score”, which could be literally anything. We’ll have to wait for a real tech outlet to pick up a sample and benchmark it properly.

        • theneverfox@pawb.social · 1 day ago

          Okay, let’s say you have a fuel-injected car, but instead of using the O2 sensor to decide the fuel-air mixture, it just squirts the same amount of gas every time.

          The hardware might be able to achieve 400 hp, but the software means it only ever achieves 50 hp.

          It’s like that. The software drives the hardware. It doesn’t matter how good the hardware is; the software is the brain of the operation. If the software doesn’t know how to utilize the hardware properly, you’re going to have piss-poor performance.
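          (To make the analogy concrete, a toy sketch in Python; the numbers and the sensor stub are invented purely for illustration.)

              # Open loop: the same fuel pulse every cycle, no feedback ("50 hp" case).
              def fuel_pulse_open_loop() -> float:
                  return 5.0  # ms of injector time, fixed no matter what

              # Closed loop: trim the pulse using O2 sensor feedback ("400 hp" case).
              def fuel_pulse_closed_loop(read_o2_lambda, base_ms=5.0) -> float:
                  lam = read_o2_lambda()                 # ~1.0 means an ideal air/fuel mix
                  correction = 1.0 + 0.2 * (lam - 1.0)   # lean -> more fuel, rich -> less
                  return base_ms * correction

              print(fuel_pulse_open_loop())                # 5.0, regardless of conditions
              print(fuel_pulse_closed_loop(lambda: 1.05))  # 5.05: widens the pulse when lean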

  • Alaknár@piefed.social · 1 day ago

    Hey, OP, why did you write that this GPU is “the first (…) to rival the Nvidia RTX 4060 in raw performance”?

    It appears to fall short of its marketing promises, though. An alleged Geekbench OpenCL listing revealed the G100 achieving a score of only 15,524, a performance tier that effectively ties it with the GeForce GTX 660 Ti, a card released in 2012

    It’s nowhere near that, as tests show.

      • atrielienz@lemmy.world · 1 day ago

        It’s not clear that the excerpt is a quote. No quotation marks. No vertical bar denoting quotation. And there’s the ellipsis at the very start of the first sentence.

        • Jrockwar@feddit.uk · 1 day ago

          I think it’s OK; the comment literally says “according to Lisuan”, which I see as factually correct. That’s the marketing claim, or the performance according to them, just like Teslas have been self-driving according to Tesla since 2012.