Yeah, that’ll happen.

While talk of an AI bubble fills the air these days, with fears that overinvestment could pop at any moment, something of a contradiction is brewing on the ground: Companies like Google and OpenAI can barely build infrastructure fast enough to meet their AI needs.

During an all-hands meeting earlier this month, Google’s AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, CNBC reports. The comments offer a rare look at what Google executives are telling their own employees internally. Vahdat, a vice president at Google Cloud, presented slides showing that the company needs to scale “the next 1000x in 4-5 years.”

While a thousandfold increase in compute capacity sounds ambitious on its own, Vahdat noted a key constraint: Google needs to deliver this increase in capability, compute, and storage networking “for essentially the same cost and increasingly, the same power, the same energy level,” he told employees during the meeting. “It won’t be easy but through collaboration and co-design, we’re going to get there.”
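The arithmetic linking those two figures is easy to check. A minimal back-of-the-envelope sketch (the function name is mine, not Google's): doubling every six months compounds to 2^8 = 256x over four years and 2^10 = 1,024x over five, which is where the "next 1000x in 4-5 years" slide comes from.

```python
# Back-of-the-envelope check of the figures quoted above
# (illustrative sketch only, not from the article):
# doubling serving capacity every 6 months compounds to
# roughly 1,000x over about five years.

def capacity_multiplier(years: float, doubling_period: float = 0.5) -> float:
    """Growth factor after `years`, doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

print(capacity_multiplier(4.0))  # 2^8  = 256x
print(capacity_multiplier(5.0))  # 2^10 = 1024x
```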

  • plyth@feddit.org

Gaslighting. Processor capacity doubles every 2 years according to Moore’s law, which at best is energy neutral. All other increases can only come from proportional energy increases. So instead of needing 1000x the energy, at best it’s 250x.

    • mushroommunk@lemmy.today

Moore’s “law” died back in 2016. It hasn’t held for a while now. The only way they can scale the way they want without a major breakthrough is more power and larger machines.