Summary: Meta, led by CEO Mark Zuckerberg, is investing billions in Nvidia’s H100 graphics cards to build a massive compute infrastructure for AI research and projects. By end of 2024, Meta aims to have 350,000 of these GPUs, with total expenditures potentially reaching $9 billion. This move is part of Meta’s focus on developing artificial general intelligence (AGI), competing with firms like OpenAI and Google’s DeepMind. The company’s AI and computing investments are a key part of its 2024 budget, emphasizing AI as their largest investment area.

    • whodatdair@lemm.ee · +22 · 8 months ago

      Gold rush you say?

      Shovels for sale!

      Get your shovels here! Can’t strike it rich without a shovel!

    • FaceDeer@kbin.social · +9/−2 · 8 months ago

      I feel like a pretty big winner too. Meta has been quite generous with releasing AI-related code and models under open licenses, I wouldn’t be running LLMs locally on my computer without the stuff they’ve been putting out. And I didn’t have to pay a penny to them for it.

    • fluxion@lemmy.world · +1/−1 · 8 months ago

      Was wondering why my stock was up. AI already improving my quality of life.

  • simple@lemm.ee · +15/−1 · 8 months ago

    Who isn’t at this point? Feels like every player in AI is buying thousands of Nvidia enterprise cards.

    • 31337@sh.itjust.works (OP) · +6 · 8 months ago

      The equivalent of 600k H100s seems pretty extreme though. IDK how many OpenAI has access to, but it’s estimated they “only” used 25k to train GPT-4. OpenAI has, in the past, claimed the diminishing returns on just scaling their model past GPT-4’s size probably aren’t worth it. So, maybe Meta is planning on experimenting with new ANN architectures, or planning on mass deployment of models?

      • qupada@kbin.social · +6 · 8 months ago

        The estimated training time for GPT-4 is 90 days though.

        Assuming you could scale that linearly with the amount of hardware, you’d get it down to about 3.75 days. From four times a year to twice a week.

        If you’re scrambling to get ahead of the competition, being able to iterate that quickly could very much be worth the money.
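
        The back-of-envelope scaling above can be sketched in a few lines. This assumes perfectly linear scaling (which real distributed training never achieves) and uses the thread’s rough public estimates, not confirmed figures:

        ```python
        # Idealized linear-scaling estimate of GPT-4-scale training time.
        # All numbers are rough estimates quoted in the thread.
        baseline_gpus = 25_000   # estimated GPUs used to train GPT-4
        baseline_days = 90       # estimated training duration
        fleet_gpus = 600_000     # Meta's reported H100-equivalent target

        scaled_days = baseline_days * baseline_gpus / fleet_gpus
        print(f"{scaled_days:.2f} days per training run")  # 3.75
        ```

        In practice communication overhead and failure rates would push that number up, but even a few-fold slowdown still means iterating weekly instead of quarterly.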

  • Deceptichum@kbin.social · +3 · 8 months ago

    I really hope they fail hard and end up putting these devices on the consumer second-hand market, because the V100s, while now affordable and flooding the market, are too out of date.

    • FaceDeer@kbin.social · +7/−1 · 8 months ago

      Meta is the source of most of the open source LLM AI scene. They’re contributing tons to the field and I wish them well at it.

  • Wanderer@lemm.ee · +2 · 8 months ago

    Anyone got a graph of ai spending over time globally?

    I’m starting to feel more confident about AGI coming soon (relatively soon).

    Knowing absolutely nothing about it, though, it seems like it needs to be more efficient? What’s the likelihood that, rather than increasing the bulk power of these systems, there’s a breakthrough that allows more from less?

    • 31337@sh.itjust.works (OP) · +2 · 8 months ago

      Spending definitely looks exponential at the moment.

      Most breakthroughs have historically been made by university researchers, then put into use by corporations; arguably that includes most of the latest developments. But university researchers were never going to get access to the $100 million in compute time to train something like GPT-4, lol.

      The human brain has 100 trillion connections. GPT-4 has 1.76 trillion parameters (which are analogous to connections). It took 25k GPUs to train, so in theory, I guess it could be possible to train a human-like intelligence using 1.4 million GPUs. Transformers (the T in GPT) are not like human brains though. They “learn” once, then do not learn or add “memories” while they’re being used. They can’t really do things like planning either. There are algorithms for “lifelong learning” and planning, but I don’t think they scale to such large models, datasets, or real-world environments. I think there need to be a lot of theoretical breakthroughs to make AGI possible, and I’m not sure if more money will help that much. I suppose AGI could be achieved by trial and error (i.e. trying ideas and testing if they work without mathematically proving if or how well they’d work) instead of rigorous theoretical work.
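
      That 1.4 million figure comes from naively assuming GPU count scales linearly with parameter count, which almost certainly understates the real cost (training compute also grows with data volume). As a sketch, using the rumored numbers above:

      ```python
      # Naive linear scale-up: GPUs needed for a "brain-scale" model,
      # assuming cost grows only with parameter count (a big simplification).
      brain_connections = 100e12  # ~100 trillion synapses (rough figure)
      gpt4_params = 1.76e12       # rumored GPT-4 parameter count
      gpt4_gpus = 25_000          # estimated GPUs used to train GPT-4

      brain_scale_gpus = gpt4_gpus * brain_connections / gpt4_params
      print(f"~{brain_scale_gpus / 1e6:.1f} million GPUs")  # ~1.4 million
      ```

      Even under this optimistic assumption, that is more than double Meta’s reported 600k H100-equivalent fleet.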

  • elgordio@kbin.social · 0 · 8 months ago

    total expenditures potentially reaching $9 billion

    I imagine they negotiated quite the discount on that.

    • DdCno1@kbin.social · 0/−1 · 8 months ago

      Agreed. There’s volume discount, and then there is “Facebook data center with an energy consumption of a small country volume discount”.