Abacus.ai:

We recently released Smaug-72B-v0.1, which has taken first place on Hugging Face's Open LLM Leaderboard. It is the first open-source model to achieve an average score above 80.

  • glimse@lemmy.world · 8 months ago

    Based on the other comments, it seems like this needs 4x as much RAM as any consumer card has
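
    For context, here is a rough sketch of the weight-memory math; the per-precision byte counts are standard assumptions, not figures from the post:

    ```python
    # Back-of-the-envelope memory for a 72B-parameter model's weights alone
    # (activations and KV cache add more). Assumed precisions, not from the post.
    params = 72e9
    for label, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        gb = params * bytes_per_param / 1e9
        print(f"{label}: ~{gb:.0f} GB of weights")
    # fp16: ~144 GB, int8: ~72 GB, int4: ~36 GB,
    # versus roughly 24 GB of VRAM on the largest consumer GPUs.
    ```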

    • FaceDeer@kbin.social · 8 months ago

      It hasn’t been quantized, then. I’ve run 70B models on my consumer graphics card at a reasonably good tokens-per-second rate.
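
      As a minimal sketch of what quantized loading looks like (one common route via transformers + bitsandbytes, not necessarily the setup FaceDeer used; the Hugging Face repo id shown is assumed from the post):

      ```python
      # Load a large model with 4-bit (NF4) quantized weights; layers that don't
      # fit on the GPU are offloaded to system RAM by device_map="auto".
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

      model_id = "abacusai/Smaug-72B-v0.1"  # any 70B-class model loads the same way

      bnb_config = BitsAndBytesConfig(
          load_in_4bit=True,                      # ~0.5 bytes per weight
          bnb_4bit_quant_type="nf4",
          bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speed/quality
      )

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          quantization_config=bnb_config,
          device_map="auto",
      )

      prompt = "Summarize what weight quantization does."
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      output = model.generate(**inputs, max_new_tokens=64)
      print(tokenizer.decode(output[0], skip_special_tokens=True))
      ```

      Another common route on consumer cards is a GGUF build run with llama.cpp, which offloads a chosen number of layers to the GPU and keeps the rest in system RAM.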