Big day for people who use AI locally. According to benchmarks, this is a big step forward for free, small LLMs.

  • brucethemoose@lemmy.world · 2 months ago

    At long context (close to the full 128K), Nemo is way better than Llama 8B in my testing.

    Turns out they are both very sensitive to quantization though.
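    Quantization sensitivity matters partly because quant width is what makes these models fit in local VRAM at all, especially at long context where the KV cache dominates. A rough back-of-envelope sketch; the bits-per-weight figure and the Nemo layer/head counts used below are assumptions for illustration, not official specs:

    ```python
    # Rough VRAM estimate for a quantized model plus its KV cache.
    # All figures are approximations, not vendor numbers.

    def model_vram_gb(params_b: float, bits_per_weight: float) -> float:
        """Approximate weight memory in GiB for a model with
        params_b billion parameters at a given quantization width."""
        return params_b * 1e9 * bits_per_weight / 8 / 2**30

    def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                    context: int, bytes_per_elem: int = 2) -> float:
        """Approximate KV-cache memory in GiB (keys + values, fp16)."""
        return (2 * n_layers * n_kv_heads * head_dim
                * context * bytes_per_elem) / 2**30

    # Nemo is ~12B parameters; a 4-bit quant lands around ~4.5 bits/weight
    # in practice once scales and outlier blocks are counted (assumption).
    weights = model_vram_gb(12.2, 4.5)
    # Assumed Nemo-like shape: 40 layers, 8 KV heads (GQA), head dim 128.
    cache = kv_cache_gb(40, 8, 128, 128_000)
    print(f"weights ~{weights:.1f} GiB, 128K KV cache ~{cache:.1f} GiB")
    ```

    The takeaway: at the full 128K window, the fp16 KV cache can cost more memory than the quantized weights themselves, which is why long-context local runs often also quantize or offload the cache.
    
    
    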

    TBH I didn’t know people here were running LLMs. Seems like most of Lemmy is very broadly anti-AI?

    • ObsidianZed@lemmy.world · 2 months ago

      My impression is that the general consensus is this: we don’t want huge corporations stealing data to train their AI models, only to turn around and cram them down our throats anywhere they can, with increasingly negative experiences. That said, while I generally agree, I still find the technology interesting, especially when I can host it myself.