About the only time I find myself using regular Wikipedia these days is when I need to know whether someone has died since August 2025, when this ZIM dump was created.

  • kali_fornication@lemmy.world · 28 points · 16 days ago

    I have all of Wikipedia in a single 156 GB text file. In my .zshrc I have:

    fastWikiLookup() { cat ~/wikipedia.txt | grep "$@" }

    • Badabinski@kbin.earth · 27 points · 15 days ago

      If you want a free and massive performance optimization, remove the cat:

      fastWikiLookup() { grep "$@" ~/wikipedia.txt }
      

      Reading 156 GB and piping it to another process every time you want to look something up is a somewhat nontrivial amount of work. Grep can read the file directly, which should give you a pretty damn good speed-up.
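      As a toy illustration of the point above (the file path and function names here are made up for the demo, not from the thread): both forms return the same matches, but the second skips copying the file's contents through a pipe because grep opens the file itself.

      ```shell
      # Build a tiny stand-in for the 156 GB wikipedia.txt.
      printf 'alpha\nbeta\ngamma\n' > /tmp/mini_wiki.txt

      # Original form: cat reads the file and pushes every byte through a pipe.
      slowLookup() { cat /tmp/mini_wiki.txt | grep "$@"; }

      # Fixed form: grep reads the file directly, no extra process or pipe.
      fastLookup() { grep "$@" /tmp/mini_wiki.txt; }

      slowLookup beta   # prints "beta"
      fastLookup beta   # prints "beta"
      ```

      The results are identical; the win is purely in skipping the extra `cat` process and the pipe copy, which matters more the bigger the file gets.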

        • kautau@lemmy.world · 6 points · 15 days ago

          lol, database engineers around the world who've built very complex ingest and query systems all of a sudden got very mad at your comment and they're not sure why