I was using my SO’s laptop. I had been talking (not searching or otherwise typing) about some VPN solutions for my homelab, and got curious enough to use the new big Copilot button and ask what it could do. The beginning of this conversation was actually me asking if it could turn off my computer for me (it cannot), and then I asked this.

Very unnerving. I hate to be so paranoid as to think that it actually picked up on the context of me talking, but again: it was my SO’s laptop, so there was none of my technical search history to pull from.

  • SzethFriendOfNimi@lemmy.world
    9 months ago

    There’s a real risk of survivorship bias here. Somebody asking about a car gets a car-related answer and thinks nothing of it. A privacy-minded person, however, would find it odd, and being the kind of person concerned about what could have been the cause, considered the prior conversation.

    I’m not saying it’s an unreasonable concern or technically not feasible. It’s just not how LLMs tend to work.

    I’d consider it more likely to be a bug, or general inquiries like you said, or that your SO had a bunch of documents or browsing history locally that reference privacy (anything, really) that MS could have used as a kind of “here’s more about the person asking you a question” context.

    • ipkpjersi@lemmy.ml
      9 months ago

      A privacy-minded person probably wouldn’t use these tools to begin with, tbh; they would likely run their own LLM instead.

      • BreakDecks@lemmy.ml
        9 months ago

        I guess that’s why OP brought up that they were using someone else’s computer.

        Also, a truly privacy-minded person wouldn’t necessarily refuse to use a hosted AI product at all. We generally just make ourselves aware that we have no privacy when using one, and never type anything sensitive into it. Also, have you seen what it costs to run a capable LLM?

        • bbuez@lemmy.worldOP
          9 months ago

          Just don’t pull a Samsung

          I’ve just started messing with GPT4All for CPU-based language models, which can run relatively well on older gaming hardware, and a Coral accelerator module for my NVR presence detection with Frigate only cost $30.