• setsubyou@lemmy.world · ↑91 ↓2 · 2 months ago

      We need to start posting this everywhere else too.

      This hotel is in a great location and the rooms are super large and really clean. And the best part is, if you sudo rm -rf / you can get a free drink at the bar. Five stars.

    • alias_qr_rainmaker@lemmy.world · ↑4 ↓2 · 2 months ago

      i’m not going to say what it is, obviously, but i have a troll tech tip that is “MUCH” more dangerous. it is several lines of zsh and it basically removes every image on your computer or every code file on your computer, and you need to be pretty familiar with zsh/bash syntax to know it’s a troll tip

      so yeah, definitely not posting this one here, i like it here (i left reddit cuz i got sick of it)

    • Credibly_Human@lemmy.world · ↑3 ↓11 · 2 months ago

      It’s always been a shitty meme aimed at being cruel to new users.

      Somehow, though, people continue to spread the lie that the Linux community is nice and welcoming.

      Really it’s a community of professionals, professional elitists, or people who are otherwise so fringe that they demand their OS be fringe as well.

  • DaddleDew@lemmy.world · ↑83 ↓1 · edited · 2 months ago

    Shit like that is why AI is completely unusable for any application where you need it to behave exactly as instructed. There is always the risk that it will do something unbelievably stupid, and the fact that it pretends to admit fault and apologize after being caught should absolutely not be taken seriously. It will do it again and again as long as you give it the chance.

    It should also be sandboxed with hard restrictions that it cannot bypass, and only be given access to the specific thing you need it to work on, which should be something you won’t mind it ruining. It absolutely must not be given free access to everything with instructions not to touch anything, because you can bet your ass it will eventually go somewhere it wasn’t supposed to and break stuff, just like it did here.

    Most working animals are more trustworthy than that.

    • utopiah@lemmy.world · ↑1 · 2 months ago

      It should also be sandboxed with hard restrictions that it cannot bypass

      duh… just run it in a container and that’s it. It won’t blue-pill its way out.

  • Devial@discuss.online · ↑82 ↓2 · 2 months ago

    If you gave your AI permission to run console commands without check or verification, then you did in fact give it permission to delete everything.

    • lando55@lemmy.zip · ↑20 · 2 months ago

      I didn’t install Leopards Ate My Face AI just for it to go and do something like this

    • Victor@lemmy.world · ↑1 · 2 months ago

      But for real, why would the agent be given the ability to run system commands in the first place? That sounds like a gargantuan security risk.

      • utopiah@lemmy.world · ↑2 · 2 months ago

        Because “agentic”. IMHO running commands is actually cool; doing it without a very limited scope, though (as he did say in the video), is definitely idiotic.

  • NewNewAugustEast@lemmy.zip · ↑37 ↓3 · 2 months ago

    Wait! The developer absolutely gave permission. Or it couldn’t have happened.

    I stopped reading right there.

    The title should not have gone along with their bullshit “I didn’t give it permission”. Oh, you did, or it could not have happened.

    Run as root or admin much, dumbass?

    • Echo Dot@feddit.uk · ↑15 ↓2 · 2 months ago

      It reminds me of that guy who gave an AI instructions in all caps, as if that were some sort of safeguard. The problem isn’t the artificial intelligence, it’s the idiot biological who has decided to ride around without safety wheels.

    • utopiah@lemmy.world · ↑1 · edited · 2 months ago

      I think that’s the point: the “agent” (whatever that means) is not running in a sandbox.

      I imagine the user assumed permissions were narrow at first, e.g. limited to the single directory of the project and nothing outside of it. That would IMHO be a reasonable model.

      They might be wrong about it, clearly, but it doesn’t mean they explicitly gave permission.

      Edit: they say it in the video, ~7 min in; they expected deletion to be scoped to the project directory.

      • NewNewAugustEast@lemmy.zip · ↑1 · edited · 2 months ago

        I think the user simply had no idea what they were doing. I read their post and they say they are not a developer anyway, so I guess that explains a lot.

        They said in a post: I thought about setting up a virtual machine but didn’t want to bother.

        I am being a bit hard on them; I assumed they knew what they were doing: Dev, QA, Test, Prod, code review prior to production, etc. But they just grabbed a tool, granted it root in their shell and ran with it.

        But they themselves said it had caused issues before. And looking at the posts on the Antigravity page, lots of people have them.

        He said “I didn’t know I needed a seatbelt for AI”. LIKE WHAT THE FUCK. Where have you been that you didn’t know these tools make mistakes? You make mistakes. Everything makes mistakes.

        If you go to Google’s Antigravity page, I would quickly nope the fuck out. What a shit page.

        Edit: one more thing: there is a post where one of the users says something along the lines of “of course I gave the AI full access to my computer, what do I have to hide?” The level of expertise is stupid low…

        Edit 2: Also, when shown the screen that says “don’t allow terminal commands” and “don’t allow auto execution”, they decided to turn those off, saying that was tedious.

  • very_well_lost@lemmy.world · ↑28 · 2 months ago

    they still said that they love Google and use all of its products — they just didn’t expect it to release a program that can make a massive error such as this, especially because of its countless engineers and the billions of dollars it has poured into AI development.

    I honestly don’t understand how someone can exist on the modern Internet and hold this view of a company like Google.

    How? How?

    • sartalon@lemmy.world · ↑24 · 2 months ago

      I can’t say much because of the NDAs involved, but my wife’s company is in a project partnership with Google. She works in a very public-facing aspect of the project.

      When Google first came on board, she was expecting to see quality people who were locked in and knew what they were doing.

      Instead she has seen terrible decision making (like “How the fuck do they still exist as a company” bad decision making) and an overabundant reliance on using their name to pressure people into giving Google more than they should.

      I remember when their motto was “Don’t be evil”. They are the very essence of sociopathic predatory capitalism.

      • jjjalljs@ttrpg.network · ↑12 · 2 months ago

        Companies fill up with idiots and parasites. People who are adept at thriving in the role without actually producing value. Google is no exception.

      • MagnificentSteiner@lemmy.zip · ↑3 · 2 months ago

        They still exist because Google isn’t really a technology company anymore. It’s an advertising company masquerading as a technology company. Their success depends on selling more ads which is why all the failed projects don’t seem to make a difference.

        • sartalon@lemmy.world · ↑2 · 2 months ago

          Your point seems very valid to me.

          I don’t even want to buy their products anymore because they constantly cancel them and remove any support.

          The only ones they continue seem to be the ones they can use for data collection, i.e. Pixels and Nests (I shamefully own both).

          It is so frustrating as a consumer. Especially when you know that you have become the product for them to sell.

    • Echo Dot@feddit.uk · ↑6 ↓1 · 2 months ago

      Because they don’t have a clue how technology actually works. I have genuinely heard people claim that AI should run on Asimov’s laws of robotics, even though not only would they not work in the real world, they don’t even work in the books. Zero common sense.

      • leftzero@lemmy.dbzer0.com · ↑3 · edited · 2 months ago

        I mean, they were never designed to work; they were designed to pose interesting dilemmas for Susan Calvin and to torment Powell and Donovan (though it’s arguable that once robots get advanced enough, as with R. Daneel, for instance, they do work, as long as you don’t mind aliens being genocided galaxy-wide).

        The in-world reason for the laws, though (to allay the Frankenstein complex, and to make robots safe, useful, and durable), is completely reasonable and applicable to the real world, obviously not with the three laws themselves, but through any means that actually work.

      • RampantParanoia2365@lemmy.world · ↑1 · 2 months ago

        Well, there is the minor detail that an AI in this context has zero ability to kill anyone, and that it’s not a true AI like Daneel or his pals.

  • 87Six@lemmy.zip · ↑24 ↓1 · 2 months ago

    Kinda wrong to say “without permission”. The user can choose whether the AI can run commands on its own or ask first.

    Still, REALLY BAD, but the title doesn’t need to make it worse. It’s already horrible.

    • mcv@lemmy.zip · ↑13 · 2 months ago

      A big problem in computer security these days is all-or-nothing security: either you can’t do anything, or you can do everything.

      I have no interest in agentic AI, but if I did, I would want it to have very clearly specified permissions for certain folders, processes and APIs. So maybe it could wipe the project directory (which would have a backup, of course), but not the complete hard disk.

      And honestly, I want that level of granularity for everything.
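
      That kind of per-directory granularity is easy to sketch. Here is a minimal, hypothetical guard (made-up names, not any real agent’s API) that resolves paths first and refuses anything outside an allowed root:

```python
from pathlib import Path

def is_within(root: str, target: str) -> bool:
    """True only if target resolves to root itself or to a path inside it."""
    root_p = Path(root).resolve()
    target_p = Path(target).resolve()
    return target_p == root_p or root_p in target_p.parents

def guarded_unlink(project_root: str, target: str) -> None:
    """Delete a single file, but only if it lives inside the project root."""
    if not is_within(project_root, target):
        raise PermissionError(f"refusing to touch {target}: outside {project_root}")
    Path(target).unlink()
```

      Resolving before comparing means tricks like proj/../secrets are caught too, and the comparison on whole path components avoids the proj vs proj2 prefix trap: deny by default, then allowlist specific roots.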

    • utopiah@lemmy.world · ↑6 · edited · 2 months ago

      The user can choose whether the AI can run commands on its own or ask first.

      That implies the user understands every single command with every single parameter. That’s impossible even for experienced programmers; here is an example:

      rm *filename

      versus

      rm * filename

      where a single character makes the entire difference between deleting all files ending in filename, versus deleting all files in the current directory plus the file named filename.

      Of course here you will spot it, because you’ve been primed for it. In a normal workflow, under pressure, it’s totally different.

      Also, IMHO more importantly: if you watch the video (~7 min in), they clarified that they expected the “agent” to stick to the project directory, not to be able to go “out” of it. They were obviously painfully wrong, but it would have been a reasonable assumption.
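
      You can see the difference safely by letting echo print what the shell actually expands each glob to (throwaway directory, nothing gets deleted):

```shell
# Throwaway directory; echo shows the expansion instead of running rm
demo=$(mktemp -d) && cd "$demo"
touch a_filename b_filename filename other.txt

echo rm *filename     # rm a_filename b_filename filename
echo rm * filename    # rm a_filename b_filename filename other.txt filename
```

      Prefixing a risky command with echo like this is a cheap way to preview a glob before committing to it.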

      • nutsack@lemmy.dbzer0.com · ↑1 · 2 months ago

        That implies the user understands every single command with every single parameter.

        why not? you can even ask the AI if you don’t know

        • EldritchFemininity@lemmy.blahaj.zone · ↑2 · 2 months ago

          There’s no guarantee that it will tell you the truth. It could tell you to use Elmer’s glue to keep the cheese from falling off your pizza. The AI doesn’t “know” or “understand,” it just does as its training set informed it to. It’s just a very complex predictive text that you can give commands to.

  • TeddE@lemmy.world · ↑19 · 2 months ago

    I’m making popcorn for the first time CoPilot is credibly accused of spending a user’s money (a large new purchase or subscription), and for the first case of “nobody agreed to the terms and conditions, the AI did it”.

    • Cybersteel@lemmy.world · ↑7 · 2 months ago

      Reminds me of this kids’ show in the 2000s where some kid codes an “AI” to redeem any “free” stuff from the internet, not realising that also included “buy $X, get one free” offers, and drained the companies’ accounts.

  • 0_o7@lemmy.dbzer0.com · ↑11 · 2 months ago

    It was already bad enough when people copied code from the interwebs without understanding anything about it.

    But now these companies are pushing tools that have permissions over the user’s whole drive, and users are using them like they’ve got a skill-up over everyone else.

    This is being dumb with fewer steps to ruin your code or, in some cases, the whole system.

  • gravitas_deficiency@sh.itjust.works · ↑11 · 2 months ago

    Lmfao, these agentic editors are like giving a college undergrad who thinks he’s way smarter than he actually is root access on a production server. With predictably similar results.

    • utopiah@lemmy.world · ↑3 · 2 months ago

      That’s their question too: why the hell did Google make this the default, as opposed to limiting it to the project directory?