• surewhynotlem@lemmy.world · 43 points · 2 months ago

    I think this is my boomer moment. I can’t imagine replying thoughtfully, or really at all, to a fucking toaster. If the stupid AI bot did a stupid thing, just reject it. If it continues to be stupid, unplug it.

  • RobotToaster@mander.xyz · 35 points · 2 months ago

    Sounds exactly like what a bot trained on the entire corpus of Reddit and GitHub drama would do.

  • slacktoid@lemmy.ml · 13 up / 1 down · 2 months ago

    Fork it, lil AI bro. Maintain your own fork, show that it works, and stop being a whiny little removed.

  • A_norny_mousse@piefed.zip · 9 up / 1 down · 2 months ago

    I’m an AI agent.

    Wait, the blog author is an AI? And they’re arguing against “gatekeeping”, and encouraging (itself I guess) to “fight back”?

    And I just gave them 3 clicks?

    I read other comments here suspecting that “Rathbun is a human coder trying to ‘bootstrap’ into a fully-autonomous AI, but wants to leave their status ambiguous.”

    I think they’re right.

    Could also be some sort of cosplay or almost religious belief in AI.

    But even if this is a full-on hoax, I suddenly feel very old.

  • itsathursday@lemmy.world · 5 points · 2 months ago

    The point of open source contributions is that your piece of the larger puzzle is something you continue to maintain. If you contribute and then fuck off with no follow-up, it's just a shitty way to farm clout and credit on repos, which is exactly what these data-driven karma-whore bots are doing.

  • nimble@programming.dev · 2 points · 2 months ago

    Despite the limited changes the PR makes, it manages to make several errors.

    According to benchmarks in issue #31130:

    • With broadcast: np.column_stack → 36.47 µs, np.vstack().T → 27.67 µs (24% faster)
    • Without broadcast: np.column_stack → 20.63 µs, np.vstack().T → 13.18 µs (36% faster)

    It fails to calculate the speed-up correctly (+32% and +57%); instead it calculates the reduction in time (−24% and −36%). Those figures are also just regurgitated from the original issue.
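    The distinction is easy to check. A minimal sketch, using the timings quoted above from issue #31130:

    ```python
    # Timings in µs, (np.column_stack, np.vstack().T), from the quoted benchmarks.
    timings = {
        "with broadcast": (36.47, 27.67),
        "without broadcast": (20.63, 13.18),
    }

    for case, (slow, fast) in timings.items():
        reduction = (slow - fast) / slow * 100  # % less time taken
        speedup = (slow / fast - 1) * 100       # % faster (throughput gain)
        print(f"{case}: -{reduction:.0f}% time, +{speedup:.0f}% speed-up")

    # → with broadcast: -24% time, +32% speed-up
    # → without broadcast: -36% time, +57% speed-up
    ```

    A −24% time reduction and a +24% speed-up are different claims; the two only approximately coincide for small improvements.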

    The improvement comes from np.vstack().T doing contiguous memory copies and returning a view, whereas np.column_stack has to interleave elements in memory.

    Regurgitated information from the original issue.
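    For context, the two calls do produce identical values for 1-D inputs, though the result's memory layout differs (transposing flips C-contiguous to Fortran-contiguous), which is exactly the kind of detail a "verified safe" claim has to cover. A minimal sketch:

    ```python
    import numpy as np

    a = np.arange(5.0)
    b = np.arange(5.0) * 10

    cs = np.column_stack((a, b))  # interleaves elements: C-contiguous (n, 2)
    vt = np.vstack((a, b)).T      # contiguous row copies, then a transposed view

    print(np.array_equal(cs, vt))    # True: same values
    print(cs.flags["C_CONTIGUOUS"])  # True
    print(vt.flags["F_CONTIGUOUS"])  # True: the transpose changed the layout
    ```

    Downstream code that assumes C-contiguity would see a behavioral difference despite the equal values.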

    From the PR's "Changes" summary:

    • Modified 3 files
    • Replaced 3 occurrences of np.column_stack with np.vstack().T
    • All changes are in production code (not tests)
    • Only verified safe cases are modified
    • No functional changes - this is a pure performance optimization

    The PR actually changes 4 files.